Taming Spring Boot to work with HashiCorp Vault in production

An in-depth analysis of how Spring Boot can be integrated with HashiCorp Vault

Understanding the problem

As an increasing number of businesses lean towards gradually replacing costly, monolithic applications with lightweight services that are loosely coupled with each other, and as open-source frameworks become mature enough to take on production responsibilities, the microservice architecture has emerged as a favourite among the community responsible for designing modern IT solutions or re-engineering existing ones. Small components, perhaps running in virtualized containers such as Docker, each dedicated to achieving a single task and chatting with other small components in the cloud, is admittedly a simplistic description of microservices but does outline the overall trend these days. In the Java world, microservices can conveniently be written as Spring Boot applications running in single JVMs and deployed as Docker images for ease of management, security and scaling. Of particular interest, from the perspective of this write-up, are microservices that closely interact with databases like PostgreSQL or MongoDB. Regardless of whether a database has been provisioned by administrators as a static instance in the cloud or is hosted on-premise, it is very common for a microservice to maintain a constant chatter with a database to achieve what it is meant to. The integration that Spring Boot offers with databases of various flavours, therefore, is quite central to how much can be achieved with microservices.

Spring Boot is immensely popular with the developer community for the amazing ease with which it allows creation of stand-alone, production-grade applications that one can “just run”. It provides production-ready features such as metrics, health checks and externalized configuration, besides a large number of convenient plugins that offer super-easy integration with other systems in a typical IT landscape. Configuring a Spring Boot application to connect and talk to PostgreSQL is usually a rather plain-vanilla affair, with this awesome framework having taken care of much of the boiler-plate mess involved in creating a data source or providing a JDBC template to shove all CRUD operations to. Assuming credentials have been assigned to Spring-specific keys in an application properties file, Spring Boot will load them in at bootstrap, prepare a datasource and a JDBC template as singletons and make them available for use. Talking to the database from a Java class in such a set-up usually comes down to auto-wiring the JDBC template into that class and calling the relevant operation on the template. With connection management out of the way, the rest is a breeze for an average developer. Of course, this requires credentials to be saved in the application property files, but they can easily be encrypted to provide an additional layer of defence.
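To make the conventional set-up concrete, here is a minimal sketch. The property keys are the standard Spring Boot datasource keys; the URL, credentials, table name and method are placeholders for illustration:

    # application.properties
    spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
    spring.datasource.username=app_user
    spring.datasource.password=app_password

With that in place, a typical data-access class boils down to something like this:

    @Autowired
    private JdbcTemplate jdbcTemplate; // prepared by Spring Boot auto-configuration

    public int countCustomers()
    {
        // run the query and map the single result column to an Integer
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM customers", Integer.class);
    }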

However, as security continues its unabated progress up the ladder of priorities, some organizations have begun to mandate that all credentials necessary to access sensitive resources like databases be stored in dedicated “vaults” or encrypted repositories. Parties with a “need to know” these secrets are allowed to make requests to such systems, typically over RESTful APIs, subject to successful validation of secret tokens presented with the requests. HashiCorp Vault is a specialized piece of software available, and gaining popularity, in this category. In these days of cloud computing, Vault is typically made available by DevOps teams as a service to be used on a cloud platform across all projects. For the sake of simplicity, and for those who have not already guessed from its name, Vault is simply a piece of software that stores secrets. Passwords, tokens, SSH keys, certificates, API secrets etc. can all be stored in Vault, which encrypts this data before saving it to a backend system on disk. A client that needs any of this data can make a request to a running instance of Vault, which exposes a set of RESTful APIs that allow management of such secrets. All such calls for data must carry a valid alpha-numeric token in the HTTP request headers. Every such token is unique to and mapped to an identity, thereby allowing full audit and trace-back to callers. Tokens are generated by Vault system administrators and given to authorised parties for use as part of BAU DevOps operations. Given such a token, a caller can ask Vault for a set of credentials (user name and password) that can be used to access a given resource (a PostgreSQL database in our case). Vault also has a dynamic angle to it: these credentials have a finite lease time. Upon expiry of the lease (or actually a little before that), the party using the credentials must request Vault to renew the lease, effectively extending its lifetime. However, even this renewal can only be requested N number of times, where N is configurable. Furthermore, the token (using which the credentials were requested) will become invalid if Vault is not explicitly asked to renew it before it expires. Vault is actually much more capable and complex than this, but for the purpose of this write-up, such a simplistic view should suffice.
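For illustration, a successful credentials read from Vault’s PostgreSQL backend returns JSON along the following lines (the field names follow Vault’s API; the values shown here are placeholders):

    {
      "request_id": "b5d1c318-aaaa-bbbb-cccc-2f1e6e4dd9a1",
      "lease_id": "postgresql/creds/readonly/b5d1c318",
      "renewable": true,
      "lease_duration": 3600,
      "data": {
        "username": "token-readonly-4f5c",
        "password": "6b1d-2c3e-9f8a"
      }
    }

The lease_duration field is the lease time (in seconds) discussed above; it is this lease that must be renewed before it expires.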

Unfortunately, a Vault token does not live forever. Once generated, it has a finite lifetime of N days, where N is configurable AND subject to a system-enforced maximum. Using the token, one can ask Vault for valid PostgreSQL credentials, and the JSON response that Vault returns contains, besides the precious PostgreSQL user name and password, a finite lease time and a finite maximum lease time for the supplied credentials. This means one can get Vault to grant additional lifetime to these dynamic credentials, but only so many times. As soon as the maximum lease time is breached, the credentials stop working and PostgreSQL unsympathetically returns a permissions-related exception. Researching through the documentation, I also found that Spring Boot offers a convenient out-of-the-box life-cycle management feature for Vault integration of this kind. Vault integration can be incorporated into a Spring Boot project by adding the following dependency to the project’s pom file:

    <dependencies>
        <!-- other dependencies not shown -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-vault-config-databases</artifactId>
            <version>1.0.1.RELEASE</version>
        </dependency>
    </dependencies>


With this in place, Spring scans its bootstrap property file at startup and, assuming the file contains some basic Vault-related key-value pairs, requests credentials off Vault, configures the application’s datasource and stands ready with a singleton JDBC template for use. Interestingly, Spring also has Vault lifecycle management enabled by default, which effectively means that Spring will automatically figure out that a set of credentials is about to expire and will ask Vault to extend its life in time. Spring’s behind-the-scenes management of Vault-supplied credentials can be seen in the logs by raising the log level on the package org.springframework.vault to DEBUG.
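For reference, the bootstrap property file for such a set-up might look along these lines (a sketch; the exact keys are documented by spring-cloud-vault and the values here are placeholders):

    # bootstrap.properties
    spring.application.name=my-service
    spring.cloud.vault.scheme=https
    spring.cloud.vault.host=vault.example.com
    spring.cloud.vault.port=8200
    spring.cloud.vault.token=<token-issued-by-vault-admin>
    spring.cloud.vault.postgresql.enabled=true
    spring.cloud.vault.postgresql.role=readonly

    # application.properties -- to watch Spring's credential management in the logs
    logging.level.org.springframework.vault=DEBUG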

However, this is where the utility of Spring Boot’s out-of-the-box Vault integration pretty much comes to an end. I was pained to observe a running application crash pathetically upon expiry of the max lease time prescribed by Vault. The reason is clear – Vault, as expected, refuses to extend the life of the credentials supplied at startup beyond the max lease time. More worryingly, Spring’s life-cycle management seems oblivious to this, effectively letting the application jump off a cliff. The only way out is to bounce the application, forcing Spring to fetch fresh, new credentials from Vault at startup. Furthermore, Spring also allows the root token to expire, which is not unexpected, since that is what one can expect to happen if the root token is not explicitly renewed. Vault offers an API for root token renewal, but I have never observed Spring’s lifecycle manager use it.

For production systems, this poses a serious problem. An application left running for long enough will eventually become dysfunctional. It may be bounced a few times by IT support teams, but beyond a point even that will not work, since the root token itself will have expired by then.

Solving the problem

The first step towards solving the problem above is to stop using the spring-vault integration module by removing the above dependency definition from the application’s pom file. In a situation of this kind, it makes sense to take full programmatic control of Vault lifecycle management. At a very high level, this can be achieved with the following design:

  1. Enable Spring’s scheduler so that methods containing logic to invoke Vault APIs can be triggered at configured intervals. Use Spring’s @EnableScheduling and @Scheduled annotations to activate the scheduler.
  2. Rather than letting Spring create a JDBC datasource at startup, write an explicit method to create a custom datasource bean that Spring should use for interacting with the database. Annotate this bean with Spring’s @RefreshScope annotation. This allows a forced refresh of the bean when fresh Vault credentials are available, as explained below.
  3. Create a mechanism, from one of the scheduled methods mentioned under (1) above, to issue a /refresh RESTful call to the application in order to force a refresh at pre-configured intervals. Again, this is explained below.

The overall idea is to have two scheduled jobs within the application. The first job is responsible for renewing the root token at periodic intervals. The developer needs to be aware of the token lifetime policy put in place by the Vault system administrator and must ensure the duration between successive invocations of this job is less than the life of the token itself. The second job, which is the more complicated one, is dedicated to fetching credentials from Vault at intervals shorter than the max lease time configured for credentials in Vault. Let’s look at the jobs individually.

To enable scheduling in a Spring Boot application, one of the configuration classes, or the class annotated with @SpringBootApplication, needs to be marked with the @EnableScheduling annotation. This annotation enables Spring’s scheduled task execution capability. In an application enabled for scheduling in this way, all bean methods annotated with @Scheduled are registered for scheduling. Enabling the scheduler is as simple as this (a sketch; the class name here is just an illustration):
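    @SpringBootApplication
    @EnableScheduling // switches on Spring's scheduled task execution
    public class MyVaultAwareApplication
    {
        public static void main(String[] args)
        {
            SpringApplication.run(MyVaultAwareApplication.class, args);
        }
    }

Using this capability, a scheduled job dedicated to renewing the root Vault token can now be written along the following lines: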

    // class definition, other variables and other methods not shown

    /**
     * Cron-fired method to renew the Vault root token periodically
     */
    @Scheduled(cron = "${my.vault.token.renew.cron.expression}")
    public void renewMyRootToken() throws Exception
    {
        HttpHeaders headers = new HttpHeaders();
        headers.add("X-Vault-Token", vaultRootToken); // vaultRootToken is obtained from the application properties file
        headers.add("Cache-Control", "no-cache");
        // Vault's token self-renewal endpoint
        String vaultUrl = vaultScheme + "://" + vaultHost + ":" + vaultPort + "/v1/auth/token/renew-self"; // vaultScheme, vaultHost etc. are obtained from the application properties file
        new RestTemplate().exchange(vaultUrl, HttpMethod.POST, new HttpEntity<>(headers), String.class);
    }

The Vault communication scheme (http or https), the Vault server hostname and the port can be configured in the application’s property file. In the same file, a cron expression controlling the frequency of this job can be assigned to the my.vault.token.renew.cron.expression property key. Adequate help on creating Spring cron expressions can be found in the Spring documentation. Based on the cron expression, Spring invokes this method periodically, and every single invocation amounts to a RESTful HTTP POST call to the Vault token self-renewal API, thereby effectively extending the life of the token to N days from the point of call. As explained earlier, N is configured by the Vault administrator but is limited to a maximum value.
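By way of example, the relevant entries in the property file might look like this (the my.vault.*.cron.expression keys match the code above; the scheme/host/port key names and all values are my own illustrations, with the cron expressions deliberately firing during off-peak hours):

    my.vault.scheme=https
    my.vault.host=vault.example.com
    my.vault.port=8200
    # renew the root token every night at 02:30
    my.vault.token.renew.cron.expression=0 30 2 * * *
    # refresh the database credentials every night at 03:00 (used by the second job below)
    my.vault.credentials.renew.cron.expression=0 0 3 * * *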

With root token renewal out of the way, it is now time to look at how Vault credentials can be refreshed to ensure the running application has valid credentials to connect to the database at all times, regardless of how long it has been up and running. Again, a scheduled Spring cron job comes to the rescue. This job can be written along the following lines:

    // class definition, other variables and other methods not shown

    @Autowired
    private MyDbCredentialsBean myDbCredentialsBean; // Spring injects the only instance of this (singleton) bean here

    /**
     * This method is called by the Spring scheduler. Its purpose is to refresh
     * the Vault credentials being used by this application.
     */
    @Scheduled(cron = "${my.vault.credentials.renew.cron.expression}") // the expression comes from the properties file
    public void attemptARefresh() throws Exception
    {
        // Call Vault to fetch new credentials
        getAndStoreCredentials(); // this method is explained below
        if (!myDbCredentialsBean.isLastSuccessful()) // this flag is explained below
        {
            return;
        }

        // Prepare a self-call so that Spring knows it has to refresh some beans
        String refreshUrl = "http://localhost:" + serverPort + "/refresh"; // serverPort is defined in the application property file; the host is always localhost since this is a self-call
        // Invoke the self-call
        new RestTemplate().exchange(refreshUrl, HttpMethod.POST, new HttpEntity<>(new HttpHeaders()), String.class);
    }

The private getAndStoreCredentials method, which makes the actual call to Vault, is shown below:

    /**
     * This is the method that actually makes a RESTful call to Vault
     */
    private void getAndStoreCredentials()
    {
        boolean vaultCallSuccess = false;
        HttpHeaders headers = new HttpHeaders();
        headers.add("X-Vault-Token", vaultRootToken);
        headers.add("Cache-Control", "no-cache");
        String urlToVault = vaultScheme + "://" + vaultHost + ":" + vaultPort + "/v1/" + vaultPath; // vaultScheme, host, path etc. come from the properties file
        ResponseEntity<String> response = new RestTemplate().exchange(urlToVault, HttpMethod.GET, new HttpEntity<>(headers), String.class);
        String latestDbUserName = null;
        String latestDbPassword = null;
        if (response != null && HttpStatus.OK == response.getStatusCode())
        {
            // Parse the user name and password out of Vault's JSON response
            String vaultJsonString = response.getBody();
            JSONObject vaultJsonObject = new JSONObject(vaultJsonString);
            latestDbUserName = vaultJsonObject.getJSONObject("data").getString("username");
            latestDbPassword = vaultJsonObject.getJSONObject("data").getString("password");
            vaultCallSuccess = !StringUtils.isEmpty(latestDbUserName) && !StringUtils.isEmpty(latestDbPassword);
        }
        myDbCredentialsBean.setLastSuccessful(vaultCallSuccess);
        if (vaultCallSuccess)
        {
            // Save the working credentials in the singleton bean
            myDbCredentialsBean.setPresentlyWorkingUserName(latestDbUserName);
            myDbCredentialsBean.setPresentlyWorkingPassword(latestDbPassword);
        }
    }

This solution involves a singleton bean called MyDbCredentialsBean, which is a very simple POJO with fields for the user name, the password and a boolean flag indicating whether the last call to Vault was successful. As is clear from the example above, the scheduled method attemptARefresh first attempts to call Vault and fetch a fresh set of credentials. If the call to Vault fails, the scheduled method simply sets this flag to false and the flow ends there. However, if the call succeeds, the scheduled method saves the freshly fetched credentials into the singleton bean, sets its flag to true and then goes on to initiate a REST API call to /refresh on this very microservice, using an HTTP POST. Such a call causes all beans in the application marked with the @RefreshScope annotation to be reloaded. Since the datasource is the only bean marked with this annotation, Spring effectively tries to destroy the datasource by shutting it down and clearing its cache. In doing so, Spring waits for the datasource to complete all ongoing transactions before shutting the connection pool down. Thereafter, as soon as the datasource is required again for any interaction with the database, Spring tries to re-create the datasource bean. Needless to say, this re-creation needs to be controlled programmatically too, to ensure that connections to the database are created on the basis of the latest credentials saved in the singleton MyDbCredentialsBean. The following blocks of code show how the MyDbCredentialsBean has been wired into the application as a singleton and how the datasource can be created programmatically with credentials borrowed from the singleton:

    /**
     * Set up a HikariDataSource for this application. The datasource is a bean
     * that can be refreshed. It needs to be refreshed periodically so that
     * fresh credentials obtained from Vault can be loaded in.
     *
     * @return instance of a com.zaxxer.hikari.HikariDataSource
     */
    @RefreshScope
    @Bean(name = "datasource")
    public HikariDataSource dataSource() throws Exception
    {
        // Call Vault if the credentials have not already been refreshed
        if (!myDbCredentialsBean.isLastSuccessful())
        {
            getAndStoreCredentials();
        }
        HikariConfig jdbcConfig = new HikariConfig();
        jdbcConfig.setUsername(myDbCredentialsBean.getPresentlyWorkingUserName());
        jdbcConfig.setPassword(myDbCredentialsBean.getPresentlyWorkingPassword());
        jdbcConfig.setJdbcUrl(databaseUrl); // database URL from the application properties
        // Other Hikari pool config code
        return new HikariDataSource(jdbcConfig);
    }
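A JdbcTemplate can then be layered over this refreshable datasource in the usual way (a sketch of my own, not part of the original design; since @RefreshScope wraps the datasource in a proxy, the template picks up the re-created pool after a /refresh):

    @Bean
    public JdbcTemplate jdbcTemplate(@Qualifier("datasource") HikariDataSource dataSource)
    {
        // the injected datasource is a refresh-scoped proxy, so this template
        // always delegates to the most recently created connection pool
        return new JdbcTemplate(dataSource);
    }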

It should be noted that the idea behind introducing a singleton MyDbCredentialsBean with a status flag is to make sure that the application carries on with the old (still working) credentials in the event of a failure to call Vault. Fresh credentials are only injected into the application if the call to Vault succeeds. When the application starts up, it notices that the status flag in the singleton bean is false and therefore calls Vault for credentials. Obviously, the application will fail to start if this first call (at start-up) fails, but beyond that, the scheduled job makes sure the application can carry on functioning with working credentials should subsequent calls to Vault fail.

 For completeness, here is an outline of the MyDbCredentialsBean:

    /**
     * This is a plain POJO for holding db credentials.
     * Configured as a singleton bean.
     */
    public class MyDbCredentialsBean
    {
        /**
         * A flag indicating success/failure of the last attempt to fetch credentials from Vault
         */
        private boolean lastSuccessful = false;

        /**
         * Working database user name
         */
        private String presentlyWorkingUserName = null;

        /**
         * Working database password
         */
        private String presentlyWorkingPassword = null;

        // getter and setter methods for the above fields
    }

And here is how it has been introduced into the system as a singleton:

    /**
     * Plain POJO for holding db credentials.
     * Configured as a singleton bean. Holds working db credentials that can be reused if a Vault call fails.
     */
    @Bean
    @Scope(value = "singleton")
    public MyDbCredentialsBean createMyDbCredentialsBean()
    {
        return new MyDbCredentialsBean();
    }

It is highly recommended that these jobs be scheduled to fire during off-peak hours when the traffic is minimal. This will minimize any possible disruption during the small amount of time it takes to re-initialize the connection pool.

About Dipak Jha

Dipak Jha is a hands-on Solutions and Integration Architect. He is based in London, UK and works as an SME on Cloud Technologies, Enterprise Architecture, Middleware, Systems Integration, Transformations, Migrations, and general Internet technologies.
