11 Oct 2017, 20:02 Technologies Development Liberty Local

Using webpack watch with Gradle and the Liberty Application Server

I recently realised that while I’m very comfortable coding in Java, and I’ve become a lot more confident with Javascript, I had no idea how to use the two together. When I was doing back-end code and implementing middleware, I was using Java, and when I switched to writing full-stack applications, I used Javascript for both front and back end. In the rare cases when I needed to do both a front end and a back end, I’d use webjars. Although webjars are handy, they’re also limiting for people used to npm.

Using gradle with modern client-side Javascript

I had some time in an airport (where all good development is done), so I decided to learn how to get a more powerful cross-language build. The heart of the application was Java, so I used gradle as my base build language, but I also wrote a package.json to capture some js-related content.
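The package.json was nothing special; a minimal sketch might look like the following (the name and webpack version here are placeholders rather than the real file):

{
  "name": "my-app",
  "private": true,
  "devDependencies": {
    "webpack": "^3.8.1"
  }
}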

First, I defined a gradle task to wrap npm install.

task installJavascriptDependencies(type: Exec) { 
    inputs.file("package-lock.json") 
    inputs.file("package.json") 
    outputs.dir("node_modules")
    
    commandLine "npm", "install"
}

Although there are a few options, I find webpack works well for minification and bundling, so I defined a gradle task which calls directly out to the webpack command line:

task webpack(type: Exec) { 
    dependsOn installJavascriptDependencies
    inputs.file("webpack.config.js") 
    inputs.dir("src/main/webapp")
    outputs.dir("$buildDir/js")
    
    commandLine "$projectDir/node_modules/.bin/webpack", "-p", "--output-path", "$buildDir/js", "src/main/webapp/index.js" 
}
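The webpack.config.js referenced in the task isn’t shown here; for a webpack 3-era build it might be as small as the sketch below, where the babel-loader rule is an assumption rather than the real config (the entry point and output path are supplied on the command line by the gradle task):

// webpack.config.js -- illustrative sketch only
module.exports = {
    module: {
        rules: [
            // Assumes babel-loader is declared in package.json
            { test: /\.js$/, exclude: /node_modules/, use: "babel-loader" }
        ]
    }
};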

The webpack task depends on the install task. Notice also that the directory structure is gradle-style, rather than node-style; the client lives in src/main/webapp.

Finally, I modified the default war task to copy the generated javascript into the war:

war {
  dependsOn webpack
  from( "${buildDir}/js" )
}
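With that in place, the standard packaging task pulls everything together in the right order (assuming the Gradle wrapper is checked in):

./gradlew war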

So far so good, but every time I made a change, I had to re-run the whole build, including the full webpack processing, which was taking three minutes. Three minute builds do not go well with debugging.

Setting up a loose application

Liberty has the ability to run ‘loose applications’, that is, applications which haven’t actually been zipped up into a jar file. Instead, there is a virtual packaging structure defined by an xml file. Loose applications work well with Liberty’s application monitoring and dynamic application updates (hot reload). Every time a file in the virtual assembly is updated, Liberty will near-instantaneously update the application. This is perfect for development.
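To give a flavour of the format, a hand-assembled loose war definition (a some-name.war.xml in the server’s apps directory) looks roughly like this; the paths below are placeholders for wherever the compiled output actually lives:

<?xml version="1.0" encoding="UTF-8"?>
<archive>
    <!-- each entry maps a directory or file on disk into the virtual war; these paths are placeholders -->
    <dir sourceOnDisk="/projects/some-name/src/main/webapp" targetInArchive="/"/>
    <dir sourceOnDisk="/projects/some-name/build/classes/java/main" targetInArchive="/WEB-INF/classes"/>
    <file sourceOnDisk="/projects/some-name/lib/some-dependency.jar" targetInArchive="/WEB-INF/lib/some-dependency.jar"/>
</archive>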

You can write a .war.xml file by hand and set up monitoring in the server.xml, but it’s simplest to let the IDE take care of both. For example, in Eclipse, right click on a web application and choose “Run on Server”.

If you look inside the server.xml, you should see something like the following:

    <applicationMonitor updateTrigger="mbean"/>

    <webApplication contextRoot="/" id="some-name" location="some-name.war" name="some-name"/>
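The mbean update trigger means the IDE tells Liberty when to rescan the loose application; if you’re updating files outside an IDE, a polled trigger works too, along these lines (the polling rate is just an example):

    <applicationMonitor updateTrigger="polled" pollingRate="500ms"/>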

It’s a good idea to have a different server definition for the dynamic server and the production one, although they can include a common set of config. And, of course, the user directories should all be source-controlled (by storing them outside the runtime install tree).
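For instance, the common configuration can live in its own file and be pulled into both server definitions with an include element (common.xml is an illustrative name):

    <!-- in both server.xml files -->
    <include location="${shared.config.dir}/common.xml"/>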

To make sure all my runtime dependencies ended up in the loose war, I had to edit the project’s Deployment Assembly to add the dependencies that should be packaged.

Using webpack watch

All changes to my Java application would now be automatically reflected in the live app, but the client side still needed a full rebuild. To get the final piece of dynamism, I defined a script for dynamic webpack in my package.json:

"build:watch": "webpack --watch --progress --display-error-details --output-path build/js"

I then launched the webpack watch daemon from the command line and left it running:

npm run build:watch

With this, both client and server changes to my application were live almost immediately, and I was a happy debugger.

21 May 2017, 16:45 Technologies Cloud Foundry Bluemix Development

Configuring a Bluemix Cloud Foundry Application To Run Locally

During the course of a typical day, a Bluemix Garage programming pair will deploy code around twenty times. Each of those code pushes will be unit tested in the Build and Deploy pipeline, integration-tested, and then be subjected to a final smoke test before going live. All of this cloud-based testing is marvellous, but it’s not a substitute for pre-push local testing.

Every Cloud Foundry application has a VCAP_SERVICES environment variable automatically created for it, with details of bound platform services and user-provided services. For unit tests we always stub services, but for integration tests and general ‘exploring the application locally,’ it’s convenient to run against the same services as the live application.

This is trivial to do by creating a local .secrets file (and .gitignore-ing it), and sourcing the .secrets file before running. The .secrets file can be assembled manually by reading from the Bluemix console or inspecting the output of cf env, but it’s easiest to generate it automatically, and the Garage has a handy script for doing exactly that. Automatic generation has an extra advantage: some Bluemix services generate a new set of credentials every time they’re bound to an application, so manually maintaining a .secrets file can be hard work if pushes are frequent. In those cases, it’s best to create an extra set of credentials in the Bluemix console and name them PERMANENT_CREDENTIALS; if this naming convention is used, the script will prefer the permanent credentials to the transient ones.

For example, the complete workflow to run a Node.js application locally would be:

cf login
script/generateSecrets
. .secrets 
npm start

where the first three steps only have to be done once per terminal window.
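What ends up in .secrets depends on which services are bound, but it is essentially a set of shell exports; something like this sketch, where the service and all of the credential values are placeholders:

# .secrets -- generated, and kept out of git; everything below is a placeholder
export VCAP_SERVICES='{"cloudantNoSQLDB":[{"name":"my-cloudant","credentials":{"username":"xxxx","password":"xxxx","url":"https://xxxx.cloudant.com"}}]}'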

Using generateSecrets will also pick up a Bluemix setting which puts Node.js into production mode, so you may prefer to comment out the line in the script which writes out the environment variables, as follows:

//  write(environmentVariables.join(''));