24 Aug 2016, 12:25 Sizing Estimation Project management Agile Lean XP

Working out how much a project will cost

When the London Garage meet with customers, one of the most frequently asked questions is “how much is my project going to cost?”.

It’s generally accepted that - without a working time machine - at least one corner of the project management triangle needs to be flexible. Trying to nail down scope, cost, and schedule at the beginning of a project is a notorious anti-pattern; delivering a pre-determined list of features on-schedule within a given budget may give the illusion of success, but quality will inevitably suffer. This isn’t malicious or a sign of a bad team; tired people make mistakes.

In the Garage, we’re convinced that time-boxing iterations while keeping scope flexible is the way to go. We work at a sustainable pace, instead of rushing to meet a deadline, injecting lots of exhaustion-induced defects, and then collapsing in a burned-out pile to recover until the next rush. In order to enable this sustainable pace, we can’t commit to a detailed feature list at the beginning of the project. This makes products way better, for two important reasons. No one is cutting corners to meet the letter of a contractual obligation. More importantly, the beginning of a project is when you know least about what the actual requirements are. Requirements will change as the project progresses; if they don’t, that means you’re not learning as you go. Setting the wrong requirements in stone in a contract at the beginning of a project is pretty self-defeating. The end result is that the development team are too busy implementing requirements that no one really wants, just because they’re in the contract, to be able to pivot and work on the stuff that we’ve learned does have value. Over-specifying feature lists at the beginning of a project is a terrible risk, both technically and in terms of the user value of the finished product.

Should we be estimating?

So far, so good - until we get back to that “how much is this project going to cost?” question. The Garage developers have been watching the #NoEstimates movement with interest, because much of it aligns well with our thinking. We want to be as lean and reflective as possible and only do things which add user value (lean startup). We release often and checkpoint even more often (these are extreme programming values). We know the real measure of value is user experience, not metrics on a spreadsheet or feature lists (that’s one of the reasons we do design thinking).

#NoEstimates suggests that trying to estimate project duration is inaccurate and wasteful. In the Garage, we know that we can’t know how much it will cost to build something until we fully understand the technical landscape, and we’ll only get that knowledge once we’re deep into the implementation (and even then, there will always be surprises). More fundamentally, we won’t know what we really need to build for our users until we’re getting regular feedback from them about what we’ve built so far. Does that mean estimation is always a bad idea?

In the Garage, we’re proud of what we do. I know my team is awesome, and they deliver great value. However, if we’re working with a customer we’ve never worked with before, they don’t necessarily already know we’re awesome. There’s no pre-existing trust relationship, and they’re possibly comparing our value proposition to that of another vendor. It’s difficult for a customer to evaluate the value-for-money of the Bluemix Garage unless we give some kind of estimate of how much a project will cost (the money), as well as describing how we can provide unique solutions for business problems (the value). There’s another aspect, too. In general, businesses will have capped budgets, determined either by how much seed funding they’ve received, or by internal budgeting decisions. A business will need to decide if they can afford to build the minimum viable product that allows them to start testing business hypotheses, before starting that build. Building half a product that never reaches viability, and then throwing it out because the money’s run out, is bad business. In other words, a customer needs enough information to be able to decide whether to go ahead with a project, and by definition that information is needed at the beginning of the project. Our job in the Garage is to give them that viability assessment, so that we can then help them build a great product.

Sizing methodologies

There’s a growing mountain of academic and industry research about the optimum technique for sizing, including a whole range of models for software cost, from the simple to the super complex. Some are surprisingly old (dating back to the 1980s), and some are impressively mathematical. We aim for continual improvement in the Garage, so we’re experimenting with a couple of different sizing methodologies that have the potential to be best practice. We need something low-cost, because experience has taught us that spending more time on estimates doesn’t actually reduce project risk. On the other hand, the estimate needs to have enough predictive value to allow a customer to make a sound go/no-go decision.

These are the ones we’ve tried so far:

  • Our starting point is the process described in Kent Beck and Martin Fowler’s Planning Extreme Programming. The basic principle is to break the project down into smaller pieces of work, estimate each piece, and then add the estimates back up.
  • A much faster approach is to estimate projects based on our experience of similar projects. This is surprisingly effective - when we’ve tried the two techniques side by side, we’ve found that ‘gut feel’ estimates line up pretty well with ‘break down and add up’ estimates, and of course they’re way faster to produce. The gut feel approach falls down when we do a project which is the first of its kind, and since the Garage specialises in innovation, that’s actually pretty often.
  • One way of adding a bit more rigour to the ‘gut feel’ estimate is to lay out a straw-man architecture with CRC cards. Since we’re thinking at the architectural level, instead of “Class-Responsibility-Collaborator”, we do something more like “Component-Responsibility-Collaborator”. Actually, it’s “Component-or-Actor-Responsibility-Collaborator”, but that acronym gets pretty messy. We use different colour post-its for “things we write”, “things we interact with”, and “actors”. Our aim is to get some of these relationships out on paper and get a sense of the shape of the project, rather than to produce a final architecture. What the rough architecture gives us is a “landscape” that we can then compare to other projects we’ve done in the past, to produce an experience-based effort estimate.
  • Another approach is to make the gut feel less rigorous. In other words, the best way to handle the huge uncertainty in cost estimation is just to acknowledge it. What the ‘real’ project ends up costing will land somewhere on a spectrum, and there are a whole bunch of business and technical factors that can influence that. So rather than trying to guess those factors away, we can simply advise a customer to plan for both a likely-best and likely-worst case: “we think this project will take somewhere between two and four three-week phases.” I call this spectrum-sizing.
  • Agile methodologies emphasise the importance of estimating in terms of points, rather than weeks, so we decided to try a points-based project estimation technique. Break a project down into hills and goals, and then for each one assess the complexity of its user interface, data store, business logic, and integration. We don’t want to get hung up on the precise complexity score. Instead, we use the score to guide categorising the goal into a Fibonacci complexity bucket; we can then sum up how many goals are in each bucket and derive an estimate in weeks (there’s a small sketch of this just after the list). As we use the technique more, our calibration between ‘bucket count’ and pair-weeks should improve. This technique has the potential to be extremely accurate, but we found it a bit too labour-intensive.
  • Some of our mobile development colleagues have developed a quite extraordinary spreadsheet which encodes years of team experience about how long stuff takes into Excel formulas. Starting with inputs about the number of user screens, supported platforms, and backend interactions, along with a whole bunch of experience-led (but adjustable) assumptions about the other activities that will be required, it produces a detailed cost estimate. It even takes into account things like re-use efficiencies for subsequent stories. The thing I like about it is that it stores our collective experience, and it’s detailed enough to act as an aide-memoire about things that take time and that we should be doing, but which are easy to forget. However, for most of our projects it assumes too much up-front knowledge about the final UX design.
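
To make the points-based approach above concrete, here’s a minimal sketch in Node.js, the stack we use most. The axis scores and the points-to-pair-weeks calibration below are invented for illustration; a real calibration would come from comparing bucket counts to actual pair-weeks on past projects.

```javascript
// Minimal sketch of points-based sizing: score each goal on four complexity
// axes, snap the total to a Fibonacci bucket, and convert points to pair-weeks.
// The scores and the calibration constant below are invented examples.
const BUCKETS = [1, 2, 3, 5, 8, 13];
const PAIR_WEEKS_PER_POINT = 0.4; // assumed calibration from past projects

function bucketFor(goal) {
  const score = goal.ui + goal.data + goal.logic + goal.integration;
  // Snap the raw score to the nearest Fibonacci bucket.
  return BUCKETS.reduce((best, b) =>
    Math.abs(b - score) < Math.abs(best - score) ? b : best);
}

function estimatePairWeeks(goals) {
  const points = goals.reduce((sum, goal) => sum + bucketFor(goal), 0);
  return points * PAIR_WEEKS_PER_POINT;
}

// Each axis scored from 1 (simple) to 3 (complex):
const goals = [
  { ui: 2, data: 1, logic: 3, integration: 2 }, // hypothetical “accept donations”
  { ui: 1, data: 1, logic: 1, integration: 3 }, // hypothetical “send receipts”
];
console.log(estimatePairWeeks(goals)); // 13 points -> 5.2 pair-weeks
```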

To date, we’ve had the best success with the first three, but we’re still looking out for improvements. We haven’t tried them yet, but we’re intrigued by Monte Carlo modelling and randomised branch sampling techniques. They leverage advanced mathematics to offer the promise of more accurate estimates for less effort, which is pretty awesome. They do assume that a reasonably detailed breakdown of the project into epics and at least some stories has already been done, so we’ll need to decide if they’re appropriate at the very beginning of a project.
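
As a taster of why they appeal, here’s a minimal sketch of a Monte Carlo sizing run, assuming that story-level breakdown already exists. The three-point story estimates below are invented for illustration; a real run would feed in a pointed backlog.

```javascript
// Sketch of Monte Carlo sizing: sample each story's cost from a three-point
// (triangular) estimate, sum across the project, repeat many times, and read
// off a best/worst spectrum. The story estimates are invented examples.
function sampleTriangular(min, mode, max) {
  // Inverse-CDF sampling of a triangular distribution.
  const u = Math.random();
  const f = (mode - min) / (max - min);
  return u < f
    ? min + Math.sqrt(u * (max - min) * (mode - min))
    : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
}

// [optimistic, likely, pessimistic] pair-days per story:
const stories = [[2, 3, 8], [1, 2, 5], [3, 5, 13], [1, 1, 3]];

const runs = 10000;
const totals = Array.from({ length: runs }, () =>
  stories.reduce((sum, [min, mode, max]) =>
    sum + sampleTriangular(min, mode, max), 0)
).sort((a, b) => a - b);

// Report a spectrum rather than a single number, like spectrum-sizing does:
console.log(`likely best:  ${totals[Math.floor(runs * 0.1)].toFixed(1)} pair-days`);
console.log(`likely worst: ${totals[Math.floor(runs * 0.9)].toFixed(1)} pair-days`);
```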

Conclusion

As I was writing this blog, Kent Beck, the father of extreme programming, posted a blog on the same subject. I won’t try and reproduce it here (you should just go read it, because it’s really good!), but I was pleased to see that some of his arguments line up with what I’d already written. Kent points out that in an ideal world one would do everything, but in a world where resources are finite, and doing one thing means not doing another thing, estimates help us make informed choices about where we should put our resources. He summarises his position as “Estimate Sometimes”. “Estimate Sometimes” isn’t the catchiest strapline, but it’s the right thing to do, for us and our customers. We need to make sure, though, that our estimates are not turned into prescriptions about duration or commitments about detailed feature lists, because we don’t want to be making those sorts of decisions at the point in the project cycle where we know least. So estimate sometimes, and then leave that estimate aside and use all the feedback we can get over the course of the project to make sure we deliver the right thing.

20 Mar 2015, 16:45 XP Projects

Garden Bridge

In 2014, IBM UKI General Manager David Stokes made an offer to the Garden Bridge Trust: IBM would do the development and initial hosting of the gardenbridge.london website. The London Bluemix Garage has set up the various apps that are behind the website on IBM Bluemix and has developed the integration with the payment gateway run by Elavon for processing donations.

The gardenbridge.london website is made up of three apps:

1. Donate app

This app handles all communication with the payment gateway, triggers the correct e-mails based on the donation journey that a visitor takes, and gracefully handles payment gateway unavailability.

This was developed by the London Bluemix Garage using eXtreme Programming practices including pairing, test-driven development, continuous delivery and blue-green deployment. All Garden Bridge apps are continuously delivered using blue-green deployment.

From the Bluemix service catalog, we’ve chosen SendGrid for all outgoing e-mails. Load Impact proved highly efficient and nice to work with when load testing all Garden Bridge apps. Monitoring & Analytics gives us sufficient insight into historical app availability and response timings.

As for the code stack, we’ve chosen Mocha, Chai & Sinon.js for the test suite, Express for the web server, and a couple of Node.js modules for integrating with third parties.
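
To give a flavour of the graceful-degradation requirement in that stack, here’s a minimal, hypothetical sketch — not the actual Garden Bridge code, and the elavon-client wrapper module is invented:

```javascript
// Hypothetical sketch, not the real donate app: an Express route that degrades
// gracefully when the payment gateway is unreachable.
const express = require('express');
const bodyParser = require('body-parser');
const gateway = require('./elavon-client'); // invented wrapper around the gateway

const app = express();
app.use(bodyParser.json());

app.post('/donations', (req, res) => {
  gateway.charge(req.body)
    .then((receipt) => res.status(201).json(receipt))
    // If the gateway is down, apologise cleanly instead of erroring, so the
    // visitor can resume their donation journey later.
    .catch(() => res.status(503).json({
      message: 'Donations are temporarily unavailable. Please try again soon.',
    }));
});

module.exports = app;
```

In a test-driven flow, the failure path gets written first: a Mocha spec stubs gateway.charge with Sinon so that it rejects, and Chai asserts that the route answers with a 503 rather than falling over.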

2. CMS app

It provides a UI for Garden Bridge Trust employees running the website to make content changes & maintain a newsletter without needing to involve an IT person. This piece, and the beautiful website design, were largely done by Wilson Fletcher, GBT’s chosen design agency.

This app is powered by Keystone.js, an open source framework built on Express (Node.js), and uses the Bluemix MongoDB service for persistence. As this service was still in the experimental phase, automated MongoDB database backups that we’ve previously talked about came in handy.

3. Web app

This app proxies requests to the previous two apps, handles URL rewrites and redirects, and provides a static page caching layer for the rare situations when the CMS app is unavailable.

Behind the scenes, this is powered by nginx, a most capable reverse proxy server, via the staticfile-buildpack CloudFoundry buildpack.
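The actual configuration isn’t reproduced here, but the behaviour described above might look roughly like this in nginx terms (hostnames, paths and cache timings are invented for illustration):

```nginx
# Illustrative sketch only: proxy to the donate and CMS apps, with a stale
# page cache as a fallback when the CMS app is unavailable.
proxy_cache_path /tmp/pages keys_zone=pages:10m;

server {
  listen 80;

  # Example rewrite/redirect handling.
  location = /old-page { return 301 /new-page; }

  location /donate/ {
    proxy_pass https://donate-app.example.mybluemix.net;
  }

  location / {
    proxy_pass https://cms-app.example.mybluemix.net;
    proxy_cache pages;
    proxy_cache_valid 200 10m;
    # Serve a stale cached copy if the CMS app is down or erroring.
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
  }
}
```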

XP in action

With no estimates or scoping exercises, the donate app was delivered ahead of schedule. All apps were live, in production, weeks ahead of the public launch of the website. Even though the Garden Bridge Trust has not yet launched its public fundraising campaign, all apps are meeting production requirements including high availability, horizontal scalability and fault tolerance.

While the CMS and web app were developed by a third party, Bluemix allowed us to collaborate on the different Garden Bridge components seamlessly and effortlessly. The deployment pipelines ensured that business value was continuously delivered, without any downtime, by separate teams.

Focusing on outcomes

XP is an agile development methodology which is focussed on business outcomes. Every story delivered should add business value, and all code is developed test-driven. By adopting this approach, the Garage team is able to satisfy the business requirements with a code base that is bug-free, and deliver working code to the user in super quick time.

13 Mar 2015, 11:06 Agile XP IBM Design Thinking

Using Agile Methods in the Bluemix Garage

With the cloud technologies and platforms available today, we finally have the opportunity to build applications in an agile fashion, something that in the past was hindered by the inability of traditional platforms to accommodate change.

In the Bluemix Garage we offer clients the opportunity to experience various agile methods through a number of different engagement types, which attempt to take the most relevant aspects of these practices and employ them to rapidly achieve the outcome the client is looking for. Design Thinking and Extreme Programming have very different heritages and differing focuses, but they are highly complementary provided the terminologies are understood and the appropriate hand-offs are managed. In this article we overview the two key methods we use in the Garage, discuss how they focus on different stages of the development process, and explain why we believe they represent a strong combination for organisations to adopt when building out the new innovative applications they want to bring to the cloud.

Key concepts of IBM Design Thinking

Design Thinking is all about getting into the mindset of the end user. It starts in the Explore stage with a detailed examination of the user personas being targeted, even giving them names so that the team develop a strong empathy with them going forward.

The IBM Design Thinking framework encourages the use of design thinking methods to envision the user experience. Design thinking requires diverging on many possible solutions and converging on a focused direction. You will practice design thinking using four mental spaces: Understand users, Explore concepts, Prototype designs, and Evaluate with users and stakeholders. This work may be iterative or non-linear, and can be used whenever you need to push the experience forward, diverge, or check in with users.

In the Explore phase, the team develops ‘Hills’, which are concise written statements of what the proposed solution will deliver to the target user, stating what they will be able to do in the imagined future, and why this will be of benefit to them. In the Prototype phase, the imagined future experience is developed through a set of increasing-fidelity prototypes which are shown to Stakeholders and Expert Users through ‘Playbacks’, so that the whole team moves forward together against a common view of what the solution will look and feel like to the user. In the Evaluate phase, measurements are made as to how successful the prototypes are in achieving the objectives stated in the Hills. Feedback is taken on board and used in the next refinement of the system. A design thinking team is multi-disciplinary, with Designers, Product Managers, Engineers, Stakeholders and Expert Users working together as tightly as possible.

In this way the whole team has a collective, agreed view of what needs to be created, and everyone has a chance to bring their ideas & concerns to the table. Design Thinking is an excellent way of ensuring that pre-conceptions of what needs to be done do not prevent the creative process from coming up with even better solutions, whilst focussing strongly on delivering the stated benefits to end users.

Key concepts of Extreme Programming

Extreme Programming (XP) is a discipline of software development based on values of simplicity, communication, feedback, courage, and respect. It works by bringing the whole team together in the presence of simple practices, with enough feedback to enable the team to see where they are and to tune the practices to their unique situation.

XP is laser-focussed on delivering the minimum amount of code to satisfy the acceptance criteria in a backlog of stories that the team have agreed represents a prioritised list of work leading to a minimum viable product.

XP requires that there is strong Product Management with a clear view of what is good and what is not, and a team of engineers who are able to work together very closely through the practice of pair programming. All coding is done test-first, with the tests being written to support the acceptance criteria and then the code being written to pass the tests. No additional code is written. XP relies on good communication, a willingness for all members of the team to go on a learning journey together, and the courage to embrace change at all times.

A team practising XP will recognise when they need to change direction and then do it straight away rather than blindly continuing down the wrong alley. This is achieved through fast feedback loops that are enabled by co-located teams, delivery of working software at the end of every story, and continuous integration through automated builds deploying to production. Technical debt is acknowledged openly and represented in the backlog by refactoring chores and bugs. The likely future delivery of functionality is defined by the past ‘velocity’ of the team projected through the estimated size of the stories remaining in the backlog.
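
The projection itself is simple arithmetic; a minimal sketch, with invented numbers:

```javascript
// Sketch of velocity-based projection: average the points delivered in recent
// iterations and divide into the pointed backlog. All numbers are invented.
const recentVelocities = [11, 9, 13]; // points accepted per iteration
const remainingPoints = 55;           // sum of estimates left in the backlog

const velocity =
  recentVelocities.reduce((a, b) => a + b, 0) / recentVelocities.length;
console.log(`~${Math.ceil(remainingPoints / velocity)} iterations remaining`); // ~5
```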

Common Ground

IBM Design Thinking and XP share a number of common philosophies and practices. Probably the most important is fast feedback loops, which are baked into XP through the process of story acceptance, stand-up meetings, iteration planning meetings and demos. Design Thinking employs the process of regular playbacks and evaluation of prototypes to achieve the same thing. Both methodologies stress the importance of a committed, multi-disciplinary team, preferably co-located at all times, as key to really making this work. Expert users and stakeholders are particularly key in Design Thinking, whereas in XP the Product Manager is seen as the one to represent the interests of those parties.

Another key shared philosophy is that of Minimum Viable Product (MVP). Both approaches focus on delivering what the business has defined to be the most valuable functions, and anything that does not contribute to that value is either removed, or, most obviously in XP, highlighted as a ‘chore’ which may have to be done but will have a measurable cost against the delivery of the overall project.

Differences of Focus

XP absolutely requires that the Product Manager has a clear view of what needs to be done to meet the business need. If this is not the case then the project ‘inception’ should not go ahead. IBM Design Thinking is an approach that can be used much earlier in the innovation process. A problem may have been identified, but the solution does not have to be clear for a Design Thinking session to go ahead - in fact it is often better if the team are not already over-focused on a solution, so that more outlandish ideas can be shared and potentially lead to a more radical end result than previously imagined. XP, as the name suggests, has a strong focus on delivering working software. It’s not untypical for an XP project to be delivering working code on the day after the project inception, and because the team is working from a prioritised backlog, every accepted story should be delivering value.

Test Driven Development (TDD) ensures that only the code that is needed to satisfy the story is written, and no code is written without tests. Concepts such as continuous integration, automated builds and blue/green deployment ensure that every new piece of code delivered makes its way into production through a standardised, repeatable process, and any problems introduced are immediately flagged to the engineers who wrote the code so they can fix it or back it out. In this way, new function can be rapidly tried out by selected users and any modifications needed can be quickly identified and prioritised using bug reports or new stories in the backlog.

Design Thinking is much more focussed on the user experience, and therefore various techniques for avoiding writing code that doesn’t meet the requirements are employed instead of the XP approach of quickly producing something and then changing it. Low fidelity prototypes, sometimes created on paper, are used to facilitate early user testing, and then storyboards, wireframes and mockups can be used in subsequent playbacks before the engineers start writing any UI code. Of course this doesn’t mean that the engineers are not involved until late in the process - far from it. They should be part of the Design Thinking process right from the beginning so they can give their views on feasibility of the ideas being suggested, as well as providing ideas of their own. The engineers also need to identify any ‘technical foundation’ required to support the UI designs being created, and they may well decide to start building some of this underpinning before the final UI designs have been agreed.

Working Together

IBM Design Thinking and XP will be used in different doses in different projects, but there is a clear synergy between them which will be exploited in the IBM Bluemix Garage. By merging Design Thinking with XP, it should be possible to come up with highly innovative solutions to client problems, and to then very rapidly turn them into working systems to use as proofs of concept or even live production applications. The starting point is to have a team assembled with the right combination of disciplines, either committed to the project or easily accessible to the project team, with the importance of the work to the business firmly agreed and prioritised for the key contributors. The engineers and product management must have day-job time dedicated to the project, even if it is not 100% of that time. Design expertise also needs to be readily available, especially early on in the Design Thinking sessions, and then for performing the evaluation of user feedback in order to assess the likely success of the solution and suggest refinements. Empowered stakeholders in the business, who ultimately will own the solution and reap its benefits, need to be closely involved in the project, or prepared to delegate the responsibility to the product manager.

Even in the latter case, they should be regularly updated on progress and have opportunities to give feedback through demos or access to early system deliveries. Expert users are also key. Ideally they are part of the project team, but if not, they need to be identified and methods worked out for engaging them regularly to get feedback. In terms of the process itself, once the Design Thinking sessions have successfully completed a Hills Playback, it should be possible for the team to attempt an inception in order to identify the Activities, Stories and Risks that the XP process needs in order to produce a Pointed Prioritised Backlog for the engineers to start work from. The Hills themselves will have directly identified the Actors, but if not enough is understood yet about what needs to be built in order to do an inception, the team could continue with Design Thinking to get to the Playback Zero, where a low-fidelity prototype should be available and a lot more detail has been thrashed out. Beyond this point it should be possible for working code based on agreed designs to start to be built, and for the project as a whole to reduce the time between subsequent playbacks, or maybe even the number of playbacks needed, because XP is driving the production of working code at a much faster rate. Another side effect could be that more granular playbacks can be achieved, possibly even with the concept of ‘continuous playbacks’, because the XP process is able to deliver small increments of new capability against each Hill in turn without a large overhead in producing code releases.

Summary

It has always been a stated intention of the Bluemix Garage that we would use the disciplines of IBM Design Thinking and Extreme Programming to deliver rapid value to clients exploiting cloud technology centred on Bluemix. Having gained some experience of this through our early projects, and after engaging acknowledged experts in both philosophies to discuss the similarities and differences, it is only now that we are starting to appreciate how the practicalities of this will work going forward.

The good news is that the assertion that IBM Design Thinking and Extreme Programming are complementary appears to be sound, and we can now look forward to taking these approaches into varied projects in the future and discovering more about what the ‘Bluemix Garage Way’ should look like.

Acknowledgements

Thanks to Steve Haskey from the IBM Design team in Hursley and Colin Humphreys, CEO of CloudCredo for their engagement with this discussion and valuable views based on their vast experience of IBM Design Thinking and Extreme Programming, respectively.

10 Mar 2015, 15:52 Bluemix Continuous Delivery XP

Blue-Green Deployment

One of our core principles is to deliver stories with the highest business value from day one. To deliver something means to put it in front of our customer. Regardless of whether our customer decides to make it public or not, it is available in production from the very first cf push. We strongly believe that having a partially complete solution in production from day 1 is more valuable than having the full solution delivered later.

All Bluemix Garage projects are continuously and automatically deployed into production, regardless whether they are internal tools or software components that we develop for customers.

One of the goals with any production environment is maintaining high availability for your application. The CloudFoundry platform helps with this by giving you the ability to scale your application horizontally to prevent downtime. There is another availability challenge when continuously deploying applications - if you cf push an existing application in production, your instances must be restarted to update your code. Not only does this inevitably introduce downtime, but you are left without a fallback if something goes wrong.

Blue-Green deployment is not a new concept by any means, nor is it tied to any particular platform. In short, the idea is:

  1. Deploy a new instance of your application (leaving your current “green” environment untouched).
  2. Run smoke tests to verify the new environment, before taking it live.
  3. Switch over production traffic, keeping the old environment as a hot backup.

As we are deploying straight to production, using blue-green is imperative.

The current status quo

We discovered these two tools:

  • Buildpack Utilities from the IBM WAS team — An interactive Bash script that will take you through the process
  • Autopilot from Concourse — A plugin for cf (We really like that it’s built into the tooling)

Both tools in fact do roughly the same thing:

  1. Rename the existing app while keeping all routes in place
  2. Push the new instance, which will automatically map the correct routes to it
  3. Delete the old instance

There are three problems with this approach:

  • They do not support running tests against the new app before making it available on the production route.
  • They immediately delete the old app, making it impossible to revert to the old environment if something goes wrong.
  • There is a chance that the old app could be serving requests at the time it is deleted.

While Blue-Green is quite a simple concept, we discovered several nuances in how we wanted to approach the problem, so we decided to build our own.

Our Take

Our continuous delivery environment already consisted of a couple of shell scripts that provide app lifecycle management on top of the cf cli. We decided to build an extension to these scripts to manage Blue-Green with this process:

  1. Push new app instance to Bluemix with a different name
  2. Run end-to-end integration tests against the new app instance using the auto-generated route
  3. Map production routes (including domain aliases) to the new app instance
  4. Unmap routes from previous app instance
  5. Clean up all older app instances, leaving only the previous instance running as a hot backup

If any step fails then the CI server immediately gives up. We don’t want to continue the deployment if anything goes wrong!

The crucial difference is that we do not touch the production app until we know we have a good instance to replace it with. This means using a new app name for every deploy — as easy as passing a specific application name when calling cf push. We chose to append the date and time for every deploy, but you could also use the git hash or CI build number.

After the new instance has been tested (with simple smoke tests, in our case), the tool will then cf map-route the production route to the application. Once the new instance is handling production traffic, cf unmap-route can be used to disconnect the old instance from the world, leaving it untouched until the next deploy.

Deploying with a new application name each time means we always run in a clean environment. However, without cleaning up, we’d be left with a lot of redundant, idle applications consuming resources. This is easy to deal with: we simply delete all previous instances except the new app and the instance we just unmapped.
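
Putting it all together, here’s a minimal sketch of the flow in Node.js terms. It isn’t our production script (the real one is open sourced below, in the update); the app name, domain, and smoke-test script are illustrative:

```javascript
// Illustrative sketch of the deploy flow, driving the cf CLI from Node.js.
// Assumes it runs from the app directory and that ./smoke-test.sh exists.
const { execSync } = require('child_process');
const run = (cmd) => execSync(cmd, { encoding: 'utf8' });
const cf = (args) => run(`cf ${args}`);

const base = 'donate';          // hypothetical app name
const domain = 'mybluemix.net'; // hypothetical production domain
const newName = `${base}-${new Date().toISOString()
  .replace(/[:.]/g, '-').toLowerCase()}`;

// Instances deployed so far; the timestamped names sort oldest-first.
const existing = cf('apps').split('\n')
  .map((line) => line.split(/\s+/)[0])
  .filter((name) => name.indexOf(`${base}-`) === 0)
  .sort();

// 1. Push under a unique name; the name doubles as a temporary hostname.
cf(`push ${newName} -d ${domain} -n ${newName}`);

// 2. Smoke test against the auto-generated route; a failure throws and aborts.
run(`./smoke-test.sh https://${newName}.${domain}`);

// 3. Map the production route to the tested instance.
cf(`map-route ${newName} ${domain} -n ${base}`);

// 4. Unmap the previous instance, keeping it around as a hot backup.
const previous = existing[existing.length - 1];
if (previous) cf(`unmap-route ${previous} ${domain} -n ${base}`);

// 5. Delete everything older than the hot backup.
existing.slice(0, -1).forEach((name) => cf(`delete ${name} -f`));
```

If any command exits non-zero, execSync throws and the script stops, which is exactly the give-up-immediately behaviour we want from the CI server.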

Blue, Green, Cloud.

Arguably what we’re doing here isn’t strictly Blue-Green deployment. The name “Blue-Green” stems from there being two separate environments on physical hardware, each of which could run an instance of the application. It wasn’t possible to throw away the old hardware and start with a completely fresh system for each install so it was necessary to flip between them.

Developing for a cloud environment such as IBM Bluemix means that creating a whole new application environment is a simple cf push away, and deleting one is just as easy. With such a cheap way to create whole new environments, it just doesn’t make sense to keep flipping between two apps when you can always start afresh.

We’ve been running several applications in this configuration for months now and we’re really pleased with what we’ve built. Everyone has different needs; this works for us, but it’s just one way to approach the problem.

Update 07/09/2015

We have recently open sourced our Blue-Green deploy script. It is now available as a Cloud Foundry CLI plugin. The source code and documentation can be found here:

https://github.com/bluemixgaragelondon/cf-blue-green-deploy

06 Mar 2015, 11:15 Hugo XP Continuous Delivery

Continuously delivered to Bluemix, served by Hugo

This website runs on Bluemix and is built with Hugo, a static site generator written in Go. Pages are served directly by hugo server, which thinly wraps the http package from Go’s stdlib.

The entire website setup is versioned by git including the Hugo binaries, continuous delivery scripts, content & assets.

Every local commit gets pushed to a Jazz Hub git repository. Our continuous delivery environment, which runs on Jenkins, triggers a new build every time there is a new commit to this repository. This translates to pulling the latest version from Jazz Hub, running cf push, checking that the new version of the website is up and running, and finally making it available to the public. This approach is widely known as zero-downtime deployment; we call it Blue-Green deployment, since that describes the process more accurately.

Why Hugo?

Having considered WordPress, KeystoneJS & Ghost, we settled on Hugo for the following reasons:

  • a platform-specific binary file is the only dependency
  • no database required
  • fast feedback when creating new content
  • fits perfectly with our way of working (version control, instant feedback, Continuous Delivery)

Why Bluemix?

Bluemix meant we could try other popular alternatives, which needed a PHP or Node.js runtime, with no effort. Rather than spending time on programming language or database setup, we could focus on finding the solution which best met our needs. Bluemix Containers came in handy while evaluating Ghost & WordPress. As for KeystoneJS, a new instance was a simple cf push away.

Once we had settled on Hugo, we had continuous delivery and Blue-Green deployment set up within minutes. We can scale to as many app instances as we need with a single command. We went with the Go buildpack initially, then settled on a generic CloudFoundry binary buildpack as it made our deployment 50% quicker.

Why Continuous Delivery?

All Bluemix Garage projects are continuously delivered, without exception. Whether it’s a utility such as our Bluemix Blue Green Deployment, cf cli plugins, our website or any projects which we deliver for our clients, they are all developed using Extreme Programming practices.

No code is written before there is a failing test. We keep the cost of change low by continuously refactoring. We always pair program. We keep our commits small and constantly push to a central repository. Our Continuous Delivery environment pushes directly to production. We keep all feedback loops as small as possible so that we can quickly learn what works and what doesn’t. There is no better way of consistently delivering working software of the highest quality.