Habitat channels in a hybrid environment

Over the past few months we have created many Habitat packages with the intention of fully automating the provisioning and update of Windows and Linux servers. However, our first offering is going to be a hybrid of our legacy processes and our strategic direction.

We would like to leverage as much of the Habitat automation we have built as possible, while still allowing integration with existing processes so that the current support teams can gain confidence in the automation.

Habitat will be used to help provision new servers but there is a strong concern about how these servers will be updated.

Several ideas have surfaced, and we are trying to decide which tactic best bridges the gap between our legacy processes and our strategic direction. The following three ideas are being proposed: environment-tiered channels, machine-based channels, and scripted updates (no channels).

Three assumptions are common to all proposed solutions:

  • Offering definitions are stored in a common key-value store that is used during the provisioning phase

  • Once a Habitat package is ready to be deployed for any offering, it is promoted to the stable channel

  • Once a Habitat package is ready to be provisioned for an offering, a channel is added (‘offering_name’ for example)

Tiered Channels

Package channels are based on offering and environment. For example: offering_name-dev, offering_name-qa, offering_name-prod

  • The tier (dev/qa/prod) and the Habitat packages selected by the user during the provisioning phase are copied to the server in the form of a JSON config file

  • The provisioning script appends the tier to the channel on a per-package basis: hab svc load origin/package --channel offering_name-dev --strategy at-once

  • The deployment team tags the proper channel onto packages on the build server when ready to deploy
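The provisioning step above could be sketched roughly as follows. The offering name, tier, and package ident are illustrative placeholders; a real provisioner would read them from the JSON config copied to the server. The script only prints the command it would run (a dry run), since composing the channel name is the interesting part.

```shell
#!/bin/sh
# Sketch of the tiered-channel provisioning step (names are illustrative).
offering="offering_name"   # would come from the JSON config
tier="dev"                 # dev | qa | prod, chosen by the user at provision time

# Compose the per-offering, per-environment channel name.
channel="${offering}-${tier}"

# Dry run: print the command the provisioning script would execute.
echo "hab svc load origin/package --channel ${channel} --strategy at-once"
```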

Tiered Concerns

  • Deployment teams may lack flexibility required to update servers in a way consistent with current processes

  • Falling back to the legacy update methodology may require unloading Habitat services

Machine Channels

Package channels are specific to each server.

  • The Habitat packages selected by the user in the provisioning phase are copied to the newly provisioned server in the form of a JSON config file

  • The provisioning script:

      • Installs Habitat packages from a channel that relates to the offering: hab pkg install origin/package --channel offering_name

      • Loads each service with a channel that matches the computer name: hab svc load origin/package --channel $env:computername --strategy at-once

  • Deployment team can update infrastructure software on dynamically created groups of servers by tagging a package on the build server with channels that map to each server name
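A minimal sketch of the machine-channel provisioning step, again as a dry run. The package and offering names are placeholders; on Windows the machine name would come from $env:computername, and hostname is used here as the Linux analogue.

```shell
#!/bin/sh
# Sketch of the machine-channel provisioning step (names are illustrative).
offering="offering_name"
machine="$(hostname)"

# Install from the offering channel, but load from a channel named after
# this machine, so updates can be targeted per server.
install_cmd="hab pkg install origin/package --channel ${offering}"
load_cmd="hab svc load origin/package --channel ${machine} --strategy at-once"

echo "$install_cmd"   # dry run: print rather than execute
echo "$load_cmd"
```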

Machine Concerns

  • Packages could potentially have thousands of channels

  • The Builder GUI may be slow or difficult to use

  • Build server could have performance issues

  • This level of granularity is not part of our strategic direction

Scripted Updates

Channels are only used at provision time and updates are handled by scripts.

  • The provisioning script:

      • Installs Habitat packages from a channel that relates to the offering

      • Loads each service with an update strategy of ‘none’: hab svc load origin/package --channel offering_name --strategy none

  • The deployment team can update infrastructure Habitat packages by executing hab pkg install and hab svc load via scripts
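One plausible shape for such an update script is sketched below as a dry run. With --strategy none the Supervisor never self-updates, so the script installs the newer package and then re-loads the service; the package and channel names are placeholders, and whether you unload/reload or use a force flag depends on your Habitat version.

```shell
#!/bin/sh
# Sketch of a scripted update for services loaded with --strategy none.
# Names are illustrative; this only prints the commands it would run.
pkg="origin/package"
channel="offering_name"

# Pull the newer package from the offering channel, then re-load the
# service so the Supervisor picks up the new release.
install_cmd="hab pkg install ${pkg} --channel ${channel}"
reload_cmd="hab svc load ${pkg} --channel ${channel} --strategy none --force"

echo "$install_cmd"   # dry run
echo "$reload_cmd"
```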

Scripted Concerns

  • Custom scripting must be developed

  • Not using Habitat’s built-in deployment mechanisms

  • Iterating on non-strategic solutions is not ideal

Any suggestions would be appreciated!

Ok, let’s get the easiest bits out of the way first. Personally, the idea of machine channels makes my stomach turn a bit. That doesn’t mean it won’t work, but it seems like it would involve some extreme tedium and a LOT of scripting and glue code to make long-term management of it anything like reasonable.

The Scripted Updates strategy feels pretty closely in line with the way some of our users handle airgapped environments. As such, I think it could work, but it also might be more overhead than is necessary if you’re in a non-airgapped environment. That being said, you have to do what works for your organization.

Of the three, tiered channels seems like the most reasonable strategy, and it follows closest to the way we handle our own Habitat-based services today, with some pretty major differences. Specifically, we have separate rings for our dev/acceptance/prod environments, and the Supervisors in those environments behave differently. Dev and Acceptance are deployed with automatic update strategies and they watch unstable (for dev this is probably acceptable, but not necessarily so for acceptance). Once we’ve validated our new packages’ stability and functionality in dev, we can promote to stable. From there the shape of the pipeline is really up to the environment you’re working in: maybe it’s enough for your Acceptance nodes to auto-pull the latest stable packages while you do manual updates of your services in prod. Or maybe you do automatic updates of your services in prod, but you have a specific environmental channel prod watches (other than stable) so that you can cherry-pick what goes out.
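The promotion flow described above maps onto Builder CLI calls like the following. The fully qualified package ident and the environment channel name are placeholders, printed as a dry run.

```shell
#!/bin/sh
# Sketch of promoting a validated build, first to stable, then to an
# environment channel that prod watches. Ident and channel are placeholders.
ident="origin/package/1.0.0/20180101120000"

promote_stable="hab pkg promote ${ident} stable"
promote_prod="hab pkg promote ${ident} offering_name-prod"

echo "$promote_stable"   # dry run
echo "$promote_prod"
```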

I might suggest doing some reading and testing of some workflows that leverage our --group calls as well. I think you could end up with more flexibility in your deployment patterns if you do. We’ve also got some organizational flags that aren’t all 100% implemented yet that might be worth looking into: the --environment and --org flags.
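To illustrate the --group idea: loading the same package under two different group names forms two independent service groups that can be configured and updated separately. The group and channel names here are placeholders, and the commands are only printed.

```shell
#!/bin/sh
# Sketch of using service groups to split one package across environments.
# Group and channel names are illustrative; this is a dry run.
dev_cmd="hab svc load origin/package --group dev --channel offering_name"
qa_cmd="hab svc load origin/package --group qa --channel offering_name"

echo "$dev_cmd"
echo "$qa_cmd"
```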