We’re just starting to use chef for managing nagios configs - so far, so
good.
What we would like to eventually use it for is to deploy all of our web apps
and services. A specific use case we have is to be able to serially
deploy to hosts/instances within a cluster behind a load balancer. For
example, we would take one host out of rotation, deploy to it, warm it up
and/or smoke test, and when warming completes and/or if smoke tests pass
then put it back into rotation and move on to the next host in the cluster.
How best to achieve that?
Thanks.
Hi Rob,
On Tue, Nov 2, 2010 at 10:38 AM, Rob Guttman robguttman@gmail.com wrote:
We're just starting to use chef for managing nagios configs - so far, so
good.
Nice
What we would like to eventually use it for is to deploy all of our web apps
and services. A specific use case we have is to be able to serially
deploy to hosts/instances within a cluster behind a load balancer. For
example, we would take one host out of rotation, deploy to it, warm it up
and/or smoke test, and when warming completes and/or if smoke tests pass
then put it back into rotation and move on to the next host in the cluster.
How best to achieve that?
Right now there isn't any automated tooling to support this sort of
deployment orchestration, but chef makes this fairly easy to manage in
a by-hand fashion.
Suppose you had a role for your app. A first pass at serial deploy
would be:
- Update a data bag or role describing what version should be deployed (see the sketch after this list).
- Log in to the load balancer, take server 1 out of the config, and restart the load balancer.
- Run chef-client on server 1. Test it. Put it in rotation and take
out server 2.
[now repeat for each remaining server]
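To make the first step concrete, here's the kind of thing the app recipe could do once the data bag has been updated. This is only a sketch: the data bag name ("apps"), item ("myapp"), attribute names, and paths are all made up for illustration.

  # Read the desired release out of a data bag item and deploy it.
  app = data_bag_item("apps", "myapp")

  deploy_revision "/srv/myapp" do
    repo     app["repo"]       # e.g. a git URL
    revision app["version"]    # the value you bump in the first step
    user     "deploy"
    action   :deploy
  end

Bumping "version" in the data bag and then running chef-client host by host is what gives you the serial rollout.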
You could go a bit further by having the recipe that generates the
load balancer config check for an "active" attribute:
web_apps = search(:node, "role:myapp AND active_app:true")
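As a sketch of how the load balancer recipe could consume that search (haproxy here is just an example; the template and variable names are made up):

  service "haproxy" do
    supports :restart => true
    action [:enable, :start]
  end

  # Only nodes currently marked active end up in the generated config.
  web_apps = search(:node, "role:myapp AND active_app:true")

  template "/etc/haproxy/haproxy.cfg" do
    source "haproxy.cfg.erb"
    variables(:backends => web_apps)
    notifies :restart, resources(:service => "haproxy")
  end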
Then you could take a node out of rotation by editing the node with
knife, setting active_app:false, and running chef-client on the
load-balancer. You could also use this attribute to decide where to
deploy next:
knife ssh 'role:myapp AND active_app:false' 'sudo chef-client'
If you decide to experiment with this approach, you should be aware
that there is some lag (about 1 minute on the Opscode Platform)
between saving a node and being able to search for updated attribute
values.
Rob,
I know this isn't exactly what you're looking for, but figured I'd toss it
out there:
It's possible to do something like this using Unicorn and nginx, without
removing hosts from the proxy/load balancer. This assumes that you do "smoke
testing" in a staging environment prior to deploying to production. The
removal/warm up/addition process is no longer necessary: unicorn is capable
of rolling restarts, and nginx can skip hosts which aren't ready using the
proxy_next_upstream param.
James
This is what we do.
Getting the particular invocation in the unicorn config file right took a number of attempts. In particular, set Unicorn::HttpServer::START_CTX[0]. The rest of the config file is pretty much a direct lift from GitHub's 'seamless with unicorn' post.
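Something along these lines, roughly (the paths and worker counts below are made up, a Capistrano-style "current" symlink is assumed, and the before_fork block is essentially the old-master handoff described in that post):

  app_root = "/srv/myapp/current"

  # Pin the unicorn binary to the "current" symlink so a USR2 re-exec
  # picks up the newly deployed release rather than the old path.
  Unicorn::HttpServer::START_CTX[0] = "#{app_root}/bin/unicorn"

  working_directory app_root
  worker_processes 4
  listen "#{app_root}/tmp/sockets/unicorn.sock", :backlog => 64
  pid "#{app_root}/tmp/pids/unicorn.pid"
  preload_app true

  before_fork do |server, worker|
    # As the new master's workers come up, ask the old master to quit.
    old_pid = "#{server.config[:pid]}.oldbin"
    if File.exists?(old_pid) && server.pid != old_pid
      begin
        Process.kill("QUIT", File.read(old_pid).to_i)
      rescue Errno::ENOENT, Errno::ESRCH
      end
    end
  end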
Having said that, I can appreciate wanting something more complex.
-ash
On 3 Nov 2010, at 21:56, James Sulinski wrote: