omnibus_updater and AWS bootstrapping


#1

Anyone use this cookbook with dynamically bootstrapped AWS nodes? I
basically have a script, run from /etc/rc.local, that looks at Elastic
Beanstalk metadata to determine which environment a node should be placed
in, writes the client.rb config, and executes chef-client -j
/etc/chef/bootstrap.json.
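Roughly, the bootstrap is this shape (a simplified sketch, not my actual
script: the server URL and environment value are stand-ins, and the real
script writes to /etc/chef rather than /tmp):

```shell
#!/bin/sh
# Simplified sketch of the /etc/rc.local bootstrap. The real script derives
# NODE_ENV from Elastic Beanstalk metadata; here it is just a variable with
# an illustrative default. /tmp is used so the sketch runs unprivileged --
# the real path is /etc/chef.

CHEF_DIR="/tmp/chef-bootstrap-demo"          # real script: /etc/chef
CHEF_SERVER_URL="https://chef.example.com"   # illustrative
NODE_ENV="production"                        # real script: derived from EB metadata

mkdir -p "$CHEF_DIR"

# Write a minimal client.rb pointing the node at the derived environment.
cat > "$CHEF_DIR/client.rb" <<EOF
chef_server_url "$CHEF_SERVER_URL"
environment     "$NODE_ENV"
EOF

# First run applies the bootstrap run list from the -j attributes file.
if command -v chef-client >/dev/null 2>&1; then
  chef-client -c "$CHEF_DIR/client.rb" -j "$CHEF_DIR/bootstrap.json"
fi
```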

My concern with omnibus_updater is that if I bump the chef-client version,
bootstrapping nodes will hit omnibus_updater and run into the chef killer.
That marks the run as failed even though the upgrade itself succeeded,
leaving the node with an empty run list and other badness.

I know there is a kill_chef_on_upgrade setting, but the docs seem to
indicate that actually using it is a Very Bad Idea. Can anyone who knows
more about it comment? Alternatively, should I refactor my bootstrap
script to check the exit code of chef-client and re-run it on failure?
That won't help if the Chef server (or our VPC VPN) is having issues, but
it would solve the omnibus-update problem.
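The retry idea would look something like this (a sketch: max_tries and the
sleep interval are arbitrary, and the function takes the command as
arguments rather than hard-coding chef-client):

```shell
#!/bin/sh
# Sketch of retrying chef-client when the first run exits nonzero -- e.g.
# because omnibus_updater's chef killer stopped it mid-upgrade. A re-run on
# the freshly installed version should then apply and save the run list.

retry_chef() {
  # usage: retry_chef <max_tries> <command...>
  max_tries="$1"; shift
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo "chef-client failed after $max_tries attempts" >&2
      return 1
    fi
    echo "nonzero exit (possibly the chef killer); retrying..." >&2
    sleep 5   # illustrative back-off
  done
}

# Production usage (illustrative):
# retry_chef 3 chef-client -j /etc/chef/bootstrap.json
```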

Thoughts? I'm looking for a best practice here, other than baking a new
AMI any time I really need to update the version… baking the AMI is easy
(I've got it pretty well scripted at this point), but then I've got to
copy it across regions and update a dozen or two EB environments to use
the new one.


~~ StormeRider ~~

“Every world needs its heroes […] They inspire us to be better than we
are. And they protect from the darkness that’s just around the corner.”

(from Smallville Season 6x1: “Zod”)

On why I hate the phrase “that’s so lame”… http://bit.ly/Ps3uSS


#2

Morgan,

We use kill_chef_on_upgrade liberally in our prod environment, though with
longer-lived nodes than Elastic Beanstalk's.

The idea there is that if you started on Chef version X but actually want
Chef version Y, then once the updater has installed the new version, the
current run should stop, because presumably there's a reason you wanted
the other version in the first place.

So the Chef run gets killed off, and a new instance of Chef using the new
version is started.

I'd ask what's preventing you from updating your AMI bootstrap
provisioning script to install the desired version directly?

After all, adding a repo and pinning a package name and version in YAML is
pretty straightforward on Elastic Beanstalk, is it not?
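Something along these lines in .ebextensions, for instance (a sketch: the
filename and version string are illustrative, and it assumes the Chef
omnibus RPM is available from a yum repo you've already configured):

```yaml
# .ebextensions/01_chef_version.config  (hypothetical filename)
packages:
  yum:
    chef: ["11.12.8-1.el6"]   # pin the desired chef-client version (example)
```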

-M

On Wed, Jun 11, 2014 at 6:45 PM, Morgan Blackthorne
<stormerider@gmail.com> wrote:



#3

If the first chef run is killed by omnibus_updater (or fails for any other reason), the node's run list isn't saved on the Chef server, and subsequent runs won't apply any recipes.

On Wednesday, June 11, 2014 at 5:07 PM, Mike wrote:
