Managing nodes that aren't servers

I am currently working on a project to implement configuration management in our environment. We have over 300 staff and public access machines (we are a public library system). I am in a quandary as to how to organize the configurations. I have gone through several of the tutorials on learn.chef.io, read many web pages, and watched several YouTube videos. I would like to follow best practices, but I can’t seem to figure out what those best practices would be in my situation.

We have the following hierarchy of systems:

  • Staff PCs (win7)
      • Front-line staff PCs
      • Back-end staff PCs
      • Admin staff PCs
  • IT PCs (win7, win10, macOS)
  • Patron PCs
      • Internet PCs (win7)
      • Catalog PCs (debian)
      • Reservation machines (win7)

There is some overlap, so having base cookbooks and wrapper cookbooks seems like a logical choice here. I would also like to have dev, testing, and production environments in this infrastructure.

I guess I really just need help planning out how this should all go together. Does it seem like avoiding Chef roles would be a good idea in my situation? Should I create cookbooks for each type of machine (front-line staff PC, catalog PC, etc.)? Is Berkshelf the best way to go about creating these cookbooks, or are Policyfiles the new goodness?

Any help would be greatly appreciated. I can see a lot of good potential here, and I am learning a lot going through the tutorials and such, but I seem to be stuck in how to proceed at this point.

Thanks in advance,
Chris

This might help you a bit; have a read about how to create “production, staging, testing, and development environments”: https://docs.chef.io/environments.html

Based upon your breakdown of staff and patron machines, one suggestion would be to create “base” cookbooks for each OS (Windows, Linux, macOS) that hold the configuration common to all machines running that OS. Then create the next layer of cookbooks as needed for any additional configuration that is common to a particular subset of machines, and so on. Depending on the desired configuration, several cookbooks may apply to a single machine.

In this scenario, create a role for each of the differing configurations and layer the cookbooks in the role’s run_list, going from the most general (the base OS) to the most specific. When bootstrapping a machine, just specify its role.
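As a sketch of that layering (the role and cookbook names here are made up for illustration), a Ruby role file might look like this:

    # roles/frontline_staff_pc.rb -- hypothetical role layering cookbooks
    # from most general to most specific
    name "frontline_staff_pc"
    description "Front-line staff workstation"
    run_list(
      "recipe[base_windows]",     # common to every Windows machine
      "recipe[staff_pc]",         # common to all staff PCs
      "recipe[frontline_staff]"   # specific to front-line staff machines
    )

Bootstrapping then only needs the role in the run list, e.g. knife bootstrap <address> -r 'role[frontline_staff_pc]', plus whatever connection options (WinRM or SSH) the node requires.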

There are other options as well, but IMHO this seems the most straightforward approach, especially for those new to configuration management.

Thanks! I will take a look at the link you posted.

Thank you for the advice; it makes a lot of sense to me. My only question would be about the use of Chef roles. They seem to be rather maligned in what I’ve read, though I’m not entirely sure why. I think part of the issue was that they are not versioned?

The most often quoted reason to use Policyfiles instead of roles is exactly that: a role can be modified by anyone with appropriate rights to the Chef Server, and there is no versioning on the role itself. A role in Chef is really just a convenient way of grouping cookbooks to be applied to nodes that fulfill a specific “role” in the environment. When not using Policyfiles, the way to ensure that the desired cookbook versions are applied is through environment pinning.

The real problem is not with the use of a role, but with the ability to edit a cookbook and re-upload it to the Chef Server without changing the version in the metadata.rb file. The best defense is to always commit changes to source control before uploading to the Chef Server, and never to overwrite an existing cookbook version when uploading; always bump the version in metadata.rb. Both Berkshelf and knife will refuse to overwrite a frozen cookbook version unless explicitly forced (and Berkshelf freezes versions on upload by default) for just this reason. Don’t be tempted to force it “just this once” because the change was trivial, as this results in a situation where cookbook version 1.2.3 is no longer the same as version 1.2.3 was before the overwrite.
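To make that concrete, even a trivial change gets a version bump in metadata.rb before upload (the cookbook name here is made up):

    # metadata.rb of a hypothetical cookbook
    name    'base_windows'
    version '1.2.4'   # bumped from 1.2.3, even though the change was trivial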

Whenever there will be ongoing development of, or updates to, cookbooks that have nodes in production, it is strongly advised to ensure that all production nodes are in a production environment and that the cookbooks in use are all version pinned there. This prevents cookbooks that are still being developed from accidentally being run on production nodes. Testing of new cookbooks, or of updates to existing ones, should be done on nodes placed in a development (or similarly named) environment without the version pins.
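A minimal sketch of such a production environment, with hypothetical cookbook names and versions, could look like this in the Ruby DSL:

    # environments/production.rb -- pin exact cookbook versions for production nodes
    name "production"
    description "All staff and patron machines that are in production"
    cookbook_versions(
      "base_windows"    => "= 1.2.4",
      "staff_pc"        => "= 2.0.1",
      "frontline_staff" => "= 0.3.0"
    )

The development environment would simply omit the cookbook_versions pins.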

The great advantage of Policyfiles is that when you create a Policyfile, all of the cookbooks used are bundled together into a single immutable artifact that is used on each chef-client run. This absolutely ensures that the policy you defined is the one deployed. There is a drawback, however: if a single recipe in a single cookbook needs to be updated to resolve a code problem or to mitigate a security vulnerability, every Policyfile that uses the changed cookbook has to be re-generated. When using roles, updating the affected cookbook and updating the version pin are all that is necessary.
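For comparison, a Policyfile for one machine type (again, all names are hypothetical) bundles the run list and its cookbook sources into a single lockable artifact:

    # Policyfile.rb -- hypothetical policy for front-line staff machines
    name "frontline_staff_pc"
    default_source :supermarket      # resolve community dependencies from the Supermarket
    run_list "base_windows::default", "frontline_staff::default"
    cookbook "base_windows", path: "../base_windows"        # local cookbook
    cookbook "frontline_staff", path: "../frontline_staff"  # local cookbook

Running chef install generates the Policyfile.lock.json, and chef push <policy_group> publishes that locked bundle to the Chef Server; any change to a cookbook means repeating those two steps for every policy that uses it.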

The bottom line is that the role/environment and Policyfile patterns both have merit, and the choice is ultimately yours. Policyfiles are immutable, which guarantees that what you defined is exactly what runs. Environment pinning with roles can be a bit quicker to respond to changes needed to mitigate new threats, but it requires a bit more wariness to ensure that code changes are always versioned.

I hope this rather long-winded reply does more to clarify than to confuse further.

Your explanation was very helpful. I think that in our environment, using roles with cookbook versions pinned in the environment is the way to go. We are planning on using git to version-control our cookbooks anyway. Thanks for the direction!

One comment you made brought up another question. Knife vs. Berkshelf for cookbook creation. If I am not planning on pushing any cookbooks public (I thought this was something Berkshelf did better, maybe not?), would knife be a better option?

Thanks,
Chris

For cookbook creation, I use the Chef cookbook generator built into the ChefDK. Just as an FYI, I also use the one-cookbook-per-git-repo approach rather than a single big chef-repo; I find that it works better for managing changes to cookbooks independently.
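For reference, generating a cookbook is a single command (the name is just an example):

    # run once per cookbook; each generated cookbook lives in its own git repo
    chef generate cookbook base_windows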

The best reason to use Berkshelf to manage uploads to the Chef Server is that, when you start including Supermarket cookbooks in your own workflows, you can simply declare the dependency in the metadata.rb file of your cookbook.

Knife also has the ability to upload cookbooks, but dependency resolution is left to you.

A berks install will then gather the dependent cookbook, along with all of its dependencies, and a berks upload will upload your cookbook along with the dependent Supermarket cookbook and its dependencies to the Chef Server in one smooth motion.
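As a sketch of that flow (the dependency and version constraint are only examples), the cookbook declares what it needs in metadata.rb, and the Berksfile tells Berkshelf where to resolve it from:

    # metadata.rb
    name    'staff_pc'
    version '2.0.1'
    depends 'windows', '~> 7.0'   # example Supermarket dependency

    # Berksfile
    source 'https://supermarket.chef.io'
    metadata   # read the dependency list from metadata.rb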

Since you should not edit Supermarket cookbooks (other than forking one to contribute a fix back to the Supermarket), any Supermarket cookbook you choose to use should be wrapped in a cookbook of your own creation that simply includes the Supermarket recipe(s) you wish to use. You may then set attributes as required for your own environment, without fear that a change in the Supermarket cookbook will break any of your work.
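A minimal wrapper recipe along those lines, with entirely hypothetical cookbook and attribute names, might be:

    # staff_pc/recipes/default.rb -- wrap the community cookbook instead of editing it
    node.default['community_cookbook']['some_setting'] = 'library-specific value'
    include_recipe 'community_cookbook::default'

The wrapper’s metadata.rb would also declare depends 'community_cookbook' so that Berkshelf pulls it in as described above.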