I have two instances of Chef server running, one for development, one for production. But you can probably collapse them into one if you’d like.
I second the idea of using roles.
I have the following model:
Role [prod_env] - sets up DNS, NTP, etc. Pretty minimal stuff
Role [web] - which has run_list:
And each node in “web” cluster has in its run_list:
So, to reiterate (since the above example may be a bit confusing), I have a role for each type of host (web cluster, api cluster, database cluster, etc.)
And each role’s run_list is a list of roles or recipes. This also allows me to modify “default_attributes” and “override_attributes” for each type of host as necessary.
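As a rough sketch of the model described above (role and recipe names here are hypothetical, not taken from the thread), the layering might look like this in Chef’s Ruby role DSL:

```ruby
# roles/prod_env.rb -- baseline role applied to every production node
# (recipe names are illustrative)
name "prod_env"
description "Production baseline: DNS, NTP, etc. Pretty minimal."
run_list "recipe[resolver]", "recipe[ntp]"

# roles/web.rb -- one role per host type; its run_list layers other
# roles and recipes, and it can tune attributes for that host type
name "web"
description "Web cluster"
run_list "role[prod_env]", "recipe[apache2]"
default_attributes "apache" => { "listen_ports" => ["80"] }
override_attributes "apache" => { "keepalive" => "off" }
```

Each node in the “web” cluster would then carry just `role[web]` in its own run_list, picking up the baseline role and any attribute tuning for free.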
From: Brian Akins [mailto:firstname.lastname@example.org]
Sent: Tuesday, October 19, 2010 8:04 AM
Subject: [chef] Re: Re: Organizing multiple “clients” and cookbooks
On Tue, Oct 19, 2010 at 9:10 AM, Seth Chisamore <email@example.com> wrote:
The DRYest approach would be common cookbooks with individual roles per client. The roles could override unique client attributes: things like apache, tomcat, and mysql tuning parameters, or the ports apache runs on. Since each role also has its own run_list, it would let you account for the different groupings of software each client may have (i.e. some clients use apache only, no tomcat).
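A hedged illustration of that suggestion (the client name, ports, and attribute keys are invented for the example): shared cookbooks stay generic, while a thin per-client role trims the run_list and overrides the tuning parameters:

```ruby
# roles/client_a.rb -- hypothetical per-client role on top of common cookbooks
name "client_a"
description "Client A: apache only (no tomcat), custom port, mysql tuning"
run_list "recipe[apache2]", "recipe[mysql::server]"  # note: no tomcat recipe here
override_attributes(
  "apache" => { "listen_ports" => ["8080"] },        # client-specific port
  "mysql"  => { "tunable" => { "max_connections" => "500" } }
)
```

The per-client differences live entirely in the role, so the cookbooks themselves never need client-specific branches.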
Unfortunately, I’m not sure that will work. Each of our “clients” is probably larger than the average chef user’s complete infrastructure in size and complexity. Imagine several large web companies with a single operations group…
For example, client “A” may have 4 different application “stacks”:
- apache -> tomcat -> mysql
- varnish -> apache -> NFS mounts
- apache + wsgi python app -> memcache + mysql
- varnish -> proprietary app server -> who knows what
and also have development, reference, and production environments for each of those.
Clients B and C may have some of the same components, but in a completely different layout.
Granted, we have been standardizing more, but this is the unfortunate place we find ourselves in now.
Our first deployments will be for some of the more “standardized stacks” we have, so we have some time to figure it out, I hope.