General questions about provision/kickstart!


#1

Hello!

We are examining alternatives to our legacy CM infrastructure.
I’m a little puzzled about which part Chef plays.

I would look at Cobbler vs. Foreman for provisioning and Chef
vs. Puppet for CM. But is this correct? Could Chef do the
provisioning without Cobbler/Foreman? What is “best practice”
for provisioning servers?

We are mainly using Scientific Linux (a RHEL clone) and a
little bit of Ubuntu.

Any help would be appreciated.

regards

sven


#2

Provisioning machines is not explicitly done by Chef, but there are
tools that work well with Chef.

For simple PXE-booting of nodes, provisioning their OS, and installing
Chef, I have a set of cookbooks tied together called pxe_dust that is
currently Ubuntu-only, but there’s no reason other OSes couldn’t fit
into the system.
http://community.opscode.com/cookbooks/pxe_dust

Crowbar is a tool that provisions machines and is intended for rapidly
deploying large numbers of boxes and managing them with Chef.

Of course, if you’re using the cloud, there are a number of knife
plugins for automatically provisioning boxes (knife ec2, rackspace,
cloudstack, eucalyptus, etc.). A Cobbler cookbook probably wouldn’t
be too difficult to write if there isn’t one already.
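
As a concrete illustration of the knife-plugin route, spinning up a cloud node and bootstrapping Chef onto it is typically a single command. A sketch using knife-ec2 (the AMI ID, flavor, key pair, and run_list below are placeholders, not prescriptions):

```shell
# -I = AMI image, -f = instance flavor, -r = initial run_list,
# -S = AWS key pair name, -x = SSH login user (all values are examples)
knife ec2 server create -I ami-xxxxxxxx -f m1.small \
  -r 'role[base],recipe[apache2]' -S mykeypair -x ubuntu
```

The plugin creates the instance, waits for SSH, installs chef-client, and registers the node with the Chef server in one pass.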

Thanks,
Matt Ray
Senior Technical Evangelist | Opscode Inc.
matt@opscode.com | (512) 731-2218
Twitter, IRC, GitHub: mattray



#3

Responses in-line

On 12/21/2011 4:25 AM, Sven Sternberger wrote:

Hello!

We are examining alternatives to our legacy CM infrastructure.
I’m a little puzzled about which part Chef plays.
Chef/Puppet plays the role of idempotent configuration management in
your post-installation and post-deployment phases. In the cloud, Chef
and knife plugins may combine some of these steps with cloud provider
API calls, e.g. knife ec2, knife bootstrap.

  1. Bootstrap node with JeOS (Just Enough OS)
  2. Optional one-off configuration scripts in Kickstart or Preseed.
    • You may have single run tasks better handled with
      Kickstart/Preseed, like turning off unnecessary services on first boot.
      Maybe you need to turn some of those back on with Chef/Puppet, depending
      on a node’s role.
  3. Bootstrap (install) chef-client/puppet.
  4. Configure node with Chef/Puppet. Bring the node into its correct state.
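
Steps 2 and 3 above often amount to a few lines in the Kickstart %post section. A rough sketch, where the repo RPM, Chef server URL, and package name are placeholders for your own site (in late 2011 the client was commonly installed from a gem or a third-party RPM repo):

```shell
%post
# Install chef-client from a (hypothetical) RPM repository
rpm -Uvh http://repo.example.com/chef-repo-release.rpm
yum -y install rubygem-chef

# Minimal client config pointing at your Chef server (example URL)
mkdir -p /etc/chef
cat > /etc/chef/client.rb <<'EOF'
chef_server_url "http://chef.example.com:4000"
validation_client_name "chef-validator"
EOF

# Drop validation.pem into /etc/chef here, then have the client
# register itself and converge on first boot
chkconfig chef-client on
%end
```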

Can you tell us a bit more about what role your existing CM
infrastructure plays? How are you provisioning systems with existing
"legacy" solutions in place?

I would look at Cobbler vs. Foreman for provisioning and Chef
vs. Puppet for CM. But is this correct? Could Chef do the
provisioning without Cobbler/Foreman? What is “best practice”
for provisioning servers?
I use a central TFTP/PXE server with Syslinux, gPXE, and Kickstart
script components. That is the “best” I can do while still meeting
organizational requirements. Our requirements include the ability to
pxechain separate PXE servers, like RIS/WDS, off of our central server.
“Best practice” is a subjective term that may not apply to everyone.
Chef complements our existing provisioning solution while meeting all
our requirements and not creating a bottleneck in the provisioning
process.

We tried out Cobbler with Puppet when both solutions were in early
stages of development. If I recall correctly, Cobbler was able to
generate a default Puppet manifest for nodes through the use of
templates, which, to be honest, was kind of neat but did not seem that
useful, because Cobbler was designed primarily for provisioning. Maybe
I am missing the point of that ability to integrate with a CM solution.
I want Puppet nodes managed by the Puppetmaster and Chef nodes managed
by the chef-server; I don’t want, or need, Cobbler to manage my
Puppet/Chef nodes post-deployment. It looks like there are also some
repo-syncing capabilities in Cobbler, but again, Cobbler was not
designed for upgrade management in the same sense that Spacewalk fits
that purpose. I can’t speak to how Cobbler or Foreman integrates with
Chef. I may revisit Cobbler in the future for the repo-sync
capabilities and see if it complements our existing patch management
and upgrade processes.

To answer your question: Chef can do the configuration without Cobbler
or Foreman. If there is a major problem with your existing “legacy CM
infrastructure” that you are specifically trying to address, then look
at what each solution offers and how it matches your requirements. If
your existing provisioning solution can bootstrap a Puppet or
chef-client agent onto newly installed nodes with a simple post-install
script, there is nothing wrong with doing just that.

We are mainly using Scientific Linux (a RHEL clone) and
a little bit of Ubuntu.

Any help would be appreciated.

regards

sven


#4

On Wed, Dec 21, 2011 at 1:25 AM, Sven Sternberger
sven.sternberger@desy.de wrote:


A tool like Cobbler assists with automating the provisioning portion
of your lifecycle management. Chef can be installed in the post-install
script of the kickstart/preseed, and Cobbler can pass the run_list in a
first-run JSON to get the system configured into its role. Using
Cobbler’s API, you can integrate it into some other tool that kicks off
the provisioning process; this could be a knife plugin or a script that
talks to the API. In my case I needed to provision on VMware, so I
wrote a tool that creates a blank VM, adds it to Cobbler/DHCP/DNS, and
then boots the VM into a non-interactive preseed install of JeOS. It
takes about 10 minutes to build a fully configured system from scratch
with one command.
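
The “first run json” mentioned above is just a small file with an initial run_list that chef-client reads with the -j flag. A minimal sketch, where the role names are illustrative and /tmp/chef-demo stands in for /etc/chef:

```shell
# Write a first-run JSON with the node's initial run_list
mkdir -p /tmp/chef-demo
cat > /tmp/chef-demo/first-boot.json <<'EOF'
{ "run_list": [ "role[base]", "role[webserver]" ] }
EOF
# On a real node the postinstall script would then run:
#   chef-client -j /etc/chef/first-boot.json
cat /tmp/chef-demo/first-boot.json
```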

-Eric


#5

Hello!

On Wed, 2011-12-21 at 14:06 -0500, Eric G. Wolfe wrote:

Can you tell us a bit more about what role your existing CM
infrastructure plays? How are you provisioning systems with existing
"legacy" solutions in place?

  1. Register a host with our self-made provisioning system
    (http://www-it.desy.de/systems/services/wboom/). MAC, IP, and DHCP/PXE
    template are also stored in an enterprise system (VitalQIP). The
    data is stored in AFS.

  2. Depending on group assignment, hardware type, and some flags, the
    provisioning system creates kickstart files for all supported variants
    of Scientific Linux and config files for pxelinux.
    The kickstart files bring an adjusted partition schema and extra
    packages. In the post part we mount AFS and start our legacy CM
    (http://www-it.desy.de/systems/services/salad/).
    The pxelinux config sets the OS version to install.

  3. Based on the data from the registration, we run our CM
    (shell scripts) at fixed intervals.
    These scripts get their parameters from AFS and bring extra packages,
    updates, NFS, setting the root password, automount config, access
    rights …

What I’m still missing in all the Cobbler, Foreman, Puppet, and Chef
stuff is the central place to register a host and store the metadata.
It looks like I have several places where a host has metadata.
So, for example, I give a set of workgroup servers from one department
the same partition scheme, and I want the same automount configuration
for all workgroup servers. The first setting is for Cobbler, the second
for Chef, but I have to configure it in both Cobbler and Chef?

At this point we will have to code the glue between something like
Foreman and Chef (it looks like the integration with Puppet is
already there for free),
or
we will configure Chef with the metadata from our legacy system.
regards!

sven


#6

On 22.12.2011 13:17, Sven Sternberger wrote:


Well, as far as I know, the way to do it with Chef is to have a
minimal OS with a common partition scheme, and to define recipes that
do the partitioning on the other parts of the disk and create the
automount configuration accordingly.

It’s more about having a minimal install; the rest of the machine’s
configuration is maintained and enforced by Chef.

The base install should not include too many things, as you will use
Chef to install/update packages and software on the boxes afterwards.

I use Altiris as the deployment system, which takes care of installing
the OS, chef-client, chef.rb, and validation.pem, and then runs
chef-client once.

I usually set up the node roles manually afterwards, but I’m working
on REST calls to set the roles on the new node (really early stage for
now).



#7

On 12/22/2011 7:17 AM, Sven Sternberger wrote:

Hello!

On Wed, 2011-12-21 at 14:06 -0500, Eric G. Wolfe wrote:

Can you tell us a bit more about what role your existing CM
infrastructure plays? How are you provisioning systems with existing
"legacy" solutions in place?

  1. Register a host with our self-made provisioning system
    (http://www-it.desy.de/systems/services/wboom/). MAC, IP, and DHCP/PXE
    template are also stored in an enterprise system (VitalQIP). The
    data is stored in AFS.

  2. Depending on group assignment, hardware type, and some flags, the
    provisioning system creates kickstart files for all supported variants
    of Scientific Linux and config files for pxelinux.
    The kickstart files bring an adjusted partition schema and extra
    packages. In the post part we mount AFS and start our legacy CM
    (http://www-it.desy.de/systems/services/salad/).
    The pxelinux config sets the OS version to install.

  3. Based on the data from the registration, we run our CM
    (shell scripts) at fixed intervals.
    These scripts get their parameters from AFS and bring extra packages,
    updates, NFS, setting the root password, automount config, access
    rights …

What I’m still missing in all the Cobbler, Foreman, Puppet, and Chef
stuff is the central place to register a host and store the metadata.
It looks like I have several places where a host has metadata.
So, for example, I give a set of workgroup servers from one department
the same partition scheme, and I want the same automount configuration
for all workgroup servers. The first setting is for Cobbler, the second
for Chef, but I have to configure it in both Cobbler and Chef?
One of the principles of using Chef is managing infrastructure as code.
The point is to be able to restore your IT services from a source code
repository and a data backup. I let Kickstart handle the JeOS:
installing baseline packages, setting the root password, partitioning
and formatting volumes, configuring network services, and turning off
unnecessary services. Every single RHEL 5 or RHEL 6 box spun up by my
provisioning system looks exactly the same as a JeOS: a generic server
with the bare minimum running. Kickstart is flexible enough to let you
approximate partitioning predictably. The only variance in my JeOS
image would be volume partitioning, depending on application
requirements.

Everything else after the initial deployment gets managed by Chef. You
start breaking up all of your services into manageable pieces. You
might grab a community cookbook for NFS (http://ckbk.it/nfs), make a few
changes as needed, and abstract your company-specific hostnames and
exports into a role. Autofs is a service in its own right, so you cook
up a recipe for that and then abstract the company-specific bits into
another role. Access rights could probably be managed with the sudo
(http://ckbk.it/sudo) and users (http://ckbk.it/users) cookbooks: you
would create a data bag for your users and drop off JSON objects for
each user, then assign sudo access by a role. If you have specific file
(http://wiki.opscode.com/display/chef/Resources#Resources-File) or
directory
(http://wiki.opscode.com/display/chef/Resources#Resources-Directory)
permission requirements, those might fit best into an
application-specific cookbook. The same can be said for package
requirements
(http://wiki.opscode.com/display/chef/Resources#Resources-Package);
these usually fit best in an application-specific cookbook. For
example, it makes sense for an Apache cookbook (http://ckbk.it/apache2)
to install the httpd or apache2 package and any dependent pieces for
that application.
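
A data bag item for a user is just a JSON object uploaded to the server. A sketch of what such an item might look like (the exact keys depend on the users cookbook version you pick; the values below are made up):

```json
{
  "id": "sven",
  "comment": "Sven Sternberger",
  "uid": 2001,
  "shell": "/bin/bash",
  "groups": ["sysadmin"],
  "ssh_keys": ["ssh-rsa AAAA... sven@example"]
}
```

You would save this as sven.json and upload it with something like `knife data bag from file users sven.json`, then let a role grant the sysadmin group sudo access.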

If you have common requirements across all your machines, you might
develop a baseline role (https://gist.github.com/1295668) which
describes what every server should look like in your environment,
regardless of its purpose. By stacking up these roles in a run_list,
you will likely end up storing less host-specific metadata and more
generalized role-specific metadata. If you have edge cases which don’t
conform to the baseline, you can override portions of the baseline in a
secondary role which is then added to your node’s run_list. I would
recommend aiming to provision a generic, “just enough” environment
with Cobbler, or your existing provisioning system, and handing off the
heavyweight configuration to something like Chef. It’s not flexible to
have hundreds of post-install shell scripts run in your provisioning
phase; I’ve been there, and that gets to be unmanageable and
error-prone. It is very flexible to have modularized roles which can be
chosen for any given application you wish to deploy in the post
install/configuration phase.
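
A baseline role of the kind described above can be expressed in Chef’s Ruby role DSL. A sketch, where the cookbook names and attribute values are illustrative rather than prescriptive:

```ruby
# roles/base.rb - baseline applied to every node, whatever its purpose
name "base"
description "Baseline configuration for every server"
run_list(
  "recipe[ntp]",
  "recipe[sudo]",
  "recipe[users::sysadmins]"
)
# site-wide defaults; an edge-case role added later in the node's
# run_list can override these
default_attributes(
  "ntp" => { "servers" => ["ntp1.example.com", "ntp2.example.com"] }
)
```

An application node would then carry a run_list like `role[base], role[webserver]`, with the secondary role overriding only what differs from the baseline.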

At this point we will have to code the glue between something like
Foreman and Chef (it looks like the integration with Puppet is
already there for free),
or
we will configure Chef with the metadata from our legacy system.

regards!

sven