If the systems you are measuring are completely automated and modeled
in Chef, the number of unique run lists gives you the number of unique
configurations, to a first approximation.
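As a trivial sketch of that first approximation (the run lists below are made up; real ones would come from a node search against your Chef server):

```ruby
# Hypothetical run_list strings pulled from the fleet; identical strings
# mean the same ordered set of instructions.
run_lists = [
  'role[web],recipe[nginx]',
  'role[web],recipe[nginx]',
  'role[db]',
]

# The count of unique run lists is one rough measure of configuration
# cardinality.
puts run_lists.uniq.length   # => 2
```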
More specifically: two servers with the same run_list are going
through the exact same set of instructions, and that can be a useful
definition of "unique" for some purposes, like a rough measure of
configuration cardinality.
Digging deeper, however, those two systems might be presented with
different data and therefore make different decisions about
configuration. Taken to an extreme, you could have logic like "if
I'm in Europe, be a web server; if I'm in Japan, be a database server".
More reasonably, "serve content localized to the country I’m in"
would at one level be “the same” assertion for all such servers, but
cause drastically different behavior from one server in France to
another server in Spain.
So it ends up being a question of modeling: which traits do you
consider the important sources of uniqueness for the purpose at hand?
Like the 4-tuple identifying a TCP connection, you have to pick the
items.
That said, it’s trivial in Chef to do this:
- make sure each such trait is published as a node attribute
- write some normalizing function to assemble these into a
"fingerprint", maybe as simple as a known array of attributes
- data mine that out of the fleet: for each node, map out the array
of values of said attributes.
Now you have your list of distinguishable fingerprints, whose unique
count gives you the configuration cardinality.
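The steps above can be sketched in plain Ruby (the attribute names and node data here are hypothetical; in practice you would pull real node data via search or `knife exec`):

```ruby
require 'digest'

# The traits we consider significant for uniqueness -- a hypothetical
# choice; pick the attributes that matter for your purpose, like
# choosing the fields of the TCP 4-tuple.
FINGERPRINT_ATTRIBUTES = %w[platform platform_version country role].freeze

# Normalize a node's attributes into a stable fingerprint string.
def fingerprint(node_attrs)
  values = FINGERPRINT_ATTRIBUTES.map { |a| node_attrs[a].to_s }
  Digest::SHA256.hexdigest(values.join('|'))
end

# Hypothetical fleet data; in reality this would come from a node search.
fleet = [
  { 'platform' => 'ubuntu', 'platform_version' => '12.04',
    'country' => 'FR', 'role' => 'web' },
  { 'platform' => 'ubuntu', 'platform_version' => '12.04',
    'country' => 'ES', 'role' => 'web' },
  { 'platform' => 'ubuntu', 'platform_version' => '12.04',
    'country' => 'FR', 'role' => 'web' },
]

# Configuration cardinality: the number of unique fingerprints.
puts fleet.map { |n| fingerprint(n) }.uniq.length   # => 2
```

Swap the hard-coded `fleet` array for the attribute hashes of your real nodes and the final count is the cardinality metric described above.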
Aaron J. Peterson
On Mon, Nov 12, 2012 at 1:56 PM, Booker Bense firstname.lastname@example.org wrote:
Has anyone ever implemented a metric where you count the number of unique
configurations in your configuration management system?
For our old in-house system this is fairly simple to calculate, and
the result is 688 unique configurations out of 3363 managed machines.
And I know this number is low, since our “options” file, on which the
metric is based, vastly undercounts host-specific changes.
Is there a simple way to get this kind of metric out of a Chef server?