Graphing Chef metrics in Graphite

Hi

I’m using Chef Server across a number of environments. I’m also using
Graphite (http://graphite.wikidot.com/) to graph a number of metrics for
various items from infrastructure up to app and service level.

Since I’m sure I’m not the only one using both Chef and Graphite, I
wanted to get a feel for what (and how) others were graphing Chef
metrics into Graphite.

I’d be interested in hearing what metrics you’ve found most useful in
building up Chef dashboards in Graphite, and how you went about
gathering the metrics (e.g. API, log scraping, etc.).

Cheers

Dan

collectd_python + PyChef :) That's actually one of the first reasons I wrote PyChef.

--Noah

I would probably start with a report handler that (somewhat smartly)
puts the metrics you care about into Graphite.

Things like which resources changed, time of converge (both start/end
and total elapsed), etc. would all be easy.

Adam
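
A minimal report-handler sketch along these lines, pushing a few run
metrics to Graphite's plaintext listener (the class name, metric prefix
and Graphite host are illustrative, not from this thread):

# graphite_reporter.rb -- hedged sketch, not a tested implementation
require 'socket'

class GraphiteReporter < Chef::Handler
  def initialize(host, port = 2003)
    @host = host
    @port = port
  end

  def report
    prefix  = "chef.#{node.name.tr('.', '_')}"
    now     = run_status.end_time.to_i
    metrics = {
      'elapsed_time'      => run_status.elapsed_time,
      'all_resources'     => run_status.all_resources.size,
      'updated_resources' => run_status.updated_resources.size,
      'success'           => run_status.success? ? 1 : 0
    }
    # Graphite's plaintext protocol: "<path> <value> <timestamp>\n"
    TCPSocket.open(@host, @port) do |sock|
      metrics.each { |name, value| sock.puts "#{prefix}.#{name} #{value} #{now}" }
    end
  rescue StandardError => e
    Chef::Log.warn("Could not ship run metrics to Graphite: #{e}")
  end
end

Registering it is then a matter of requiring the file from client.rb
and adding something like report_handlers << GraphiteReporter.new('graphite.example.com'),
plus the same instance in exception_handlers if failed runs should be
reported too.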

Ian Meyer has a graphite handler up that has some of that stuff in it
already: https://github.com/imeyer/chef-handler-graphite

We have multiple chef installations, one for our own private cloud and
the rest for our clients (in AWS/Rackspace etc.). We are primarily
using the graphite report handler, or the nsca handler -> nagios ->
graphios -> graphite. Among all the metrics (e.g. all resources,
updated resources, elapsed time, etc.), the number of updated resources
has helped us a lot in cross-checking our recipes for non-idempotent
resources.

Thanks for the replies all. I guess I had my head on wrong in that I
was imagining a from-server dump to Graphite, rather than a from-client
dump.

Chef-handler makes interesting reading and is something I will
definitely use. chef-handler-graphite is unfortunately not usable as-is
due to some environmental constraints, but it's certainly a good
jumping-off point for a handler I can put in place.

Cheers

Dan

Hi

I have been looking into options for creating a CI (continuous
integration) pipeline for my Chef configuration. My CI server of choice
is Jenkins, and I currently have a pipeline that looks like:

ruby syntax check -> ruby lint check -> chef syntax check -> chef lint check

using the toolchain:

"ruby -c" -> "nitpick" -> "knife cookbook test" -> "foodcritic"

This gives me a pipeline that runs in under 1 minute for a relatively
large repo - fast enough that people aren't tempted to skip the
process - and catches a lot of silly problems early.
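
One way the four stages might hang together is as a Rakefile that
Jenkins calls; this is a sketch only, and the paths and the nitpick
invocation are assumptions about the local setup:

# Rakefile -- each stage fails fast, so Jenkins stops on the first error
RUBY_FILES = FileList['**/*.rb']
COOKBOOKS  = FileList['cookbooks/*'].map { |d| File.basename(d) }

task :ruby_syntax do
  RUBY_FILES.each { |f| sh "ruby -c #{f}" }
end

task :ruby_lint do
  sh "nitpick #{RUBY_FILES}"  # placeholder invocation; adjust to taste
end

task :chef_syntax do
  COOKBOOKS.each { |cb| sh "knife cookbook test #{cb}" }
end

task :chef_lint do
  sh 'foodcritic cookbooks'
end

# "sh" raises on a non-zero exit status, so the first failing stage
# aborts the run and the build goes red.
task :default => [:ruby_syntax, :ruby_lint, :chef_syntax, :chef_lint]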

However, there are still problems of one sort or another that pass this
chain but fail in production. I’m looking for a programmatic way to
prevent these additional problems by tacking something on to the end of
my existing pipeline - something where unit or integration testing
might sit in a traditional CI pipeline.

[I know that a lot of people use vagrant, VMTH, toft, or some other system to spin up a temporary VM or set of VMs at the end of their CI chain in order to prove the recipes, filling the role traditionally played by integration testing. However, for environmental reasons this is not an option for me, so for the purposes of this discussion I'd like to set that solution aside to avoid getting sidetracked.]

What I’m looking for basically is something that:

  1. provides near to (or the same) level of understanding as the chef
    server API as to whether your chef config is sane
  2. is ideally drop-in, not requiring individual tests to be written
    for each new cookbook/recipe (a la cucumber-chef)
  3. is fast: under 1 minute to test a chef repo containing hundreds of
    nodes and hundreds of cookbooks

I have no experience with it but it looks as though chefspec
(http://acrmp.github.com/chefspec/) might match at least requirement
(1). Any other suggestions for investigation?

Cheers

Dan

chefspec and chef-minitest are both viable for this: they deliver #1
and #3, but both require the per-recipe tests that #2 rules out.

Cheers,

--AJ

Thanks for the reply. I feel like I might be missing something here.
Not having used rspec before, I'm having a slight problem getting my
head around how chefspec works and how it could be incorporated into my
jenkins pipeline. Is there a simple example out there somewhere?
Because the documentation didn't make any sense to me.

With regards to chef-minitest, the documentation seems to suggest
(unless I'm looking at the wrong thing) that it's a report handler that
can run tests at the end of a chef run. In a similar approach to tools
such as cucumber-chef, this means you first have to build an
infrastructure to test against - but as in my OP, I have a pipeline
that ends with a git repo of code, not a built infrastructure with the
code applied that I could test against. So unless I'm missing
something, chef-minitest couldn't help me here? (As mentioned, I cannot
go down the path of spinning up VMs to apply the cookbooks against for
the purposes of testing.)

Just to clarify, I'm looking to extend beyond lint tests on a copy of
the codebase, but without having to spin up VMs or similar to then test
against. It may be that this tool doesn't exist at the moment?

Cheers

Dan

Hi Dan,

I agree - the chefspec documentation does make it difficult to get started.

Chefspec expects you to write examples for your recipes that define
what the resources are that you expect to be created by a converge.
You would then run these examples using RSpec. There is an example of
this here:
http://acrmp.github.com/chefspec/#Writing_a_cookbook_example
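
For flavour, a minimal example in that documented style - the cookbook
and package names here are invented for illustration:

# spec/default_spec.rb
require 'chefspec'

describe 'example::default' do
  # converge runs Chef in-memory with provider actions stubbed out
  let(:chef_run) { ChefSpec::ChefRunner.new.converge('example::default') }

  it 'installs ntp' do
    chef_run.should install_package('ntp')
  end
end

Running "rspec spec" then slots straight in as a Jenkins build step.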

When you call #converge on the ChefRunner in your tests, Chef actually
runs, but with the actions normally performed by the resource providers
disabled. Ohai plugins are also not loaded. The net result is that you
can write tests that exercise your recipes much more quickly, without
doing a real converge. This kind of testing has very real limitations
but can be useful.

You could certainly use chefspec outside its intended use by running
through your cookbooks without making assertions. It's not clear to me
how useful this would be, but I'd be interested in learning what issues
you were able to identify that were not picked up earlier in your
pipeline. Chef's forthcoming why-run mode may also help you here in the
future.

I would highly recommend exploring further the reasons that prevent
you from doing integration testing, and if possible taking this up on
this mailing list or the IRC channel to discuss further:
http://community.opscode.com/chat/chef

Cheers,

Andrew.

Hi Andrew

Thanks for taking the time to reply. Chefspec definitely looks like the
most interesting option for the problem I'm looking to solve. The more
I think about it, though, the more I realise that nothing short of a
full run on a VM is really going to solve some of the problems I'm
worrying about. With the pipeline I have now I am able to catch most
issues. The remainder seem to come down to either a) environmental
issues or b) failures with quite a complicated story, e.g. "the
attribute had a typo in it, so the config file got updated with a blank
value on line 10, so the service that was told to restart on config
file change restarted, but it was then providing inaccurate information
to a second service, which then started throwing errors". In either
case, the code would pass any kind of integration test short of one run
against the actual environment.

I think what I need to get my head round is the category of problems
that aren't picked up by lint tests etc. but are picked up by chefspec.
Are you able to provide a couple of example scenarios or problems? It's
definitely an intriguing idea; I just need to understand what it can
help me catch before I commit time to it.

Cheers

Dan

Hi Dan,

Nothing beats a real converge. You definitely should be doing this.

The most useful tests are the ones that test the functionality of your
converged node. If your recipe installs a daemon you can talk to over
the network, then tests that try exactly that after the converge are a
great place to start.

A linter is going to identify generic problems in your cookbooks - for
example incorrect resource attributes, or use of deprecated syntax.
Your cookbook code can be beautiful and pristine but still be logically
wrong and not work properly.

Unit tests, like those you would write in chefspec, will make
assertions about the resources that are created - specific to your
cookbooks. You can look at writing your unit tests as writing a series
of examples: "when the node has these attributes and I'm on CentOS, I
expect the generated config file to look like so."
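
A hedged sketch of that kind of example (the recipe, attribute, and
file path are invented, and the exact ChefRunner API may vary between
chefspec versions):

# spec/config_spec.rb
require 'chefspec'

describe 'example::config' do
  let(:chef_run) do
    runner = ChefSpec::ChefRunner.new
    runner.node.set[:example][:port] = 8080  # the attribute under test
    runner.converge('example::config')
  end

  it 'renders the port into the generated config file' do
    chef_run.should create_file_with_content('/etc/example.conf', 'port 8080')
  end
end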

Jim Hopp wrote a nice blog post about the different types of testing
they do at Lookout that you may find useful:
http://hackers.mylookout.com/2012/04/cookout-at-lookout-with-chef/

Testing is sure to be a hot topic at #ChefConf so I'd keep an eye on
what people have to say there to see what else might work for you.

Cheers,

Andrew.

Hi

I have a situation in which I have two chef servers, and the clients
need to be able to authenticate against both. I'm sure there are
multiple scenarios in which others might need to achieve the same - a
failover chef server pair, rebuilding a chef server after a server
failure, migrating to a more powerful replacement chef server, etc.

The issue I have is that I cannot find an easy mechanism to set up a
client against multiple chef servers. My nodes, roles, cookbooks, etc.
are all in-repo and I can import from them. I want to use the same
model for clients, but although the client keys are in-repo, there
doesn't seem to be a "knife client import" or "knife client
upload-from-pem" action for them, so I'm not clear on what the workflow
is for importing a client from a PEM file.

I have set the chef-validator client PEM and the validation.pem to the
same value on both chef servers. Things I have tried so far:

  1. "knife client create" on one host, then "knife client create" on the
    other, pasting in the public key in the JSON file
  2. "knife client create" on both hosts, then "knife client edit" on the
    second host, pasting in the public key in the JSON file

In neither of the above tests was the client left with the same public
key on both hosts when I did a "knife client show [CLIENTNAME]".

However, if I update the public key for the client directly in CouchDB
using the value from the second server:

curl -X GET "http://127.0.0.1:5984/chef/_design/clients/_view/all_id" |
  grep [CLIENTNAME]
curl -X GET \
  http://127.0.0.1:5984/chef/5f25e314-c6c8-46df-90fb-5736a15472b0 \
  > client.txt
vi client.txt
curl -X PUT -d @client.txt -H "Content-type: application/json" \
  http://127.0.0.1:5984/chef/5f25e314-c6c8-46df-90fb-5736a15472b0

then the client private key (in /tmp/test.pem) can authenticate against
both servers:

[root@workstation ~]# knife client list --server-url \
  'http://server1:4000' --key /tmp/test.pem --user test
You authenticated successfully to http://server1:4000 as test
[root@workstation ~]# knife client list --server-url \
  'http://server2:4000' --key /tmp/test.pem --user test
You authenticated successfully to http://server2:4000 as test

So am I missing some really easy way of importing a client to a chef
server? Is there another workflow for this that I've overlooked?

Many thanks

Dan

Why not replicate couch directly?
http://guide.couchdb.org/draft/replication.html

Or, if you want only the client certs, you can grab them selectively
and replicate just those.
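
As a sketch, a one-shot replication of the chef database through
CouchDB's _replicate API could look like the following (host names are
placeholders):

# replicate_chef_db.rb -- illustrative only
require 'net/http'
require 'json'

uri = URI('http://server1:5984/_replicate')
req = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
# Pull everything in the local "chef" database over to server2.
req.body = { 'source' => 'chef',
             'target' => 'http://server2:5984/chef' }.to_json
res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts res.body  # {"ok":true,...} on success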

Hi

Thanks for the suggestion. Is this something you've used yourself? Just
replicate at the level of "http://127.0.0.1:5984/chef/_design/clients"?

Cheers

Dan
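
For what it's worth, CouchDB's _replicate API also accepts a doc_ids
list, so in principle only the client documents could be replicated by
feeding it the IDs from the clients view. A hedged sketch (host names
are placeholders):

# replicate_clients_only.rb -- illustrative only
require 'net/http'
require 'json'

couch_host = '127.0.0.1'
couch_port = 5984

# Collect the document ID of every client from the design view.
view = Net::HTTP.get(couch_host, '/chef/_design/clients/_view/all_id', couch_port)
ids  = JSON.parse(view)['rows'].map { |row| row['id'] }

# One-shot replication restricted to just those documents.
req = Net::HTTP::Post.new('/_replicate', 'Content-Type' => 'application/json')
req.body = { 'source'  => 'chef',
             'target'  => 'http://server2:5984/chef',
             'doc_ids' => ids }.to_json
puts Net::HTTP.start(couch_host, couch_port) { |http| http.request(req) }.body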
