Nothing beats a real converge. You definitely should be doing this.
The most useful tests are the ones that test the functionality of your
converged node. If your recipe installs a daemon you can talk to over
the network, then tests that try exactly that after the converge are a
great place to start.
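As an illustration of that kind of post-converge check (the host and port here are made-up placeholders for whatever daemon your recipe installs, not anything Chef provides):

```ruby
require 'socket'
require 'timeout'

# Returns true if something is accepting TCP connections on host:port.
# A minimal post-converge smoke test: did the daemon the recipe
# installed actually come up and start listening?
def port_open?(host, port, seconds = 2)
  Timeout.timeout(seconds) do
    TCPSocket.new(host, port).close
    true
  end
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Timeout::Error, SocketError
  false
end
```

You would run something like `raise 'daemon not listening' unless port_open?('localhost', 8080)` against the node after the converge completes.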
A linter is going to identify generic problems in your cookbooks - for
example incorrect resource attributes, or use of a deprecated syntax.
Your cookbook code can be beautiful and pristine but still be logically
wrong and not work properly.
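To make the distinction concrete, here is a toy lint rule of the sort a linter applies (this is only an illustration of the idea, not how foodcritic is implemented; the deprecated call it flags, `File.exists?` in favour of `File.exist?`, is just a convenient stand-in):

```ruby
# Toy lint rule: flag lines using a deprecated call. Real linters like
# foodcritic ship many such generic rules; this one is illustrative.
DEPRECATED = { /File\.exists\?/ => 'use File.exist? instead' }.freeze

# Returns an array of "line N: message" warnings for the given source.
def lint(source)
  source.each_line.with_index(1).flat_map do |line, number|
    DEPRECATED.select { |pattern, _| line =~ pattern }
              .map { |_, message| "line #{number}: #{message}" }
  end
end
```

Note what this can and cannot see: it will catch the deprecated call anywhere it appears, but it has no idea whether your recipe's logic produces a working node.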
Unit tests, like those you would write in chefspec, make assertions
about the resources that are created - specific to your cookbooks. You
can think of your unit tests as a series of examples: when the node
has these attributes and I’m on CentOS, then I expect the generated
config file to look like so.
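The shape of that kind of assertion can be sketched without ChefSpec at all, using nothing but ERB from the standard library (the template and attribute names below are invented for the example - the point is only "given these attributes, the rendered config should look like so"):

```ruby
require 'erb'

# A made-up template standing in for a cookbook's config template.
TEMPLATE = "listen_port = <%= node['app']['port'] %>\n"

# Render the template against a hash of node attributes, the way a
# template resource would against real node data.
def render(node)
  ERB.new(TEMPLATE).result(binding)
end

node = { 'app' => { 'port' => 8080 } }
puts render(node)   # prints: listen_port = 8080
```

A ChefSpec example does the equivalent against the resources your recipe declares, so a typo in an attribute shows up as a failing expectation rather than a broken service.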
Jim Hopp wrote a nice blog post about the different types of testing
they do at Lookout that you may find useful:
Testing is sure to be a hot topic at #ChefConf so I’d keep an eye on
what people have to say there to see what else might work for you.
On Mon, May 14, 2012 at 8:06 PM, Dan Adams email@example.com wrote:
Thanks for taking the time to reply. Chefspec definitely looks like the most
interesting option for the problem I’m looking to solve. The more I think
about it though the more I realise that nothing short of a full run on a VM
is really going to solve some or all of the problems that I’m worrying
about. For instance, with the pipeline I have now I am able to catch most
issues. The remainder that I cannot catch seem to fall down to either a)
environmental issues or b) failures with quite a complicated story, eg “the
attribute had a typo in it, so the config file got updated with a blank
value on line 10, so the service that was told to restart on config file
change restarted, but it was then providing inaccurate information to a
second service, which then started throwing errors”. In the first case,
the problem would slip past any kind of integration test short of one
run in the actual environment, and in the second case, past any kind of
integration test short of one run against a real environment.
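That said, the first link in the chain above - a typo'd attribute producing a blank value - is catchable before it ever reaches a config file. A hypothetical guard (this helper is not part of Chef's API, just a sketch of the idea):

```ruby
# Fail fast when a required attribute is missing or blank, instead of
# letting an empty value reach a rendered config file and restart a
# service with broken config. Hypothetical helper, not Chef API.
def require_attrs!(node, *keys)
  blank = keys.select { |k| node[k].nil? || node[k].to_s.strip.empty? }
  unless blank.empty?
    raise ArgumentError, "blank or missing attributes: #{blank.join(', ')}"
  end
  node
end
```

Called at the top of a recipe (or asserted in a unit test), this turns the quiet “blank value on line 10” failure into a loud converge-time error.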
I think what I need to get my head round is the category of problems that
aren’t picked up by lint tests etc, but are picked up by chefspec. Are
you able to provide a couple of example scenarios or problems? It’s
definitely an intriguing idea; I just need to understand what it can
help me catch before I commit time to it, I think.
On 13.05.2012 23:19, Andrew Crump wrote:
I agree - the chefspec documentation does make it difficult to get
started. Chefspec expects you to write examples for your recipes that
define which resources you expect to be created by a converge.
You would then run these examples using RSpec. There is an example of
this on the chefspec project page.
When you call #converge on the ChefRunner in your tests Chef is
actually run, but in effect with the actions that are normally
performed by the resource providers disabled. Ohai plugins are also
not loaded. The net result is that you can write tests that exercise
your recipes much more quickly, without doing a real converge. This
kind of testing has very real limitations but can be useful.
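A toy model of what that means (this is an illustration of the idea only, not Chef's or chefspec's actual implementation): evaluate the recipe DSL so resources get *declared*, but never run the provider actions behind them.

```ruby
# Toy model of converging without performing actions: the recipe block
# is evaluated so resources are recorded, but nothing is installed or
# started. Illustrative only - not Chef internals.
class FakeRunner
  attr_reader :resources

  def initialize
    @resources = []
  end

  def package(name)
    @resources << [:package, name]
  end

  def service(name)
    @resources << [:service, name]
  end

  def converge(&recipe)
    instance_eval(&recipe)   # declare resources; no actions performed
    self
  end
end

runner = FakeRunner.new.converge do
  package 'ntp'
  service 'ntpd'
end
puts runner.resources.inspect   # => [[:package, "ntp"], [:service, "ntpd"]]
```

Your assertions then run against that recorded list of resources, which is why the tests are fast - and also why they can never tell you whether the package actually installs cleanly.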
You certainly could use chefspec outside of its intended use by
running through your cookbooks without making assertions. It’s not
clear to me how useful this would be, but I’d be interested in
learning what issues you were able to identify that were not picked up
earlier in your pipeline. Chef’s forthcoming why-run mode may also
help you here in the future.
I would highly recommend exploring further the reasons that prevent
you from doing integration testing, and if possible taking this up on
this mailing list or the IRC channel to discuss further:
On Fri, May 11, 2012 at 5:14 PM, Dan Adams firstname.lastname@example.org wrote:
Thanks for the reply. I feel like I might be missing something here. Not
having used rspec before, I’m having a slight problem getting my head
around how chefspec works/how it could be incorporated into my jenkins
pipeline. Is there a simple example out there somewhere? Because the
documentation doesn’t make any sense to me.
With regards to chef-minitest, the documentation for that seems to
suggest (unless I’m looking at the wrong thing) that it’s a report
handler that will run tests at the end of a chef run - in a similar
approach to tools such as chef-cucumber. This means you have to first
build an infrastructure to test against - but as in my OP, I have a
pipeline that ends with a git repo of code, not a built infrastructure
with the code applied that I could test
against. So unless I’m missing something, chef-minitest couldn’t help me
here? (as mentioned, I cannot go down the path of spinning up some VMs to
apply the cookbooks against for the purposes of testing)
Just to clarify, I’m looking to extend beyond lint tests on a copy of
the codebase, but without having to spin up VMs or similar to test
against. It may be that this tool doesn’t exist at the moment?
On 10.05.2012 22:44, AJ Christensen wrote:
chefspec and chef-minitest are both viable for this, but both require
#2 while delivering #1/#3
On 11 May 2012 09:40, Dan Adams email@example.com wrote:
I have been looking into options for creating a CI (continuous
integration) pipeline for my Chef configuration. My CI server of
choice is Jenkins, and I currently have a pipeline that looks like:
ruby syntax check -> ruby lint check -> chef syntax check -> chef lint check
using the toolchain:
“ruby -c” -> “nitpick” -> “knife cookbook test” -> “foodcritic”
This gives me a pipeline that runs in under 1 minute for a relatively
large repo, which is fast enough not to make people want to skip the
process, and it catches a lot of silly problems early.
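For anyone wanting to reproduce the first stage of that chain, the “ruby -c” sweep can be done in-process (this sketch is MRI-specific via RubyVM::InstructionSequence, and the glob path is a placeholder for your own repo layout):

```ruby
# In-process equivalent of running `ruby -c` over every file in a
# cookbook tree. MRI-specific (RubyVM::InstructionSequence). Returns
# a hash of path => syntax error message for the files that fail.
def syntax_errors(paths)
  paths.each_with_object({}) do |path, errors|
    begin
      RubyVM::InstructionSequence.compile(File.read(path))
    rescue SyntaxError => e
      errors[path] = e.message
    end
  end
end

# Example usage (path is a placeholder):
#   syntax_errors(Dir.glob('cookbooks/**/*.rb'))
```

Compiling in one process avoids spawning a subprocess per file, which matters when you are aiming to keep the whole pipeline under a minute.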
However, there’s still a lot of problems of one sort or another that
pass this chain but fail in one way or another in production. I’m
looking for a programmatic way to prevent these additional problems by
tagging more tests on to the end of my existing pipeline. Something
where unit or integration testing might sit in a traditional CI
pipeline.
[I know that a lot of people use vagrant, VMTH or toft or some other
tool to spin up a temporary VM or set of VMs at the end of their CI
chain in order to prove the recipes in the role - what would
traditionally be called integration testing. However, for
environmental reasons, this is not an option for me, and I’d like to
ignore that option as a solution for the purposes of this discussion
to save getting sidetracked, please.]
What I’m looking for basically is something that:
- provides near to or the same level of understanding as the chef
API as to whether your chef config is sane
- Ideally something drop-in that doesn’t require writing individual
tests for each new cookbook/recipe (a la cucumber-chef)
- It must also be fast, under 1 minute to test a chef repo containing
hundreds of nodes and hundreds of cookbooks
I have no experience with it but it looks as though chefspec
(http://acrmp.github.com/chefspec/) might match at least requirement #1.
Any other suggestions for investigation?