Testing and Determining Test Coverage in Cookbooks

Continuous cookbook testing framework (analogous to test-kitchen, but
built on Jenkins, Foodcritic, RVM and Vagrant) ... check.
Chef-minitest handler running on VMs and reporting test results back
to CI host ... check.

Now I just have to get all my cookbook authors to write tests!

It seems like a lot of us are in this situation, so I thought I'd
share my findings and the direction in which I seem to be headed. We
have to get a bunch of people who are maybe not so used to TDD/BDD or
writing integration tests to suddenly start writing integration tests
in minitest (spec tests, in my exact case).

There's a growing amount of source code out there, but the best
documentation I've found is still in Bryan's example cookbook (found
here, if this somehow pops up on Google):

https://github.com/btm/minitest-handler-cookbook

So you can use node data, but only within the scope of a particular
test definition. I understand why that is, but folks who know "enough
Ruby for Chef" will probably get tripped up by that.
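
For illustration, here is roughly what that scoping looks like in a
minitest-chef-handler spec. This is only a sketch -- the cookbook name
and attribute are hypothetical -- but it shows why file-scope node
access trips people up:

    # files/default/tests/minitest/default_test.rb
    describe_recipe 'mycookbook::default' do
      # this line would fail: `node` is not defined at describe scope
      # port = node['mycookbook']['port']

      it 'writes the config for the node environment' do
        # ...but inside a test definition the handler provides `node`
        file("/etc/mycookbook/#{node['mycookbook']['env']}.conf").must_exist
      end
    end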

Still, I need a way to tell my budding cooks what needs testing and
what doesn't. Can I use existing code coverage tools to do this
outside of converging a node and running the minitest handler? As
far as I can tell, the answer is no, I can't.

(I base this preliminary conclusion on an indexed conversation in
#chef from July:

http://community.opscode.com/chat/chef/2012-07-18#id-165241

... would love to be proven wrong, BTW)

My current idea for drawing attention to which cookbooks are
adequately tested in our Chef repository involves writing some
Foodcritic rules to check for the presence of minitests in each
cookbook and then critique them. I believe I can get started with
this using three rules (a sketch of the first follows the list):

  • "Does your cookbook have a files/default/tests/minitest directory?"
  • "Do you have a test/spec.rb file describing each recipe in the cookbook?"
  • "Does your test/spec for recipe X cover all of the resources
    specified in recipe X?"
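
A first cut at that first rule might look something like this. It's a
sketch that assumes Foodcritic's rule DSL and its file_match helper
(the rule code MYCO001 is made up):

    rule "MYCO001", "Cookbook has no minitest specs" do
      tags %w{tests}
      cookbook do |path|
        spec_dir = File.join(path, 'files', 'default', 'tests', 'minitest')
        # flag the cookbook when the minitest spec directory is missing
        [file_match(spec_dir)] unless File.directory?(spec_dir)
      end
    end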

Since Foodcritic testing is already an early stage in my cookbook
testing pipeline, I can easily turn those warnings into pretty graphs.
I'm worried that writing those rules will be a bit brain-melty,
though.

Mission accomplished: I am shaming all my repo contributors for not
writing tests for their cookbook contributions. I still need to make
writing those tests easier for them to pick up or I'm just being mean.
My idea here is to write and exhaustively document a helper gem that
provides convenience functions beyond the resource checks that are
already included, then distribute that along with the recipe that
installs the minitest-chef-handler gem and sets it up as an event
handler on the system. If the helper gem is well-documented and the
tests in the repository leverage the helper properly, I think that
will be enough signposts to get us started. Then it'll be a lot
easier to accept all these pull requests...
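
To give a flavor of the helper gem idea, something like this -- every
name here is hypothetical, and it assumes minitest-chef-handler's
service/file matchers:

    module CookbookTestHelpers
      # one call asserting a service is enabled at boot and running now
      def assert_service_up(name)
        service(name).must_be_enabled
        service(name).must_be_running
      end

      # assert a config file exists and mentions each expected setting
      def assert_config(path, *settings)
        file(path).must_exist
        settings.each { |setting| file(path).must_include setting }
      end
    end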

Is anyone else proceeding along these lines? Is there any interest in
adding that functionality to mainline Foodcritic? Is there some other
way of ensuring test coverage on cookbooks? Any fellow travelers know
of any pitfalls or open source projects that might help?

(besides test-kitchen, that is. :wink: )

Love the idea of properly tested cookbooks and the fact that you are
pushing your buddies in that direction :slight_smile:

I've taken some notes and collected some links about test-driven cookbook
development:
https://github.com/tknerr/bills-kitchen/blob/master/COOKBOOK_DEVELOPMENT.md

It covers cookbook testing on several levels:

  • syntax & lint checking -> foodcritic
  • unit-level spec testing -> chefspec and fauxhai
  • "smoke tests" from the inside -> minitest-handler
  • acceptance-level testing from the outside -> cucumber-nagios

Maybe you can make some use of it, or you have some ideas on how to improve it.

Cheers, Torben

Hi Torben!

Durf. Never occurred to me to crib off of cucumber-nagios. That's a
pretty neat idea, and it sure solves the problem of "How do you
explain test-writing?" as Cucumber tests are about as easy to read as
they come. I'm going to give that a try next week.

I like the chefspec/fauxhai concept but I'm lucky enough to be testing
in an environment where spinning up and immediately terminating a
bunch of images is relatively painless. So I will probably just laze
out on that step and write unit/basic integration tests with the
minitest handler, then write up some Cucumber features to test each VM
after convergence and before teardown. One place where I do see value
in that would be for our LWRPs, a few of which are getting pretty
complicated and could probably do with some specs to catch
regressions.
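
If we do go there, I'd expect an LWRP spec to look roughly like this --
a sketch assuming a ChefSpec version with the step_into option, and
with hypothetical cookbook/resource names:

    require 'chefspec'

    describe 'mycookbook::default' do
      # step_into makes ChefSpec execute the my_app provider's own
      # resources instead of treating the LWRP as a black box
      let(:chef_run) do
        ChefSpec::ChefRunner.new(step_into: ['my_app'])
          .converge('mycookbook::default')
      end

      it 'renders the app config through the LWRP' do
        chef_run.should create_file('/etc/my_app.conf')
      end
    end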

The only problem I have (and this is due to my particular
implementation) is that I'm testing role/environment/platform
combinations. Can't really test a multi-tiered app this way. It's
one of the things test-kitchen does that my little skunkworks project
doesn't. I guess the answer is to bind some features to a special
full-stack role that we would only use for testing/development, though
that wouldn't work for apps that require multiple hosts (Mongo springs
to mind).

Your documentation and the test-writing docs on the Chefspec and
Fauxhai sites are pretty comprehensive, it seems (or have I just been
reading too much poorly-documented code lately? :wink: ). I'm not sure
how many people are interested in converging Vagrant VMs against a
Chef server endpoint (i.e. using the chef-client provisioner instead
of chef-solo), but jtimberman wrote a couple of pretty good blog posts
on the subject (starting with
http://jtimberman.housepub.org/blog/2012/03/18/multivm-vagrantfile-for-chef/
, again, HI GOOGLE! ) and I could perhaps write something up as well
if the community desires it and existing resources are found wanting.
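
For anyone curious, the multi-VM chef-client shape is roughly this --
a sketch in Vagrant 1.x syntax, with placeholder box, server URL, and
role names:

    Vagrant::Config.run do |config|
      config.vm.box = 'precise64'

      { :app => 'app_server', :db => 'db_server' }.each do |name, role|
        config.vm.define name do |vm_config|
          vm_config.vm.provision :chef_client do |chef|
            chef.chef_server_url = 'https://chef.example.com'
            chef.validation_key_path = '.chef/validation.pem'
            # each VM converges with its own role from the Chef server
            chef.add_role role
          end
        end
      end
    end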

Anyway, I think this kind of testing is a neat idea and I'm going to
see how far I can get with it. I try to follow this list but I'm
awash in work email most of the time. Comments/off-list conversations
on this subject with interested parties are also welcome.

Hi Steve,

I'm still in the learning phase for testing with Chef, but I consider it an
essential part; that's why I've spent some time on it lately.

See comments inline.

On 07.10.2012 at 10:38, "steve ." leftathome@gmail.com wrote:

Hi Torben!

Durf. Never occurred to me to crib off of cucumber-nagios. That's a
pretty neat idea, and it sure solves the problem of "How do you
explain test-writing?" as Cucumber tests are about as easy to read as
they come. I'm going to give that a try next week.

I like the chefspec/fauxhai concept but I'm lucky enough to be testing
in an environment where spinning up and immediately terminating a
bunch of images is relatively painless.

I like chefspec/fauxhai too. From my experience they are especially (if not
only) useful for testing how combinations of different platforms and
attributes affect the resources that you create.
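
For example, roughly like this (a sketch -- it assumes the ChefSpec
runner accepts platform/version options backed by fauxhai, and the
cookbook and package names are made up):

    require 'chefspec'

    # converge the same recipe against two mocked platforms
    { 'ubuntu' => '12.04', 'centos' => '6.3' }.each do |platform, version|
      describe "mycookbook::default on #{platform} #{version}" do
        let(:chef_run) do
          ChefSpec::ChefRunner.new(platform: platform, version: version)
            .converge('mycookbook::default')
        end

        it 'installs the platform-appropriate apache package' do
          pkg = platform == 'centos' ? 'httpd' : 'apache2'
          chef_run.should install_package(pkg)
        end
      end
    end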

I quickly felt the urgent need to spin up some images and test real node
convergence as well :slight_smile:

Vagrant works well for me here, but the roundtrip times are naturally
longer. I wanted to play with LXC containers instead of using VirtualBox
(ideally re-using my Vagrantfile), but haven't gotten around to testing this yet.

Anyone here on the list using LXC containers for testing Chef runs already?

So I will probably just laze out on that step and write
unit/basic integration tests with the
minitest handler, then write up some Cucumber features to test each VM
after convergence and before teardown. One place where I do see value
in that would be for our LWRPs, a few of which are getting pretty
complicated and could probably do with some specs running that could
catch regressions.

The only problem I have (and this is due to my particular
implementation) is that I'm testing role/environment/platform
combinations. Can't really test a multi-tiered app this way. It's
one of the things test-kitchen does that my little skunkworks project
doesn't.

Yeah, that's where test-kitchen really rocks.

I guess the answer is to bind some features to a special
full-stack role that we would only use for testing/development, though
that wouldn't work for apps that require multiple hosts (Mongo springs
to mind).

For testing a complex setup like this I would go for a multi-VM Vagrantfile
and test the real scenario with multiple VMs - otherwise you are testing
something that doesn't reflect the real situation.

Still it would be useful to have something like a special role for
development in addition.

Your documentation and the test-writing docs on the Chefspec and
Fauxhai sites are pretty comprehensive, it seems (or have I just been
reading too much poorly-documented code lately? :wink: ). I'm not sure
how many people are interested in converging Vagrant VMs against a
Chef server endpoint (i.e. using the chef-client provisioner instead
of chef-solo), but jtimberman wrote a couple of pretty good blog posts
on the subject (starting with

http://jtimberman.housepub.org/blog/2012/03/18/multivm-vagrantfile-for-chef/

, again, HI GOOGLE! ) and I could perhaps write something up as well
if the community desires it and existing resources are found wanting.

I think that's useful and will probably do that for my current project. I'll
probably be using knife-server to painlessly set up dedicated Chef
servers for my test runs on the fly.

Thanks for the pointer to the blog post, found some valuable tips there!

Cheers, Torben

On Mon, Oct 8, 2012 at 4:13 PM, Torben Knerr ukio@gmx.de wrote:

Vagrant works well for me here, but the roundtrip times are naturally
longer. I wanted to play with LXC containers instead of using VirtualBox
(ideally re-using my Vagrantfile), but haven't gotten around to testing this yet.

Anyone here on the list using LXC containers for testing Chef runs already?

I have yet to do so myself. But I saw the creator of Toft [1] give a
talk and demonstration of it and it looked really nice.

[1] https://github.com/exceedhl/toft - Toft: a library for testing
infrastructure code such as Chef and shell scripts, with Cucumber and
LXC, on Linux machines

For testing a complex setup like this I would go for a multi-VM Vagrantfile
and test the real scenario with multiple VMs - otherwise you are testing
something that doesn't reflect the real situation.

We do that ...

Still it would be useful to have something like a special role for
development in addition.

except we use a "chef_dev" environment instead of a role
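
(Roughly like this, as a sketch -- a plain Chef environment definition;
everything but the chef_dev name is made up:)

    # environments/chef_dev.rb
    name 'chef_dev'
    description 'Scratch environment for cookbook testing and development'
    # let dev nodes float to whatever cookbook versions are under test
    cookbook 'myapp', '>= 0.0.0'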

--
Cheers,

Peter Donald

On 08.10.2012 at 07:27, "Peter Donald" peter@realityforge.org wrote:


I have yet to do so myself. But I saw the creator of Toft [1] give a
talk and demonstration of it and it looked really nice.

[1] https://github.com/exceedhl/toft - Toft: a library for testing
infrastructure code such as Chef and shell scripts, with Cucumber and
LXC, on Linux machines

Nice, looks promising. Now if that also ran on travis-ci, that would
be super-neat!

Anyone tried that already?

I might investigate this, as one problem we've run into with Vagrant
testing is that concurrent VM testing doesn't work reliably, so we
have to test VMs serially (on a physical box). The I/O is also not
that great, so Chef runs are taking several minutes (up to 20!) to
finish. Some of this might be Vagrant's fault, but I'm willing to bet
it's mostly VirtualBox.

Unfortunately, I don't really see anything in the change logs for
either project that addresses my issues, which means it's either
something tunable or nobody's figured this out yet.

Maybe I should just bite the bullet and hop onto the plugin version of
Vagrant, get it working with the vSphere API and use that instead.
LXC doesn't help me test on the Windows platforms, after all...

You can consider using SSDs on the Vagrant hosts, along with
http://scache.sourceforge.net/ for caching network I/O. We are using
these two for our CI server, and the results have been decent.

Thanks for the tip. I haven't instrumented this issue yet on the
Vagrant host but I doubt the problem is related to disk I/O on the
host. It seems like it's related to network I/O on the guest,
actually, as I've seen a similar problem on other hosts (including my
SSD-equipped MacBook ... though I would hope that an iSCSI LUN spread
across some indeterminate number of drive spindles would at least
equal the sequential/random I/O rates of a single SSD!).

Still, I'm not ruling anything out -- I'll add some monitors and take a look.

On Tue, Oct 9, 2012 at 10:56 AM, Ranjib Dey ranjibd@thoughtworks.com wrote:

You can consider using SSDs on the Vagrant hosts, along with
http://scache.sourceforge.net/ for caching network I/O. We are using
these two for our CI server, and the results have been decent.
