Passing information between test kitchen instances

#1

I have a situation that I would like some tips on. I’m developing some cookbooks that configure a software stack spanning more than one node at a time. We have our test kitchen configuration pointing at AWS, and we can spin up test instances there on demand. Here’s my scenario:

  • I spin up test kitchen instance 1 with recipe A
  • I spin up test kitchen instance 2 with recipe B
  • Recipe B on instance 2 needs information about instance 1, specifically its IP address.

For now, I set the IP address of node 1 as an attribute that I enter manually, by looking at the generated YAML state file in the .kitchen folder once kitchen has created my instances. Ideally, I’d love to be able to set this value automatically somehow.
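
For reference, that generated state file is plain YAML and looks roughly like this (keys vary by driver and the values here are made up, but hostname is the field I copy):

server_id: i-0abc123de456f
hostname: 10.0.1.23
last_action: create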

I’ve been searching online and found various examples of using Chef search to identify the target node and get information that way, but this doesn’t seem to work with test kitchen instances, only deployed nodes. The only other thing that occurred to me was to figure out how to parse the kitchen YAML file somehow, but this file doesn’t seem to be deployed to the servers by test kitchen.

Any ideas are highly appreciated. Thanks!
-Peter

#2

This falls under the general category of “multi-node testing,” which is currently not supported. It’s a topic with a long and storied history in the Chef community, and everyone wants it, but we haven’t gotten there yet. Sorry. Possible workarounds are the older kitchen-nodes plugin, which provides some limited sync capabilities, or kitchen-terraform, which lets you run both nodes in one suite.
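
If you do try kitchen-nodes, the rough idea is that it swaps in a provisioner that writes node data for each converged suite, so Chef search under chef-zero can find the other instances. Sketching from memory (check the plugin’s README for the exact provisioner name and options), the .kitchen.yml change is something like:

provisioner:
  name: nodes   # from the kitchen-nodes gem; otherwise behaves like chef_zero

You would still have to converge the instance you want to find before converging the one that searches for it.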

#3

First off, thank you very much for the super quick reply! I will keep that in mind for future kitchen testing.

#4

Peter,

I have the gluster cookbook, and Andrew Repton added multi-node kitchen
testing a little while ago, but it uses Vagrant, where it’s easy to control IPs.
See https://github.com/shortdudey123/chef-gluster/blob/master/.kitchen.yml
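
The relevant bit there is giving each suite a static private-network address via the Vagrant driver; something like this (suite name and IP made up here, see the linked file for the real layout):

suites:
  - name: server1
    driver:
      network:
        - ["private_network", {ip: "192.168.33.11"}]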

Not sure how much this helps you with the AWS driver as opposed to the
Vagrant driver.

-Grant

#5

OK, this took me way longer than I wanted, but why not do something like:

suites:
  - name: node1
    run_list:
      - recipe[app_vault::default]
    attributes:
      app_vault:
        node2: <%= '"' + YAML::load_file('.kitchen/node2-rhel-7.yml')['hostname'] + '"' %>
        node3: <%= '"' + YAML::load_file('.kitchen/node3-rhel-7.yml')['hostname'] + '"' %>
  - name: node2
    run_list:
      - recipe[app_vault::default]
    attributes:
      app_vault:
        node1: <%= '"' + YAML::load_file('.kitchen/node1-rhel-7.yml')['hostname'] + '"' %>
        node3: <%= '"' + YAML::load_file('.kitchen/node3-rhel-7.yml')['hostname'] + '"' %>
  - name: node3
    run_list:
      - recipe[app_vault::default]
    attributes:
      app_vault:
        node1: <%= '"' + YAML::load_file('.kitchen/node1-rhel-7.yml')['hostname'] + '"' %>
        node2: <%= '"' + YAML::load_file('.kitchen/node2-rhel-7.yml')['hostname'] + '"' %>

This would make the hostname/IP of the other nodes available as attributes. You would have to run a kitchen create first, so that all of the instances (and their .kitchen state files) exist, and then a kitchen converge. But that solves the dynamic IP address problem. If the instances need to share other information, you should probably already have some service discovery system in place and use that (pointing at a dev instance of it would probably be best).

#6

Thanks again to everyone who replied. I ended up using the solution from @freimer above, slightly modified with an if/else that tests for the existence of the kitchen auto-generated YAML file (so the ERB doesn’t fail before the other instances have been created):

node2: <%= if File.exist?('.kitchen/node2-rhel-7.yml')
             '"' + YAML::load_file('.kitchen/node2-rhel-7.yml')['hostname'] + '"'
           else
             '""'
           end %>