Compile vs execution phase


#1

I’ve been working on learning how to write cookbooks using chef-solo. I
have created two cookbooks. The first evaluates the physical array
controllers and then configures, partitions, LUKS-encrypts, runs mkfs on,
and mounts the drives to /data/1, /data/2, /data/3, etc… This works really
well on a new machine and puts the hardware in the state we are looking for,
no matter how many drives there are or what their sizes are.

I also have a simple cassandra cookbook that installs an fpm-packaged
version of Cassandra, renders a template of the cassandra.yaml and creates
the data directories depending on which drives are mounted at /data/1,
/data/2, /data/3, etc…
It is done something like this:

#attributes file
default[:cassandra][:data_dirs] = []
Dir.foreach("/data") do |drive|
  next if drive.match(/\A\.\.?\z/) # skip "." and ".."
  if system("/bin/mountpoint /data/#{drive} > /dev/null")
    default[:cassandra][:data_dirs].push("/data/#{drive}/cassandra/data")
  end
end

#recipe
node[:cassandra][:data_dirs].each do |data_dir|
  directory data_dir do
    owner "cassandra"
    group "cassandra"
    mode "0750"
    action :create
    recursive true
  end
end

Of course this is getting evaluated at compile time, when nothing is in
/data. The disks cookbook runs and populates /data/#; however, the
cassandra cookbook won’t work unless there is a way to regenerate the
recipe after the disk cookbook has completed. Running chef-solo a second
time gets things into a proper state.

Is there a better way to create the directories? I think I can use the
lazy attribute assignment to make my template work, but the directory
creation is giving me hell.

Thanks for any help

Bill Warner


#2

I think this is worse than you think it is, because all attributes files
are parsed before any recipes are evaluated, so your trouble here goes
beyond compile/converge ordering issues.

You may want to move both your attribute-default-setting code and your
directory construction into an LWRP. That will mean your attributes are
only set correctly in converge mode after the LWRP has run, however. If
you have more code that expects to be able to consume the
node[:cassandra][:data_dirs] attribute in compile mode, then you’ll
probably need to modify or wrap your cassandra cookbook to do its work
at compile time and then update the attributes after it has done its work.
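A rough sketch of the LWRP approach Lamont describes might look like this. The resource name, attribute, and file layout are all assumptions, not anything from the actual cassandra cookbook, and it is untested:

```ruby
# cookbooks/cassandra/resources/data_dirs.rb  (hypothetical)
actions :create
default_action :create
attribute :base, :kind_of => String, :default => '/data'

# cookbooks/cassandra/providers/data_dirs.rb  (hypothetical)
action :create do
  # runs in converge mode, so the mounts from the disks cookbook exist
  dirs = Dir.glob("#{new_resource.base}/*")
            .select { |d| system("/bin/mountpoint -q #{d}") }
            .map { |d| "#{d}/cassandra/data" }

  dirs.each do |path|
    directory path do
      owner 'cassandra'
      group 'cassandra'
      mode '0750'
      recursive true
    end
  end

  # record the result so later converge-phase consumers (e.g. a lazy
  # template variable) can read it off the node
  node.set[:cassandra][:data_dirs] = dirs
end
```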

On 1/3/14 10:46 PM, Bill Warner wrote:



#3

I was thinking of extending the directory resource with an LWRP that can
take a wildcard. I’m not sure it will do what I want, but if I could have
an LWRP that creates directories based on a path of
/data/*/cassandra/data, evaluating at execution time and creating all the
data dirs, it may help.

I think the idea of attributes being the desired state of the node, with
the execution phase getting it to that state, is something I still need
to figure out how to work with.

Is there something I’m missing about the chef architecture? One recipe
that depends on another recipe executing before its attributes can even
be inferred doesn’t seem to me like an obscure use case. Is a multi-pass
convergence something I should expect as necessary?

Thanks again

On Sun, Jan 5, 2014 at 1:46 PM, Lamont Granquist lamont@opscode.com wrote:

Bill Warner


#4

Hi Bill,

I’m not going to advocate this for your particular use case, but you can
re-load attribute files. So you COULD create a third recipe that includes
the disk recipe first, re-loads the attributes file for the Cassandra
cookbook, then includes the Cassandra recipe. This is just hypothetical,
though, to give you an idea of how things can work.
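Hypothetically, that third recipe might look something like this (the recipe names are made up; `node.from_file` with `run_context.resolve_attribute` is the documented way to re-load an attributes file from a recipe):

```ruby
# wrapper recipe (hypothetical names)
include_recipe 'disks::default'

# re-evaluate the cassandra cookbook's default attributes file; note that
# include_recipe runs at compile time, so this only helps if the disk
# recipe does its work at compile time as well
node.from_file(run_context.resolve_attribute('cassandra', 'default'))

include_recipe 'cassandra::default'
```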

I think the compile/converge terminology is misleading and leads to more
confusion than anything else. Your recipes are just Ruby code that gets
evaluated, and everything you write in a recipe runs at that time. It
just so happens that some of what you write creates resource objects
that are put into a queue. After all the recipes are evaluated (run as
Ruby code), the queued objects get pulled up and their action method is
called on them.
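A minimal recipe makes the ordering visible (the output labels are mine, not from any real cookbook):

```ruby
puts 'A: bare Ruby in a recipe runs immediately, while the recipe is evaluated'

ruby_block 'converge-time code' do
  block do
    puts 'C: queued resource actions run afterwards, during convergence'
  end
end

puts 'B: still evaluation time, so the run prints A, B, then C'
```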

-Greg

On Mon, Jan 6, 2014 at 2:50 PM, Bill Warner bill.warner@gmail.com wrote:



#5

On 1/5/14 6:50 PM, Bill Warner wrote:

Attributes should really /be/ the desired state of the node, but you’re
making those attributes dynamic and dependent on prior resources getting
executed. I’m not quite sure why you need this layer of indirection,
since it seems like it’d be better to have the inputs that feed the
cassandra cookbook and set up the drives in the /data subdirectory also
feed the creation of the subdirectories?


#6

I was trying to keep the disks cookbook somewhat generic, as we have
other clusters that will use the same base and then set up Hadoop, Solr
and other data directories for other environments/clusters. If I merge
the disk setup into the Cassandra cookbook then I’d have to do the same
for all the others, and that seems like a lot of code duplication.

I’ll have to chew on this for a bit and see if I understand where you’re
coming from.

On Sun, Jan 5, 2014 at 10:05 PM, Lamont Granquist lamont@opscode.com wrote:


Bill Warner


#7

Hello

I was wondering how most of you manage your chef-repo.

Currently we use one chef repository with all our cookbooks in it, plus some community cookbooks added as submodules.
As we do not edit the community cookbooks directly, but rather wrap them, this is not much of an issue. But now we’d like to open-source some of our own cookbooks, which probably means splitting them out of our chef repository and adding them as submodules as well.

Some problems I foresee with this, are:

  • it’s not good practice to edit a submodule directly inside the repo (or is it?)
  • it’s not practical to not have all your cookbooks in one place (i.e. vagrant expects 1 cookbook path)
  • our chef developers have to remember to constantly update all submodules instead of just one git pull (or we need to complicate git usage with git aliases such as git config alias.pullall '!git pull && git submodule update --init --recursive')

Nothing big, nothing that’s not manageable. It just feels weird and as if I’m missing something.

How do you guys handle this?

Kr.
S


#8

Hey Stephen,

We have been using a tool called Berkshelf. There is lots written about it
online, but in summary: it is a dependency management tool for cookbooks.
Each cookbook lives in a single git repository and there is no overall
chef repo. Dependencies are specified in the metadata.rb (as usual).
Berkshelf pulls a cookbook’s dependencies down from either the Opscode
community site or a chef server acting as a cookbook repository.
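A minimal Berksfile from that era might look like this (Berkshelf 2.x syntax; treat it as a sketch):

```ruby
# Berksfile, alongside metadata.rb in the cookbook's own repo
site :opscode   # resolve community cookbooks from the Opscode site

metadata        # take this cookbook's dependencies from metadata.rb
```

Running `berks install` then resolves and fetches the dependency graph into a local store.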

It may not be straightforward migrating from an existing chef-repo to the
Berkshelf way of doing things, but there is some info online about that
migration too.

http://berkshelf.com/

http://jtimberman.housepub.org/blog/2012/11/19/chef-repository-berkshelf-conversion/

Thanks,

Brian

On 06/01/2014 07:39, "Steven De Coeyer" steven@banteng.be wrote:



#9

If you merge the disk setup and create that code duplication, can you
remove that duplication with compile-time definitions, or LWRPs, rather
than breaking it up into different cookbooks?

On 1/5/14 10:43 PM, Bill Warner wrote:



#10

This problem comes up almost every day for me.
What needs to happen is… recipe A needs to converge before recipe B
compiles.

You can manually manipulate run_contexts if you’re brave enough to go
mucking around in Chef internals…

or…

There is a library cookbook called “now” on the community site that
provides a hack for you.
https://github.com/someara/now-cookbook

Check out the integration tests for a usage example:
https://github.com/someara/now-cookbook/blob/master/test/fixtures/cookbooks/now_test/recipes/default.rb

-s

On Sat, Jan 4, 2014 at 12:46 AM, Bill Warner bill.warner@gmail.com wrote:



#11

I haven’t been able to get back to this yet, but I was thinking I may be
able to have one resource subscribe to the creation resources from the
other cookbook, so that when they do something it gets kicked off in the
first. That would negate the need to predefine the loops. I may play with
this some tonight or maybe tomorrow.

On Mon, Jan 6, 2014 at 8:03 AM, Sean OMeara someara@opscode.com wrote:

Bill Warner


#12

I started playing with subscribes and got something functional. As a
mock-up I was able to use:

#second recipe in run list
node[:directories].each do |dir|
  directory "#{dir}/cassandra/data" do
    action :nothing
    recursive true
    subscribes :create, "directory[#{dir}]"
  end
end

where node[:directories] was equal to [ "/data/1", "/data/2", "/data/3" ]
and was set, with the directories created, in another recipe:

#first in run list
#attributes file, to simulate the physical disks being created:
default[:directories] = [ "/data/1", "/data/2", "/data/3" ]

#recipe
node[:directories].each do |dir|
  directory dir do
    action :create
    recursive true
  end
end

I still have to work on the template code and figure out how I can get
everything to it, but I am getting closer. I will try to implement this
in my real cookbooks tomorrow.

On Mon, Jan 6, 2014 at 5:24 PM, Bill Warner bill.warner@gmail.com wrote:


Bill Warner


#13

In case anyone was on the edge of their seat waiting to see if this all
worked:

I was able to add lazy evaluation to the template creation for the
cassandra.yaml, and it will add data directories as they exist. A nice
side effect: slap in a new disk, my disks cookbook
partitions/encrypts/mkfs’s/mounts it, and the cassandra cookbook adds the
mount to the cassandra.yaml and restarts the service.
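For reference, the lazy wiring could look roughly like this. The paths and variable names are my assumptions, not Bill’s actual code; `lazy` defers reading the attribute until converge time:

```ruby
template '/etc/cassandra/cassandra.yaml' do
  source 'cassandra.yaml.erb'
  owner 'cassandra'
  group 'cassandra'
  # the attribute is read at converge time, after the disks cookbook has
  # mounted everything and the data_dirs list has been populated
  variables lazy { { :data_dirs => node[:cassandra][:data_dirs] } }
  notifies :restart, 'service[cassandra]'
end
```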

I also used subscribes, as I noted in the example above, on the
mount[/data/#] resources to get everything working. This has kept my
disks cookbook as something that can be included and used for other
applications, mainly Hadoop, which I will be working on next.

Thanks for the help,
-BillWa

On Mon, Jan 6, 2014 at 9:55 PM, Bill Warner bill.warner@gmail.com wrote:


Bill Warner