Chef-client memory usage

On 23 December 2011 00:52, Chris grocerylist@gmail.com wrote:

I agree, but my ops team does not

Make it your personal mission to use Chef to automate them into new employment?

maybe run the client via cron instead of a daemon so the memory use is only during client runs.

On Dec 8, 2011, at 1:43 PM, Chris wrote:

My company is pretty late to the Chef party, only getting things started about 6 months ago (after a year of asking for it), but now that we have things up and running we've run into a bit of a problem. The client consumes a fairly large amount of memory, between 175 and 250 MB per server. This has caused a lot of concern from the Operations team, since that amount * N VMs can get quite expensive. I've been doing some research into this and noticed that the amount of resident memory can depend on how many recipes are loaded on a node, and the Opscode docs seem to confirm this. Right now these cookbooks are loaded into a single base role and added to each node for ease of use. They're all OS-level recipes to manage host files, resolv.conf, etc. There are 20 total. We also have application roles that can add another 3 or 4 recipes.
I've hacked around a bit on the Samba cookbook and removed all the code used to create users, which has lowered the memory footprint down to a steady 192 MB, but I fear this won't be enough to convince my ops team to keep Chef. They want to dump it and go back to using shell and Perl scripts for everything.

My question is, does anyone have any tips for reducing the memory usage? I'd like to be able to keep Chef around.

Thanks!
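For anyone reproducing the numbers above, the resident footprint of a running client daemon can be checked with ps (a minimal sketch; "chef-client" as the process name assumes a standard install):

    # Show the resident set size (RSS, in kilobytes) of any chef-client
    # processes currently running on the node.
    ps -C chef-client -o pid,rss,vsz,etime,args

    # Rough conversion to megabytes for a quick read:
    ps -C chef-client -o rss= | awk '{printf "%.0f MB\n", $1/1024}'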

I suggested that, but it got nixed because they were afraid the spikes in
usage would cause problems.

On Thu, Dec 8, 2011 at 1:46 PM, Alex Soto apsoto@gmail.com wrote:

maybe run the client via cron instead of a daemon so the memory use is
only during client runs.
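For reference, the cron approach might look something like this (a sketch, not from the thread; the 30-minute interval, the random splay, and the paths are all assumptions):

    SHELL=/bin/bash
    # /etc/cron.d/chef-client -- converge every 30 minutes instead of
    # leaving the daemon resident, so memory is only used during runs.
    # The random sleep ("splay") spreads runs out so every node doesn't
    # hit the Chef server at the same moment. Note the escaped % -- cron
    # treats a bare % as a line separator.
    */30 * * * * root sleep $((RANDOM \% 600)) && /usr/bin/chef-client >> /var/log/chef-client.log 2>&1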


You could also schedule chef-client runs only when they're necessary, e.g. by designing hooks for controlled deployments.

-Erik
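As an illustration of what such a hook could be (my sketch; the script name is hypothetical), the deploy process converges the node once as its final step, so nothing stays resident in between:

    #!/bin/sh
    # deploy-hook.sh -- hypothetical post-deploy hook. Without -d
    # (daemonize) or -i (interval), chef-client performs a single run
    # and exits, releasing its memory when it finishes.
    set -e
    /usr/bin/chef-client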


I think that might end up being the plan. We deploy our application with
some code I wrote that pulls stored configuration from a role; each deploy
run calls chef-client to make sure the role is up to date. The thing
that worries me is that some applications can go months without any
updates, which means those VMs could be really out of date the next time we
touch them. The ops team has problems patching Linux systems, so while a
cookbook change works on one set of VMs it might not work on something older,
etc.
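One way to hedge against that drift (my sketch, not something proposed in the thread) is a low-frequency scheduled run alongside the deploy hook, so even untouched VMs converge now and then:

    # /etc/cron.d/chef-client-weekly -- hypothetical safety net: converge
    # every Sunday at 04:30 even if nothing was deployed, so nodes never
    # go months between chef-client runs.
    30 4 * * 0 root /usr/bin/chef-client >> /var/log/chef-client.log 2>&1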


Brainstorming session ahead.

You can try

  • forcing Ruby to GC more frequently by patching the chef-client in key
    places

  • running the chef-client with ulimit (see the sketch after this list)

- Jay Feldblum
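A rough illustration of both ideas combined, using environment-variable GC tuning rather than patching (a sketch; the cap and the RUBY_GC_MALLOC_LIMIT value are assumptions to be tuned per site, and the GC variables are honored by MRI 1.9.3 and REE but not plain 1.8.7):

    #!/bin/sh
    # chef-client-wrapper.sh -- hypothetical wrapper around chef-client.
    # Cap the virtual address space (in KB); a runaway run fails with
    # allocation errors instead of eating the VM's RAM.
    ulimit -v 393216    # ~384 MB -- tune to your nodes

    # Make Ruby's GC run more often: a lower malloc limit trades a
    # little CPU for a smaller resident footprint.
    RUBY_GC_MALLOC_LIMIT=4000000
    export RUBY_GC_MALLOC_LIMIT

    exec /usr/bin/chef-client "$@"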


#2 (ulimit) is a quick and easy solution that the Ops team would probably go for.
Good thinking!
