Sudden Increase in Merb RAM Use

Hi,

merb RAM usage (rss) appears to stay flat for a long time and then suddenly shoots up.

After the RAM usage has spiked it looks like this:

$ ps vax --sort=-rss | head
  PID TTY      STAT   TIME    MAJFL  TRS     DRS     RSS %MEM COMMAND
29513 ?        R    2192:16 1141997    2 2549253 2352616 79.9 merb : worker (port 4000)
 9882 ?        S      39:31   31673    2  172461  111188  3.7 ruby /usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ?        S      20:39   55304    2  132805   58284  1.9 merb : worker (port 4001)
29512 ?        S     811:15   11118    2  104457   29028  0.9 merb : spawner (ports 4000)
10213 ?        S       3:37  119973    2  301561   18080  0.6 /usr/bin/ruby1.8 /usr/bin/stompserver -C /etc/stompserver.conf
15666 ?        Sl      0:15    1473 1617  111930   10768  0.3 /usr/lib/erlang/erts-5.7.2/bin/beam.smp -Bd -K true -- -root /usr/lib/erlang -progname erl -- -home /var/lib/couchdb -noshell -noinput -smp auto -sasl errlog_type error -pa /usr/lib/couchdb/erlang/lib/couch-0.10.0/ebin /usr/lib/couchdb/erlang/lib/mochiweb-r97/ebin /usr/lib/couchdb/erlang/lib/ibrowse-1.5.2/ebin /usr/lib/couchdb/erlang/lib/erlang-oauth/ebin -eval application:load(ibrowse) -eval application:load(oauth) -eval application:load(crypto) -eval application:load(couch) -eval crypto:start() -eval ssl:start() -eval ibrowse:start() -eval couch_server:start([ "/etc/couchdb/default.ini", "/etc/couchdb/local.ini", "/etc/couchdb/default.ini", "/etc/couchdb/local.ini"]), receive done -> done end. -pidfile /var/run/couchdb/couchdb.pid -heart

We bumped the VM that is running the Chef server up from 3 GB of RAM to 5 GB.

After the reboot, merb is only using about 185 MiB RSS vs. 2.3 GiB.

$ ps vax --sort=-rss | head
  PID TTY      STAT   TIME    MAJFL  TRS     DRS     RSS %MEM COMMAND
29513 ?        R    2192:16 1141997    2 2549253 2352616 79.9 merb : worker (port 4000)
 9882 ?        S      39:31   31673    2  172461  111188  3.7 ruby /usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ?        S      20:39   55304    2  132805   58284  1.9 merb : worker (port 4001)
29512 ?        S     811:15   11118    2  104457   29028  0.9 merb : spawner (ports 4000)
10213 ?        S       3:37  119973    2  301561   18080  0.6 /usr/bin/ruby1.8 /usr/bin/stompserver -C /etc/stompserver.conf
15666 ?        Sl      0:15    1473 1617  111930   10768  0.3 /usr/lib/erlang/erts-5.7.2/bin/beam.smp -Bd -K true -- -root /usr/lib/erlang -progname erl -- -home /var/lib/couchdb -noshell -noinput -smp auto -sasl errlog_type error -pa /usr/lib/couchdb/erlang/lib/couch-0.10.0/ebin /usr/lib/couchdb/erlang/lib/mochiweb-r97/ebin /usr/lib/couchdb/erlang/lib/ibrowse-1.5.2/ebin /usr/lib/couchdb/erlang/lib/erlang-oauth/ebin -eval application:load(ibrowse) -eval application:load(oauth) -eval application:load(crypto) -eval application:load(couch) -eval crypto:start() -eval ssl:start() -eval ibrowse:start() -eval couch_server:start([ "/etc/couchdb/default.ini", "/etc/couchdb/local.ini", "/etc/couchdb/default.ini", "/etc/couchdb/local.ini"]), receive done -> done end. -pidfile /var/run/couchdb/couchdb.pid -heart

We keep charts of RAM use over time via Nagios/RRD; they show that the increase in RAM usage happens suddenly, not gradually, and that it repeats over time.

Do other people see this issue?

Thanks,
Ken

What's the frequency of the spike? Does it correlate to a particular
set of nodes checking in?

Just to be sure, this is on 0.7.x?

Sent from my iPhone

On Feb 3, 2010, at 8:10 AM, Kenneth Stailey kstailey@yahoo.com wrote:


Particulars:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 9.10
Release: 9.10
Codename: karmic

$ uname -m
x86_64

$ COLUMNS=40 dpkg -l | grep chef
ii chef 0.7.16-1 configuration management system written in R
ii chef-indexer 0.7.16-1 Creates search indexes of Chef node attribut
ii chef-server 0.7.16-1 Merb application providing centralized manag
ii chef-server-sl 0.7.16-1 Merb app slice providing centralized managem
ii libchef-ruby 0.7.16-1 Ruby libraries for Chef configuration manage
ii libchef-ruby1. 0.7.16-1 Ruby 1.8 libraries for Chef configuration ma

Looking closer, the increase in RAM usage is not sudden. I was looking at a chart of "free memory" (blush), so the "spikes" were really reboots causing sudden increases in free RAM.

Before the reboot, however, the merb worker was using an RSS of ~2.3 GiB:

$ ps vax --sort=-rss |head
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
29513 ? R 2192:16 1141997 2 2549253 2352616 79.9 merb : worker (port 4000)
9882 ? S 39:31 31673 2 172461 111188 3.7 ruby /usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ? S 20:39 55304 2 132805 58284 1.9 merb : worker (port 4001)
29512 ? S 811:15 11118 2 104457 29028 0.9 merb : spawner (ports 4000)

Since the reboot, the RSS of the restarted merb worker has climbed steadily from about 185 MiB to 320 MiB:

$ ps vax --sort=-rss |head -n 5
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
1728 ? S 105:02 1 2 400393 322248 6.6 merb : worker (port 4000)
1613 ? S 0:12 0 2 128161 78968 1.6 /usr/bin/ruby1.8 /usr/bin/stompserver -C /etc/stompserver.conf
1729 ? S 1:08 0 2 109313 36864 0.7 merb : worker (port 4001)
1727 ? S 41:55 3 2 105145 32532 0.6 merb : spawner (ports 4000)

Does it have a memory leak or is it merely a resource hog?
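To help tell a leak from a mere resource hog, one option is to sample the worker's RSS at intervals and watch the trend: monotonic growth under steady load suggests a leak, while growth that plateaus is more likely caching or heap fragmentation. A minimal sketch (for illustration it samples the current shell via `$$`; in practice you would substitute the merb worker's PID, e.g. 1728 above, and stretch the sample count and interval to cover hours):

```shell
# Sketch: periodically sample a process's resident set size (RSS).
# $$ (this shell) stands in for the merb worker's PID here.
LOG=$(mktemp)
PID=$$
for i in 1 2 3; do
    # one "epoch-seconds rss-in-KiB" pair per line
    echo "$(date +%s) $(ps -o rss= -p "$PID")" >> "$LOG"
    sleep 1
done
cat "$LOG"
```

Plotting the resulting pairs, or feeding them into the existing Nagios/RRD setup, shows whether RSS keeps climbing between client check-ins.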

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Sudden Increase in Merb RAM Use
To: "chef@lists.opscode.com" chef@lists.opscode.com
Cc: "chef@lists.opscode.com" chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 11:16 AM

Well, it certainly smells like a leak. I'm not seeing this behavior, though, and we haven't had any other reports of this kind of growth with the chef server. I'm also only running 32-bit ruby, which may be a factor.

My next request will be to line up the growth in your graphs with the
timing of particular servers checking in.

If nothing obvious comes from that, it'll be time to start talking
about breaking out a debugger and seeing what's what.

Can you send along the version of ruby you're running as well?

Best,
Adam

On Wed, Feb 3, 2010 at 1:58 PM, Kenneth Stailey kstailey@yahoo.com wrote:


--
Opscode, Inc.
Adam Jacob, CTO
T: (206) 508-7449 E: adam@opscode.com

Hi Adam,

$ dpkg -l | grep "ii ruby" | awk '{print $2 " " $3 }'
ruby 4.2
ruby1.8 1.8.7.174-1ubuntu1
rubygems 1.3.5-1ubuntu2
rubygems1.8 1.3.5-1ubuntu2

$ ruby --version
ruby 1.8.7 (2009-06-12 patchlevel 174) [x86_64-linux]

Actually, looking at the charts shows a gradual decline in free RAM for a long while (hours and hours) and then a sudden drop.

There are 107 chef clients, which check in twice an hour but not all at once, since they use splayed cron jobs. The splay is done by taking a modulo of the last octet of each host's IP address, not by any Chef splaying mechanism. This yields an average of 2-3 clients a minute checking in nearly simultaneously, maybe a few seconds apart.
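The IP-octet splay described above can be sketched as a small script that emits a per-host cron entry. The 30-minute window, the /etc/cron.d field layout, and the example address 192.168.1.37 are all illustrative assumptions, not the actual setup:

```shell
# Derive two cron minutes, 30 minutes apart, from the last octet of
# this host's IP address so clients don't all check in at once.
ip=192.168.1.37                  # example; use the host's real address
last_octet=${ip##*.}             # strip everything through the last dot
splay=$(( last_octet % 30 ))     # offset within a 30-minute window
echo "${splay},$(( splay + 30 )) * * * * root chef-client"
# -> 7,37 * * * * root chef-client
```

Because the offset is a pure function of the address, each host lands on the same two minutes every hour, which matches the steady 2-3 check-ins per minute described above.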

The Chef server is running in a VM that is not hosting any other major applications; it has sshd, etc.

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Re: Re: Sudden Increase in Merb RAM Use
To: chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 5:06 PM

Next step here is to get this running under something that can give us
some memory profiling - may I recommend Evan Weaver's bleak_house?

Is anyone else seeing this kind of growth on 64bit?

It's "normal" to see ruby use roughly double the memory on 64-bit
systems, at least in my experience. We tend to see roughly 50-60 MB
resident across the board on 32-bit, and it's pretty stable... but we
don't have any production systems on 9.10.
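The rough doubling on 64-bit comes largely from pointers (and Ruby 1.8 object slots, which are mostly pointers) being twice as wide. As a quick sanity check, a host's userland word size can be read with:

```shell
# Word size of the current userland: 64 on an x86_64 system like the
# one in this thread, 32 on i686.
getconf LONG_BIT
```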

Adam

On Wed, Feb 3, 2010 at 2:25 PM, Kenneth Stailey kstailey@yahoo.com wrote:

Hi Adam,

$ dpkg -l | grep "ii ruby" | awk '{print $2 " " $3 }'
ruby 4.2
ruby1.8 1.8.7.174-1ubuntu1
rubygems 1.3.5-1ubuntu2
rubygems1.8 1.3.5-1ubuntu2

$ ruby --version
ruby 1.8.7 (2009-06-12 patchlevel 174) [x86_64-linux]

Actually looking at the charts shows a gradual decline in free RAM for a while (hours and hours) and then a sudden decline.

There are 107 chef clients which go off twice an hour but not all at once since they use splayed cron jobs. The splay is done by using a modulo of the last octet of the IP address not by any Chef splaying mechanism. This yields an average of 2-3 a minute that check in nearly simultaneously, maybe a few seconds apart.

The Chef server is running in a VM which is not hosting any other major applications. It has sshd etc.

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Re: Re: Sudden Increase in Merb RAM Use
To: chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 5:06 PM
Well, it certainly smells like a leak

  • I'm not seeing this behavior,
    though, and we haven't had any other reports of this kind
    of growth
    with the chef server. I'm also only running 32bit
    ruby, which may be
    something.

My next request will be to line up the growth in your
graphs with the
timing of particular servers checking in.

If nothing obvious comes from that, it'll be time to start
talking
about breaking out a debugger and seeing what's what.

Can you send the version of ruby your running along as
well?

Best,
Adam

On Wed, Feb 3, 2010 at 1:58 PM, Kenneth Stailey kstailey@yahoo.com
wrote:

Particulars:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 9.10
Release: 9.10
Codename: karmic

$ uname -m
x86_64

$ COLUMNS=40 dpkg -l | grep chef
ii chef 0.7.16-1
configuration management system written in R
ii chef-indexer 0.7.16-1 Creates search
indexes of Chef node attribut
ii chef-server 0.7.16-1 Merb
application providing centralized manag
ii chef-server-sl 0.7.16-1 Merb app slice
providing centralized managem
ii libchef-ruby 0.7.16-1 Ruby libraries
for Chef configuration manage
ii libchef-ruby1. 0.7.16-1 Ruby 1.8
libraries for Chef configuration ma

Looking closer the increase in RAM usage is not
sudden. I was looking at a chart of "free memory" blush
so the "spikes" were really reboots resulting in sudden
increase of RAM.

However, the merb worker was using an rss of ~2.3 GiB

$ ps vax --sort=-rss |head
PID TTY STAT TIME MAJFL TRS DRS
RSS %MEM COMMAND
29513 ? R 2192:16 1141997 2 2549253
2352616 79.9 merb : worker (port 4000)
9882 ? S 39:31 31673 2
172461 111188 3.7 ruby /usr/bin/chef-indexer -d -c
/etc/chef/indexer.rb
29514 ? S 20:39 55304 2
132805 58284 1.9 merb : worker (port 4001)
29512 ? S 811:15 11118 2
104457 29028 0.9 merb : spawner (ports 4000)

Since the reboot the rss of the restarted merb worker
has climbed steadily from about 185 MiB to 320 MiB

$ ps vax --sort=-rss |head -n 5
PID TTY STAT TIME MAJFL TRS DRS
RSS %MEM COMMAND
1728 ? S 105:02 1 2
400393 322248 6.6 merb : worker (port 4000)
1613 ? S 0:12 0 2
128161 78968 1.6 /usr/bin/ruby1.8 /usr/bin/stompserver -C
/etc/stompserver.conf
1729 ? S 1:08 0 2
109313 36864 0.7 merb : worker (port 4001)
1727 ? S 41:55 3 2
105145 32532 0.6 merb : spawner (ports 4000)

Does it have a memory leak or is it merely a resource
hog?

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com
wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Sudden Increase in Merb RAM
Use
To: "chef@lists.opscode.com"
chef@lists.opscode.com
Cc: "chef@lists.opscode.com"
chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 11:16 AM
What's the frequency of the
spike? Does it correlate to a particular set of
nodes
checking in?
Just to be sure, this is on 0.7.x?

Sent from my iPhone
On Feb 3, 2010, at 8:10 AM, Kenneth Stailey kstailey@yahoo.com
wrote:

Hi,

merb RAM usage (rss) appears to stay flat for a
long time
and then suddenly shoots up.

After the RAM usage has spiked it looks like
this:

$ ps vax --sort=-rss |head
PID TTY
STAT TIME MAJFL
TRS DRS RSS %MEM COMMAND
29513 ?
R 2192:16 1141997 2 2549253
2352616 79.9 merb : worker (port

9882 ?
S 39:31
31673 2 172461 111188 3.7 ruby
/usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ?
S 20:39
55304 2 132805 58284 1.9 merb
: worker (port
4001)

29512 ?
S 811:15
11118 2 104457 29028 0.9 merb
: spawner (ports

10213 ?
S 3:37
119973 2 301561 18080 0.6
/usr/bin/ruby1.8 /usr/bin/stompserver -C
/etc/stompserver.conf
15666 ?
Sl 0:15 1473 1617
111930 10768 0.3
/usr/lib/erlang/erts-5.7.2/bin/beam.smp -Bd -K
true -- -root
/usr/lib/erlang -progname erl -- -home
/var/lib/couchdb
-noshell -noinput -smp auto -sasl errlog_type
error -pa
/usr/lib/couchdb/erlang/lib/couch-0.10.0/ebin
/usr/lib/couchdb/erlang/lib/mochiweb-r97/ebin
/usr/lib/couchdb/erlang/lib/ibrowse-1.5.2/ebin
/usr/lib/couchdb/erlang/lib/erlang-oauth/ebin
-eval
application:load(ibrowse) -eval
application:load(oauth)
-eval application:load(crypto) -eval
application:load(couch)
-eval crypto:start() -eval ssl:start() -eval
ibrowse:start()
-eval couch_server:start([
"/etc/couchdb/default.ini",
"/etc/couchdb/local.ini",
"/etc/couchdb/default.ini",
"/etc/couchdb/local.ini"]), receive done ->
done end. -pidfile /var/run/couchdb/couchdb.pid
-heart

Bumped up VM that is running Chef server from 3 GB
RAM to 5
GB RAM.

After the reboot merb is only using about 185 MiB
rss vs.
2.3 GiB

$ ps vax --sort=-rss |head
PID TTY
STAT TIME MAJFL
TRS DRS RSS %MEM COMMAND
29513 ?
R 2192:16
1141997 2 2549253 2352616 79.9 merb :
worker
(port
4000)

9882 ?
S 39:31
31673 2 172461 111188 3.7 ruby
/usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ?
S 20:39
55304 2 132805 58284 1.9 merb
: worker (port

29512 ?
S 811:15
11118 2 104457 29028 0.9 merb
: spawner (ports
4000)

10213
?
S 3:37
119973 2 301561 18080 0.6
/usr/bin/ruby1.8 /usr/bin/stompserver -C
/etc/stompserver.conf
15666 ?
Sl 0:15 1473 1617
111930 10768 0.3
/usr/lib/erlang/erts-5.7.2/bin/beam.smp -Bd -K
true -- -root
/usr/lib/erlang -progname erl -- -home
/var/lib/couchdb
-noshell -noinput -smp auto -sasl errlog_type
error -pa
/usr/lib/couchdb/erlang/lib/couch-0.10.0/ebin
/usr/lib/couchdb/erlang/lib/mochiweb-r97/ebin
/usr/lib/couchdb/erlang/lib/ibrowse-1.5.2/ebin
/usr/lib/couchdb/erlang/lib/erlang-oauth/ebin
-eval
application:load(ibrowse) -eval
application:load(oauth)
-eval application:load(crypto) -eval
application:load(couch)
-eval crypto:start() -eval ssl:start() -eval
ibrowse:start()
-eval couch_server:start([
"/etc/couchdb/default.ini",
"/etc/couchdb/local.ini",
"/etc/couchdb/default.ini",
"/etc/couchdb/local.ini"]), receive done ->
done end. -pidfile /var/run/couchdb/couchdb.pid
-heart

We are keeping charts of RAM use over time via Nagios/RRD, and they show the increase in RAM usage happens suddenly, not gradually. It also repeats over time.
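Alongside the Nagios/RRD charts, a finer-grained sample of just the worker's rss makes it easier to line jumps up with client check-in times. A minimal sketch (the log path and one-minute interval are my own choices, not anything from our setup):

```ruby
# Sketch: pull the merb worker's RSS (KiB, as reported by ps) out of
# `ps -eo rss,args` output. The "merb : worker" string matches the
# process name shown in the listings above.
def worker_rss_kb(ps_output)
  line = ps_output.lines.find { |l| l.include?('merb : worker') }
  line && line.split.first.to_i
end

# Intended loop/cron usage (not run here; log path is arbitrary):
#   rss = worker_rss_kb(`ps -eo rss,args`)
#   File.open('/var/log/merb-rss.log', 'a') { |f| f.puts "#{Time.now} #{rss}" }
```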

Do other people see this issue?

Thanks,
Ken

--
Opscode, Inc.
Adam Jacob, CTO
T: (206) 508-7449 E: adam@opscode.com


Also, can you file a ticket? Let's move this into the ticket tracking
system, so we can keep all the artifacts in the same place.

Adam

On Wed, Feb 3, 2010 at 2:44 PM, Adam Jacob adam@opscode.com wrote:

Next step here is to get this running under something that can give us
some memory profiling - may I recommend bleak_house (Evan Weaver's gem)?

Is anyone else seeing this kind of growth on 64bit?

It's "normal" to see ruby use roughly double the memory on 64bit
systems, at least in my experience. We tend to see roughly 50-60MB
resident across the board on 32bit, and it's pretty stable... but we
don't have any production systems at 9.10.

Adam

On Wed, Feb 3, 2010 at 2:25 PM, Kenneth Stailey kstailey@yahoo.com wrote:

Hi Adam,

$ dpkg -l | grep "ii ruby" | awk '{print $2 " " $3 }'
ruby 4.2
ruby1.8 1.8.7.174-1ubuntu1
rubygems 1.3.5-1ubuntu2
rubygems1.8 1.3.5-1ubuntu2

$ ruby --version
ruby 1.8.7 (2009-06-12 patchlevel 174) [x86_64-linux]

Actually looking at the charts shows a gradual decline in free RAM for a while (hours and hours) and then a sudden decline.

There are 107 chef clients which go off twice an hour but not all at once since they use splayed cron jobs. The splay is done by using a modulo of the last octet of the IP address not by any Chef splaying mechanism. This yields an average of 2-3 a minute that check in nearly simultaneously, maybe a few seconds apart.
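The splay described above can be sketched roughly as follows. The exact arithmetic (modulo 30, giving two cron runs an hour) is my assumption about how our cron entries are generated, not Chef code:

```ruby
# Sketch of the cron splay: derive two minute-offsets per hour from the
# last octet of the client's IP address. The "% 30" detail is assumed.
def splay_minutes(ip_address)
  offset = ip_address.split('.').last.to_i % 30
  [offset, offset + 30]
end

splay_minutes('10.1.2.47')  # => [17, 47], i.e. cron minutes "17,47"
```

With 107 clients hashed this way across 30 slots, an average of 3-4 clients share each minute pair, which matches the observed 2-3 near-simultaneous check-ins per minute.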

The Chef server is running in a VM which is not hosting any other major applications. It has sshd etc.

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Re: Re: Sudden Increase in Merb RAM Use
To: chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 5:06 PM
Well, it certainly smells like a leak. I'm not seeing this behavior, though, and we haven't had any other reports of this kind of growth with the chef server. I'm also only running 32bit ruby, which may be something.

My next request will be to line up the growth in your graphs with the timing of particular servers checking in.

If nothing obvious comes from that, it'll be time to start talking about breaking out a debugger and seeing what's what.

Can you send the version of ruby you're running along as well?

Best,
Adam

On Wed, Feb 3, 2010 at 1:58 PM, Kenneth Stailey kstailey@yahoo.com wrote:

Particulars:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 9.10
Release: 9.10
Codename: karmic

$ uname -m
x86_64

$ COLUMNS=40 dpkg -l | grep chef
ii  chef           0.7.16-1  configuration management system written in R
ii  chef-indexer   0.7.16-1  Creates search indexes of Chef node attribut
ii  chef-server    0.7.16-1  Merb application providing centralized manag
ii  chef-server-sl 0.7.16-1  Merb app slice providing centralized managem
ii  libchef-ruby   0.7.16-1  Ruby libraries for Chef configuration manage
ii  libchef-ruby1. 0.7.16-1  Ruby 1.8 libraries for Chef configuration ma

Looking closer, the increase in RAM usage is not sudden. I was looking at a chart of "free memory" blush so the "spikes" were really reboots resulting in a sudden increase of free RAM.

However, the merb worker was using an rss of ~2.3 GiB:

$ ps vax --sort=-rss |head
  PID TTY STAT    TIME   MAJFL TRS     DRS     RSS %MEM COMMAND
29513 ?   R    2192:16 1141997   2 2549253 2352616 79.9 merb : worker (port 4000)
 9882 ?   S      39:31   31673   2  172461  111188  3.7 ruby /usr/bin/chef-indexer -d -c /etc/chef/indexer.rb
29514 ?   S      20:39   55304   2  132805   58284  1.9 merb : worker (port 4001)
29512 ?   S     811:15   11118   2  104457   29028  0.9 merb : spawner (ports 4000)

Since the reboot, the rss of the restarted merb worker has climbed steadily from about 185 MiB to 320 MiB:

$ ps vax --sort=-rss |head -n 5
  PID TTY STAT   TIME MAJFL TRS    DRS    RSS %MEM COMMAND
 1728 ?   S    105:02     1   2 400393 322248  6.6 merb : worker (port 4000)
 1613 ?   S      0:12     0   2 128161  78968  1.6 /usr/bin/ruby1.8 /usr/bin/stompserver -C /etc/stompserver.conf
 1729 ?   S      1:08     0   2 109313  36864  0.7 merb : worker (port 4001)
 1727 ?   S     41:55     3   2 105145  32532  0.6 merb : spawner (ports 4000)

Does it have a memory leak or is it merely a resource hog?
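One way to tell the two apart from logged (time, rss) samples: a leak keeps a roughly constant positive growth rate indefinitely, while a hog's growth levels off once it reaches its working-set size. A least-squares slope over successive windows makes that concrete (the sample format below is my assumption, not an existing log):

```ruby
# Sketch: least-squares slope of RSS over time. A leak shows a
# persistently positive slope window after window; a hog's slope
# decays toward zero. samples: array of [epoch_seconds, rss_kb] pairs.
def rss_slope(samples)
  n   = samples.size.to_f
  sx  = samples.inject(0.0) { |a, (t, _)| a + t }
  sy  = samples.inject(0.0) { |a, (_, r)| a + r }
  sxy = samples.inject(0.0) { |a, (t, r)| a + t * r }
  sxx = samples.inject(0.0) { |a, (t, _)| a + t * t }
  (n * sxy - sx * sy) / (n * sxx - sx * sx)  # KiB per second
end
```

For instance, 185 MiB to 320 MiB over a day is roughly 1.6 KiB/s; if that rate holds across windows rather than tapering off, it looks like a leak.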

Thanks,
Ken

--- On Wed, 2/3/10, Adam Jacob adam@opscode.com wrote:

From: Adam Jacob adam@opscode.com
Subject: [chef] Re: Sudden Increase in Merb RAM Use
To: "chef@lists.opscode.com" chef@lists.opscode.com
Cc: "chef@lists.opscode.com" chef@lists.opscode.com
Date: Wednesday, February 3, 2010, 11:16 AM

What's the frequency of the spike? Does it correlate to a particular set of nodes checking in? Just to be sure, this is on 0.7.x?

Sent from my iPhone
On Feb 3, 2010, at 8:10 AM, Kenneth Stailey kstailey@yahoo.com wrote:


--
Opscode, Inc.
Adam Jacob, CTO
T: (206) 508-7449 E: adam@opscode.com


http://tickets.opscode.com/browse/CHEF-920
