As a quick litmus test: in a benchmark I ran with an early 11.x open source
Omnibus build on Ubuntu 10.04 LTS on an EC2 m1.xlarge, the server easily
supported 500 nodes with hundreds of resources per node and searches used
for inter-tier discovery. We also concluded there is likely headroom to
double that client count before CPU becomes a bottleneck on that particular
platform. For node volumes beyond that, my experience has mainly been with
clustered 10.x, so I can't comment on 11.x, where the game changes with the
introduction of the Erlang API.
For us, on "moderate" hardware, disk space consumption has never been a
concern in either setup. Incidentally, the above benchmark was performed
with the "worst possible" disk layout, everything including the database on
the root ephemeral volume, and even then I/O wasn't going to be a limiting
factor for quite some time, and certainly not before other resources became
the bottleneck.
For enterprise, other clouds, newer releases, real server catalogue numbers
and so on, I defer to others and to the Opscode folks and their sizing
guides; with hosted enterprise it isn't a concern at all. Every estate and
implementation is different, so YMMV.
I have looked at that page, but it doesn't say much about disk requirements. Wouldn't the size of /opt and /var be directly related to the number of nodes being served?
Cool, thought I'd point it out as the information on the page was a
bit difficult to find.
It depends not only on how many nodes but also on which cookbooks you use.
The more cookbooks, the more attributes get stored on top of what ohai
collects, and with a typical LAMP stack that adds up to quite a lot of
additional data per node.
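If you want a rough feel for what that adds up to, here's a quick
back-of-the-envelope sketch in Python. It assumes you've first dumped one
representative node to JSON with knife (the exact flags are from memory, so
double-check against knife node show --help), then it reports the object's
size and attribute count and extrapolates to the 500-node figure above:

#!/usr/bin/env python
# Back-of-the-envelope sizing for a single Chef node object.
# Assumes a representative node has been exported first, e.g.
#   knife node show NODENAME -l -F json > node.json
import json
import os
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "node.json"
size_bytes = os.path.getsize(path)

with open(path) as f:
    node = json.load(f)

def count_leaves(obj):
    # Count leaf values in the nested attribute structure.
    if isinstance(obj, dict):
        return sum(count_leaves(v) for v in obj.values())
    if isinstance(obj, list):
        return sum(count_leaves(v) for v in obj)
    return 1

print("one node: %.1f KiB of JSON, ~%d leaf attributes"
      % (size_bytes / 1024.0, count_leaves(node)))
print("500 such nodes: roughly %.1f MiB of raw node JSON"
      % (size_bytes * 500 / 1024.0 / 1024.0))

The actual on-disk footprint (database plus search index) will be larger
than the raw JSON, but it gives you an order of magnitude, and running it
before and after adding a heavy cookbook makes the per-cookbook effect
visible.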