In our environment, the 'dns server' node searches Chef and generates DNS
configs based on the search (no LWRP; everything goes in there).
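
Roughly, the recipe boils down to something like this (the zone file path,
template name, and query are just illustrative, not our exact setup):

    # search for every node in the environment and render their A records
    # into a zone file; the service reloads only when the file changes
    nodes = search(:node, "chef_environment:#{node.chef_environment}")

    template '/etc/bind/db.example.com' do   # illustrative zone file
      source 'zone.erb'
      owner 'root'
      group 'root'
      mode '0644'
      variables(records: nodes.map { |n| [n['hostname'], n['ipaddress']] })
      notifies :reload, 'service[bind9]'
    end

    service 'bind9' do
      action [:enable, :start]
    end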
We use the same node for managing our load balancers: one data bag item
for each pool in the load balancer, which defines the pool configuration
plus which role and environment to search for; the node then just
searches Chef for matching nodes and adds them to the pool.
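
In pseudo-recipe form (the data bag name, item keys, and haproxy bits are
just an illustration of the shape, not our real config):

    # one item per pool in the 'lb_pools' data bag; each item says which
    # role/environment to search, and the members get rendered into the
    # load balancer config
    data_bag('lb_pools').each do |name|
      pool = data_bag_item('lb_pools', name)
      members = search(:node,
        "role:#{pool['role']} AND chef_environment:#{pool['environment']}")

      template "/etc/haproxy/pools/#{pool['id']}.cfg" do
        source 'pool.cfg.erb'
        variables(pool: pool, members: members.map { |m| m['ipaddress'] })
        notifies :reload, 'service[haproxy]'
      end
    end

    service 'haproxy' do
      action [:enable, :start]
    end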
It's a different model than having each node update itself, but with 500
nodes we found it was much more efficient (that, and the load balancer
didn't like having hundreds of nodes checking in every few minutes).
-jesse
On Mon, Mar 25, 2013 at 8:45 AM, Cassiano Leal <cassianoleal@gmail.com> wrote:
You could make your recipe more intelligent.
Have an attribute default[:dns][:set] = false, then set node.set[:dns][:set]
= true on the first run.
Check that in the recipe and skip hitting the API when it's true.
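
Something like this, in cookbook terms (layout and names as above):

    # attributes/default.rb
    default[:dns][:set] = false

    # recipes/default.rb -- only touch the API on the first run
    unless node[:dns][:set]
      # ... call the DNS API / route53 LWRP here ...
      node.set[:dns][:set] = true   # saved back to the server at run end
    end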
On Monday, March 25, 2013 at 09:04, Brian Akins wrote:
Let's take DNS (with route53) as an example:
Each node uses an LWRP (based on HW's route53 cookbook) to check
Route 53 and add itself to DNS if needed. This seems like a common
pattern and is all good.
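
The per-node resource looks roughly like this (attribute names may differ
by cookbook version; the zone ID and domain are placeholders):

    route53_record "register #{node['fqdn']}" do
      name    "#{node['hostname']}.example.com."
      value   node['ipaddress']
      type    'A'
      ttl     300
      zone_id 'Z1EXAMPLE'        # placeholder hosted zone
      action  :create
    end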
However, what about when you have, say, 5000 nodes? It just seems
absolutely silly to have each node do this every hour. While it does
make sure that new nodes get added to DNS right away, doing it on
every Chef run seems unnecessary.
Now, imagine the above but with 3 or 4 services: API calls for
monitoring, load balancing, etc. The "LWRP every chef run" approach is
easy and makes sense when you have relatively few nodes.
How are other large installs handling this?
I was thinking that a script that scraped Route 53 and Chef once every
x minutes and applied just the "diff" would be more suitable for
"large" installs.
Or am I just fretting over nothing?
--Brian