On Wed, Aug 17, 2011 at 5:47 PM, Adam Jacob firstname.lastname@example.org wrote:
Also, I’ve been playing around a bit with shrinking the size of the node
objects that get saved. The current leading contender is to whitelist
the node attributes you actually want saved at the end of the run - this way
all the attributes from Ohai + Cookbooks are available during the run, but
only the ones you actually care about for search are saved. A working
prototype is here:
This should help in the near term - slim the objects down to the parts you
care about, and the aggregate size of the search results will shrink
as well. We should still fix the core issue of only returning full objects
on search, but this should help in the interim.
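The whitelist idea above could look something like this. A minimal sketch only, not the linked prototype; `SAVE_WHITELIST` and `prune` are hypothetical names, and attribute paths are expressed as arrays of keys:

```ruby
# Hypothetical whitelist of attribute paths worth persisting to the server.
SAVE_WHITELIST = [
  %w[fqdn],
  %w[ipaddress],
  %w[kernel name],
]

# Walk each whitelisted path in the full attribute hash and copy only the
# values that exist into a slimmed-down hash for saving. Everything else
# (the bulk of Ohai + cookbook data) is dropped.
def prune(attributes, whitelist)
  slim = {}
  whitelist.each do |path|
    value = path.reduce(attributes) { |h, key| h.is_a?(Hash) ? h[key] : nil }
    next if value.nil?
    # Rebuild the nested structure for this path in the slim hash.
    cursor = slim
    path[0..-2].each { |key| cursor = (cursor[key] ||= {}) }
    cursor[path.last] = value
  end
  slim
end

full = {
  "fqdn"       => "web1.example.com",
  "ipaddress"  => "10.0.0.5",
  "kernel"     => { "name" => "Linux", "modules" => { "huge" => "blob" } },
  "ohai_noise" => { "lots" => "of data" },
}

slim = prune(full, SAVE_WHITELIST)
# slim keeps only fqdn, ipaddress, and kernel/name
```

During the run the full attribute set is still available; only the save step would go through something like `prune`.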
On Wed, Aug 17, 2011 at 4:53 PM, Adam Jacob email@example.com wrote:
Hi Justin - this is high on our roadmap.
Let’s talk about what the high level interface will look like, and then we
can dig in to what a particular implementation would entail. Can you write
up what you are thinking about at Wikia?
It wasn’t clear how high this was on your roadmap. If you can lend some
expertise, we can invest some time to collaborate on and test a solution. I
launched a fancy new chef server today on SSDs.
A slightly intertwined issue is that we get inconsistent search results for
nodes. It’s been suggested this could have to do with a 0.9 / 0.10 mixture
in our client base (we still have quite a few karmic hosts; all the lucid
hosts are on 0.10), but I can’t tell how. To work around this, in some
places we have searches which grab all nodes and enumerate them.
Obviously this is horrid.
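For concreteness, the workaround looks roughly like this. Illustrative only; `all_nodes` stands in for the result of an unfiltered node search, and the filtering happens client-side in Ruby instead of in the index:

```ruby
# Stand-in for the result of an unfiltered search over every node.
all_nodes = [
  { "name" => "web1", "role" => "web" },
  { "name" => "db1",  "role" => "db"  },
  { "name" => "web2", "role" => "web" },
]

# Every caller pays to download and scan the entire node list,
# even when only a handful of nodes actually match.
web_nodes = all_nodes.select { |n| n["role"] == "web" }
web_names = web_nodes.map { |n| n["name"] }  # => ["web1", "web2"]
```

With full node objects in play, that download cost is what makes the pattern so painful.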
Another thing I thought of today: what if each non-self node object were
kept in a central registry in memory, so that if multiple searches in a chef
run return the same node, the same item is referenced in both cases?
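That registry is essentially a per-run identity map. A minimal sketch, assuming nodes can be keyed by name; `NodeRegistry` and `intern` are made-up names for illustration:

```ruby
# Per-run identity map: the first time a node name is seen, its object is
# cached; later sightings return the cached object instead of a new copy.
class NodeRegistry
  def initialize
    @nodes = {}
  end

  # Return the canonical object for this node, storing it on first sight.
  def intern(node)
    @nodes[node["name"]] ||= node
  end
end

registry = NodeRegistry.new

# Two searches return separate copies of the same node ...
from_search_a = { "name" => "web1", "ipaddress" => "10.0.0.5" }
from_search_b = { "name" => "web1", "ipaddress" => "10.0.0.5" }

a = registry.intern(from_search_a)
b = registry.intern(from_search_b)
# ... but after interning, both refer to the very same object.
a.equal?(b)  # => true
```

Beyond saving memory when many searches overlap, this also means every search in a run sees a consistent view of any given node.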
Let’s chat soon - this is a high priority for us and a source of much pain.
Thanks again for your enthusiasm in collaborating!