Solr replication works well: it does a master/slave pull that can be set up for reversal with a restart. My goal was a full active/active configuration. The SolrCloud feature currently in trunk does distributed indexing via shards & seems like a nice option once it's baked, but that's a bit down the road.
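For reference, the reversal-with-a-restart setup is roughly the stock ReplicationHandler config from the Solr wiki, with both roles defined on each node and toggled by system properties at startup (the hostname, port, and confFiles list below are just placeholders for whatever chef-solr is actually running):

  <!-- solrconfig.xml on both nodes; start one JVM with -Denable.master=true
       and the other with -Denable.slave=true, then swap the flags and
       restart to reverse the roles -->
  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="enable">${enable.master:false}</str>
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
    <lst name="slave">
      <str name="enable">${enable.slave:false}</str>
      <str name="masterUrl">http://other-node:8983/solr/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>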
How are others handling redundant Chef configs? Requests to chef-server can be balanced, and couch & rabbit happily cluster themselves, so that all seems fairly obvious. It's the expander/solr bit that's got me scratching my head over the best approach here.
Wil Reichert | Systems Operations
Wil.Reichert@monster.com | T : 415-820-3430 | M : 703-200-4607 | F : 415-820-0552
MILITARY.COM 799 Market Street, Suite 700, San Francisco, CA 94103
-----Original Message-----
From: Daniel DeLeo [mailto:ddeleo@kallistec.com] On Behalf Of Daniel DeLeo
Sent: Thursday, April 05, 2012 8:21 AM
To: chef@lists.opscode.com
Subject: [chef] Re: expander redundancy
On Wednesday, April 4, 2012 at 6:20 PM, Reichert, Wil wrote:
I’m running Chef 0.10.8 and trying to configure 2 expanders to each pull messages off rabbit & populate 2 independent Solr instances. The documentation mentions separate 1-to-n and 1-to-(1 of n) behaviors, but it’s not clear whether that’s still the case with Chef 0.10. Regardless of what I set amqp_consumer_id, index, or node_count to, I get the same behavior: the first expander subscribes to all 1024 vnodes & the second refuses to do anything since there’s already a consumer on the queues. chef-expanderctl node-status confirms what the log messages say. Searching hasn’t pulled up much:
http://wiki.opscode.com/display/chef/Chef+Configuration+Settings doesn’t have anything regarding the expander
http://wiki.opscode.com/display/chef/Chef+Indexer covers some of the settings, but they either don’t seem to work or aren’t applicable to this version of Chef.
Is this type of configuration possible?
I don’t think this is possible any longer. As you’ve seen, the expander refuses to operate when there are other consumers on the queues; this is a safety feature to prevent data loss if the cluster somehow gets into a bad state.
I’d look into Solr replication; it should be much more sane with the version of Solr that ships with Chef 0.10.x.
–
Dan DeLeo