Yo,
The quorum functionality behaves the way I think it does:
"The minimum number of nodes that match the search criteria, are
available, and acknowledge the job request."
What makes you think that specifying a quorum of 50% causes the job to
be accepted by 100% of the nodes? I would assert that if 50% of the
nodes were unavailable and you specified 100%, the job would not
launch. If you specified 50%, the job would launch, but would likely
not run on 100% of the machines.
Could you clarify? Are you suggesting that if a job is launched with
a quorum of 50%, 100% of the machines will run it, despite their
temporary unavailability?
In your opinion, where does the recipe that invokes the push_jobs
resource execute? A workstation? Some authorized God node?
ta,
--aj
On Tue, Nov 25, 2014 at 4:48 AM, Cerny, Nathan Nathan.Cerny@cerner.com wrote:
I don’t think the quorum functionality behaves like you think. The quorum
essentially says “If at least X% is available, run on all”.
In my opinion, a better pattern here would be to create an orchestration
recipe that depends on the push_jobs cookbook (and thus gets access to the
push_jobs resource). Within that recipe, you would do a search to grab all
of your potential nodes, then invoke that resource for N nodes at a time
(being sure to set the parameter to wait for the job to finish before moving on).
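As a rough sketch of that pattern (assumptions on my part: I shell out
to knife rather than using the cookbook's own resource, the role,
environment, and batch size are placeholders, and whatever node runs
this needs knife configured with suitable credentials):

# Orchestration recipe sketch: push chef-client to the webapp fleet in batches.
batch_size = 5  # placeholder; tune to how many concurrent restarts you can tolerate

webapp_nodes = search(:node, 'role:webapp AND chef_environment:production')

webapp_nodes.each_slice(batch_size) do |batch|
  names = batch.map(&:name).join(' ')
  execute "push chef-client to #{names}" do
    # knife job start waits for the job to finish by default,
    # so each batch completes before the next one starts.
    command "knife job start 'chef-client' #{names}"
  end
end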
On Sat, Nov 22, 2014 at 1:40 AM, AJ Christensen
aj@junglistheavy.industries wrote:
You may be able to use the Quorum functionality to half-bake this [0]:
knife job start --quorum 90% 'chef-client' --search 'role:webapp'
The minimum number of nodes that match the search criteria, are
available, and acknowledge the job request. This can be expressed as a
percentage (e.g. 50%) or as an absolute number of nodes (e.g. 145).
Default value: 100%
I'd try values like 50% of your available foobars and see how it works
out for ya.
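e.g. something along these lines, with role:foobar standing in for
whatever your actual search is:
knife job start --quorum 50% 'chef-client' --search 'role:foobar'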
Good Luck!
cheers,
--aj
[0] GitHub - chef-boneyard/knife-push: knife commands for Chef Push Jobs
On Sat, Nov 22, 2014 at 8:56 AM, Bryan Baugher bjbq4d@gmail.com wrote:
Great, thanks for the links. Somehow google wasn't being helpful.
My use case is fairly simple. I want to run chef-client on my nodes in a
particular environment, but I want to ensure that:
- I don't end up restarting all the services at once
- A bad config doesn't take out the whole service
The simplest version of this would just be to group the nodes into X
number of groups and run chef-client one group at a time. Knife ssh has
a concurrency option which limits the number of ssh connections, which
would also achieve the same goal.
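For reference, something like this is roughly what I have in mind (the
environment name and concurrency value are just placeholders):
knife ssh 'chef_environment:production' 'sudo chef-client' --concurrency 5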
On Fri Nov 21 2014 at 1:45:45 PM AJ Christensen
aj@junglistheavy.industries wrote:
GitHub - chef-boneyard/omnibus-pushy: DEPRECATED: omnibus package builder for pushy
GitHub - chef-boneyard/oc-pushy-pedant: DEPRECATED: Opscode Pushy API test runner
GitHub - chef-boneyard/omnibus-push-jobs-client: THIS PROJECT HAS BEEN MIGRATED TO OMNIBUS-CHEF!
GitHub - chef-boneyard/opscode-pushy-server: Chef Push Jobs Server
GitHub - chef-boneyard/opscode-pushy-simulator: Pushy client simulator
GitHub - chef-boneyard/pushy_common: Pushy common code
GitHub - chef-boneyard/knife-push: knife commands for Chef Push Jobs
GitHub - chef-boneyard/opscode-pushy-client: Client API for Pushy
Try searching through the issues, or logging an issue or feature
request or RFC. "don't run command X on all nodes at the same time"
sounds a little hard to implement. Do you mean mutually exclusive,
configurable contention, or locking of commands?
Can you describe your use-case? An independent team of Chef operators
has been evaluating use cases and building tests/example cases for
Pushy and tools in the same field (our cases thus far have been around
Cassandra ring bootstrap, expansion, and contraction).
cheers,
--aj
On Sat, Nov 22, 2014 at 8:39 AM, Bryan Baugher bjbq4d@gmail.com
wrote:
Hello everyone,
Is the push jobs code available on github anywhere? Also, are there any
plans to add a kind of concurrency option (i.e. don't run command X on
all nodes at the same time)?
Bryan