There are several ways you can do this with Chef. Below I jot down three
main approaches. Note that Chef, the configuration management system
itself, does not provide any mechanism to execute commands in order across
a set of nodes, so solutions involving only core Chef might look hacky,
but they are certainly doable. The Chef ecosystem now has a few tools that
do the orchestration bit, but their stability and adoption vary.
OK, so here are three ways to set up DB2 HADR:
Approach A: Chef only. Model the DB2 installation as a standard Chef
recipe, and have two different recipes for master and standby, each
dropping a script. On the master the script performs steps 1-5; on the
standby node it executes steps 6-9. The tricky bit is making these steps
trigger in order. You can guard the master's script with a search (trigger
it only if the search returns a node with the standby role, and set an
attribute via a ruby_block to indicate the script has run), then on the
standby node guard the steps 6-9 script with another search (which uses
the attribute you set on the master node). This will be complicated, and
you'll need at least two Chef runs on each node.
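The search-and-attribute dance above could be sketched roughly as follows. This is a minimal, hypothetical sketch: the recipe names, script paths, role names, and the `db2.master_done` attribute are all placeholders I made up, not from the original question.

```ruby
# Hypothetical master recipe sketch.
# Run the master-side steps (1-5) only once a standby node is discoverable,
# then persist a flag so the standby's search can see the master is ready.
standbys = search(:node, 'role:db2_standby')

execute 'db2_master_steps' do
  command '/opt/db2/master_steps.sh' # placeholder for steps 1-5
  only_if { !standbys.empty? && !node['db2']['master_done'] }
  notifies :run, 'ruby_block[mark_master_done]', :immediately
end

ruby_block 'mark_master_done' do
  block do
    node.normal['db2']['master_done'] = true
    node.save # index the flag so the standby's search can find it
  end
  action :nothing
end
```

And on the standby, the steps 6-9 script is guarded by a search against that flag (Chef's search index flattens nested attributes with underscores):

```ruby
# Hypothetical standby recipe sketch.
masters = search(:node, 'role:db2_master AND db2_master_done:true')

execute 'db2_standby_steps' do
  command '/opt/db2/standby_steps.sh' # placeholder for steps 6-9
  only_if { !masters.empty? }
end
```

Because the flag is only visible after the master's run completes and the node object is saved, each side converges on a later run, hence the two Chef runs per node.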
Approach B: Model the DB2 installation as an installation recipe, steps
1-5 as a master recipe (five execute resources), and steps 6-9 as a
standby recipe (four execute resources), then use Chef Metal to spawn the
whole cluster: first use the db2::installation recipe to provision the two
boxes, then redeclare the machines with the db2::master and db2::standby
recipes (which will execute those commands), and finally assign
db2::install (or even db2::config) back as the run list for both nodes.
Chef Metal lets you declare machines as resources, but this also means
your provisioning script itself is modeled as a Chef recipe.
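The three-phase Chef Metal flow might look something like this. Again a hypothetical sketch: the driver, machine names, and recipe names are placeholders, and the phases are sequential resource declarations in one provisioning recipe.

```ruby
# Hypothetical chef-metal provisioning recipe; machine and recipe names
# are placeholders. Each phase redeclares the machines with a new run list,
# and chef-metal converges them in declaration order.
require 'chef_metal'

with_driver 'vagrant'

# Phase 1: provision both boxes with the shared installation recipe.
%w(db2-master db2-standby).each do |name|
  machine name do
    recipe 'db2::installation'
  end
end

# Phase 2: run the ordered HADR steps, master first.
machine 'db2-master' do
  recipe 'db2::master'   # steps 1-5
end

machine 'db2-standby' do
  recipe 'db2::standby'  # steps 6-9
end

# Phase 3: settle both nodes back on the steady-state run list.
%w(db2-master db2-standby).each do |name|
  machine name do
    run_list ['recipe[db2::install]']
  end
end
```

The ordering falls out of the fact that resources in the provisioning recipe converge top to bottom, so the one-off HADR commands run exactly once, in sequence, between the install and steady-state phases.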
Approach C: Use the db2::install recipe to provision the nodes, then use
an additional orchestration tool like pushy, Ansible, or Blender to
execute the commands. All of these will allow you to decouple the
configuration logic from the orchestration logic. I maintain several
ZooKeeper, Cassandra, and XtraDB installations, and they require exactly
similar workflows. We wrote Blender[1] for the same reason.
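With Ansible, for instance, the ordered steps reduce to a couple of ad-hoc invocations after Chef has done the installs. The inventory group names and script paths below are placeholders I invented for illustration:

```shell
# Hypothetical ad-hoc orchestration: Chef owns installation/configuration,
# Ansible owns the one-off ordered commands. The script module copies a
# local script to the remote hosts and runs it there.
ansible db2_master  -i inventory -m script -a "/path/to/master_steps.sh"   # steps 1-5
ansible db2_standby -i inventory -m script -a "/path/to/standby_steps.sh"  # steps 6-9
```

Because you drive the sequence from a single control host, the master-before-standby ordering is explicit, with no search-based signaling or extra Chef runs needed.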
This has been a recurring requirement for some time, and I think now that
we have several working implementations, we should be able to generalize
them and get a reference implementation in place via the RFC process.
Hope this helps,
ranjib