Just wondering how others approach this situation. Elastic IPs aren’t
viable as we’ll have nodes in autoscaling groups, etc.
One approach mentioned in #chef was to launch the EC2 nodes inside VPC, and
then link the regions via VPC. We’d end up paying for that, but it’s an
option. Any others? I’m not sure I want to (on short notice) add VPC into
the mix while I’m on a deadline, though.
Does Chef support the concept of slave servers, where I could have a server
host in each region with an ElasticIP that pulls from the master host?
--
~*~ StormeRider ~*~
"Every world needs its heroes [...] They inspire us to be better than we
are. And they protect from the darkness that's just around the corner."
Yes, there are actually a few solutions which I'm employing. First, you
could use HTTPS with your Chef server and expose the chef-server
cross-region with an Elastic IP or DNS endpoints. This solution is very
easy, but it can slow down chef runs, especially the first one, where
files are not cached yet. That can be quite annoying if you are using
autoscaling.
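With that option the clients just point at the public endpoint in their
client.rb; a minimal sketch, assuming a stock Chef 0.10 setup (the
hostname and node name are placeholders):

    # /etc/chef/client.rb on each node
    chef_server_url        "https://chef.example.com"   # Elastic IP / DNS endpoint
    validation_client_name "chef-validator"
    node_name              "web01-us-east-1"            # placeholder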
Second, you can use two separate Chef servers and manage them using
different knife.rb instances. This adds some overhead but can be
beneficial if you want to separate your systems.
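For example, one knife config per server, selected with knife's -c flag
(paths and hostnames are placeholders):

    # ~/.chef/knife-eu-west-1.rb
    chef_server_url "https://chef-eu-west-1.example.com"
    node_name       "knife-admin"
    client_key      "#{ENV['HOME']}/.chef/knife-admin.pem"
    validation_key  "#{ENV['HOME']}/.chef/chef-validator.pem"

    # ~/.chef/knife-us-east-1.rb is the same with the us-east-1 URL and keys, then:
    knife status -c ~/.chef/knife-us-east-1.rb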
Lastly, Chef servers can operate in various cluster modes: master-master,
master-slave, and various mix-and-match modes. For example, you could
have master/slave where the slave is read-only (the classic setup), or
(my favorite for WAN replication) master/master for CouchDB, master/slave
for Solr, master/slave for the files (/var/lib/chef/cookbook_index), and
federation/shovel for RabbitMQ. There are other combinations that will
work well over a WAN, but I find this combo to be the easiest to set up
and maintain. I'm currently using this setup with the master in eu-west-1
and a slave in us-east-1, and it chopped 3 minutes off the chef run. It
also allows high availability if you combine it with some failover logic
(a global DNS traffic manager or client-side logic).
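The CouchDB master/master piece is just continuous replication in both
directions; a sketch, assuming the default "chef" database and CouchDB
listening on 5984 (the remote hostname is a placeholder):

    # run on the eu-west-1 master, and the mirror-image command on us-east-1
    curl -X POST http://localhost:5984/_replicate \
         -H 'Content-Type: application/json' \
         -d '{"source": "chef",
              "target": "http://chef-us-east-1.example.com:5984/chef",
              "continuous": true}'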
Regards,
Avishai
Ultimately, I just went with an nginx proxy in each region that directs
over to the master Chef server, and nodes can only reach the master by
going through the nginx proxy (and thus the authorized Elastic IP). It
seemed like the easiest solution to implement; autoscaling nodes usually
have their basic AMI baked in, and I just use Chef for post-startup
updates right now (that may change down the road). Using knife ec2 server
create I can add the --server-url parameter to tell the new nodes to use
the appropriate proxy URL for them.
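Roughly, each regional proxy is just a pass-through server block; a
sketch only, with placeholder hostnames and the stock API port:

    # /etc/nginx/conf.d/chef-proxy.conf on the regional proxy
    server {
        listen 4000;
        location / {
            proxy_pass       http://chef-master.example.com:4000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

New nodes then get pointed at it when they're created (the run list is a
placeholder):

    knife ec2 server create -r 'role[base]' \
        --server-url http://chef-proxy-us-east-1.example.com:4000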
I will have to look into clustering, however. I didn't realize that was an
option. So much more to learn!
--
~*~ StormeRider ~*~
"Every world needs its heroes [...] They inspire us to be better than we
are. And they protect from the darkness that's just around the corner."
Not sure if there's an easy solution to this. You could consider a NAT
box or an SSH tunnel set up in one region, where every Chef client in
that region uses this proxy to reach the Chef server (which is hosted in
the other location). This proxy server in turn needs to be whitelisted in
the Chef server's security group. You do need one proxy server in every
region, though.
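The SSH-tunnel variant can be a single persistent local forward on the
proxy box; a sketch with placeholder hostnames and key path:

    # on the regional proxy/NAT box: expose the master's API port locally
    ssh -f -N -i ~/.ssh/chef-tunnel.pem \
        -L 0.0.0.0:4000:localhost:4000 ubuntu@chef-master.example.com
    # clients in this region then set chef_server_url to http://<proxy>:4000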
In terms of proxying, we already have a use for a proxy server to connect
to our datastore cluster, which is offsite, and using proxies with
Elastic IPs ensures that we can generate a known list of hosts that
traffic should come from. I'm assuming I should easily be able to add
another stanza to the config to push a path through to the master host,
which then whitelists the same hosts that the datastore clusters do, just
on a different port.
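The whitelisting half would just be a security group rule per proxy
Elastic IP on the new port; a sketch using the old EC2 command-line tools
(the group name and addresses are placeholders):

    # allow each regional proxy's Elastic IP to reach the Chef API port
    ec2-authorize chef-master -P tcp -p 4000 -s 203.0.113.10/32
    ec2-authorize chef-master -P tcp -p 4000 -s 203.0.113.11/32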
Also, does Chef's API interface (port 4000) operate over SSL? I'm less
concerned about the server GUI (port 4040), as that will be more tightly
restricted and I can put Apache proxying around it, or a one-node ELB to
terminate the SSL.
(Thanks, and apologies for any typos or other oddities; my weird vision
right now is a side effect of the Ambien telling me to go to sleep.)
--
~*~ StormeRider ~*~
"Every world needs its heroes [...] They inspire us to be better than we
are. And they protect from the darkness that's just around the corner."