Jenkins / Docker / etcd / Chef - Prod Deployment Question


#1

Hi,

I have a deployment scenario which I would much appreciate some Chef user
community feedback on.

Context Overview:

The context is a server running a node.js web app which connects to a
mongolab.com database instance.

What I’m looking at is using Jenkins CI to build a docker image after each
successful master branch build.

I’d probably run a private docker registry on the same machine as the
Jenkins CI server.

Jenkins would build a node.js app image and push it to the local private
docker registry.

Let’s say I have 5 node web app servers behind a load balancer.

A Chef-driven production deployment would then pull from the private docker
registry on each of the five node web app servers.
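As a rough sketch, the deploy step each of the five servers would run (for example wrapped in a Chef execute/bash resource) might boil down to something like the following. The registry host, image name, container name, and port mapping here are all made up for illustration:

```shell
# Hypothetical per-server deploy step driven by the Chef run.
# Registry address, image name, and ports are invented examples.
TAG="registry.internal:5000/myapp:${BUILD_NUMBER:-latest}"

docker pull "$TAG"

# Replace the running container with one from the new image
docker stop myapp 2>/dev/null
docker rm myapp 2>/dev/null
docker run -d --name myapp -p 3000:3000 "$TAG"
```

The load balancer would keep serving from the other four nodes while each one is cycled in turn.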

Question:

My question is about the point where Jenkins builds the new docker image
after a successful master branch build.

Jenkins builds from a git repo, but that repo currently does not contain
the sensitive production config data.

Config data such as API keys for external services like mongolab, datadog,
loggly.com, etc.

What I’m thinking is having a step where a Jenkins task hits something
like an etcd server to obtain that production config data and updates the
node.js web app’s config file with it before the docker build.

Does this sound like a good option?

For example this is what is pulled from the git repo:

```
"loggly": {
  "inputToken": "your-loggly-token",
  "subdomain": "your-domain.loggly.com",
  "auth": {
    "username": "your-loggly-username",
    "password": "your-loggly-password"
  }
}
```

This needs to be updated before the image is built, so that when a
container is fired up in production the application can connect to the
external loggly resource.
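As a concrete sketch of that pre-build step (the etcd key path, file names, and registry address below are assumptions, not anything settled here), the Jenkins task could do something like:

```shell
# Hypothetical Jenkins build step: pull production secrets from etcd,
# merge them over the checked-in placeholder config, then build.
# Key path, file names, and registry address are all made up.
etcdctl get /myapp/production/config > secrets.json

# Overlay the secrets onto the repo's placeholder config with jq
# ('*' does a recursive merge, so nested keys like loggly.auth survive)
jq -s '.[0] * .[1]' config/placeholder.json secrets.json > config/production.json

docker build -t registry.internal:5000/myapp:${BUILD_NUMBER} .
docker push registry.internal:5000/myapp:${BUILD_NUMBER}
```

One trade-off worth noting: anything merged in at build time gets baked into the image layers, so anyone who can pull the image from the registry can read those keys.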

Or … should the container instance query etcd at the point it starts up?

I hope I explained all that OK.

Any feedback would be much appreciated :)

Thanks!


#2

If you’re interested in reducing the number of moving parts, it looks
like you could probably replace the use of etcd with a data bag on the
Chef server. Jenkins pulls the latest information from the data bag
and builds the docker image with that config, then runs without any
further management. My own preference would be to have the docker
instances run the chef-client and configure themselves when they start
up, but I’ve seen both patterns.
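As a rough sketch of that first pattern (the bag name, item name, and paths here are invented), the Jenkins step could shell out to knife:

```shell
# Hypothetical Jenkins step: fetch the config from a Chef data bag
# instead of etcd. Bag name, item name, and paths are made up.
knife data bag show myapp production --format json > secrets.json

# Strip Chef's bookkeeping "id" field before the app sees it
jq 'del(.id)' secrets.json > config/production.json

docker build -t registry.internal:5000/myapp:${BUILD_NUMBER} .
```

(For an encrypted data bag, knife's --secret-file option would come into it as well.)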

Thanks,
Matt Ray
Cloud Integrations Product Lead :: Chef
512.731.2218 :: matt@getchef.com
mattray :: GitHub :: IRC :: Twitter



#3

Hey Matt,

That’s good advice - many thanks for taking the time to tap that out!

> My own preference would be to have the docker instances run the
> chef-client and configure themselves when they start up

I also had that thought, but didn’t include in the original email in this
thread.

In this case it’s a node.js web app; how would it configure itself?

Could the node app (JavaScript code) query a data bag or etcd?



#4

Ah … I think I replied too soon.

It sinks in now: the chef-client runs and does its own config.

I’ve tinkered with Chef server before, but currently this is all chef-solo.

When it comes to “configuring itself” that brings up lots of options for
secrets management (which I’ve also not settled on what I prefer).



#5

I’m actually doing something very similar to what you are doing, and am using a data bag like Matt suggested. In my case, I use Chef to manage the configuration file as a template and store it in a config directory on the Docker host, not in the container directly.

Then, I share the directory the config file is in with the Docker container when the container is run, using ‘-v /path/to/config/on/host:/config’. Finally, I set the ENTRYPOINT for the container to be a simple shell script that moves my mounted config file to where my app expects it, then runs the app.

This sounds a bit convoluted, but it actually works very well and is not as complicated as it sounds. Check out the docker-registry’s Dockerfile and config setup for a similar approach: https://github.com/dotcloud/docker-registry/blob/master/Dockerfile

Using this method, I can build all my containers using Jenkins without having to store sensitive information in them and can manage their configuration using Chef and encrypted data bags.
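A minimal sketch of the entrypoint half of this pattern (the paths and the app command are assumptions; the real config file is whatever Chef templates onto the host):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): /config is the host directory shared
# into the container via '-v /path/to/config/on/host:/config'.
# Copy the Chef-managed config to where the app expects it, then
# hand off to the app process.
cp /config/production.json /app/config/production.json
exec node /app/server.js
```

with the container started along the lines of `docker run -v /etc/myapp/config:/config:ro myapp`. Since the config never enters the image, the same image can move between environments unchanged.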


Ryan Walker
Rackspace Hosting
