I’m actually doing something very similar to what you are doing and am using a data bag like Matt suggested.

In my case, I use Chef to manage the configuration file as a template and store it in a config directory on the Docker host, not in the container directly. Then I share the directory the config file is in with the Docker container when the container is run, using ‘-v /path/to/config/on/host:/config’. Finally, I set the ENTRYPOINT for the container to be a simple shell script that moves my mounted config file to where my app expects it to be, then runs the app.

This sounds a bit convoluted, but it actually works very well and is not as complicated as it sounds. Check out the docker-registry’s Dockerfile and config setup for a similar approach: https://github.com/dotcloud/docker-registry/blob/master/Dockerfile.
Using this method, I can build all my containers using Jenkins without having to store sensitive information in them and can manage their configuration using Chef and encrypted data bags.
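Concretely, the pattern might look something like the sketch below — the paths, filenames, and image name are illustrative, not taken from my actual setup:

```shell
# Host side: Chef renders the config file, then the container is started
# with the host config directory mounted in:
#   docker run -v /path/to/config/on/host:/config myapp
#
# Container side: the ENTRYPOINT copies the mounted file to where the
# app expects it, then execs the real process.

copy_config() {
  # $1: mounted config file, $2: destination path the app expects
  mkdir -p "$(dirname "$2")"
  cp "$1" "$2"
}

# entrypoint.sh inside the container would then be roughly:
#   copy_config /config/config.json /app/config/config.json
#   exec node /app/server.js
```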
On February 11, 2014 at 9:52:14 AM, Rudi (email@example.com) wrote:
That’s good advice - many thanks for taking the time to tap that out!
My own preference would be to have the docker instances run the chef-client and configure themselves when they start up
I also had that thought, but didn’t include it in the original email in this thread.
In this case it’s a node.js web app, how would it configure itself?
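One shape I could imagine this taking — purely a sketch, with the config-rendering command and app path as placeholders — is an ENTRYPOINT that runs chef-client once to render the config, then execs the app:

```shell
# Hypothetical entrypoint helper: run a config-rendering step (in
# production, a wrapper around something like `chef-client --once`),
# then replace the shell with the app process.
configure_and_start() {
  # $1: command/script that renders the app's config
  # remaining args: the app command to exec
  "$1" || { echo "config step failed" >&2; return 1; }
  shift
  exec "$@"
}

# Inside the container this would be invoked roughly as:
#   configure_and_start /usr/local/bin/run-chef-client node /app/server.js
```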
On Tue, Feb 11, 2014 at 10:47 PM, Matt Ray <email@example.com> wrote:
If you’re interested in reducing the number of moving parts, it looks
like you could probably replace the use of etcd with a data bag on the
Chef server. Jenkins pulls the latest information from the data bag
and builds the docker image with that config, then runs without any
further management. My own preference would be to have the docker
instances run the chef-client and configure themselves when they start
up, but I’ve seen both patterns.
Cloud Integrations Product Lead :: Chef
512.731.2218 :: email@example.com
mattray :: GitHub :: IRC :: Twitter
On Sat, Feb 8, 2014 at 8:50 AM, Rudi <email@example.com> wrote:
I have a deployment scenario which I would much appreciate some Chef user
community feedback on.
The context is a server running a node.js web app which connects to a
mongolab.com database instance.
What I’m looking at is using Jenkins CI to build a docker image after each
successful master branch build.
Probably on the same machine as the Jenkins CI server I’d run a private
docker registry. Jenkins would build a node.js app image and push it to
that local private registry.
Let’s say I have 5 node web app servers behind a Load Balancer.
A Chef-run production deployment would then pull from the private docker
registry for each of the five node web app servers.
My question is at the point where Jenkins builds the new docker image after
a successful master branch build.
Jenkins is building from a git repo but that git repo, currently, does not
have sensitive production config data.
Config data like the API keys for external services such as mongolab, datadog,
and loggly.
What I’m thinking is having a step where a Jenkins task might hit something
like an etcd server to obtain that production config data and update the
node.js web app source config file with it before the docker build.
Does this sound like a good option?
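As a sketch of that Jenkins step — the etcd endpoint, key names, and the v2 keys API response shape here are assumptions on my part:

```shell
# etcd's v2 keys API returns JSON like:
#   {"action":"get","node":{"key":"/myapp/mongo_url","value":"..."}}
# A Jenkins build step could fetch a key and splice the value into the
# app's config file before running `docker build`, e.g.:
#   MONGO_URL=$(curl -s http://etcd.internal:4001/v2/keys/myapp/mongo_url \
#               | extract_etcd_value)

extract_etcd_value() {
  # crude extraction of the "value" field from an etcd v2 response;
  # a real build step would use a proper JSON parser instead of sed
  sed -n 's/.*"value" *: *"\([^"]*\)".*/\1/p'
}
```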
For example this is what is pulled from the git repo:
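A placeholder showing the kind of file I mean — every key and value below is redacted or made up, not from my actual repo:

```json
{
  "mongo_url": "mongodb://REDACTED@REDACTED.mongolab.com/myapp",
  "datadog_api_key": "REDACTED",
  "loggly_token": "REDACTED"
}
```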
This needs to be updated before the image is built, so that when a container
is fired up in production the application can connect to the external loggly
service.
Or … should the container instance query etcd at the point it starts up?
I hope I explained all that OK.
Any feedback would be much appreciated.