Using nginx on chef-server

Can anyone share their experience serving arbitrary files from the chef-server?

We will generate nightly backup tarballs from our chef server outside our corporate firewall.

We want to move these tarballs to an NFS mount inside our corporate firewall. From inside our corporate network, we have ONLY https access to our off-premises chef-server, for uploading cookbooks and the like. We don't have ssh access from inside our corporate network.

To manage off-prem servers, we connect a workstation with a second network tap outside the corporate network and VPN from that workstation to the off-prem servers.

The off-prem chef server can't initiate connections to inside our corporate network.

My first inclination is to find the nginx config on the off-prem chef server, modify it to protect a directory with htpasswd, and dump tarballs to that directory every night.
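That first inclination could look roughly like the location block below. This is only a sketch: the paths, realm name, and htpasswd file location are assumptions loosely matching an omnibus layout, and note that `chef-server-ctl reconfigure` regenerates the bundled nginx config, so hand edits may be overwritten.

```nginx
# Hypothetical addition to the Chef server's bundled nginx config.
location /backups/ {
    alias                /var/opt/chef-backups/;                # where nightly tarballs land
    auth_basic           "Chef backups";
    auth_basic_user_file /var/opt/opscode/nginx/etc/htpasswd;   # created with htpasswd(1)
    autoindex            off;
}
```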

My second inclination is to try running the knife ec backup plugin from a management box/chef workstation to hit the off-prem chef server.

My third inclination is to think about the remote_file and cookbook_file resources.
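As a sketch of that third inclination: an internal node that mounts the NFS share could pull the tarball over HTTPS with remote_file. The host name, paths, and credentials below are all assumptions, not anything from a real setup.

```ruby
# Hypothetical recipe fragment for an internal node with the NFS mount.
require 'base64'

remote_file '/mnt/nfs/chef-backups/ec-backup-latest.tar.gz' do
  source 'https://chef-offprem.example.com/backups/ec-backup-latest.tar.gz'
  headers('Authorization' => "Basic #{Base64.strict_encode64('backup:CHANGEME')}")
  mode '0600'
  sensitive true   # keep the credentials out of the chef-client run log
  action :create
end
```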


I... think you need to talk to your internal security people. "Arbitrary files" covers a lot of turf, including data bags and individual node configurations with sensitive data, even non-encrypted passwords if you've not been careful. Many database and SSH cookbooks are notoriously casual about storing passwords or private keys in clear text.

Your internal security people may already have an approach for publishing sensitive data to a 3rd party, or to an exposable server in a DMZ such as you should be using. These may include encrypted S3, an SFTP server, WebDAV over HTTPS, NFSv4, or a plain web server with HTTPS and restrictive firewall access rules. Talk to them. But where are you configuring your nodes? If you do it on the master chef server, the node information stored on that DMZ replica is always going to be split-brain with the master after every client run. You're going to want those backed up by other means after every client push, I think. And you'll want to be careful about publishing those back to the master.

One approach that I've found effective is to use a git repo for all chef server configurations, with chef-zero applied to the git repo for local testing. This lets me run new cookbooks, roles, and node configurations on a test server, with no uploads to the chef server until everything is debugged, code reviewed, and tagged in git. It also supports loading the release version of the git repo, possibly with different git submodules for different sets of nodes, to different classes of chef server such as dev, qa, staging, or production.
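The chef-zero side of that workflow can be as small as one config file checked into the repo. A minimal sketch, assuming a conventional chef-repo layout (the settings are standard knife/chef-client options; the paths are assumptions):

```ruby
# .chef/config.rb in the chef-repo working tree
local_mode     true                              # converge against chef-zero, no server
chef_repo_path File.expand_path('..', __dir__)   # cookbooks/, roles/, data_bags/ live here
```

With that in place, something like `chef-client --local-mode --runlist 'role[base]'` converges a test node from the working tree without touching the real server.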

This avoids most of the "backup" problems, except possibly storing chef-server-updated "node" attributes, which may require a "git push" operation with this approach. It also gets you away from "tarballs" to a backup system that records changes and allows comparisons, and that allows finer-grained control of who has access to which components of the chef server backup. If you'd like to see my old example, designed originally for "chef-solo" operations independent of any chef server whatsoever, it's published at

It needs an update to use chef-zero instead of chef-local. I'd welcome updates, and may try to pry some time free to update it myself.

Hi Nico

Thank you for thinking about our problem.

Our goal is disaster recovery. When the external chef server blows up, we want to restore to a new server and be able to do:

knife ssh 'name:FOO' 'sudo chef-client'

...from the same external workstation.

It looks like your Makefile runs on each node? I'm not sure how that meets our goal.

Our backup process with 'knife ec backup' creates tarballs that we can feed to a restore process with 'knife ec restore'. We want to move these tarballs off the server that is being backed up.
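The backup-and-tarball step could be sketched as a nightly cron job like the one below. The paths are assumptions, and the knife call is guarded so the sketch degrades gracefully on a machine without the knife-ec-backup gem installed.

```shell
#!/bin/sh
# Hedged sketch of the nightly 'knife ec backup' -> tarball step.
set -eu
STAMP=$(date +%Y%m%d)
DEST="${TMPDIR:-/tmp}/chef-ec-backup-$STAMP"
mkdir -p "$DEST"
if command -v knife >/dev/null 2>&1; then
    # Dumps orgs, users, keys, and all objects over the Chef HTTPS API.
    knife ec backup "$DEST"
fi
# Tarball for transfer, e.g. into an htpasswd-protected nginx directory.
tar -czf "$DEST.tar.gz" -C "${TMPDIR:-/tmp}" "chef-ec-backup-$STAMP"
echo "wrote $DEST.tar.gz"
```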

Chef server includes a web server; this might save us a few steps in pulling them off.

What you describe is a laudable goal. Part of the difficulty is that you'd have to maintain consistent state and avoid "split-brain" between the two chef servers. Since each chef client registers the chef server's public key and registers its own public key with the chef server, you'd need to synchronize those. And if you re-initialize the chef clients to exchange keys and register with a distinct chef server, you'd have to reload each node's run-list when you re-register with the new server.

It sounds like you're trying to re-invent the wheel of the commercial Chef "HA" setup, which requires a Chef subscription. If you use the approach I laid out, the git repo is the high availability service and each node can run distinctly with no Chef server whatsoever. It might be worth looking at if you don't care to invest in the price of a Chef HA setup, or in the time and work to build your own.

I am not doing a good job describing our problem.

We have 2 chef servers, managing 2 different sets of servers. No split brain. We can ignore the chef server on our internal network. It isn't relevant to our need.

For this problem, we have (1) chef server that could fail and no longer be usable. We can conveniently access this (1) off site chef server through https aka the chef REST interface.

We'd like to move backups from the external chef server to a safe place on our internal network. If our external chef server blows up, we'd set up a replacement server using the backups we've been storing in that safe place. In our tests this is a simple process.

There would be no split brain, only one server, either original or restored. The client public keys registered with the chef server are stored in the backup. When I take a production backup, restore it to a VM, and change my knife.rb file to point to the VM, I don't have to change client certs.
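For that restore drill, the knife.rb change can be as small as this sketch. The URL, user name, and key path are assumptions; the settings themselves are standard knife options.

```ruby
# knife.rb pointed at the restored VM instead of production
chef_server_url 'https://restored-chef-vm.example.com/organizations/myorg'
node_name       'admin'
client_key      '/home/admin/.chef/admin.pem'
# Only while the replacement VM still has a self-signed certificate:
ssl_verify_mode :verify_none
```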

Our goal is to get the backup tarballs off a server we can conveniently access only over HTTPS, i.e. the Chef web interface.

OK. Things got very confused, I think, with the insistence that the internal Chef server has anything to do with the external Chef server. Since they're not consistent and never will be, treat them distinctly. They may have common recipes from common git repositories: that's an internal workflow you can deal with separately.

But for whichever system you want to be highly reliable and available 24x7, providing that via your own set of servers and backups and reloads and switchovers to the alternative, possibly newly built host, etc. is re-inventing the wheel. Those features are built into the commercial Chef HA setup, and you can save dozens if not hundreds of hours of your local development time just paying for that and setting it up with Chef's professional services.

If you want to do it yourself, well, you can spend the cycles to build this from scratch. It's important to remember that a few components of the server, such as the SSL certificates and private data bag encryption keys, are not part of a normal Chef backup operation. And it's difficult to guess which components of your backups you're actually using or needing. If you're prepared to consistently synchronize nodes, environments, roles, cookbooks, etc. from a git repository and use an "infrastructure as code" approach, you can avoid a great deal of the "tarball" requirement. Simply redeploy from the git repository, and apply the components that aren't manually loadable, such as the host SSL certificates and other private encryption keys.
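Capturing those extra pieces can be sketched as a companion script. The /etc/opscode paths are assumptions matching omnibus Chef server installs; directories that don't exist on a given host are simply skipped.

```shell
#!/bin/sh
# Hedged sketch: archive the pieces a normal 'knife ec backup' misses
# (SSL certificates, secrets, service configuration).
set -eu
EXTRA="${TMPDIR:-/tmp}/chef-extra-$(date +%Y%m%d)"
mkdir -p "$EXTRA"
for d in /etc/opscode /etc/opscode-manage; do
    if [ -d "$d" ]; then
        cp -a "$d" "$EXTRA/" || true   # skip quietly on permission problems
    fi
done
tar -czf "$EXTRA.tar.gz" -C "$(dirname "$EXTRA")" "$(basename "$EXTRA")"
echo "wrote $EXTRA.tar.gz"
```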

How you access those backups..... depends on your backup tools. I like git repos for this content: it allows tracking changes far, far more easily, and tagging releases. It's especially useful for detecting when some smart !@#!@# decides to hand-edit the node, role, or environment attributes on the Chef server and tracking when it happened. I've encountered Chef maintainers who would manually do such operations and not put them in the relevant git repo, leading to enormous confusion when things worked in staging and not in production.

Where the git repo, or tarball, lives and how it is made accessible to the exposed Chef server..... is not a Chef problem. It's a network space and security problem. With the right setup, you might even be able to use a hosted or more local git server to store this content, which has little benefit from being tarballed. It's typically not that bulky unless you are stuffing large binaries in your cookbooks or data bags somewhere. I do think you want to get it off the Chef server itself, in case the host gets completely taken out by some tragic error.

It's partly why I've pointed you to my nkadel-chef-local-wrapper tools. You can slap those on a separate test server, set up a git repo with your backups, and test the layouts locally without having to build up a new Chef server.