Hi,
I was thinking about putting my SSH keys in the disk image. The security objection, it seems, is that someone else could take those keys (the private halves) from the image and impersonate one of your nodes. A login to your central security server would then let an outsider obtain the other secrets held there (namely the MySQL DB password).
So is it expected that the central security server also needs some DNS-based (or IP-address-based) method to tell the true nodes from the fake ones?
Is it any more secure to log into each node initially with a Capistrano script (multiple hosts, password-based SSH) in order to upload the SSH keys? Then nobody could get them from your master disk image. Of course the keys (id_rsa) would still sit chmod 600 inside a chmod 700 ~/.ssh directory on the nodes.
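For concreteness, here's roughly what I have in mind -- a Capistrano 2 sketch, not a finished recipe. The hostnames, user, and key path are all invented, and you'd be prompted for the SSH password at run time:

role :nodes, 'node1.example.com', 'node2.example.com'  # made-up hosts
set :user, 'deploy'                                    # assumed login user

desc "Upload the deploy key to every node over password-based SSH"
task :upload_deploy_key, :roles => :nodes do
  # Make sure the target directory exists with the right permissions.
  run "mkdir -p ~/.ssh && chmod 700 ~/.ssh"
  # Push the local key file to each node over the same SSH session.
  upload "keys/id_rsa", "/home/deploy/.ssh/id_rsa", :via => :scp
  run "chmod 600 ~/.ssh/id_rsa"
end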
Perhaps a more custom scheme is preferable, since such schemes are harder to break when the critical security elements aren't in the standard places where people usually expect them?
Of course, including an obfuscation approach by definition also makes such a scheme harder for us to understand. But tell me: can you think of a more secure scheme whereby a central server must hold all of the sensitive password data?
It also strikes me that many traditional setups have a dedicated set of DB servers, a dedicated set of web servers, an LDAP server, etc. In that case the passwords are role-specific, and a server which belongs to one 'role' won't need to know the passwords that exist for other roles and classes of service-provisioning machines.
I guess that means if you do use a private git repository, you should delete that repository data locally after provisioning the node with the passwords for its role(s)? Otherwise any node of one role is going to know the secrets for all of the other roles -- which matters if you think someone might break into the weakest role type among your nodes. By their very nature, certain server roles will present more of a security risk and be more open to attackers than others.
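Something like this, say (plain Ruby; the role name, repo URL, and paths are all made up):

require 'fileutils'

role    = 'webserver'              # this node's role (assumed)
workdir = '/tmp/secrets-checkout'  # hypothetical scratch directory

# Fetch the private secrets repo (deploy key assumed already in place).
system('git', 'clone', '--depth', '1',
       'git@github.com:example/secrets.git', workdir) or abort 'clone failed'

# Install only this role's passwords...
FileUtils.mkdir_p('/etc/myapp/secrets')
FileUtils.cp_r(File.join(workdir, 'roles', role, '.'), '/etc/myapp/secrets')

# ...then remove the checkout, which held every role's secrets.
FileUtils.rm_rf(workdir)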
On Sat, Jun 19, 2010 at 1:05 AM, Ruby Newbie rubynewbie@me.com wrote:
Ohai, Michael.
On Jun 18, 2010, at 3:09 PM, Michael Guterl wrote:
I have created a gist with the bootstrap.sh script I'm using:
gist:557d9694a9606b9edbeb
OK, now it's clear what you're trying to do -- thanks.
...
however, storing the text of the keys in the bootstrap script seems
wrong.
"Wrong" from a security standpoint, correct? In my opinion, storing the private key data in your bootstrap script is not so bad -- if the bootstrap script is not part of your generic disk image, and is instead copied over secure a channel to the node at boot time. In this case, you're really treating the whole bootstrap script as "secret data". Since the script is not that long or complex (and therefore presumably not too subject to rapid change), this may work OK for you.
That said, most of us would try to separate the secrets from the code that acts on them, for reasons mentioned earlier in this thread.
Is anyone dealing with anything similar?
Yes -- I think that everyone who makes substantial use of Chef must eventually confront the following issue, whether using chef-solo or the full client-server setup...
Chef needs to handle the secret data that your applications use (regardless of when that data gets inserted into your workflow). So we all use some kind of "meta-secret" to protect that data. In the client-server setup, the validation.pem (or individual client.pem) represents the meta-secret. In your case, it's the private half of your GitHub deploy key. In either case, if this data is compromised, so is all the secret data that Chef works with to configure your nodes.
For this reason, the best practice is to NOT store this meta-secret (or indeed any secret data) on your generic disk image, but to instead pass the data to the node at boot time, and if possible, to remove it after convergence. That's why Chef's knife tool can generate Amazon EC2 launch data containing the "meta-secret". You can follow the same security model with chef-solo, if you make a rake task in your Chef repo that carries out the following steps (rough sketch after the list):
- Use your cloud provider's API, or your virtualization host's API, or whatever else you need to boot a fresh system, and store its IP address.
- Use 'scp' (or Ruby's Net::SCP, or whatever) to transfer the private key data (or the entire bootstrap script, if you must) to the node.
- Use 'ssh' (or Net::SSH) to run your bootstrap script.
- Use 'ssh' to DELETE THE GITHUB PRIVATE KEY FROM THE NODE! (or do so from the bootstrap script)
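Here's a rough sketch of such a rake task, for your Rakefile -- a starting point, not a definitive implementation. boot_new_instance is a stand-in for whatever your cloud or virtualization API gives you, and the paths are assumptions:

require 'net/ssh'
require 'net/scp'

desc "Boot a node, converge it with chef-solo, then remove the meta-secret"
task :bootstrap_node do
  # 1. Boot a fresh system and capture its IP (hypothetical helper).
  ip   = boot_new_instance
  user = 'root'  # assumed bootstrap user

  # 2. Transfer the bootstrap script and the GitHub private key
  #    (the "meta-secret") to the node.
  Net::SCP.upload!(ip, user, 'bootstrap.sh', '/root/bootstrap.sh')
  Net::SCP.upload!(ip, user, 'keys/github_id_rsa', '/root/.ssh/id_rsa')

  Net::SSH.start(ip, user) do |ssh|
    # 3. Run the bootstrap script, which clones the repo and runs chef-solo.
    ssh.exec!('bash /root/bootstrap.sh')
    # 4. Delete the GitHub private key from the node after convergence.
    ssh.exec!('rm -f /root/.ssh/id_rsa')
  end
end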
Make sense?
-RN