Recently I found an academic research paper about bad coding practices in infrastructure as code scripts that can lead to security issues. I found it interesting, but I noticed that the practices the researchers pointed out aren't specific to infrastructure as code; they apply to any programming language or application. So I wonder whether there is any specific material available, especially for newcomers, about bad practices, particularly the ones that can lead to security weaknesses.
You could consider the CIS benchmarks that are included with InSpec as some very specific, environment-focused recommendations for infrastructure code: https://www.cisecurity.org/cis-benchmarks/
Also, it’s helpful to incorporate threat modeling and security by design. Those are more abstract, but you can find specific recommendations there as well. For example, the things OWASP recommends for mobile and web security coding are very helpful; they define practices/smells (much like the infrastructure-code paper does) by identifying the most critical risks: https://owasp.org/ Very practical advice on handling passwords/cookies and understanding things like SQL injection in web/mobile work, for example.
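To make the SQL injection point concrete, here's a tiny Ruby sketch (hypothetical names, no real database attached) showing why string interpolation is the classic mistake and what a parameterized query looks like instead:

```ruby
# Hypothetical sketch (no real database): string interpolation lets an
# attacker's input become part of the SQL statement itself.
user_input = "x'; DROP TABLE users; --"

# BAD: the payload is now executable SQL text.
unsafe_sql = "SELECT * FROM users WHERE name = '#{user_input}'"

# BETTER: keep the SQL fixed and hand the value to the driver separately
# (e.g. db.execute(sql, params) in the sqlite3 gem); the value is bound
# as data, never parsed as SQL.
safe_sql = "SELECT * FROM users WHERE name = ?"
params   = [user_input]

puts unsafe_sql # the DROP TABLE payload ends up inside the statement
```

The same "keep code and data separate" idea underlies most of OWASP's injection guidance, whatever the language.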
Chris Romeo from Cisco also started a company called Security Journey that packages some of the material he helped develop internally at Cisco to introduce newcomers to security awareness. Some of the treatments/videos are a little cringeworthy, but it does a pretty good job of focusing on coding practices that lead to security weaknesses, and it has curricula with levels arranged by belts, like in martial arts, which can be a helpful way of framing/chunking the concepts: https://www.securityjourney.com/
Thanks for the question and the link to the paper. I enjoyed the read. As Mischa points out, the most effective mitigation for this type of risk is a strong culture of security. Given the nature of this kind of threat, two-person review and a strong understanding of security principles are your best defense.
The concern with automated tools in this space is a false sense of security. If you are trusting your tool to catch all hardcoded secrets, you're losing that culture of security. You can see some of that in the practitioners' responses to the researchers: "It's a training module, thus it's not important."
I think more applicable tests are penetration tests, secrets detection, etc., run against the system rather than within the IaC itself. Let's focus on securing the systems rather than trying to unit test all the things.
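As a toy illustration of what a secrets-detection pass does (a sketch only, not a substitute for a real scanner like git-secrets or trufflehog): scan text for patterns that look like credentials, such as the well-known AKIA... format of AWS access key IDs.

```ruby
# Toy secrets scanner: flags lines matching common credential patterns.
# Real tools use far richer rule sets plus entropy checks.
SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,               # AWS access key ID format
  /password\s*=\s*['"][^'"]+['"]/i  # hardcoded password assignment
].freeze

# Returns [line_number, line_text] pairs that match any pattern.
def suspicious_lines(text)
  text.lines.each_with_index.select do |line, _i|
    SECRET_PATTERNS.any? { |re| line =~ re }
  end.map { |line, i| [i + 1, line.strip] }
end

sample = <<~CODE
  region = "us-east-1"
  aws_key = "AKIAIOSFODNN7EXAMPLE"
  password = "hunter2"
CODE

hits = suspicious_lines(sample)
hits.each { |num, line| puts "line #{num}: #{line}" }
```

Useful as a cheap pre-commit tripwire, but per the point above, it should supplement a culture of review, not replace it.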
To that end, I again recommend the CIS benchmarks as a good starting point, as is OWASP.
Lucas - first, as someone who has been in the IT weeds for a long time, I genuinely appreciate your attention to the security of code at the front end, and as a “newcomer,” rather than treating it as an afterthought. You’re already ahead of many of your peers, and a bunch of people senior to you as well.
While we’re obviously not writing Perl here, the Perl Best Practices book (O’Reilly) had a huge impact on my attitude and how I thought about the code I was writing. I think, though, that was the point the author was trying to make: IMHO he wasn’t writing a Perl style guide, he wrote a how-to-think-about-what-you’re-doing guide. He also wrote a good bit about other people reading your code later. By writing code intentionally, with a structure and purpose, rather than haphazardly, I think we’re less likely to make the sloppy mistakes that lead to security issues. It’s an old book by today’s standards, but you might find it helpful. I found it incredibly interesting from a psychology perspective.
Some of writing more hardened code will come with experience. A multilayered security approach is often best: a combination of good code, good architecture (i.e. don’t put your internal databases facing the interwebs), appropriately configured firewalls, and so on. Hardening at multiple layers reduces the risk that a bug in the code is the one thing that gets you hacked.
Lastly, just a few general areas of code security to think about, some already mentioned:
Secrets should never be hardcoded. Avoiding this hasn’t always been easy, but with tools like HashiCorp Vault, data bags, and maybe Chef’s own vault product (not sure, I haven’t used it), it’s much easier to avoid leaving your code full of things the world shouldn’t see.
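Chef specifics aside, the simplest version of "don't hardcode it" is to resolve the secret at runtime. A minimal sketch, using an environment variable as a stand-in for whatever secret store you actually use (Vault, data bags, etc.); the variable name is hypothetical:

```ruby
# Sketch: look up the secret at runtime instead of embedding it in source.
# ENV here is a stand-in for a real secret store (Vault, data bags, ...).
def db_password
  ENV.fetch('DB_PASSWORD') do
    # Fail loudly rather than silently falling back to a baked-in default.
    raise 'DB_PASSWORD not set; refusing to use a hardcoded fallback'
  end
end
```

The important design choice is the loud failure: a hardcoded "default" password in the rescue path would quietly reintroduce the very smell you removed.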
Never trust user input. I find this doesn’t come up as much in Chef, but it does occasionally. If you’re taking input from a source you don’t own or control, think very hard about the right way to sanitize that input. Most languages have built-in mechanisms you can use to, for example, strip or escape HTML. Be insanely careful (or do everything you can to avoid it) if you’re going to execute code or make a system call that contains any user input.
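For the HTML case, Ruby's standard library already has escaping built in (CGI.escapeHTML); a small sketch with a hypothetical helper:

```ruby
require 'cgi'

# Escape untrusted input before embedding it in HTML, so any markup in the
# input is rendered as inert text instead of being interpreted by the browser.
def render_comment(user_input)
  "<p>#{CGI.escapeHTML(user_input)}</p>"
end

puts render_comment("<script>alert('xss')</script>")
# the <script> tags come back as &lt;script&gt;..., harmless in the page
```

Reaching for the built-in escaper, rather than hand-rolling one, is exactly the "right way to sanitize" instinct above.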
Use file checksums, especially if you’re pulling content (e.g. packages) from a third party. Chef has gotten better at making this a first-class property for built-in resources like remote_file. Take advantage of that.
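Outside of Chef's remote_file checksum property, the same idea in plain Ruby, sketched with stand-in file contents and a published-digest comparison:

```ruby
require 'digest'
require 'tempfile'

# Verify a downloaded file against the SHA-256 the vendor publishes,
# before installing or executing anything from it.
def checksum_ok?(path, expected_sha256)
  Digest::SHA256.file(path).hexdigest == expected_sha256
end

Tempfile.create('pkg') do |f|
  f.write('pretend package contents')
  f.flush
  expected = Digest::SHA256.hexdigest('pretend package contents')
  puts checksum_ok?(f.path, expected)    # true: file matches the digest
  puts checksum_ok?(f.path, 'deadbeef')  # false: tampered or wrong file
end
```

In a Chef recipe the equivalent is setting the checksum property on remote_file so the run fails fast when the download doesn't match.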
Less Chef/code and more system related, but turn off services you’re not using, and avoid starting up random services - or punching truck-sized holes in your firewall - without a good reason.
If something doesn’t look right to you, speak up. I know there can be that one person who gets angry if you ask questions like you’re challenging them or whatever, but I think most are happy (I am) to walk through their code with you and explain what’s happening, and why that thing you’re looking at isn’t a security issue, or - “oh man, good catch, Lucas. I didn’t see that. Want to help me fix it?”
Thank you guys, sorry for the late response. That's a lot of very useful recommendations, more than I was expecting. I hope more people see this thread in the future and learn as much as I did.