Right now we’re still largely on an old-style workflow: “:site opscode” in the Berksfile and “path: …/blah” nonsense to refer to the local cookbooks we keep in a git repo. This is bad because you obviously can’t do things like Chef version tagging in local dev, every developer has to have ALL of the internal cookbooks checked out on their filesystem, and it’s just generally messy.
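To make the pain concrete, here’s a sketch of the kind of Berksfile I mean (the cookbook names and paths are made up for illustration):

```ruby
# Old-style Berkshelf 2.x Berksfile:
# community cookbooks resolve against the opscode community site,
# internal cookbooks are pinned to local filesystem paths.
site :opscode

cookbook 'nginx'                              # community dependency
cookbook 'our-base', path: '../our-base'      # hypothetical internal cookbook
cookbook 'our-app',  path: '../our-app'       # must be checked out on every dev box
```

Every `path:` entry is a cookbook that has to exist on disk, at that relative location, for a `berks install` to work at all.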
So instead I want to engage more actively with our internal Chef servers. To do that, I want a Berkshelf API service running with the following endpoints, searched in this order:
- A “stable” internal Chef server: the source of truth for cookbooks, data bags, and roles, written to only via the CI system (this is in place now)
- The Chef marketplace (for any community cookbook dependencies)
- A “dev” internal Chef server that anybody in the org can upload anything to as part of the dev/testing process.
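For reference, here’s roughly what I have in mind for the berkshelf-api config — endpoint order in the `endpoints` array determines search priority. The hostnames, client names, and key paths below are placeholders, not our real values:

```json
{
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://stable-chef.example.com",
        "client_name": "berkshelf-api",
        "client_key": "/etc/berkshelf-api/stable-client.pem"
      }
    },
    {
      "type": "opscode",
      "options": {
        "url": "https://api.opscode.com"
      }
    },
    {
      "type": "chef_server",
      "options": {
        "url": "https://dev-chef.example.com",
        "client_name": "berkshelf-api",
        "client_key": "/etc/berkshelf-api/dev-client.pem"
      }
    }
  ]
}
```

Note the two `chef_server` endpoints each carry their own `client_name`/`client_key`, which is exactly where my credentials question below comes in.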
However, this doesn’t work for us because we use different credentials on our two internal Chef servers. Even though we appear to be providing the Berkshelf API service itself with credentials to use, it only ever seems to use whatever credentials the user has locally.
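On the client side, the Berksfile would shrink to something like this (service URL is a placeholder):

```ruby
# New-style Berksfile: all dependency resolution goes through
# our internal berkshelf-api service instead of :site / path: entries.
source 'https://berks-api.example.com'

cookbook 'our-app', '~> 1.2'   # hypothetical internal cookbook, version-pinned
```

My understanding (and it would explain the behavior above) is that the API service only *indexes* the cookbooks; when Berkshelf actually downloads one from a `chef_server`-backed endpoint, the client talks to that Chef server directly and authenticates with the user’s own local Chef credentials, not the ones the API service was configured with.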
Am I totally insane for thinking this is a good idea? Is there a better approach here? Looking for wisdom…
PS - I’m also trying to hire another body to help us work on this and generally build out a cool deployment-pipeline infrastructure, if anybody’s looking: