Question around chef/berkshelf workflow


#1

Hi folks,

Right now we’re still largely using an old-style workflow, with “:site opscode” in the Berksfile and “path: …/blah” nonsense to refer to the local cookbooks we store in a git repo. This is bad because you obviously can’t do things like version tagging in local dev, you have to have ALL of the local cookbooks on your filesystem, and it’s just generally messy.
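(For reference, the old-style Berksfile pattern I mean looks roughly like this — cookbook names and paths are placeholders:)

```ruby
# Old-style Berksfile (Berkshelf 2.x era)
site :opscode   # community cookbooks come from the old Opscode community site

# Every local cookbook is referenced by filesystem path, so the whole
# cookbook repo has to be checked out next to this file:
cookbook "our_base", path: "../cookbooks/our_base"
cookbook "our_app",  path: "../cookbooks/our_app"
```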

So, instead I want to engage more actively with our internal chef servers. To do this I want to be able to have a berkshelf api service running with the following endpoints (and in this order):

  1. A “stable” internal chef server: the source of truth for cookbooks, data bags, and roles, only written to via the CI system (this is in place now)
  2. The Chef marketplace (for any community cookbook dependencies)
  3. A “dev” internal chef server that anybody in the org can upload whatever to as part of the dev/testing process.
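(Concretely, I’m picturing a berkshelf-api config.json along these lines — a sketch only, with made-up hostnames, org paths, and key locations; the endpoints are tried in order:)

```json
{
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://chef-stable.internal.example.com/organizations/ops",
        "client_name": "berks-api",
        "client_key": "/etc/berkshelf-api/stable-client.pem"
      }
    },
    {
      "type": "opscode",
      "options": {
        "url": "https://supermarket.getchef.com/api/v1"
      }
    },
    {
      "type": "chef_server",
      "options": {
        "url": "https://chef-dev.internal.example.com/organizations/ops",
        "client_name": "berks-api-dev",
        "client_key": "/etc/berkshelf-api/dev-client.pem"
      }
    }
  ]
}
```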

However, this doesn’t work for us because we use different credentials for our two internal chef servers; even though it seems like we’re providing the berkshelf API service itself with credentials to use, it only ever uses what the user has locally.
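(For context, the client side here is just a Berksfile pointing at the single API service — hostname made up; as far as I can tell, Berkshelf still fetches cookbooks that live on a chef_server endpoint using the Chef credentials from the user’s local config, which matches what we’re seeing:)

```ruby
# Berksfile — all dependency resolution goes through the internal API service
source "https://berks-api.internal.example.com"

cookbook "our_app", "~> 1.2"
```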

Am I totally insane for thinking this is a good idea? Is there a better approach here? Looking for wisdom…

Thanks!
-Tara

PS - By the way, I’m trying to hire another body to help us work on this and generally help build a cool deployment pipeline infrastructure if anybody’s looking:

http://www.lithium.com/company/careers/job-listing?jvi=ol5HZfwQ,Job


#2

With Chef 12, you will only need one server since it has the built-in concept of “Organizations” and cookbooks in each organization are segregated from each other.
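A rough sketch of what that looks like on a Chef 12 server — org names, descriptions, and users below are placeholders:

```shell
# Create separate organizations for stable and dev cookbooks on one server
chef-server-ctl org-create stable "Stable cookbooks" --association_user ci-bot
chef-server-ctl org-create dev "Dev/testing sandbox" --association_user some-admin

# Each org then has its own API URL, e.g.:
#   https://chef.example.com/organizations/stable
#   https://chef.example.com/organizations/dev
```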

Chris


From: Tara Hernandez [tara.hernandez@lithium.com]
Sent: Monday, October 27, 2014 9:07 PM
To: chef@lists.opscode.com
Subject: [chef] question around chef/berkshelf workflow



#3

Is there a link to find out more information about Organizations in Chef 12 and how they work?

On Tue, Oct 28, 2014 at 7:14 AM, Fouts, Chris Chris.Fouts@sensus.com wrote:



#4

Ohai Tara

That should work fine; all your Berkshelf clients need to use credentials that are valid on both of your internal endpoints.
On 28 Oct 2014 01:08, “Tara Hernandez” tara.hernandez@lithium.com wrote:



#5

On Oct 31, 2014, at 4:01 PM, Greg Barker fletch@fletchowns.net wrote:

Is there a link to find out more information about Organizations in Chef 12 and how they work?

Depends on what kind of info you’re looking for, but this should give you an overview.

https://docs.getchef.com/server_orgs.html

  • Julian