So, I’ve been struggling with getting Berkshelf to do exactly what I
want it to do, to the extent that I’m starting to wonder if what I’m
trying to do is the Wrong Thing™. It seems to work great for cookbook
development, but where I’m running into trouble is when I try to
assemble all of our disparate cookbook sources (community, github,
internal git repos) into a chef repo and upload to the server. I’m
currently using Berkshelf 1.2.1.
The workflow I’d like to have is this:
1. Cookbooks have their separate release cycles and work independently,
tagging released versions as they progress. Their dependencies are
resolved first from a central chef server, then from the community api.
The cookbooks (hopefully) properly list their dependencies in metadata,
and use pessimistic constraints where necessary.
2. For each platform (here defined nebulously as a set of applications
and services that form a coherent whole), there is a chef repository
that contains (mostly) only roles, data bags, and environments, and a
Berksfile.
3. When the time comes to make a platform release, we update cookbooks
to the latest versions as determined by Berkshelf. The platform
Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are
directly mentioned in role runlists; ideally application and role
cookbooks, but we’re not there yet), sourced from the community site
where possible, and from git with specific refs where it’s not.
Internally developed cookbooks are always given a version constraint in
the Berksfile, external cookbooks only when necessary.
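Such a platform Berksfile might look roughly like this (Berkshelf 1.x syntax; all cookbook names, versions, and the git URL are invented for illustration):

```ruby
# Hypothetical platform Berksfile; every name, version, and URL below
# is illustrative, not from the actual repository.
site :opscode                    # community site, where possible

cookbook 'apache2'               # external, unconstrained
cookbook 'postgresql', '~> 2.0'  # external, constrained only when necessary

# Internal cookbook: always constrained, sourced from git at a specific ref.
cookbook 'myapp', '~> 1.4.0',
  git: 'git@git.internal:cookbooks/myapp.git',
  tag: 'v1.4.2'
```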
4. After resolving the dependencies, we generate a set of environment
cookbook constraints that constrain the test environment to the resolved
versions. We upload the cookbooks and then upload the new environment to
allow the test platform to converge to the new configuration. The new
configurations are given a “thorough” acceptance test. When the
acceptance test passes, we approve the new configuration for deployment.
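Generating those environment constraints from the resolved versions can be sketched as follows (the cookbook names and versions are invented, not taken from a real lock file):

```ruby
require 'json'

# Resolved versions as Berkshelf reported them (illustrative values).
resolved = { 'myapp' => '1.4.2', 'apache2' => '1.8.4', 'postgresql' => '2.1.0' }

# Pin the test environment to exactly those versions.
env = {
  'name'              => 'test',
  'json_class'        => 'Chef::Environment',
  'chef_type'         => 'environment',
  'cookbook_versions' => resolved.transform_values { |v| "= #{v}" }
}

puts JSON.pretty_generate(env)
```

Uploading that environment (e.g. with knife) then locks every node in it to the resolved set.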
5. When the configuration is approved, we modify the production
environment to have the same constraints as test, the production nodes
happily converge to their new configurations, and celebrations are had
by all.
The problem I’m running into is between steps 3 and 4. I’m having
trouble getting repeatable results out of the cookbook resolution. I’m
OK if things change between executions of berks update. The problem is
that if I do a berks update on my machine, commit both the Berksfile
and the Berksfile.lock, and then do a berks install on another
machine, I don’t necessarily get the same cookbooks on the second
machine as I did on the first machine… if the cookbooks were
unconstrained in the Berksfile, they may be updated. I see the same
problem if I do a berks upload even on the same machine, since it
resolves the dependencies before doing the upload. I’ve even tried
specifying Berksfile.lock as the Berksfile (i.e. berks upload --berksfile Berksfile.lock) and I see the same result.
Does Berksfile.lock simply not work? Is the only way to get consistency
to do a berks install --path PATH and commit the results? Is my idea
for workflow completely wrong?
What we do (or are moving towards) is to never use roles and environments,
as you can't version them. We use wrapper cookbooks per project that
use cookbook dependencies (with x.y.z versions) in metadata.rb and set
node attributes and/or call LWRPs. We can then roll out new versions
of the wrapper cookbook incrementally. And we can roll them back
incrementally as well. We set an explicit version in node’s run list
and have a little bit of tooling to make changing them in a “one, few,
many, all” manner easier.
It can be a pain, but we can look at a given run list and know exactly
what version of cookbooks and attributes that “contains.”
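For illustration, pinning a version directly in a run list item uses the recipe[name@version] form; a tiny sketch of parsing such entries (the helper and regex are invented, not Brian's actual tooling):

```ruby
# Parse run-list items of the form "recipe[name@version]" or "recipe[name]".
# Illustrative helper only; not part of Chef itself.
ITEM_RE = /\Arecipe\[(?<name>[^@\]]+)(?:@(?<version>[^\]]+))?\]\z/

def parse_run_list_item(item)
  m = ITEM_RE.match(item) or raise ArgumentError, "unrecognized item: #{item}"
  { name: m[:name], version: m[:version] } # version is nil when unpinned
end

parse_run_list_item('recipe[myapp_wrapper@1.4.2]')
# => {:name=>"myapp_wrapper", :version=>"1.4.2"}
```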
On Wednesday, March 27, 2013, Brian Akins brian@akins.org wrote:
The .lock file doesn't do anything, AFAIK.
Sounds like a sane workflow. We are doing it similarly, but with chef solo
and thus without a chef server.
In addition to what you described:
we use application (a.k.a. role or top-level) cookbooks, not roles
for library cookbooks we use no version constraints, or where necessary
pessimistic version constraints (e.g. ~> 1.1) in metadata.rb
for application cookbooks we use strict versioning (e.g. = 1.1.0) for all
dependencies in the graph (i.e. including the transitive ones) in
metadata.rb. We do this because we want stable application cookbooks and
chef server does not care about Cheffile/Berksfile.lock, thus we lock the
transitive deps as well in metadata.rb
with such an approach your environment version constraints would be very
simple: you only need to lock the app cookbook's version there
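The two constraint styles above can be sanity-checked with Ruby's built-in Gem::Requirement, which implements the same ~> (pessimistic) semantics:

```ruby
require 'rubygems' # Gem::Requirement / Gem::Version ship with Ruby

loose  = Gem::Requirement.new('~> 1.1')   # allows >= 1.1 and < 2.0
strict = Gem::Requirement.new('= 1.1.0')  # allows exactly 1.1.0

loose.satisfied_by?(Gem::Version.new('1.9.3'))  # => true
loose.satisfied_by?(Gem::Version.new('2.0.0'))  # => false
strict.satisfied_by?(Gem::Version.new('1.1.1')) # => false
```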
Finally, we treat dependency resolution for app cookbooks a bit differently:
rather than a toplevel Cheffile/Berksfile in the chef repository we have
a custom yml file which contains the app cookbooks + version + git
location/ref
for each app cookbook in the yml file, we:
clone the cookbook ref to /tmp/<cookbook>
cd /tmp/<cookbook> && berks install --path /cookbooks/<cookbook>-<version>
this isolates the dependencies per application cookbook, i.e. you can
have different versions of library cookbook x for app cookbooks a and b,
even though both live in the same chef repo
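That per-app resolution loop might be sketched like this; the YAML schema, paths, and cookbook names are all assumptions, not Torben's actual tooling:

```ruby
require 'yaml'

# Hypothetical app_cookbooks.yml schema (keys and values are assumptions):
YML = <<~EOF
  - name: myapp
    version: 1.4.2
    git: git@git.internal:cookbooks/myapp.git
    ref: v1.4.2
EOF

# Build the shell commands without running them: one pipeline per app
# cookbook, isolating each cookbook's dependencies in its own directory.
def install_commands(yaml_text, target_root = '/cookbooks')
  YAML.safe_load(yaml_text).map do |app|
    work = "/tmp/#{app['name']}"
    dest = "#{target_root}/#{app['name']}-#{app['version']}"
    "git clone --branch #{app['ref']} #{app['git']} #{work} && " \
      "cd #{work} && berks install --path #{dest}"
  end
end

puts install_commands(YML)
```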
I do this with librarian and it "just works". The lock file that I check
into the repository is precisely what gets deployed on the production
chef server.
librarian doesn't support as many endpoints (just git, community site, and
folders), but I'm much more concerned with consistency than features.
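For reference, a minimal librarian-chef Cheffile might look like this (the cookbook names and git URL are invented):

```ruby
# Hypothetical Cheffile for librarian-chef; all entries are illustrative.
site 'http://community.opscode.com/api/v1'  # community site endpoint

cookbook 'apache2'
cookbook 'myapp',
  git: 'git@git.internal:cookbooks/myapp.git',
  ref: 'v1.4.2'
```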
this is the reason we don't use berkshelf... I thought that the lockfile
issue had been resolved back in February but apparently not.
-jesse
But be careful: this is not necessarily what gets downloaded by the node if
you don't "lock" all versions in metadata.rb.
Tend to agree here, we also use librarian because it's simpler and seems to
be more stable. The only feature I'm really missing from berkshelf is the
'metadata' keyword, so you don't have to repeat yourself in metadata.rb and
the Cheffile...
What do you mean?
We don't have anything version locked, so it always pulls the most recent,
and since the most recent is always what shows up in the Cheffile.lock,
they do get the right versions... or are you referring to conditions where
the version numbers are regressing?
Sorry, that was confusing. I meant that the chef-client run on the node
does not care about Cheffile.lock. If chef-client downloads its cookbook
dependencies at the beginning of the chef run, it only cares about the
version pinnings in the environment and the metadata.rb of the downloaded
cookbooks.
I prefer to pin only the top-level application cookbook versions in the
environment. The version of the dependent cookbooks are pinned in each
application cookbook's metadata.rb.
If you only pin the toplevel application cookbooks in the environment (and
that is the right thing imho), but you don't have the dependent cookbooks'
versions locked in the app cookbook's metadata.rb, then you will end up
with different / latest versions of the dependent cookbooks on each run.
The alternative would be to pin all cookbook versions (i.e. app cookbooks
plus all their dependencies) in the environment, but that's wrong in two ways:
1. it's an implementation detail you should not have to worry about at this
level; it pollutes your environment files and makes promotion to other
environments awkward
2. you artificially limit yourself, because you now need a consistent set
of cookbooks across the environment, e.g. all app cookbooks must agree to
use the same version of the apache2 cookbook. That's artificial: it would
be perfectly fine if one app cookbook depended on apache2 1.1.0 while
another depended on apache2 2.0.0, and both lived in the same environment.
librarian doesn't support as many endpoints (just git, community site,
and folders), but I'm much more concerned with consistency than features.
This is the reason we don't use berkshelf... I thought that the
lockfile issue had been resolved back in February, but apparently not.
Tend to agree here, we also use librarian because it's simpler and seems
to be more stable. The only feature I'm really missing from berkshelf is
the 'metadata' keyword, so you don't have to repeat yourself in metadata
and Cheffile...
-jesse
On Thu, Mar 28, 2013 at 11:39 AM, Torben Knerr ukio@gmx.de wrote:
Hi Greg,
sounds like a sane workflow. We are doing it similarly, but with
chef solo and thus without a chef server.
In addition to what you described:
- we use application (a.k.a. role or top-level) cookbooks, not roles
- for library cookbooks we use no version constraints or, if necessary,
pessimistic version constraints (e.g. ~> 1.1) in metadata.rb
- for application cookbooks we use strict versioning (e.g. = 1.1.0)
for all dependencies in the graph (i.e. including the transitive ones) in
metadata.rb. We do this because we want stable application cookbooks, and
since chef server does not care about Cheffile/Berksfile.lock we lock the
transitive deps in metadata.rb as well
- with such an approach your environment version constraints become
very simple => you only need to lock the app cookbooks' versions there
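The difference between the two constraint styles can be checked directly in Ruby: Gem::Requirement ships with Ruby and behaves much like Chef's own version constraints (the versions below are illustrative):

```ruby
require "rubygems" # Gem::Requirement / Gem::Version are built in

loose  = Gem::Requirement.new("~> 1.1")   # pessimistic: >= 1.1 and < 2.0
strict = Gem::Requirement.new("= 1.1.0")  # exactly 1.1.0

puts loose.satisfied_by?(Gem::Version.new("1.9.3"))  # true
puts loose.satisfied_by?(Gem::Version.new("2.0.0"))  # false
puts strict.satisfied_by?(Gem::Version.new("1.1.1")) # false
```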
Finally, we treat dependency resolution for app cookbooks a bit
differently:
- rather than a top-level Cheffile/Berksfile in the chef repository,
we have a custom yml file which contains the app cookbooks + version + git
location/ref
- for each app cookbook in the yml file, we:
  1. clone the cookbook ref to /tmp/
  2. cd /tmp/ && berks install --path /cookbooks/-
- this isolates the dependencies per application cookbook, i.e. you
can have different versions of library cookbook x for app cookbooks a and
b, even though both live in the same chef repo
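A minimal sketch of that loop (the YAML schema, file name, and target paths are assumptions for illustration, not the actual tooling described):

```ruby
require "yaml"

# Build the two commands for one app cookbook entry from the yml file.
# Kept as a pure function so the command construction is easy to verify.
def commands_for(cb)
  workdir = "/tmp/#{cb['name']}"
  target  = "cookbooks/#{cb['name']}-#{cb['version']}"
  [
    ["git", "clone", "--branch", cb["ref"].to_s, cb["git"], workdir],
    ["berks", "install", "--path", target], # run from inside workdir
  ]
end

# Assumed app_cookbooks.yml format:
entries = YAML.safe_load(<<~YML)
  - name: myface-app
    version: 1.4.2
    git: git@internal:cookbooks/myface-app.git
    ref: v1.4.2
YML

entries.each do |cb|
  commands_for(cb).each { |cmd| puts cmd.join(" ") } # or system(*cmd) for real
end
```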
We pin versions in the top level cookbooks. We also restrict versions in
our environment files, but it's always a <= so if an older cookbook is
relying on an older version of something the environment file doesn't get
in the way.
In the metadata of top-level cookbooks we're strict: depends "java", "= 1.0.0"
and we list out even cookbooks that aren't called directly by the TLC, that
are dependencies of others, just to make sure everything is pinned.
In the non-TLCs we're more relaxed: depends "java", "~> 1.0.0"
which makes development and testing a little more bearable.
MG
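In metadata.rb terms that convention might look like the following (cookbook names and versions invented for illustration):

```ruby
# metadata.rb of a top-level cookbook: everything pinned exactly,
# including cookbooks that are only transitive dependencies.
depends "java",    "= 1.0.0"
depends "apache2", "= 1.8.4" # only a dependency of a dependency, pinned anyway

# metadata.rb of a library (non-TLC) cookbook: relaxed pessimistic pins.
depends "java", "~> 1.0.0"
```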
We pin versions in the top level cookbooks. We also restrict versions in
our environment files, but it's always a <= so if an older cookbook is
relying on an older version of something the environment file doesn't get
in the way.
Does that mean you pin TLCs as well as their dependencies in the
environment files?
In the metadata of top-level cookbooks we're strict: depends "java", "= 1.0.0"
and we list out even cookbooks that aren't called directly by the TLC,
that are dependencies of others, just to make sure everything is pinned.
+1 same here
In the non-TLCs we're more relaxed: depends "java", "~> 1.0.0"
which makes development and testing a little more bearable.
MG
Michael's explanation is 100% in line with how we handle things here at Riot.
--
Jamie Winsor @resetexistence
On Thursday, March 28, 2013 at 1:07 PM, Michael Glenney wrote:
We pin versions in the top level cookbooks. We also restrict versions in our environment files but it's always a =< so if an older cookbook is relying on an older version of something the environment file doesn't get in the way.
In the metadata of top level cookbooks we're strict: depends "java", "= 1.0.0"
and we list out even cookbooks that aren't called directly by the TLC, that are dependencies of others, just to make sure everything is pinned.
In the non-TLC's we're more relaxed: depends "java", "~> 1.0.0"
which makes development and testing a little more bearable.
I do this with librarian and it "just works". the lock file that I check into the repository is precisely what gets deployed on the production chef server.
But be careful: this is not necessarily what gets downloaded by the node if you don't "lock" all versions in metadata.rb.
What do you mean?
We don't have anything version locked, so it always pulls the most recent, and since the most recent is always what shows up in the Cheffile.lock, they do get the right versions... or are you referring to conditions where the version numbers are regressing?
Sorry, that was confusing. I meant that the chef-client run on the node does not care about Cheffile.lock. If chef-client downloads its cookbook dependencies at the beginning of the chef run, it only cares about the version pinnings in the environment and the metadata.rb of the downloaded cookbooks.
I prefer to pin only the top-level application cookbook versions in the environment. The version of the dependent cookbooks are pinned in each application cookbook's metadata.rb.
If you only pin the toplevel application cookbooks in the environment (and that is the right thing imho), but you don't have the dependent cookbooks' versions locked in the app cookbook's metadata.rb, then you will end up with different / latest versions of the dependent cookbooks on each run.
The alternative would be to pin all cookbook versions (i.e. app cookbooks + all dependencies) in the environment, but that's wrong in two ways:
its an implementation detail you should not have to worry on this level which pollutes your environment files and makes promotion to other environments at least uneasy
you artificially limit yourself, because you now need a consistent set of cookbook across the environment, e.g. all app cookbooks must agree to use the same version of the apache2 cookbook. But that's artificial, it would be perfectly fine if one app cookbook depends on apache2 1.1.0 while another one depends on apache2 2.0.0, and both can live in the same environment
I do this with librarian and it "just works". the lock file that I check into the repository is precisely what gets deployed on the production chef server.
But be careful: this is not necessarily what gets downloaded by the node if you don't "lock" all versions in metadata.rb.
librarian doesn't support as many endpoints (just git, community site, and folders), but I'm much more concerned with consistency than features.
this is the reason we don't use berkshelf... i thought that the lockfile issue had been resolved back in february but apparently not.
Tend to agree here, we also use librarian because its simpler and seems to be more stable. The only feature I'm really missing from berkshelf is the 'metadata' keyword, so you don't have to repeat yourself in metadata and Cheffile...
sounds like a sane workflow. We are doing it similarly, but with chef solo and thus without a chef server.
In addition to what you described:
we use application (a.k.a role or toplevel) cookbooks not roles
for library cookbooks we use no version constraints or if necessary optimistic version constraints (e.g. ~> 1.1) in metadata.rb
for application cookbooks we use strict versioning (e.g. = 1.1.0) for all dependencies in the graph (i.e. including the transitive ones) in metadata.rb. We do this because we want stable application cookbooks and chef server does not care about Cheffile/Berksfile.lock, thus we lock the transitive deps as well in metadata.rb
with such an approach your environment version constraints would be very simple => you only need lock the app cookbook's version there
Finally, we treat dependency resolution for app cookbooks a bit differently:
rather than a toplevel Cheffile/Berksfile in the chef repository we have a custom yml file which contains the app cookbooks + version + git location/ref
for each app cookbook in the yml file, we:
Clone the cookbook ref to /tmp/
Cd /tmp/ && berks install --path /cookbooks/-
this isolates the dependencies per application cookbook, i.e. you can have different versions of library cookbook x for app cookbooks a and b, even though both live in the same chef repo
So, I've been struggling with getting Berkshelf to do exactly what I want it to do, to the extent that I'm starting to wonder if what I'm trying to do is the Wrong Thing(tm). It seems to work great for cookbook development, but where I'm running into trouble is when I try to assemble all of our disparate cookbooks sources (community, github, internal git repos) into a chef repo and upload to the server. I'm currently using Berkshelf 1.2.1.
The workflow I'd like to have is this:
cookbooks have their separate release cycles and work independently, tagging released versions as they progress. Their dependencies are resolved first from a central chef server, then from the community api. The cookbooks (hopefully) properly list their dependencies in metadata, and use pessimistic constraints where necessary.
For each platform (here defined nebulously as a set of applications and services that form a coherent whole), there is a chef repository that contains (mostly) only roles, data bags, and environments, and a Berksfile.
When the time comes to make a platform release, we update cookbooks to the latest versions as determined by Berkshelf. The platform Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are directly mentioned in role runlists; ideally application and role cookbooks, but we're not there yet), sourced from the community site where possible, and from git with specific refs where it's not. Internally developed cookbooks are always given a version constraint in the Berksfile, external cookbooks only when necessary.
After resolving the dependencies, we generate a set of environment cookbook constraints that constrain the test environment to the resolved versions. We upload the cookbooks and then upload the new environment to allow the test platform to converge to the new configuration. The new configurations are given a "thorough" acceptance test. When the acceptance test passes, we approve the new configuration for deployment.
When the configuration is approved, we modify the production environment to have the same constraints as test, the production nodes happily converge to their new configurations, and celebrations are had by all.
The problem I'm running into is between steps 3 and 4. I'm having trouble getting repeatable results out of the cookbook resolution. I'm OK if things change between executions of berks update. The problem is that if I do a berks update on my machine, commit both the Berksfile and the Berksfile.lock, and then do a berks install on another machine, I don't necessarily get the same cookbooks on the second machine as I did on the first machine... if the cookbooks were unconstrained in the Berksfile, they may be updated. I see the same problem if I do a berks upload even on the same machine, since it resolves the dependencies before doing the upload. I've even tried specifying Berksfile.lock as the Berksfile (i.e. berks upload --berksfile Berksfile.lock) and I see the same result.
Does Berksfile.lock simply not work? Is the only way to get consistency to do a berks install --path PATH and commit the results? Is my idea for workflow completely wrong?
We pin versions in the top level cookbooks. We also restrict versions
in our environment files but it's always a =< so if an older cookbook is
relying on an older version of something the environment file doesn't get
in the way.
Does that mean you pin TLCs as well as their dependencies in the
environment files?
In the metadata of top level cookbooks we're strict: depends "java", "=
1.0.0"
and we list out even cookbooks that aren't called directly by the TLC,
that are dependencies of others, just to make sure everything is pinned.
+1 same here
In the non-TLC's we're more relaxed: depends "java", "~> 1.0.0"
which makes development and testing a little more bearable.
MG
On Thu, Mar 28, 2013 at 11:23 AM, Torben Knerr ukio@gmx.de wrote:
I do this with librarian and it "just works". the lock file that
I check into the repository is precisely what gets deployed on the
production chef server.
But be careful: this is not necessarily what gets downloaded by the
node if you don't "lock" all versions in metadata.rb.
What do you mean?
We don't have anything version locked, so it always pulls the most
recent, and since the most recent is always what shows up in the
Cheffile.lock, they do get the right versions... or are you referring to
conditions where the version numbers are regressing?
Sorry, that was confusing. I meant that the chef-client run on the node
does not care about Cheffile.lock. If chef-client downloads its cookbook
dependencies at the beginning of the chef run, it only cares about the
version pinnings in the environment and the metadata.rb of the downloaded
cookbooks.
I prefer to pin only the top-level application cookbook versions in the
environment. The version of the dependent cookbooks are pinned in each
application cookbook's metadata.rb.
If you only pin the toplevel application cookbooks in the environment
(and that is the right thing imho), but you don't have the dependent
cookbooks' versions locked in the app cookbook's metadata.rb, then you will
end up with different / latest versions of the dependent cookbooks on each
run.
The alternative would be to pin all cookbook versions (i.e. app
cookbooks + all dependencies) in the environment, but that's wrong in two
ways:
it's an implementation detail you should not have to worry about at
this level; it pollutes your environment files and makes promotion to
other environments awkward at best
you artificially limit yourself, because you now need a consistent
set of cookbook versions across the environment, e.g. all app cookbooks
must agree to use the same version of the apache2 cookbook. But that's
artificial: it would be perfectly fine if one app cookbook depended on
apache2 1.1.0 while another depended on apache2 2.0.0, and both lived in
the same environment
On Thu, Mar 28, 2013 at 12:54 PM, Torben Knerr ukio@gmx.de wrote:
I do this with librarian and it "just works". The lock file that I
check into the repository is precisely what gets deployed on the
production chef server.
But be careful: this is not necessarily what gets downloaded by the
node if you don't "lock" all versions in metadata.rb.
librarian doesn't support as many endpoints (just git, community
site, and folders), but I'm much more concerned with consistency than
features.
this is the reason we don't use berkshelf... I thought that the
lockfile issue had been resolved back in February but apparently not.
Tend to agree here; we also use librarian because it's simpler and
seems to be more stable. The only feature I'm really missing from berkshelf
is the 'metadata' keyword, so you don't have to repeat yourself in metadata
and Cheffile...
-jesse
On Thu, Mar 28, 2013 at 11:39 AM, Torben Knerr ukio@gmx.de
wrote:
On 28.03.2013 at 16:26, "Torben Knerr" ukio@gmx.de wrote:
Hi Greg,
sounds like a sane workflow. We are doing it similarly, but with
chef solo and thus without a chef server.
In addition to what you described:
we use application (a.k.a. role or top-level) cookbooks, not roles
for library cookbooks we use no version constraints or, if
necessary, pessimistic version constraints (e.g. ~> 1.1) in metadata.rb
for application cookbooks we use strict versioning (e.g. =
1.1.0) for all dependencies in the graph (i.e. including the transitive
ones) in metadata.rb. We do this because we want stable application
cookbooks, and chef server does not care about Cheffile/Berksfile.lock,
so we lock the transitive deps in metadata.rb as well
with such an approach your environment version constraints
become very simple => you only need to lock the app cookbooks' versions there
Finally, we treat dependency resolution for app cookbooks a bit
differently:
rather than a top-level Cheffile/Berksfile in the chef
repository, we have a custom yml file which contains the app cookbooks +
version + git location/ref
for each app cookbook in the yml file, we:
clone the cookbook ref to /tmp/
cd /tmp/ && berks install --path
/cookbooks/-
this isolates the dependencies per application cookbook, i.e.
you can have different versions of library cookbook x for app cookbooks a
and b, even though both live in the same chef repo
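A minimal sketch of that per-app-cookbook resolution loop, in Ruby. The yml schema here (a list under `app_cookbooks` with name/version/git/ref keys) is an assumption, since the original format isn't shown, and the function only builds the command strings rather than executing them:

```ruby
require "yaml"

# Build the clone + vendor commands for each app cookbook listed in the
# yml file. The schema (app_cookbooks: name/version/git/ref) is assumed;
# actually running the commands is left to the caller.
def resolution_commands(yaml_text, repo_dir)
  spec = YAML.safe_load(yaml_text)
  spec["app_cookbooks"].map do |app|
    workdir = "/tmp/#{app["name"]}"
    target  = "#{repo_dir}/cookbooks/#{app["name"]}-#{app["version"]}"
    [
      # check out the tagged release of the app cookbook...
      "git clone --branch #{app["ref"]} #{app["git"]} #{workdir}",
      # ...then vendor its dependency graph into its own directory,
      # isolated from every other app cookbook in the repo
      "cd #{workdir} && berks install --path #{target}",
    ]
  end
end

cmds = resolution_commands(<<~YML, "/srv/chef-repo")
  app_cookbooks:
    - name: myapp
      version: 1.1.0
      git: git@internal.example.com:cookbooks/myapp.git
      ref: v1.1.0
YML
cmds.flatten.each { |c| puts c }
```

Because each app cookbook is vendored into its own `name-version` directory, two app cookbooks can depend on different versions of the same library cookbook without conflicting.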
So, I've been struggling with getting Berkshelf to do exactly
what I want it to do, to the extent that I'm starting to wonder if what I'm
trying to do is the Wrong Thing™. It seems to work great for cookbook
development, but where I'm running into trouble is when I try to assemble
all of our disparate cookbook sources (community, github, internal git
repos) into a chef repo and upload to the server. I'm currently using
Berkshelf 1.2.1.
The workflow I'd like to have is this:
cookbooks have their separate release cycles and work
independently, tagging released versions as they progress. Their
dependencies are resolved first from a central chef server, then from the
community API. The cookbooks (hopefully) properly list their dependencies
in metadata, and use pessimistic constraints where necessary.
For each platform (here defined nebulously as a set of
applications and services that form a coherent whole), there is a chef
repository that contains (mostly) only roles, data bags, and environments,
and a Berksfile.
When the time comes to make a platform release, we update
cookbooks to the latest versions as determined by Berkshelf. The platform
Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are
directly mentioned in role runlists; ideally application and role
cookbooks, but we're not there yet), sourced from the community site where
possible, and from git with specific refs where it's not. Internally
developed cookbooks are always given a version constraint in the Berksfile,
external cookbooks only when necessary.
After resolving the dependencies, we generate a set of
environment cookbook constraints that constrain the test environment to the
resolved versions. We upload the cookbooks and then upload the new
environment to allow the test platform to converge to the new
configuration. The new configurations are given a "thorough" acceptance
test. When the acceptance test passes, we approve the new configuration for
deployment.
When the configuration is approved, we modify the production
environment to have the same constraints as test, the production nodes
happily converge to their new configurations, and celebrations are had by
all.
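The "generate environment constraints" step above can be sketched in Ruby. This assumes the resolved cookbook set has already been extracted into a name => version hash (how you pull that out of Berkshelf's resolver is left open); the JSON shape is the standard Chef environment format you'd upload with `knife environment from file`:

```ruby
require "json"

# Turn a resolved name => version hash into an environment document
# that pins the test environment to exactly those versions.
def environment_with_pins(env_name, resolved)
  pins = {}
  resolved.each { |name, version| pins[name] = "= #{version}" }  # exact pin
  {
    "name"              => env_name,
    "chef_type"         => "environment",
    "json_class"        => "Chef::Environment",
    "cookbook_versions" => pins,
  }
end

env = environment_with_pins("test", { "java" => "1.0.0", "apache2" => "1.1.0" })
puts JSON.pretty_generate(env)
```

Promoting to production then amounts to writing the same `cookbook_versions` hash into the production environment once the acceptance test passes.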
The problem I'm running into is between steps 3 and 4. I'm
having trouble getting repeatable results out of the cookbook resolution.
I'm OK if things change between executions of berks update. The problem
is that if I do a berks update on my machine, commit both the Berksfile
and the Berksfile.lock, and then do a berks install on another machine, I
don't necessarily get the same cookbooks on the second machine as I did on
the first machine... if the cookbooks were unconstrained in the Berksfile,
they may be updated. I see the same problem if I do a berks upload even
on the same machine, since it resolves the dependencies before doing the
upload. I've even tried specifying Berksfile.lock as the Berksfile (i.e. berks upload --berksfile Berksfile.lock) and I see the same result.
Does Berksfile.lock simply not work? Is the only way to get
consistency to do a berks install --path PATH and commit the results? Is
my idea for workflow completely wrong?
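For concreteness, the kind of platform Berksfile described in the workflow above, in Berkshelf 1.x syntax (the cookbook names, versions, and URLs are invented):

```ruby
# Resolve community cookbooks from the community site where possible.
site :opscode

# External top-level cookbook, constrained only when necessary.
cookbook "nginx", "~> 1.4.0"

# Internally developed cookbook: always version-constrained, sourced
# from an internal git repo at a release tag.
cookbook "myapp", "= 0.3.0",
  git: "git@git.example.com:cookbooks/myapp.git",
  tag: "v0.3.0"
```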
"Does that mean you pin TLCs as well as their dependencies in the
environment files?"
ALL of our cookbooks are referenced in our environment files, from TLCs
on down.
Interesting, is there a specific reason for this? If you pin all the
cookbook dependencies' versions in the TLCs' metadata.rb (as I understood
you do), it should be sufficient to pin only the TLCs in the environment,
shouldn't it?
Cheers, Torben
FWIW, we don't use environments* at all; we just use the metadata.rb of
our TLCs to "pin" versions.
*full disclosure: we run multiple orgs, and our "old" org, which still has
the largest number of nodes in it, still uses roles and environments. We've
been developing the tooling as we rearrange our chef infrastructure to make
it easier. A few thor jobs go a long way.
"Does that mean you pin TLCs as well as their dependencies in the
environment files?"
ALL of our cookbooks are referenced in our environment files, from TLC's
on down.
Interesting, is there a specific reason for this? If you pin all the
cookbook dependencies' versions in the TLCs metadata.rb (as I understood
you do) it should be sufficient to pin only the TLCs in the environment,
shouldn't it?
Cheers, Torben
On Thu, Mar 28, 2013 at 1:47 PM, Torben Knerr ukio@gmx.de wrote:
We pin versions in the top level cookbooks. We also restrict
versions in our environment files but it's always a =< so if an older
cookbook is relying on an older version of something the environment file
doesn't get in the way.
Does that mean you pin TLCs as well as their dependencies in the
environment files?
In the metadata of top level cookbooks we're strict: depends "java",
"= 1.0.0"
and we list out even cookbooks that aren't called directly by the
TLC, that are dependencies of others, just to make sure everything is
pinned.
+1 same here
In the non-TLC's we're more relaxed: depends "java", "~> 1.0.0"
which makes development and testing a little more bearable.
MG
On Thu, Mar 28, 2013 at 11:23 AM, Torben Knerr ukio@gmx.de wrote:
I do this with librarian and it "just works". the lock file
that I check into the repository is precisely what gets deployed on the
production chef server.
But be careful: this is not necessarily what gets downloaded by
the node if you don't "lock" all versions in metadata.rb.
What do you mean?
We don't have anything version locked, so it always pulls the most
recent, and since the most recent is always what shows up in the
Cheffile.lock, they do get the right versions... or are you referring to
conditions where the version numbers are regressing?
Sorry, that was confusing. I meant that the chef-client run on the
node does not care about Cheffile.lock. If chef-client downloads its
cookbook dependencies at the beginning of the chef run, it only cares about
the version pinnings in the environment and the metadata.rb of the
downloaded cookbooks.
I prefer to pin only the top-level application cookbook versions in
the environment. The version of the dependent cookbooks are pinned in each
application cookbook's metadata.rb.
If you only pin the toplevel application cookbooks in the
environment (and that is the right thing imho), but you don't have the
dependent cookbooks' versions locked in the app cookbook's metadata.rb,
then you will end up with different / latest versions of the dependent
cookbooks on each run.
The alternative would be to pin all cookbook versions (i.e. app
cookbooks + all dependencies) in the environment, but that's wrong in two
ways:
its an implementation detail you should not have to worry on this
level which pollutes your environment files and makes promotion to other
environments at least uneasy
you artificially limit yourself, because you now need a
consistent set of cookbook across the environment, e.g. all app cookbooks
must agree to use the same version of the apache2 cookbook. But that's
artificial, it would be perfectly fine if one app cookbook depends on
apache2 1.1.0 while another one depends on apache2 2.0.0, and both can live
in the same environment
On Thu, Mar 28, 2013 at 12:54 PM, Torben Knerr ukio@gmx.de
wrote:
I do this with librarian and it "just works". the lock file
that I check into the repository is precisely what gets deployed on the
production chef server.
But be careful: this is not necessarily what gets downloaded by
the node if you don't "lock" all versions in metadata.rb.
librarian doesn't support as many endpoints (just git,
community site, and folders), but I'm much more concerned with consistency
than features.
this is the reason we don't use berkshelf... i thought that the
lockfile issue had been resolved back in february but apparently not.
Tend to agree here, we also use librarian because its simpler and
seems to be more stable. The only feature I'm really missing from berkshelf
is the 'metadata' keyword, so you don't have to repeat yourself in metadata
and Cheffile...
-jesse
On Thu, Mar 28, 2013 at 11:39 AM, Torben Knerr ukio@gmx.de
wrote:
Am 28.03.2013 16:26 schrieb "Torben Knerr" ukio@gmx.de:
Hi Greg,
soun
Hi Greg,
sounds like a sane workflow. We are doing it similarly, but
with chef solo and thus without a chef server.
In addition to what you described:
we use application (a.k.a role or toplevel) cookbooks not
roles
for library cookbooks we use no version constraints or if
necessary optimistic version constraints (e.g. ~> 1.1) in metadata.rb
for application cookbooks we use strict versioning (e.g. =
1.1.0) for all dependencies in the graph (i.e. including the transitive
ones) in metadata.rb. We do this because we want stable application
cookbooks and chef server does not care about Cheffile/Berksfile.lock, thus
we lock the transitive deps as well in metadata.rb
with such an approach your environment version constraints
would be very simple => you only need lock the app cookbook's version there
Finally, we treat dependency resolution for app cookbooks a
bit differently:
rather than a toplevel Cheffile/Berksfile in the chef
repository we have a custom yml file which contains the app cookbooks +
version + git location/ref
for each app cookbook in the yml file, we:
Clone the cookbook ref to /tmp/
Cd /tmp/ && berks install --path
/cookbooks/-
this isolates the dependencies per application cookbook,
i.e. you can have different versions of library cookbook x for app
cookbooks a and b, even though both live in the same chef repo
So, I've been struggling with getting Berkshelf to do
exactly what I want it to do, to the extent that I'm starting to wonder if
what I'm trying to do is the Wrong Thing(tm). It seems to work great for
cookbook development, but where I'm running into trouble is when I try to
assemble all of our disparate cookbooks sources (community, github,
internal git repos) into a chef repo and upload to the server. I'm
currently using Berkshelf 1.2.1.
The workflow I'd like to have is this:
cookbooks have their separate release cycles and work
independently, tagging released versions as they progress. Their
dependencies are resolved first from a central chef server, then from the
community api. The cookbooks (hopefully) properly list their dependencies
in metadata, and use pessimistic constraints where necessary.
For each platform (here defined nebulously as a set of
applications and services that form a coherent whole), there is a chef
repository that contains (mostly) only roles, data bags, and environments,
and a Berksfile.
When the time comes to make a platform release, we update
cookbooks to the latest versions as determined by Berkshelf. The platform
Berksfile contains mostly top-level cookbooks (i.e. cookbooks that are
directly mentioned in role runlists; ideally application and role
cookbooks, but we're not there yet), sourced from the community site where
possible, and from git with specific refs where it's not. Internally
developed cookbooks are always given a version constraint in the Berksfile,
external cookbooks only when necessary.
After resolving the dependencies, we generate a set of
environment cookbook constraints that constrain the test environment to the
resolved versions. We upload the cookbooks and then upload the new
environment to allow the test platform to converge to the new
configuration. The new configurations are given a "thorough" acceptance
test. When the acceptance test passes, we approve the new configuration for
deployment.
When the configuration is approved, we modify the
production environment to have the same constraints as test, the production
nodes happily converge to their new configurations, and celebrations are
had by all.
The problem I'm running into is between steps 3 and 4: I'm having trouble getting repeatable results out of cookbook resolution. I'm OK with things changing between executions of berks update. The problem is that if I do a berks update on my machine, commit both the Berksfile and the Berksfile.lock, and then do a berks install on another machine, I don't necessarily get the same cookbooks on the second machine as I did on the first: if a cookbook was unconstrained in the Berksfile, it may be updated. I see the same problem if I do a berks upload even on the same machine, since it resolves the dependencies again before doing the upload. I've even tried specifying the lock file as the Berksfile (i.e. berks upload --berksfile Berksfile.lock) and I see the same result.
Does Berksfile.lock simply not work? Is the only way to get consistency to do a berks install --path PATH and commit the results? Is my idea for the workflow completely wrong?
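Step 4 (turning resolved versions into exact environment constraints) can be sketched in a few lines of Ruby. This is a hypothetical illustration, not Berkshelf's own API: it assumes you have already extracted the resolved name-to-version pairs (for example from Berkshelf's output) into a plain hash, and the cookbook names and versions shown are placeholders.

```ruby
require "json"

# Pin every resolved cookbook to exactly the version Berkshelf chose,
# producing the cookbook_versions hash for a Chef environment.
def environment_constraints(resolved)
  resolved.each_with_object({}) do |(name, version), pins|
    pins[name] = "= #{version}"
  end
end

# Hypothetical resolved versions, as parsed from Berkshelf's output.
resolved = { "nginx" => "1.4.0", "java" => "1.10.0" }

env = {
  "name"              => "test",
  "chef_type"         => "environment",
  "json_class"        => "Chef::Environment",
  "cookbook_versions" => environment_constraints(resolved)
}

# Emit an environment JSON suitable for knife environment from file.
puts JSON.pretty_generate(env)
```

The same hash could be fed into the production environment at step 5, which is what makes the test-then-promote flow work.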
On Thursday, March 28, 2013 at 1:07 PM, Michael Glenney wrote:
We pin versions in the top-level cookbooks. We also restrict versions in our environment files, but it's always a <=, so if an older cookbook is relying on an older version of something, the environment file doesn't get in the way.
In the metadata of top-level cookbooks we're strict: depends "java", "= 1.0.0". We list out even cookbooks that aren't called directly by the TLC but are dependencies of others, just to make sure everything is pinned.
In the non-TLCs we're more relaxed: depends "java", "~> 1.0.0", which makes development and testing a little more bearable.
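Sketched as metadata.rb fragments (the "java" pin is from the message above; the other cookbook names and versions are placeholders):

```ruby
# In a top-level cookbook's metadata.rb: exact pins, including
# transitive dependencies that the TLC never calls directly.
depends "java", "= 1.0.0"
depends "apt",  "= 1.9.0"   # a dependency of a dependency, pinned anyway

# In a non-TLC's metadata.rb: a pessimistic constraint instead,
# to keep development and testing bearable.
depends "java", "~> 1.0.0"
```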
On Friday, March 29, 2013 at 11:42 AM, Brian Akins wrote:
In the actual chef orgs, it's really just a convenience for downloading cookbooks from our cookbook org and uploading them to a particular chef server/org. We could just write a tool to do this, I suppose, but we already use Berkshelf with Vagrant for testing, etc. When building/updating our TLCs we can use Berkshelf to generate the list of exact versions we depend on. If Berksfile.lock worked as intended, then we wouldn't have to do this, I suppose.
Berkshelf isn't just for uploading and downloading your cookbooks. While those are the two features you're currently using it for, Berkshelf is a source code management tool that gives software engineers the tools they need to:
Generate new cookbooks
Create virtual environments to test convergence
Package cookbooks
Distribute cookbooks
Download/Retrieve cookbooks and their dependencies
It doesn't manage the environments on your Chef server, though. You should still control which cookbooks are present in your environments by locking the top-level dependencies.
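An environment that locks its top-level dependencies might look like this Ruby-DSL sketch (the environment name, cookbook names, and versions are hypothetical):

```ruby
# Hypothetical environments/production.rb: exact pins for the
# top-level cookbooks of an approved platform release.
name "production"
description "Pinned to the approved platform release"
cookbook_versions(
  "myface" => "= 1.2.3",
  "nginx"  => "= 1.4.0"
)
```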
--
Jamie Winsor @resetexistence