Having humans manage that would totally suck.
Attributes are how we create/expose the interface by which we’d pass in the data, but there’s no need to create a permanent object like a role or environment for each of the iterations in your example. (There are certainly scenarios where that might be advisable, and in those cases I’d have machines generate the role/environment JSON and handle that for me.)
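For what it’s worth, the machine-generated environment idea is a few lines of plain Ruby; the naming scheme and attribute layout here (`feature-5-ot56`, `zz.sha`) are invented for the sketch, not anything Chef mandates:

```ruby
require 'json'

# Generate a throwaway Chef environment document that pins one commit.
# The name and attribute keys below are made up for illustration.
def build_environment(feature, sha)
  {
    'name'               => "feature-#{feature}-#{sha}",
    'json_class'         => 'Chef::Environment',
    'chef_type'          => 'environment',
    'default_attributes' => { 'zz' => { 'sha' => sha } }
  }
end

# A build job would write this out and load it with `knife environment from file`.
puts JSON.pretty_generate(build_environment(5, 'ot56'))
```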
In the simplest case, you could have your build server pass in that shasum value to the knife bootstrap command with the -j flag.
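Concretely, that flow might look like the following; the attribute name (`zz.sha`) and the per-commit artifact naming are assumptions for the sketch, not something from this thread:

```ruby
# Build server kicks off the node, passing the sha as a JSON attribute:
#   knife bootstrap NODE_FQDN -j '{"zz": {"sha": "12a4"}}' -r 'recipe[zz::application]'

# recipes/application.rb then reads it back as an ordinary node attribute:
sha = node['zz']['sha']

remote_file "#{Chef::Config[:file_cache_path]}/zz-#{sha}.tar.gz" do
  source "http://my-artifact-server/zz-#{sha}.tar.gz" # assumed per-commit artifact naming
end
```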
On July 15, 2014 at 10:53:25 AM, Danny Hadley (email@example.com) wrote:
I’m not asking about deployment techniques so much as how to provision a machine with a specific commit/version in mind. I want to keep this conversation as artifact-type agnostic as possible. Whether I’m using a git checkout, zip files, tarballs, or whatever, I still need to be able to get a developer a machine with the codebase deployed at some specific state, based on their needs.
Using attributes seems like a one-off, “hard coded” solution that requires changing either a role, recipe, or attribute file for every specific version of the artifact, since those are the places attributes can be defined. Doesn’t that also mean I’d need to commit new code to my chef source control for every commit to the main repository I wanted to deploy?
I’ve attached an image to this email, but if it doesn’t get posted to the archive, refer to here: http://oi58.tinypic.com/x9edd.jpg.
In a development team workflow, let’s say I have one team working on feature branch #5 and one working on feature branch #6. Let’s say another member is allocated to work on feature #5 and wants to have a new machine provisioned with the latest code from that branch deployed on it. In the picture, I’m talking about commit “ot56”.

Now let’s also say I have a build configured for this repo that provisions a brand new machine with the latest code to run regression tests against it, and that someone from feature team #6 just committed “af2t”. Using attributes, in order to get the appropriate code deployed onto those two machines, wouldn’t I need to make code changes in the chef repository? Imagine now that there are 10+ feature teams and build jobs running every 5 minutes; how would that scale?
On July 15, 2014 at 11:10:46 AM, Charles Johnson (firstname.lastname@example.org) wrote:
There are a couple of ways to skin this cat.
First, if you want to use code straight from git or svn, you could use the git or subversion resources, and check it out directly from within the recipe. Make your desired branch/tag/commit an attribute and you’re good to go.
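A minimal sketch of that, with the repo URL and attribute name invented for the example:

```ruby
# Deploy straight from source control; the revision is an attribute, so the
# same recipe can deploy master, a feature branch, a tag, or a single commit.
git '/http/public' do
  repository 'git://my-git-server/zz.git'  # assumed repo location
  revision node['zz']['revision']          # e.g. 'master', 'feature-5', or '12a4'
  action :sync
end
```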
If you’re using a tarball, pull it down via the remote_file resource, and make the filename an attribute. Then you can change the attribute to change the file downloaded.
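Something like this, assuming a per-build artifact naming scheme on the server:

```ruby
# The filename is an attribute; flip the attribute to flip the deployed build.
tarball_name = node['zz']['tarball']  # e.g. 'zz-12a4.tar.gz' (assumed naming)

remote_file "#{Chef::Config[:file_cache_path]}/#{tarball_name}" do
  source "http://my-artifact-server/#{tarball_name}"
end
```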
There are a couple of cookbooks that add resources designed to help with the tarball download-unpack-symlink dance:
That said, if you’re deploying via tarballs, I strongly suggest you switch over to native packaging if at all possible. FPM can make creating a versioned deb or rpm almost easier than wrapping up a tarball:
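As a rough sketch of the FPM route from inside a recipe (the package name, version, and paths are placeholders):

```ruby
# Turn an unpacked build directory into a versioned .deb, then install it
# with the node's native package manager.
execute 'fpm-package-zz' do
  command 'fpm -s dir -t deb -n zz -v 1.2.0 -C /tmp/zz-build .'
  cwd '/tmp'
  creates '/tmp/zz_1.2.0_amd64.deb'  # fpm's default deb naming
end

dpkg_package 'zz' do
  source '/tmp/zz_1.2.0_amd64.deb'
end
```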
Hope that helps!
On July 14, 2014 at 8:49:37 PM, Danny Hadley (email@example.com) wrote:
Hmm, I must not have phrased my question correctly. What I’m trying to solve is how to get a specific version of that “package” onto a node being provisioned with chef.
Suppose my package only has one file, an index.html file, with some title in master. Then I cut a feature branch and make a commit (12a4) that changes the title to “zz | home”.
How can I get that code (the “package”) deployed onto a new machine without merging into master and promoting a new build artifact/release version?
I’m not asking about tracking the artifact after it’s been deployed to the machine, but how to get a specific build version of the artifact itself onto the machine. This is something that I’ve come across when using a “phoenix” approach with nodes where they are only ever provisioned with chef once. There are no re-runs; servers are provisioned, tested/used, and discarded, being replaced by the next generation (blue/green).
On July 14, 2014 at 8:09:47 PM, Mike Thibodeau (firstname.lastname@example.org) wrote:
Unpacking a tar or other file archive leaves files on the system that are not tracked by chef or by the node itself.
I have been thinking of using an intermediate directory on the destination machine to cache or explode archives, then using fpm to turn that into a short-lived package for the node to truly install. This way you have a record of each file and you can keep using your chosen source object. Creating metadata for that transient package (the source path, name, sum, size, and anything else about it) would be good to have for future comparison and reporting.
Using the node’s native packager enables uninstall, upgrade, and all the other sweetness a package management utility brings.
It’s similar to downloading source and compiling locally, except in this case only the last steps remain: making the package and installing it.
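The metadata step is easy to do in plain Ruby before invoking fpm; nothing here is Chef-specific, and the field names are just a suggestion:

```ruby
require 'digest'
require 'json'

# Describe a source archive so the transient package can be traced back to
# the exact artifact it was built from.
def artifact_metadata(path)
  {
    'source' => File.expand_path(path),
    'name'   => File.basename(path),
    'size'   => File.size(path),
    'sha256' => Digest::SHA256.file(path).hexdigest
  }
end

# Example with a throwaway file:
File.write('/tmp/zz-demo.tar.gz', 'demo-bytes')
puts JSON.pretty_generate(artifact_metadata('/tmp/zz-demo.tar.gz'))
```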
On Jul 14, 2014, at 2:48 PM, Danny Hadley email@example.com wrote:
I have been using chef for around a year now and I’m having a bit of trouble bridging one last piece of the automation puzzle with a chef workflow: getting a specific commit (i.e., a feature branch I want to deploy to staging) deployed onto a node using chef.
For brevity, let’s say my deployment procedure is simply to download and unpack a tarball into the doc root of a web server, and let’s also say that the installation of that web server on the node is taken care of in a different recipe. So, my “zz” cookbook has two recipes, where the application recipe looks like:

tarball "/http/public" do
  source "http://my-artifact-server/zz.tar.gz"
end
What is the best way of handling that situation I mentioned, where I want a feature branch (a specific commit) rather than the master/prod release that is most likely the file downloaded from “http://my-artifact-server/zz.tar.gz”? Let’s say my CI pipeline has the ability to build on push, and the commit hash is 12a4, so I do have an artifact for the 12a4 build. I was thinking that one solution would be to have the chef “tarball” resource download its file from a URL where the “nn” param is the node name. This way, the responsibility lies with the artifact server to serve up the correct artifact, and chef needs only one version of the recipe. I’m not sure that tool exists though, and it seems like a tool that would need to be in cahoots with chef, since the node would have configuration properties stored in multiple places.
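For what it’s worth, the recipe side of that idea would be tiny; the query-parameter convention here is hypothetical, and all the real work would live in the artifact server:

```ruby
# One recipe for every node; the artifact server inspects the "nn" param
# and decides which build of zz.tar.gz to hand back.
tarball '/http/public' do
  source "http://my-artifact-server/zz.tar.gz?nn=#{node.name}"
end
```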
Anyways, does this make sense to anyone, has anyone else faced these growing
pains, and if so, how did you go about solving them?