Ed,
Ahh, I forgot you wrote that cookbook. We've chatted on IRC before. I'm
using Ubuntu, and a few things needed to change for it to work; if you
want, I can send those to you in a different thread. Ubuntu expects some
files to be present that are there on CentOS/RedHat by default, so I had
to reorder parts of your cookbook.
I decided to ship my files to S3 so that multiple servers could pull files
down at the same time without a bottleneck, and because Hudson doesn't keep
all of my builds, only the latest few.
About the use of jsvc: since I'm starting my servers and doing my first Chef
run via SSH (knife ec2 server create), the Tomcat process was dying once the
Chef run was complete (SSH logout). I wasn't using runit to start
Tomcat. Using jsvc kept the process alive after logout.
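For anyone hitting the same thing: jsvc double-forks and detaches from the controlling terminal, so the JVM survives the logout that ends a knife bootstrap. A rough sketch of the kind of invocation involved (every path and user below is an assumption, not the cookbook's actual layout):

```ruby
# Hypothetical Chef execute resource wrapping jsvc; because jsvc
# daemonizes Tomcat, the process is not killed when the bootstrap
# SSH session (knife ec2 server create) closes.
execute "start-tomcat-via-jsvc" do
  command [
    "/usr/bin/jsvc",
    "-home", "/usr/lib/jvm/java-6-openjdk",
    "-user", "tomcat",
    "-pidfile", "/var/run/tomcat6.pid",
    "-outfile", "/var/log/tomcat6/catalina.out",
    "-errfile", "&1",
    "-cp", "/usr/share/tomcat6/bin/bootstrap.jar",
    "org.apache.catalina.startup.Bootstrap"
  ].join(" ")
  creates "/var/run/tomcat6.pid"
end
```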
Thanks for your contributions!
--Charlie
On Tue, Oct 5, 2010 at 3:07 PM, Haselwanter Edmund <edmund@haselwanter.com> wrote:
On 05.10.2010, at 21:06, Charles Sullivan wrote:
I've spent a lot of time doing something similar. Here is what I had to do
to make everything work with Chef.
First I updated the Tomcat cookbook to use jsvc so that the Java
process is daemonized properly.
I wrote that cookbook. It was meant to work on CentOS, and it still does.
Previously it was dying once the Chef run was complete.
Must be something different.
I think this is because I was bootstrapping my servers with 'knife ec2
server create'. Second, I made the Tomcat manager API actually work. I use
it to deploy my application correctly.
What OS did you use? This does work on CentOS 5.2 on EC2 too.
I use Hudson as my continuous build server. I have it ship my WAR files to
my S3 account. I then reference which build should be deployed in a
data bag.
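For illustration, here is the shape such a data bag item and lookup might take (the bag name, item id, and field names are hypothetical, not Charlie's actual schema):

```ruby
# Hypothetical data bag item "myapp" in a bag named "applications":
#   { "id": "myapp",
#     "war_url": "https://s3.amazonaws.com/my-builds/myapp-1.0.42.war",
#     "version": "1.0.42" }
#
# In a recipe, the deployable build is then just a lookup away:
app = data_bag_item("applications", "myapp")

remote_file "/var/lib/tomcat6/webapps/myapp.war" do
  source app["war_url"]
  owner "tomcat"
  mode "0644"
end
```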
If the Hudson server is accessible you can request it from Chef, e.g. with a
bash resource and curl. S3 is a nice solution; it makes it CI-agnostic. But I
wanted a user to be able to just point and click, with no CI / Chef
knowledge ...
GitHub - dougm/hudson-s3: Upload Hudson build artifacts to Amazon S3
To allow my recipe to download the file from S3 without changing the
permissions of the file, I use the following S3 resource:
s3_file.rb · GitHub
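For context, the heart of any such s3_file-style resource is signing the GET request so a private object can be fetched without opening up its ACL. A minimal sketch of AWS signature-v2 query-string signing in plain Ruby (bucket, key, and credentials are placeholders, and this is not the code from the gist):

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Build a pre-signed, expiring S3 GET URL (AWS signature version 2).
# All identifiers below are placeholders, not real credentials.
def presigned_s3_url(access_key, secret_key, bucket, path, expires)
  # v2 string-to-sign for a plain GET: verb, empty MD5/type, expiry, resource
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}#{path}"
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, string_to_sign)
  signature = Base64.strict_encode64(hmac)
  "https://#{bucket}.s3.amazonaws.com#{path}" \
    "?AWSAccessKeyId=#{access_key}&Expires=#{expires}" \
    "&Signature=#{CGI.escape(signature)}"
end

url = presigned_s3_url('AKIDEXAMPLE', 'secret', 'my-builds', '/myapp.war', 1_300_000_000)
```

A node can then fetch the WAR with a plain remote_file against the signed URL, no ACL changes needed.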
Let me know if you have any more questions.
On Tue, Oct 5, 2010 at 1:23 PM, Haselwanter Edmund <edmund@haselwanter.com> wrote:
Do you already have a CI server in place? Then it probably comes with some
kind of "download URL". Put that in a remote_file resource and use your
deploy mechanism.
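Concretely, with Hudson the stable "latest successful artifact" permalink makes this a one-resource job; the hostname, job name, and target paths below are of course placeholders:

```ruby
# Hypothetical Chef recipe fragment: pull the latest good WAR straight
# from the CI server's download URL and redeploy on change.
remote_file "/var/lib/tomcat6/webapps/myapp.war" do
  source "http://hudson.example.com/job/myapp/lastSuccessfulBuild/artifact/target/myapp.war"
  owner "tomcat"
  group "tomcat"
  mode "0644"
  notifies :restart, "service[tomcat6]"
end
```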
What I have done for a client is:
- wrote a small Rails app as a kind of JSON builder for Chef
- this Rails app connects to Cruise from Thoughtworks and presents the user
with the latest successful builds of a pipeline
- the user checks the build to deploy, which is stored
- after that (and some other configuration options for apache, tomcat6,
god, ...) the user can initiate a chef-solo run
- the Rails app (in fact it's Webistrano with this custom extension; many
thanks to the awesome Peritor folks!) has a task that copies the generated
JSON over to the target machine and runs chef-solo
=> the JSON could be written by hand, but this lets not-so-tech-savvy
people stay agnostic to the inner workings of the deploy process
=> each deploy has its own server-config JSON string
==> you could point at another machine and do the same stuff; in fact I did
that to have the same setup on dev and prod stages
=> you can go back in time and have a basic "I want this machine at
the state of 03-23-2010" if you have the content to clone (which I have on
Amazon EBS)
Chef does the heavy lifting.
happy cooking 
On 05.10.2010, at 19:47, Mark J. Reed wrote:
We're looking at a similar issue, and so far we've found a couple
other options, each of which has its own scaling issues, but might be
appropriate in some cases.
- Have the recipe check the code out and do a build.
This is more heavyweight than it needs to be, obviously, and
introduces build-time dependencies into the runtime deployment, but
it's viable if the app and/or the deployment is small enough. It has
the advantage of keeping the support infrastructure simple.
But as long as your build server is able to talk to Chef to update the
WAR's new location/name, why not just do this instead:
- Upload the WAR itself as a cookbook file.
This has the advantage that you don't introduce a new network access
dependency between your target nodes and your artifact repository (or
source code repository for option 1); if they can talk to the Chef
server, they can get the files.
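That option is only a few lines of recipe code; the cookbook layout and file names here are illustrative:

```ruby
# Hypothetical sketch: the WAR travels inside the cookbook itself
# (files/default/myapp.war), so nodes only ever talk to the Chef server.
cookbook_file "/var/lib/tomcat6/webapps/myapp.war" do
  source "myapp.war"
  owner "tomcat"
  mode "0644"
  notifies :restart, "service[tomcat6]"
end
```

The trade-off is cookbook size: a large WAR bloats every cookbook sync.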
On Tue, Oct 5, 2010 at 12:36 PM, Seth Chisamore <schisamo@opscode.com> wrote:
Fellow Cooks,
I've been brainstorming on the best approach to incorporate Java-based web
application deployments into the Chef ecosystem. The tricky thing we have
to contend with is the two-step build/deploy process most Java applications
go through. In most Java shops, code is checked out from the SCM and then
some sort of build framework like Maven compiles and packages the
application down into a deployable artifact (usually a WAR file). This
artifact is then deployed into the Java application server(s).

Right now the data-bag-driven application cookbook uses a one-step
approach, i.e. code is just checked out and sym-linked as "current". It
would be nice to have a Java application that is deployed via this cookbook
follow a similar pattern. The application cookbook can just pull the final
deployable WAR file down from some arbitrary location, i.e. a valid URL
whose reference lives in the application data bag.

We still have to deal with the "build" portion though... or do we? Most
Java shops that are using something as sophisticated as Chef for
application deployments probably already have a continuous integration
server that does "builds". That's awesome and we shouldn't change it! What
we need to do is just hook into this workflow, i.e. have Chef be the final
mile.

In order to solve this I propose creating a Maven plugin that would do the
following after a successful build:
- push the completed WAR to a configurable distribution point (Artifactory,
S3, etc.)
- grab a reference to the completed WAR (artifact download URL)
- make an authorized PUT request to the Chef server and update the
application data bag with the WAR's new location/name. We could probably
leverage the jclouds chef-client to do this (i.e. the CI server or build
machine becomes a node).

The next time the chef-client runs on all application servers, the new
artifact should be pulled down and the deployment is complete. I think
creating a Maven plugin is the best approach since most CI servers work
well with Maven. A smaller shop that doesn't have a CI server could also
just check the code out of SCM and perform a build via Maven.
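To make the third step concrete, the data bag item such a plugin would PUT might look like this; every field name and value is hypothetical, a quick Ruby sketch just to show the shape:

```ruby
require 'json'

# Hypothetical data bag item a post-build step would PUT to the Chef
# server so application nodes pick up the new artifact on the next run.
item = {
  "id"           => "myapp",
  "version"      => "1.0.42",
  "artifact_url" => "https://s3.amazonaws.com/my-builds/myapp-1.0.42.war"
}
payload = JSON.generate(item)
```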
Thoughts?
Seth
Opscode, Inc.
Seth Chisamore, Technical Evangelist
T: (404) 348-0505 E: schisamo@opscode.com
Twitter, IRC, Github: schisamo
--
Mark J. Reed markjreed@gmail.com
--
DI Edmund Haselwanter, edmund@haselwanter.com,
http://edmund.haselwanter.com/
http://www.iteh.at | Facebook |
http://at.linkedin.com/in/haselwanteredmund
--
Charles Sullivan
charlie.sullivan@gmail.com