Ideas on deploying Java applications

Fellow Cooks,
I've been brainstorming on the best approach to incorporate Java-based web
application deployments into the Chef ecosystem. The tricky thing we have
to contend with is the two-step build/deploy process most Java applications
go through. In most Java shops code is checked out from the SCM, and then
some sort of build framework like Maven compiles and packages the
application down into a deployable artifact (usually a WAR file). This
artifact is then deployed into the Java application server(s).

Right now the data-bag driven application cookbook uses a one-step
approach, i.e. code is just checked out and sym-linked as "current". It
would be nice to have a Java application that is deployed via this cookbook
follow a similar pattern. The application cookbook can just pull the final
deployable WAR file down from some arbitrary location, i.e. a valid URL whose
reference lives in the application data bag.
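
For illustration, such a data bag item might look like the following. Everything here is a made-up example (the "myface" app name, the artifact URL, the field names); the only point is that the CI side rewrites the WAR reference and the cookbook side only ever reads it:

```ruby
require 'json'

# Hypothetical "myface" item in an applications data bag. The CI server
# would rewrite "war_url" (and "checksum") after every successful build;
# the application cookbook only reads this item.
app_item = {
  "id"       => "myface",
  "war_url"  => "https://artifacts.example.com/myface/myface-1.2.3.war",
  "checksum" => "d41d8cd98f00b204e9800998ecf8427e", # lets the recipe skip unchanged WARs
  "version"  => "1.2.3"
}

puts JSON.pretty_generate(app_item)
```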

We still have to deal with the "build" portion though... or do we? Most
Java shops that are using something as sophisticated as Chef for application
deployments probably already have a continuous integration server that does
"builds". That's awesome and we shouldn't change it! What we need to do is
just hook into this workflow, i.e. have Chef be the final mile.

In order to solve this I propose creating a Maven plugin that would do the
following after a successful build:
- push the completed WAR to a configurable distribution point (Artifactory,
S3, etc.)
- grab a reference to the completed WAR (the artifact download URL)
- make an authorized PUT request to the Chef server and update the
application data bag with the WAR's new location/name. We could
probably leverage the jclouds chef-client to do this (i.e. the CI server or
build machine becomes a node).
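
The plugin's last step, sketched in Ruby rather than as an actual Maven mojo, might look like this. Everything here is an assumption: the data bag layout, the server URL, and especially the missing request signing (a real Chef client must sign the PUT with its client key via Mixlib::Authentication; the jclouds chef-client would handle that for you). The `/data/BAG/ITEM` path is the standard Chef server data bag item endpoint:

```ruby
require 'json'
require 'net/http'
require 'time'
require 'uri'

# Return a copy of the data bag item pointing at the freshly uploaded WAR.
def updated_item(item, war_url)
  item.merge("war_url" => war_url, "deployed_at" => Time.now.utc.iso8601)
end

# PUT the item back to the Chef server's data bag item endpoint. NOTE: a
# real request must carry the signed authentication headers; this unsigned
# sketch only shows the shape of the call.
def push_item(chef_server, bag, item)
  uri = URI("#{chef_server}/data/#{bag}/#{item['id']}")
  req = Net::HTTP::Put.new(uri, "Content-Type" => "application/json")
  req.body = JSON.generate(item)
  Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
end
```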

The next time the chef-client runs on all application servers the new
artifact should be pulled down and the deployment is complete. I think
creating a Maven plugin is the best approach since most CI servers work well
with Maven. A smaller shop that doesn’t have a CI server could also just
check the code out of SCM and perform a build via Maven.

Thoughts?

Seth

Opscode, Inc.
Seth Chisamore, Technical Evangelist
T: (404) 348-0505 E: schisamo@opscode.com
Twitter, IRC, Github: schisamo

Build your software, make an OS native package (deb, rpm), chuck it
into your package repository and then the chef recipe becomes Super
Easy.

-s

On Tue, Oct 5, 2010 at 12:36 PM, Seth Chisamore schisamo@opscode.com wrote:


Sounds easy, but building OS-native packages isn't something most Java teams
have the will or knowledge to do.

Seth


On Tue, Oct 5, 2010 at 12:46 PM, Sean OMeara someara@gmail.com wrote:


I think a Maven plugin is a great approach.

- Trotter


We're looking at a similar issue, and so far we've found a couple
other options, each of which has its own scaling issues, but might be
appropriate in some cases.

  1. Have the recipe check the code out and do a build.

This is more heavyweight than it needs to be, obviously, and
introduces build-time dependencies into the runtime deployment, but
it's viable if the app and/or the deployment is small enough. It has
the advantage of keeping the support infrastructure simple.

But as long as your build server is able to talk to Chef to update the
WAR's new location/name, why not just do this instead:

  2. Upload the WAR itself as a cookbook file.

This has the advantage that you don't introduce a new network access
dependency between your target nodes and your artifact repository (or
source code repository for option 1); if they can talk to the Chef
server, they can get the files.
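
As a sketch of the upload-the-WAR-as-a-cookbook-file option, a recipe fragment might look like the following. This is a configuration sketch, not standalone-runnable code, and the paths, names, and ownership are all hypothetical (a stock tomcat6 layout is assumed):

```ruby
# Deploy a WAR that travels inside the cookbook itself. Chef checksums
# cookbook files, so this resource only rewrites the WAR when the file
# shipped with the cookbook actually changes.
cookbook_file "/var/lib/tomcat6/webapps/myface.war" do
  source "myface.war"   # i.e. files/default/myface.war inside the cookbook
  owner  "tomcat6"
  group  "tomcat6"
  mode   "0644"
end
```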


--
Mark J. Reed markjreed@gmail.com

Placing the WAR in a cookbook came up in some internal discussions our team
had. sfalcon had an elegant answer I agree with completely:

Putting artifacts in cookbooks (AIC) has a number of advantages: it allows reuse of auth as well as full cookbook power; it avoids the need for another system where the artifacts would live.

But for Platform customers with large or many artifacts, there are some
downsides. Updating an artifact using AIC means uploading a large binary to
the Platform and then downloading that binary from the Platform onto each node
that needs it. Using customer-local artifact storage as an alternative to
AIC saves time and bandwidth.

Another inconvenience of AIC is that it doesn't fit well with a workflow in
which cookbooks are stored in version control. Imagine: you just want to
edit a cookbook, but first you must wait to check out all the latest
artifacts. If a customer outsources version control, they may end up with an
extra layer of data pushing.

To me it just feels dirty to put a bunch of binary files up on the Chef
server.

Seth


On Tue, Oct 5, 2010 at 1:47 PM, Mark J. Reed markjreed@gmail.com wrote:


Do you already have a CI server in place? Then it probably comes with a kind of
"download URL". Put that in a remote_file resource and use your deploy mechanism.
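
A minimal sketch of that idea, again as a non-runnable recipe fragment. The job name and paths are hypothetical; the URL shape assumes a Hudson-style "lastSuccessfulBuild" permalink, which is stable between builds so the recipe never has to change:

```ruby
# Pull the newest successful artifact straight off the CI server.
remote_file "/var/lib/tomcat6/webapps/myface.war" do
  source "http://hudson.example.com/job/myface/lastSuccessfulBuild/artifact/target/myface.war"
  owner  "tomcat6"
  mode   "0644"
  action :create
end
```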

What I have done for a client is:

  • Wrote a small Rails app as a kind of JSON builder for Chef
  • This Rails app connects to Cruise from Thoughtworks and presents the user with the latest successful builds of a pipeline
  • The user checks the build to deploy, which is stored
  • After that (and some other configuration options for apache, tomcat6, god, ...) the user can initiate a chef-solo run
  • The Rails app (in fact it's Webistrano with this custom extension -> many thanks to the awesome Peritor folks!) has a task copying over the generated JSON to the target machine and running chef-solo

=> the JSON could be written by hand, but this lets not-so-tech-savvy people stay agnostic of the inner workings of the deploy process
=> each deploy has its own server-config JSON string
==> you could point to another machine and do the same stuff. in fact I did that to have the same stuff on dev and prod stages
=> you can go back in time and have some basic "I want this machine at the state of 03-23-2010" if you have the content to clone (which I have on Amazon EBS)

Chef does the heavy lifting.

happy cooking :)


--
DI Edmund Haselwanter, edmund@haselwanter.com, http://edmund.haselwanter.com/
http://www.iteh.at | Facebook | http://at.linkedin.com/in/haselwanteredmund

I've spent a lot of time doing something similar. Here is what I had to do
to make everything work with Chef.

First I updated the Tomcat cookbook so that it uses JSVC, so that the Java
process is daemonized properly. Previously it was dying once the Chef run
was complete. I think this is because I was bootstrapping my servers with
'knife ec2 server create'. Second, I made the Tomcat manager API actually
work. I use it to deploy my application correctly.

I use Hudson as my continuous build server. I have it ship my WAR files to
my S3 account. I then reference which build should be deployed in a
data bag.

To allow my recipe to download the file from S3 without changing permissions
of the file, I use the following S3 resource...

Let me know if you have any more questions.

On Tue, Oct 5, 2010 at 1:23 PM, Haselwanter Edmund
edmund@haselwanter.com wrote:


--
Charles Sullivan
charlie.sullivan@gmail.com

We're doing the same thing. A few comments below:

On Tue, Oct 5, 2010 at 6:36 PM, Seth Chisamore schisamo@opscode.com wrote:


Right now the data-bag driven application cookbook uses a one-step
approach...ie code is just checked out and sym-linked as "current". It
would be nice to have a Java application that is deployed via this cookbook
follow a similar pattern. The application cookbook can just pull the final
deployable WAR file down from some arbitrary location...ie a valid URL whose
reference lives in the application data bag.

This is how we do it. The build step uploads to S3 with a build #,
the cookbook downloads and symlinks.
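
That download-and-symlink step can be sketched in plain Ruby. The directory layout is hypothetical; in a cookbook these operations would be directory, remote_file, and link resources:

```ruby
require 'fileutils'

# Each build lands in its own release directory; "current" is just a
# symlink flip, which is also what makes rollback cheap: re-point the link
# at an older release.
def activate_build(app_root, build_number)
  release = File.join(app_root, 'releases', build_number.to_s)
  FileUtils.mkdir_p(release)      # the WAR would be fetched into here
  current = File.join(app_root, 'current')
  FileUtils.rm_f(current)         # drop the previous symlink, if any
  FileUtils.ln_s(release, current)
  current
end
```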

We still have to deal with the "build" portion though....or do we?

I don't think so. I would hate to have all the build dependencies
installed on the server.

Most
Java shops that are using something as sophisticated as Chef for application
deployments probably already have a continuous integration server that does
"builds". That's awesome and we shouldn't change it! What we need to do is
just hook into this workflow...ie have Chef be the final mile.

Precisely.

In order to solve this I propose creating a Maven plugin that would do the
following after a successful build:
-push the completed WAR to a configurable distribution point (Artifactory,
S3 etc.).
-grab a reference to the completed WAR (artifact download url)
-make an authorized PUT request to the Chef server and update the
application data bag with the the WAR's new location/name. We could
probably leverage the jclouds chef-client to do this (ie the CI server or
build machine becomes a node).

The next time the chef-client runs on all application servers the new
artifact should be pulled down and the deployment is complete. I think
creating a Maven plugin is the best approach since most CI servers work well
with Maven. A smaller shop that doesn't have a CI server could also just
check the code out of SCM and perform a build via Maven.


We don't use Maven but Gradle. It was pretty simple to script the upload
to S3. I think the main part is to get a good Chef cookbook that
supports deployment as well as rollback to previous versions, plus
hooks for db updates etc. :)

/Jeppe

On 05.10.2010, at 21:06, Charles Sullivan wrote:

I've spend a lot of time doing something similar. Here is what I had to do to make everything work w/ Chef.

First I updated the Tomcat cookbook so that it uses JSVC so that the Java process is daemonized properly..

I wrote that cookbook ;)

It was meant to work on CentOS and it still does.

Previously it was dying once the Chef run was complete.

Must be something different.

I think this is because I was boostrapping my servers with 'knife ec2 server create'. Second, I made the TomCat manager API actually work. I use it to deploy my application correctly.

What OS did you use? This does work on CentOS 5.2 on EC2 too.

I use Hudson as my continuous build server. I have it ship my WAR files to my S3 account. I then reference which build that should be deployed in a databag.

If the Hudson server is accessible you can request it from Chef, e.g. with a bash resource and curl. S3 is a nice solution; it makes it CI-agnostic. But I wanted a user to be able to just point and click, with no CI/Chef knowledge needed...
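
A hedged sketch of that bash-plus-curl approach (job name and URL hypothetical, and again a recipe fragment rather than runnable code; with a plain URL a remote_file resource would normally do, so the bash resource mainly earns its keep when you need extra curl flags such as authentication):

```ruby
# Fetch the latest good WAR from Hudson with curl inside a bash resource.
bash "fetch latest war" do
  code <<-EOH
    curl -sfL -o /tmp/myface.war \
      http://hudson.example.com/job/myface/lastSuccessfulBuild/artifact/target/myface.war
  EOH
  not_if { ::File.exist?("/tmp/myface.war") }  # crude guard against re-downloading
end
```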

GitHub - dougm/hudson-s3: Upload Hudson build artifacts to Amazon S3

To allow my recipe to download the file from S3 without changing permissions of the file I use the following S3 resource.. s3_file.rb · GitHub

Let me know if you have any more questions..

On Tue, Oct 5, 2010 at 1:23 PM, Haselwanter Edmund edmund@haselwanter.com wrote:

Do you already have a CI server in place? Then it probably comes with a kind of
"download url". Put that in a remote_file resource and use your deploy mechanism.
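A minimal sketch of that remote_file approach (paths, bag and item names are made up for illustration):

```ruby
# Hypothetical recipe fragment: the WAR's download URL lives in the
# application data bag, as discussed in this thread.
app = data_bag_item("apps", "myapp")

remote_file "/opt/tomcat6/webapps/myapp.war" do
  source app["war_url"]
  owner "tomcat"
  group "tomcat"
  mode "0644"
  notifies :restart, resources(:service => "tomcat")
end
```

This assumes a service "tomcat" resource is declared earlier in the run list.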

What I have done for a client is:

  • Wrote a small Rails app as a kind of JSON builder for Chef
  • This Rails app connects to Cruise from Thoughtworks and presents the user with the latest successful builds of a pipeline
  • The user checks the build to deploy, which is stored
  • After that (and some other configuration options for apache, tomcat6, god, ...) the user can initiate a chef-solo run
  • The Rails app (in fact it's Webistrano with this custom extension -> many thanks to the awesome Peritor folks!) has a task copying over the generated JSON to the target machine and runs chef-solo

=> the JSON could be written by hand, but this lets not-so-tech-savvy people stay agnostic to the inner workings of the deploy process
=> each deploy has its own server-config JSON string
==> you could point to another machine and do the same stuff; in fact I did that to have the same stuff on dev and prod stages
=> you can go back in time and have some basic "I want this machine at the state of 03-23-2010" if you have the content to clone (which I have on Amazon EBS)

chef does the heavy lifting

happy cooking :)

On 05.10.2010, at 19:47, Mark J. Reed wrote:

We're looking at a similar issue, and so far we've found a couple
other options, each of which has its own scaling issues, but might be
appropriate in some cases.

  1. Have the recipe check the code out and do a build.

This is more heavyweight than it needs to be, obviously, and
introduces build-time dependencies into the runtime deployment, but
it's viable if the app and/or the deployment is small enough. It has
the advantage of keeping the support infrastructure simple.

But as long as your build server is able to talk to Chef to update the
WAR's new location/name, why not just do this instead:

  2. Upload the WAR itself as a cookbook file.

This has the advantage that you don't introduce a new network access
dependency between your target nodes and your artifact repository (or
source code repository for option 1); if they can talk to the Chef
server, they can get the files.
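For option 2 the node-side resource is just a cookbook_file; a sketch under made-up names (the WAR would live under the cookbook's files/default/ directory):

```ruby
# Hypothetical recipe fragment: the WAR is distributed by the Chef server
# itself as a cookbook file, so nodes need no extra network access.
cookbook_file "/opt/tomcat6/webapps/myapp.war" do
  source "myapp.war"
  owner "tomcat"
  group "tomcat"
  mode "0644"
  notifies :restart, resources(:service => "tomcat")
end
```

This assumes a service "tomcat" resource exists earlier in the run; large WARs will of course grow the cookbook, which is the scaling trade-off mentioned above.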


--
Mark J. Reed markjreed@gmail.com

--
DI Edmund Haselwanter, edmund@haselwanter.com, http://edmund.haselwanter.com/
http://www.iteh.at | Facebook | http://at.linkedin.com/in/haselwanteredmund

--
Charles Sullivan
charlie.sullivan@gmail.com


Ed,

Ahh, I forgot you wrote that cookbook. We've chatted on IRC before. I'm
using Ubuntu and there were a few things that needed to change for it to
work. If you want I can send that to you in a different thread. Ubuntu
expected some files to be present that are on CentOS/RedHat by default, so I
had to reorder some of your cookbook.

I decided to ship my files to S3 so that multiple servers could pull files
down at the same time without a bottleneck, and because Hudson doesn't keep
all of my builds, only a few of the latest.

About the use of jsvc: since I'm starting my servers and doing my first Chef
run via SSH (knife ec2 server create), the Tomcat process was dying once the
Chef run was complete (SSH logout). I wasn't using runit to start
Tomcat... Using jsvc kept the process alive after logout.

Thanks for your contributions!

--Charlie


--
Charles Sullivan
charlie.sullivan@gmail.com

FWIW, we do something extremely cheesy. The deploy resource is a little
overkill for us.
We wrote a simple define that will sync a specific branch/tag from git and
then run a script. By default, it will cwd to the directory of the git
code and run "installme", but you can override this as well. We had to use
the monkey patch that backports a fix to the git resource so that a git
deploy can notify other resources correctly.

Is this ideal? No. But it's very simple: get stuff into a branch and we
deploy it (I really don't care how it got to that branch). It works for
Java, Ruby, PHP, C, etc.

Here's some pseudo chef:

define :git_deploy, :command => "./installme", :revision => "master" do
  directory "/var/git" do
    owner "root"
    group "root"
    mode "0755"
    action :create
  end

  bash "build #{params[:name]}" do
    user "root"
    cwd "/var/git/#{params[:name]}"
    action :nothing
    code params[:command]
  end

  git "/var/git/#{params[:name]}" do
    action :sync
    repository params[:repository]
    revision params[:revision]
    depth 1
    notifies :run, resources(:bash => "build #{params[:name]}"), :immediately
  end
end
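Invoking the define then looks something like this (repository URL and revision are made up):

```ruby
# Hypothetical usage of the git_deploy define above: sync the given
# branch and run its install script on change.
git_deploy "myapp" do
  repository "git://git.example.com/myapp.git"
  revision "production"
  command "./installme --prod"
end
```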

We're just beginning with Chef at my workplace. However, with regard to Java application deployments, we have an internal system built with Ruby that we've used for several years to deploy and manage Java software across thousands of servers. Our developers just recently open sourced it a few weeks ago. In case you find it useful for ideas for Chef, you can find it here:
GitHub - ning/galaxy: Galaxy is a lightweight software deployment and management tool. We use it at Ning to manage the Java cores and Apache httpd instances that make up the Ning platform (http://www.ning.com).

Steven



Thanks for the pointer, Steven!

Adam

--
Opscode, Inc.
Adam Jacob, CTO
T: (206) 508-7449 E: adam@opscode.com

Steven,
This is great... bookmarked and followed. Definitely keep watching the list
for Java deploy discussions... we want to make sure we as a community take an
approach that makes sense for a Java-flavored Chef (a Jhef?) user.

Seth

--
Opscode, Inc.
Seth Chisamore, Technical Evangelist
T: (404) 348-0505 E: schisamo@opscode.com
Twitter, IRC, Github: schisamo


We build RPMs using Maven (on CentOS) and deploy the RPM. The RPM bundles
the WAR file along with its required context file.
We don't do live deployments to Tomcat, so the RPM install will (atomically) stop
Tomcat, copy the new WAR, clean up work directories and start Tomcat.
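On the Chef side, a recipe consuming such an RPM can stay tiny since the package scriptlets do the stop/copy/start work; a rough sketch (the package name and data bag are hypothetical):

```ruby
# Hypothetical recipe fragment: pin the app RPM to the version recorded
# in a data bag; the RPM's own scriptlets handle the Tomcat restart.
app = data_bag_item("apps", "myapp")

package "myapp" do
  version app["version"]
  action :install
end
```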
I'm just building it now, so it's not in production yet, but if this helps I'll
share a gist that demonstrates how to do it in mvn:

On Tue, Oct 5, 2010 at 7:14 PM, Seth Chisamore schisamo@opscode.com wrote:

Sounds easy, but building OS-native packages isn't something most Java teams
have the will or knowledge to do.

Seth

--
Opscode, Inc.
Seth Chisamore, Technical Evangelist
T: (404) 348-0505 E: schisamo@opscode.com
Twitter, IRC, Github: schisamo

On Tue, Oct 5, 2010 at 12:46 PM, Sean OMeara someara@gmail.com wrote:

Build your software, make an OS-native package (deb, rpm), chuck it
into your package repository and then the Chef recipe becomes super
easy.

-s


--
/Ran