btm++
There are also some practical considerations:
Installing a binary package is much faster than compiling from source,
which can make all the difference when doing automated server builds,
especially when testing things on EC2. (Even Gentoo has .tbz2 packages.)
Source builds require you to map out all your build prerequisites,
runtime prerequisites, compile flags, prefixes, etc., and include them
in the cookbook you're building. If you've done all that, you may as
well just go ahead and make the package, since that's all a Debian or
RPM spec is anyway.
IMHO procedural logic should be used as sparingly as possible in favor
of declarative resources.
The "install package, write config, notify service" design pattern
(all using Chef resources) is much more readable and maintainable than
a mess of "unless ::File.exists?" blocks littered throughout a recipe.
The idempotency check to see if a package is installed is cleaner and safer.
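As a sketch of that pattern (the haproxy package, template, and service
names here are just placeholder examples, not from the thread):

```ruby
# Declarative Chef pattern: install package, write config, notify service.
# The package/service name "haproxy" and the template source are hypothetical.
package "haproxy"

template "/etc/haproxy/haproxy.cfg" do
  source "haproxy.cfg.erb"
  owner "root"
  group "root"
  mode "0644"
  # Restart the service only when the rendered config actually changes;
  # Chef's resources handle the idempotency checks for you.
  notifies :restart, "service[haproxy]"
end

service "haproxy" do
  action [:enable, :start]
end
```

Each resource is idempotent on its own, so there is no need for
hand-rolled "unless ::File.exists?" guards anywhere in the recipe.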
All that being said, there are situations when you simply need to Get
It Done Now and might not have time or resources to build a proper
package, and might opt to build from source instead.
I just did this myself yesterday
/2 cents
-s
On Wed, Apr 13, 2011 at 5:01 PM, Bryan McLellan btm@loftninjas.org wrote:
On Wed, Apr 13, 2011 at 11:13 AM, Edward Sargisson esarge@pobox.com wrote:
What do people normally do?
If there is an upstream package, I'll backport it myself and put it in
a local apt repository, or put the deb in the cookbook if there isn't
one (boo). A file server is a reasonable alternative. Be sure you
document where these packages came from. For debian packages I tend to
change the version to whatever_it_was_1~bpo1 to signify it is a
backport, or whatever_it_was_1opscode1 to signal that there are local
changes.
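The reason the `~bpo1` convention works is that a tilde sorts before
anything else (even the empty string) in Debian version comparison, so
the distribution's own package supersedes the backport once it catches
up, while a plain suffix like `opscode1` sorts after the base version.
You can verify this with dpkg (the version strings below are made-up
examples, not real packages):

```shell
# "~" sorts before everything, so a ~bpo backport is considered older
# than the eventual official package carrying the same base version:
dpkg --compare-versions "2.2.14-1~bpo1" lt "2.2.14-1" && echo "backport is older"

# A plain suffix sorts after the base version, so a locally patched
# package wins over the stock one until upstream bumps the version:
dpkg --compare-versions "2.2.14-1opscode1" gt "2.2.14-1" && echo "local build is newer"
```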
Contributing back is always preferred, except that with it comes
varying degrees of difficulty. If you're backporting a newer version
of a package to an older distribution, there isn't anything to push
upstream. Upstream distributions usually only backport blocking
bugfixes and security patches. Sometimes upstream moves too slowly for
your patches. Sometimes they don't have the resources to deal with all
of your packages and don't see it as worth the time to put them in. It
depends.
On Wed, Apr 13, 2011 at 10:52 AM, KC Braunschweig
kcbraunschweig@gmail.com wrote:
While this is a reasonable way to install from source if necessary,
I'd discourage anyone from doing this in practice on boxes you care
about (i.e. production systems or dev/qa systems in an enterprise
environment). If there isn't a package for your platform, grab the
source and build the package yourself. You won't get the benefits of
community testing of the build, but you'll still get the benefits of
package management to install, remove and manage dependencies for the
package. Bonus points for contributing your package build back to the
community (shout out to the folks on this list that have published
their chef and chef dependency packages that weren't publicly packaged
before!).
When you log into a server you didn't build because someone told you
the web service is down and find that there are multiple copies of
apache installed; one as a system package in /usr/sbin, and two built
from source in /usr/local, you curse. Which one is the right one? One
benefit of packaging is that you can cleanly remove software when
you're done with it or want a different version. So packaging helps
here, but so do documentation and good practice. Packaging also makes
it easier to know where files are installed, and that they are
installed where they should be, depending on the quality of the
package.
Some of the benefits of packaging are also served by best practices
these days. When your servers are fully automated, and you want to
know what binary is the web server, it should be clear from your chef
recipes. If it isn't, you might want to consider that they're living
documentation and clean them up a bit.
I'm disappointed by the trend of using source packages whenever there
aren't binary packages, rather than working with upstream
distributions to create them. Sooner or later I believe you end up
with a very carefully managed set of custom dependencies and you start
to lose the benefits of a shared base operating system. Still, there
are certainly times when scripting an installation from source is the
easiest way to go, and perfectly sane. If it doesn't work out, rebuild
the system from scratch. That should be automated anyway.
Bryan