What are the use cases for Habitat? (monolithic COTS stuff, by chance?)

A monolithic app that takes up multiple machines to make a full install. Is that a good use case for Habitat?

Namely, an SNMP-based NMS/monitoring tool
It’s called AssureOne, by Federos.

Certainly! What you’ll do is take the upstream, put it in a Habitat package, and make the init hook do the installation the way the COTS app would normally require it; link in the configuration files (if it uses them; otherwise run the equivalent commands) and go to town.

It works shockingly well.

Don’t bother trying to get the build system to use Habitat-native dependencies; assume that the end user knows their system requirements well enough that if the package fails, it fails.
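For what it’s worth, a minimal sketch of that kind of init hook might look like this, assuming a hypothetical vendor package that ships an installer.sh (the paths, flags, and file names are placeholders, not the actual AssureOne installer):

```sh
#!/bin/sh
# habitat/hooks/init -- hypothetical sketch; adapt paths and flags to the real COTS installer.

# Run the vendor's installer once, the way its own docs describe,
# and record that it happened so a service restart doesn't re-install.
if [ ! -f "{{pkg.svc_data_path}}/.installed" ]; then
  "{{pkg.path}}/installer.sh" --silent   # placeholder for the vendor's install command
  touch "{{pkg.svc_data_path}}/.installed"
fi

# Link the config file Habitat renders into the location the app expects.
ln -sf "{{pkg.svc_config_path}}/app.conf" /etc/assureone/app.conf
```

The run hook can then just start the app however the vendor docs say to start it.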

Whoa. A reply from The Adam. Thanks man. Was great to see you at ChefConf, btw. Your talk kicked ass.

But does that still work if your end result is something like 12 VMs? How would that work in Habitat?

The end result, as cobbled together by a seven-headed Chef cookbook monstrosity, is 4 “presenter” machines each with a set of LBs, 4 data nodes with LBs, and 4 collector machines. A full environment is over a dozen machines once you factor in the VIPs.

How would something like that work in Habitat?

@akulbe If you’re interested in trying to package some COTS stuff, we’ve got a wiki post here that kind of shows what the process might look like!

When you say 12 machines, do you mean 12 separate services or the same service on 12 machines? Either way, Habitat was definitely designed with distributed systems in mind. I’m sure @adam could wax poetic on that subject :slight_smile:

Ah, you’ve updated your comment. There could be a few different strategies based on what you’ve described, @akulbe, but this is definitely in Habitat’s wheelhouse. A couple of questions: Is each of these nodes running the same binary, or are the load balancers etc. their own thing deployed as a sidecar? Are the “presenter” nodes and the “collector” nodes the same binary with differing configs? Are the data nodes something like Postgres, or are they another configuration of the same binary?

To be clear - nothing you’ve said so far sounds out of scope for the problems that Habitat is trying to solve!

Everything runs a different binary, each on its own VM. Yet more VMs are LBs in front, and yet even MORE VMs serve as VIPs in front of the LBs. (Those aren’t true VMs in that case; just something to allocate a real IP to, to pull it out of the pool.)
Collectors do SNMP polling and trap catching on their own VMs.
Presentation machines run a web server and a database.
DataNodes run a database that gets some more persistent data from the Presentation machines.

Sorry for the delay in responding. I’d started typing this out… and SQUIRREL!!!

Good website, by the way: I come back a week later and my draft is still sitting here waiting for me.

Like @eeyun said, nothing you’ve said so far rules out Habitat.

You will likely want to create a Habitat package for each service that needs to run (e.g., collectors, web server, database, data nodes, etc.) and then deploy them across a collection of Supervisors (co-locating services on the same Supervisor as necessary). Based on how the packages are set up and configured, Habitat can wire them together.
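To make the wiring concrete, here’s a rough sketch using hypothetical package and service-group names (myorigin/presenter, datanode.default): the presenter plan declares a bind, a config template consumes the bound group’s address, and the bind is pointed at the data-node group at load time.

```sh
# In the hypothetical presenter package's plan.sh: require a "database" bind.
pkg_binds=(
  [database]="port"   # the data-node package must export a "port" value via pkg_exports
)

# In one of the presenter's config templates, consume the bound group's members:
#   db_host = {{bind.database.first.sys.ip}}
#   db_port = {{bind.database.first.cfg.port}}

# On a Supervisor, point the bind at the data-node service group:
hab svc load myorigin/presenter --bind database:datanode.default
```

Since every Supervisor gossips with its peers (hab sup run --peer <ip-of-existing-member>), changes in the data-node group get re-rendered into the presenter’s config automatically.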

There’s obviously a bit of work to get to that point, but it should be doable.