Remote control of habitat, no configuration management or "human" required to deploy and start?

I am familiar with habitat and its concepts, and I admire the vision you had two years ago. I have a few questions in mind, so I'm posting them, even a bit "raw", just in case someone is interested.

TL;DR
Note: this post is not about credentials handling or configuration management tools.
I'm questioning "remote" control of a hab environment, as well as the autonomous/distributed lifetime of the habitat cluster.

Well, when an app is deployed with habitat, or with the habitat-operator in k8s, etc., it carries its life-cycle automation with it. However, what happens when you have to run an update? I mean the binaries don't change, just the configuration in .toml. How do you:

  • update service configuration on a node (.toml, env variables, etc…)?
  • update a whole group by uploading a user.toml?
  • there is a GitHub request for "hab edit"

Now, forget:

  • ssh-ing to a peer node and running "hab" there
  • running a configuration management tool to deploy the habitat infrastructure and control it later

SSH requires credentials and "specific" access. Configuration management requires keeping some state information all the time. In fact, the only things you need to control it are A) a secret to connect to the habitat ring, and B) a remote "hab" to talk to a peer.

The use case I would like to achieve is:

  1. When a habitat "deployment/cluster" is created, it might register itself to a layer above, to be "controlled" later
    • vs. the opposite: the habitat cluster is deployed from the layer above (configuration management)
    • when it self-registers to the layer above, "service discovery" may happen there as well
  2. Or having the ability to remotely control a deployed supervisor and its content
    • using the "above" layer to connect to the hab cluster, or simply reaching "hab sup" remotely (over ssh)
      • while keeping the configuration in the running cluster
      • having the ability to remotely edit the configuration
      • having the ability to update the configuration from the "operator" location (i.e. upload some local files, or read k/v
        from a local k/v store as it is being sent to the hab supervisor)
    • (you may even imagine that, in the habitat-operator and k8s case, the "above layer" is Kubernetes itself and its API)
  3. Not depending on centralized "configuration management" (Chef, Salt, …), one-shot tools (Terraform, Ansible), or even "bootstrap scripts" after cloud-init for day 2 of the app's lifetime
  4. When I say NO configuration management, I don't say NO to a tool for "continuous deploy"
  5. Not having my infrastructure nodes or git repos hold any "core" secrets, or at least not unencrypted ones

To be more specific:

  • I am considering a kubernetes cluster deployment (with habitat packages) on top of bare metal, possibly just by starting identical images "with a pre-installed habitat setup"
  • Or I may use terraform to start nodes and install a "default setup" of some application, but then I want to control it only
    with a ".key" to connect to the ring, ".toml" files with updates, and an external k/v store such as Vault holding the core secrets (e.g. certs) - the ring-key part is sketched just below
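A rough sketch of the "control it only with a .key" part, with placeholder names; it assumes the usual ring-encryption workflow of generating a symmetric ring key once, distributing it to the nodes out of band, and starting every supervisor with --ring:

    # Generate and export a ring key; the exported key must be placed in each
    # node's key cache (e.g. /hab/cache/keys) by whatever provisions the node.
    hab ring key generate myring
    hab ring key export myring > myring.sym.key

    # On each node: join the encrypted ring; only holders of the key can
    # gossip with (or reconfigure) it.
    hab sup run --ring myring --peer 10.0.0.11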

Now, some reasoning:

  • I want to avoid centralized configuration management for the application, for reasons similar to those argued by "mgmtconfig", for example.
  • Avoid a "state"-ful configuration setup in favor of "event"-based configuration changes
    • with state, "configuration management" enforces some state all the time
    • with event-based behavior there is no central point to "enforce state"; the "cluster itself" keeps the state, and on an
      external event it performs a reconfiguration -> update
  • The tooling is not good enough
    • Terraform keeps its state with all values in clear text (I know, it may be encrypted). Obviously this creates a dependency on a "foundation" node that runs the terraform config and holds the state.
    • Anyway, if I used terraform it would be for infrastructure, not for app configuration and lifecycle :wink:
  • For core secrets I will use ENV variables populated from a k/v store (like Vault) in my plan/hook scripts rather than storing
    them in .toml or in configuration management, or having configuration management just read them from the k/v store, etc… (a hook sketch follows this list)
  • There is a request to auto-discover peers (not by IP). Well, I don't even want to know who/what the peer is. I want to "speak" to a hab supervisor instance and not care what is behind it or what its current state is.
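To illustrate the "ENV variables populated from k/v" point, here is a hypothetical hooks/run sketch; the Vault path, field name, and service binary are placeholders, and it assumes VAULT_ADDR and VAULT_TOKEN are already available on the node:

    #!/bin/sh
    # Pull a core secret from Vault at start time instead of baking it into
    # a .toml file or a configuration management tool, then start the service.
    export TLS_CERT="$(vault read -field=certificate secret/myapp/tls)"
    exec myapp --listen 0.0.0.0:8080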

Well, how does this fit the concept of Habitat?

  • Is Habitat mostly considered an "application layer" only, while still requiring management to drive bootstrap, configuration, and CD?
  • Is the feature request to control "hab" remotely feasible, or on the roadmap?
  • How important is habitat service auto-discovery/registration for your customers today?
  • Do you find my requirement for autonomous management of a habitat supervisor deployment (similar to what is described at https://purpleidea.com/tags/mgmtconfig/) out of scope?

There's lots of great questions in here. I'll try to answer a few of them:

You have a few options here, both in k8s and outside. Typically, running hab config apply --peer [address.of.ring.member] <SERVICE_GROUP> <VERSION_NUMBER> [path/to/updated/toml/file] will get you what you need. The new configuration will get gossiped and the cluster will get updated with the new configuration values. Now, with ring encryption, to avoid ssh-ing to a cluster to perform this operation you will need the release that we're planning to send out this week. It adds new functionality that allows fully remote management of supervisors. There are likely going to be some patterns there that we haven't discovered yet, because the supervisor has never functioned in this manner before.
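For a concrete sense of what that looks like, here is a minimal sketch; the peer address, service group name (myapp.default), and .toml file are placeholders:

    # Push an updated .toml to a service group through any reachable ring member.
    # The version number only needs to be larger than the last one applied;
    # a unix timestamp is an easy way to guarantee that.
    hab config apply --peer 10.0.0.11 myapp.default "$(date +%s)" ./myapp_update.toml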

Habitat tries very hard to avoid touching the provisioning layer at all. That's not necessarily something we see changing at any point (it's been a guiding principle for the project), but who knows. As it stands right now you could absolutely run a little side-car service that scrapes the supervisor data and uses that to register to some other control plane. However, a mixed inversion of what you're asking for is the way that a k8s cluster will operate. K8s handles the provisioning and configuration management, and the habitat operator interfaces with k8s to register all habitat services as a ring and enable kubernetes native behaviors alongside habitat native behaviors.
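As a rough illustration of that side-car idea (not something that ships with habitat): the supervisor exposes an HTTP gateway, by default on port 9631, so a tiny loop can scrape it and push the result to whatever control plane you have. The control-plane URL below is a made-up placeholder:

    # Hypothetical side-car: poll the local supervisor's HTTP API and forward
    # the list of services it reports to an external control plane.
    while true; do
      curl -s http://localhost:9631/services \
        | curl -s -X POST -H 'Content-Type: application/json' \
               -d @- https://control-plane.example.com/register
      sleep 30
    done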

Yep, this is coming in this week's release! It might not hit all of your features there but we really can't say until we get some more chances to play with the functionality alongside k8s and other tooling. That being said, technically none of these things are impossible today:

The only complication is for the last item in the list. Right now the operator maintains the configuration by exposing the habitat behaviors and managing k8s behaviors. Luckily with the operator(s) this behavior could also be pretty easily modified though you'd likely have to write some glue code.

This is the one I don't necessarily see changing. Habitat's domain of concern is explicitly the application configuration and lifecycle. We have some hard boundaries on diving into provisioning. Our own team is using Terraform to deploy our services, and while habitat was designed to run 100% distributed, you still need a way to manage your compute. Running container schedulers and containerized workloads is going to be the fastest path to fully distributed and configuration management-free deployment of habitat today. Luckily, habitat was 100% designed to thrive running under tools like that. With a mixture of github, habitat's builder and all of the features therein, a docker-hub of some kind, a secrets management tool like hashi vault, and a scheduler, you can mitigate any significant need for configuration management tooling at all. You could then have an entire system built on various forms of eventing and behavior (e.g. autonomous and pseudo-biological). But the moment you switch back to less ephemeral kinds of compute, config management will be your friend again.
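To make the containerized-workload path concrete, here is a rough sketch; the origin/package name and peer hostname are placeholders, and it assumes the image produced by the Docker exporter forwards its run arguments to the supervisor inside:

    # Export a habitat package as a Docker image, then run it under any
    # container runtime or scheduler; the supervisor in the container joins
    # the ring via the peer you point it at.
    hab pkg export docker myorigin/myapp
    docker run myorigin/myapp --peer bastion.ring.example.com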

This one is a pretty tough problem to solve, it turns out, which is why consul, zookeeper, and etcd all also have this concern. The best examples I've seen of avoiding this with habitat are to simply run some bastion peers in --permanent-peer mode with DNS entries pointing to them. Doing so means you can effectively turn nodes on and off and always --peer my.bastion.ring.dns, and thus not have to worry about finding a peer.
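A sketch of that bastion pattern, with placeholder hostnames (the bastions run no services; they just hold the ring together behind stable DNS names):

    # On each bastion node: mark it as a permanent peer so other supervisors
    # keep trying to reach it even when it looks down.
    hab sup run --permanent-peer --peer bastion-1.ring.example.com --peer bastion-2.ring.example.com

    # On every other node: just point at a bastion DNS name.
    hab sup run --peer bastion-1.ring.example.com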

Except for the CD part, this is a correct statement. In Habitat, we use builder for CD. It has integration points for publishing, as well as channel promotions that enable the supervisors to watch the promotion of a package in order to upgrade themselves without user intervention. Builder's channels and rebuild features are a great way to do CD with habitat. Now, in the case of bootstrapping and configuration management you are correct. We don't touch that provisioning stuff outside of the application/container itself. In the future we've discussed adding some deployment integrations to builder but nothing like that exists today.
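The supervisor side of that flow looks roughly like this (the origin/package and channel names are placeholders): load the service from a builder channel with an update strategy, and then promoting a new release into that channel is what triggers the upgrade, with no one touching the nodes:

    # Follow the 'stable' channel and apply updates to all members at once;
    # --strategy rolling is the other common choice.
    hab svc load myorigin/myapp --channel stable --strategy at-once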

Yep, you came at the right time! Watch for this, this week!

These two are related. Habitat affords some of these behaviors out of the box; the idea is you define them in your package during development/build, and then at runtime you don't have to think about them. However, the provisioning itself is still not in the domain of concern. Hopefully my previous response looking at schedulers + builder was concise and made sense XD but I would say autonomous management isn't totally out of scope, while aspects of it aren't totally in scope either.

With auto-discovery, I'm not sure that we have a huge push for this feature today, but that doesn't mean we won't in the future! After all there are some pretty simple ways to get these outcomes today without writing a single line of code!

Ok... phew! Long response, hopefully it made sense. Obviously feel free to reply if I can clarify anything. Also if I've gotten something wrong I hope some of my other teammates will come in and correct me XD

Regards!


Cool, thanks Ian for your answer - it was really worthwhile, as it answered a lot of topics. Hopefully for others as well. I am thrilled to check out the features in the latest release :wink:
