I have a stateless frontend application running in Kubernetes as a Habitat service (a k8s Service + Deployment that runs myorigin/myapp). I have configuration that I apply with hab config apply, but I want it persisted in a ring so I don't have to worry about rerunning it whenever my Deployment's pods are shut down, relocated, etc., by Kubernetes.
Is it good/safe practice to run a bastion ring as a separate service/deployment in the Kubernetes cluster? Or would it be better to run it on the Kubernetes master nodes themselves? Does anyone have experience (and examples) with this?
@jtimberman In the case of k8s, I think we lean toward leveraging Kubernetes plus the Habitat operator to provide configuration changes, which should let your pods behave appropriately when scaling up or down, or under any other standard scheduler behaviors.
If you can't run the operator, I think it would be fine to run a bastion ring, but we haven't yet released container images that deploy just the Supervisor, so you'd need to build an image from this: https://github.com/habitat-sh/plan-ci/blob/master/images/hab-ci/Dockerfile. You'd also need to figure out some way to tell your pods to peer with it. You could run into some strange behavior with this that the operator obviates, but it might get you started.
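To make the suggestion concrete, here is a minimal sketch of what a bastion-ring Deployment might look like. Everything here is an assumption: the image name stands in for whatever you build from the linked Dockerfile, and 9638 is the Supervisor's default gossip port.

```yaml
# Hypothetical Deployment for a 3-Supervisor bastion ring.
# The image name is a placeholder for one built from the Dockerfile above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hab-bastion
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hab-bastion
  template:
    metadata:
      labels:
        app: hab-bastion
    spec:
      containers:
      - name: hab-sup
        image: myorigin/hab-sup:latest  # assumed image with the hab binary
        args: ["run", "-I"]             # -I is shorthand for --permanent-peer
        ports:
        - containerPort: 9638           # default Supervisor gossip port
```

You would still need some mechanism (an init script, env vars, etc.) to point the second and third replicas at the first one's pod IP as a --peer, which is exactly the awkward part the operator handles for you.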
Sanity Check™ time! I start three Supervisors as a bastion ring using a container I built from the Dockerfile @eeyun linked. The first one is 172.0.0.2; the second and third need to use it as a peer, yes?
First:
hab sup run -I
Second and third:
hab sup run -I --peer 172.0.0.2
Then, my application I start with
hab start myorigin/myapp --peer 172.0.0.2 --group dev --topology leader
Is this the correct initial startup?
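One refinement worth considering for the startup sequence above: rather than hard-coding the first Supervisor's pod IP (which changes if that pod is rescheduled), a headless Service could give the bastion ring a stable DNS name to peer against. This is a sketch under the assumption that --peer accepts a hostname; the Service name and port are placeholders.

```yaml
# Hypothetical headless Service fronting the bastion-ring pods, so that
# hab-bastion.default.svc.cluster.local resolves directly to their pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: hab-bastion
spec:
  clusterIP: None        # headless: DNS returns the pod IPs, no proxying
  selector:
    app: hab-bastion
  ports:
  - name: gossip
    port: 9638           # default Supervisor gossip port
```

Then the application Supervisors would peer with something like hab start myorigin/myapp --peer hab-bastion.default.svc.cluster.local instead of 172.0.0.2, surviving bastion pod rescheduling.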
Additional Questions:
Do I want an ELB (or similar) in front of the bastion ring itself? Or do I just use the first Supervisor at 172.0.0.2?
Should I run things like hab config apply against one of the myorigin/myapp Supervisors, or one of the bastion ring Supervisors? Or, if I should use an ELB (or similar), against that? Or against 172.0.0.2?
This is part of the challenge of using k8s while trying to leverage Habitat behaviors without the operator. Because of the abstractions Kubernetes provides over your infrastructure, YOU may know that you've turned up a pod with a single container, but Kubernetes doesn't. So I think (and again, I haven't tried this in almost two years) you actually want to use the pod IP for the bastion ring peering.
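If you do go the pod-IP route, the Kubernetes downward API can expose a pod's own IP to its container, so a startup wrapper could pass it along to the Supervisor. This snippet is an assumption about how you might wire that up, not something the thread participants tested:

```yaml
# Hypothetical container env snippet: surface the pod's IP via the
# downward API so an entrypoint script can use it (e.g. for --listen-gossip
# or for publishing a peer address).
env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```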
Interacting with the cluster from an external source to do things like hab config apply is probably an anti-pattern with k8s, but effectively it shouldn't matter which Supervisor you provide the configuration to; it should get gossiped. The complication is that if you are external to the k8s cluster, you have to look into an ELB, because by default you won't have access to any of the in-cluster network addresses. If you're somehow running the command from inside the cluster, then just using the pod IP should work.
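For what it's worth, an in-cluster apply might look something like the following. This is a sketch: the service group, version number, file name, and pod IP are all placeholders, and the exact flag depends on your Supervisor release (newer releases target a Supervisor with --remote-sup on the ctl port; older ones used --peer with the gossip port).

```shell
# Hypothetical: apply config.toml to the myapp.dev service group by
# targeting any reachable Supervisor. 172.0.0.5 is a placeholder pod IP;
# 9632 is the Supervisor ctl gateway port in newer releases.
hab config apply myapp.dev 1 config.toml --remote-sup 172.0.0.5:9632
```

Because configuration is gossiped, applying it to any one Supervisor in the ring should propagate it to the rest, which is why it shouldn't matter which one you pick.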
Operating in this way isn't exactly supported, so there could be a whole heap of dragons in there, just so you are aware!
Exported containers on Kubernetes do not gossip peer to peer, and trying to use non-k8s containers without the Habitat Operator will not work in a Kubernetes cluster. You need to install the Operator; the application automation behavior you established for your services when you defined them with Habitat will then run in a Kubernetes-native way, ensuring consistent and expected application behavior across platforms.
Bastion rings are a best practice for VMs, unscheduled containers, and bare metal, ensuring you have permanent peers participating in your ring for gossip persistence. They are not needed or recommended if you are using a container scheduler, as container schedulers handle this themselves. https://www.habitat.sh/docs/using-habitat/#setting-up-a-ring