Bastion Ring on Kubernetes Master?

I have a stateless frontend application running in Kubernetes as a Habitat service (a k8s Service plus a Deployment that runs myorigin/myapp). I apply configuration using hab config apply, but I want that configuration persisted in a ring so I don't have to worry about rerunning it whenever my Deployment's pods are shut down, relocated, etc. by Kubernetes.
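
For reference, the kind of command I'm rerunning today looks like this (the service group, version, and file name are from my setup):

# apply a config change to the dev group of myapp; the version
# number has to increase on each apply, so a timestamp works well
hab config apply myapp.dev $(date +%s) my-config.toml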

Is it a good/safe practice to run a bastion ring as a separate service/deployment in the Kubernetes cluster? Or would it be better to run it on the Kubernetes master systems themselves? Does anyone have experience (and examples) with this?

Thank you

@jtimberman in the case of k8s I think we lean towards leveraging Kubernetes plus the Habitat operator to provide configuration changes, which should enable your pods to behave appropriately when scaling up or down, or under any other standard scheduler behavior.

In your case, is it possible to use the Habitat operator? If so, this README might be helpful: https://github.com/habitat-sh/habitat-operator/tree/master/examples/config
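
The short version of that README, if I'm remembering it right (double-check the field names against the examples there), is that you put your user.toml into a Kubernetes Secret and point the Habitat custom resource at it:

# create a Secret whose key is the file name user.toml
kubectl create secret generic user-toml --from-file=user.toml

# then reference it from the Habitat object's spec, e.g.
#   service:
#     configSecretName: user-toml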

If you can't run the operator, I think it would be fine to run a bastion ring, but we haven't quite released our container images that deploy just the Supervisor, so you'd need to build an image from this: https://github.com/habitat-sh/plan-ci/blob/master/images/hab-ci/Dockerfile (see the sketch below), and I'd also guess you'd need to figure out some way to tell your pods to peer with it. You could run into some strange behavior with this that the operator obviates, but it might get you started.
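
Building that yourself might look roughly like this (the registry name is yours to fill in, and I'm assuming the hab-ci directory builds standalone):

# build a Supervisor-only image from the linked Dockerfile and push it
git clone https://github.com/habitat-sh/plan-ci
docker build -t myregistry/hab-sup:latest plan-ci/images/hab-ci
docker push myregistry/hab-sup:latest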

Sanity Check™ time! I start three Supervisors as a bastion ring using a container I made from the Dockerfile @eeyun linked. The first one is 172.0.0.2; the second and third need to use it as a peer, yes?

First:

hab sup run -I

Second and third:

hab sup run -I --peer 172.0.0.2

Then, my application I start with

hab start myorigin/myapp --peer 172.0.0.2 --group dev --topology leader

Is this the correct initial startup?
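
Spelled out with comments, assuming 172.0.0.2 is the first Supervisor's pod IP (and assuming I have the flag right that -I is shorthand for --permanent-peer):

# first bastion Supervisor; marking it a permanent peer means
# the other members keep trying to reconnect to it
hab sup run -I

# second and third bastion Supervisors, joining through the first
hab sup run -I --peer 172.0.0.2

# application Supervisor, peered with the bastion ring
hab start myorigin/myapp --peer 172.0.0.2 --group dev --topology leader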

Additional Questions:

  1. Do I want an ELB or similar for the bastion ring itself? Or do I just use the first Supervisor on 172.0.0.2?
  2. Do I use one of the myorigin/myapp Supervisors for things like hab config apply, or one of the bastion ring Supervisors? Or, if I should use an ELB (or similar), should I point at that? Or at 172.0.0.2?

  1. This is part of the challenge of using k8s and trying to leverage Habitat behaviors without the operator. Because of the abstractions Kubernetes provides over your infrastructure, YOU may know that you've turned up a pod with a single container, but Kubernetes doesn't. So I think (and again, I haven't tried this in almost two years) you actually want to use the pod IP for the bastion ring peering; see the sketch after these answers.

  2. Interacting with the cluster from an external source to do things like hab config apply is probably an anti-pattern with k8s, but effectively it shouldn't matter which Supervisor you provide the configuration to; it should get gossiped to the rest. The complication is that if you are external to the k8s cluster, you'll have to look into an ELB, because by default you won't have access to any of the in-cluster network addresses to point to. If you're somehow running the command from inside the cluster, then just using the pod IP should work, as sketched below.
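
For example, from inside the cluster something like this should work (pod names, service group, and file path are all hypothetical, and depending on your hab version the flag is --peer or --remote-sup):

# look up the bastion pod's IP to apply against
kubectl get pod bastion-0 -o jsonpath='{.status.podIP}'

# run the apply from a pod that has the hab CLI and the config file on disk
kubectl exec myapp-pod -- hab config apply myapp.dev $(date +%s) /tmp/my-config.toml --peer 172.0.0.2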

Operating in this way isn't exactly supported, so there could be a whole heap of dragons in there, just so you are aware!

I know you guys talked offline and got on the same page about this, but hopefully the answers above are useful for anyone who finds this thread later and is wondering the same things.