Binding to remote services

I’m experimenting with creating service plans that serve as a lightweight place to stash the connection details for a pre-provisioned remote service, and then provide those details to an application through binding just as if the plan were actually providing the service. Compared to the approach currently recommended in the Habitat documentation (making the bind optional and adapting your config to accept connection details via either bind or config), the benefits include a more unified code path for config and the ability to run an independent health check on the remote service.

Here’s my first stab at it; it uses an infinite sleep loop in place of actually running anything and includes a health_check hook equivalent to core/mysql's: https://github.com/JarvusInnovations/habitat-plans/tree/master/mysql-remote
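For anyone skimming, here’s a rough sketch of the shape of that plan. The origin, version, export names, and exact health-check command are illustrative rather than copied from the repo, and the mysql client would need to be pulled in as a dependency:

# plan.sh
pkg_origin="jarvus"
pkg_name="mysql-remote"
pkg_version="0.1.0"
# Expose the remote connection details to consumers over the bind,
# mirroring the names core/mysql exports
pkg_exports=(
  [host]=host
  [port]=port
  [username]=app_username
  [password]=app_password
)

# hooks/run
# Nothing to actually run; sleep forever so the Supervisor keeps the
# service up and keeps gossiping the exported config
while true; do
  sleep 60
done

# hooks/health_check
# Probe the remote server so consumers can tell whether it is reachable
# (exit 0 = OK, exit 2 = CRITICAL in Habitat's health-check convention)
mysql --host="{{cfg.host}}" --port="{{cfg.port}}" \
      --user="{{cfg.app_username}}" --password="{{cfg.app_password}}" \
      --execute="SELECT 1;" || exit 2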

This could be configured, for example, with a /hab/user/mysql-remote/config/user.toml file like this:

host = "myhost.example.org"
port = "3306"
app_username = "myusername"
app_password = "mypassword"

And then bound to an application in place of core/mysql:

hab start jarvus/my-app --bind database:mysql-remote.default
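For the bind to resolve, the mysql-remote service needs to already be running in the same Supervisor ring; assuming it’s published under the jarvus origin, that’s just:

hab start jarvus/mysql-remote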

The only change you then need to make to the application’s config templates is to accept a manually configured host before falling back to the bound service’s IP:

$config = [
    {{~#eachAlive bind.database.members as |member|~}}
        {{~#if @first}}
    'database' => [
        'host' => '{{#if member.cfg.host}}{{ member.cfg.host }}{{else}}{{ member.sys.ip }}{{/if}}',
        'port' => '{{ member.cfg.port }}',
        'username' => '{{ member.cfg.username }}',
        'password' => '{{ member.cfg.password }}',
        'database' => '{{ ../cfg.database.name }}'
    ]
        {{~/if~}}
    {{~/eachAlive}}
]
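For reference, with the user.toml above, and assuming mysql-remote maps app_username/app_password to the exported username/password names the way core/mysql does, that template would render to something like this ('mydatabase' standing in for the app’s own cfg.database.name):

$config = [
    'database' => [
        'host' => 'myhost.example.org',
        'port' => '3306',
        'username' => 'myusername',
        'password' => 'mypassword',
        'database' => 'mydatabase'
    ]
]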

This is a really fascinating pattern. I’d be curious to hear how this experiment turns out!

I just opened a PR to discuss merging this into core-plans: https://github.com/habitat-sh/core-plans/pull/1539

I like this idea. We might have to use something like this for connecting to an external elasticsearch.

This is my example of this pattern. I call it a couple of things:

  • Habitat Bind Service
  • Habitat Service Proxy

@predominant thanks for sharing! I really like your run script!

This is an interesting approach. One downside I can imagine, though, is that you couldn’t match the pkg_exports of the Habitat-run service to provide a drop-in replacement, since prefixes are needed to support multiple services. You could, however, configure the bind service to connect in turn to the Habitat-run service.
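To make that mismatch concrete, here’s an illustrative comparison (the keys are made up): a drop-in replacement must reuse the real service’s flat export names, while a multi-service proxy has to prefix them to avoid collisions, and no single pkg_exports can do both.

# single-service drop-in: exports must match the real service's names
pkg_exports=(
  [port]=port
  [username]=app_username
)

# multi-service proxy: prefixed keys avoid collisions between services,
# but no longer match what a consumer of the real service expects
pkg_exports=(
  [mysql_port]=mysql.port
  [elasticsearch_port]=elasticsearch.port
)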

Maybe this pattern could help create a “bind sheet” for complex applications that need multiple external services connected to multiple internal components. It could be especially useful for composite plans, since there’s currently no way to bind services to a composite: all the external components could be bound once to the “bind service” within a composite plan, and the composite plan could then handle binding that to each internal component that needs it.
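Until composites can accept binds, the same effect is available manually: each internal component just binds to the one externally facing bind service (service names here are hypothetical):

hab start jarvus/my-app --bind database:external-binds.default
hab start jarvus/my-worker --bind database:external-binds.default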

Agreed. The samples that can be provided, and the bind service that I linked, are single-use.

Given that a contract is tied very tightly to a plan, the approach you’ve taken with mysql-remote is correct, but I’m super wary about including such a pattern in core-plans. It opens the gates to *-remote variants doubling the number of packages. Since the contract is simple, I think this can be left to users to implement themselves for each service they want to remote-link.

One improvement I can think of is packaging the run hook, or a separate bash script that provides the infinite loop, so that a binding service could depend on a “loop” package and run its loop script instead of rewriting it in every bind service / remote plan:

# plan.sh
pkg_name="mysql-remote"
...
pkg_deps=(core/loop)   # shared package that provides the loop entrypoint
pkg_exports=( .... )

# hooks/run
exec loop              # park the process via core/loop
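The loop entrypoint shipped by that hypothetical core/loop package could be as small as:

#!/bin/sh
# Park the process so the Supervisor treats the service as up, while
# still responding promptly when the Supervisor signals a stop
trap 'exit 0' TERM INT
while true; do
  sleep 3600 &
  wait $!
done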

The effort this saves is fairly minimal though.

Actually, why only {{~#if @first}}? The bind relates to every member.

I’ve never messed with binding multiple members to the same slot, but it would break my templated config if that block repeated, so I put that guard in to protect against it.