This year, the Articulate engineering team has been hard at work releasing continuous updates to Articulate 360. When we started planning for the launch of Articulate 360, we decided we wanted to use microservices.
Microservices let us develop new features much more quickly than we could if we built monolithic applications. They also allow us to organize our engineering teams into smaller groups that each focus on the development of different features.
To take this approach, we needed a technology that would let us develop, build, and ship microservices quickly. That’s why we decided to embrace containers, specifically Docker, for production and development. Simple, right?
Well … we ran into problems. We have many services that all need to be running at once. They're separate projects, and they all run in Docker. We were exposing ports to the Docker host, having to remember which ports mapped to which services, and running into port conflicts. Our developers were going nuts trying to manage all these moving parts.
There Had to Be a Better Way
At first, we came across various solutions that dynamically registered containers with Consul and created Nginx upstreams so you can have an arbitrary number of reverse proxy backends. We wondered if we could take this one step further. Instead of registering more backends, we wanted to see if it was possible to register different virtual hosts.
We started hacking up a proof of concept, but we went way too far. First, our solution registered newly launched services, created virtual hosts, and even added dynamic DNS entries into a DNSmasq service. Then, it pointed the DNS resolution to that DNSmasq service, which was running in a container.
In short, it was a mess. It wasn’t any simpler than what we were doing before, and it had a lot of moving pieces. It also relied on Docker Toolbox launching a virtual machine on a specific IP address.
Then it came to me. Since it’s not possible to get around the specific IP address issue, why not embrace it? I went to one of our public DNS zones and added a wildcard subdomain pointing to the special Docker host IP (like *.tugboat.zone). We were then able to remove DNSmasq completely from the mix. The container setup became a stock setup, and everything just worked.
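Conceptually, the wildcard record is a single zone-file entry like the one below. The zone name and IP are just examples — 192.168.99.100 is the default Docker Toolbox VM address, and your zone and host IP will differ:

```
; Hypothetical zone file entry — zone name, TTL, and IP are examples
*.tugboat.zone.   300   IN   A   192.168.99.100
```

Because it's a wildcard, any subdomain (profiles.tugboat.zone, avatars.tugboat.zone, and so on) resolves to the Docker host without any per-service DNS changes.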
Like That, Tugboat Was Built
By adding five lines to an existing docker-compose.yml project file, we were able to register the service with Consul and get a fully working custom URL for it. Not only is that URL consistent, but we also don’t need to know which port the service is bound to. That lets us actually use Docker’s ability to bind to a random host port. No more port conflicts!
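A registration might look something like the sketch below. The SERVICE_NAME environment variable and the service name itself are illustrative assumptions — check Tugboat's README for the exact keys it expects:

```yaml
# Sketch of a docker-compose.yml service registration.
# SERVICE_NAME is an illustrative assumption, not necessarily
# Tugboat's exact registration mechanism.
version: "2"
services:
  avatars:
    build: .
    environment:
      - SERVICE_NAME=avatars   # becomes the virtual host avatars.tugboat.zone
    ports:
      - "3000"                 # expose container port 3000 on a random host port
```

Note that the port mapping omits the host side entirely, so Docker picks a free port — the reverse proxy finds it via Consul, and developers never have to care which one it is.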
Here’s an example of how this might work: Say we have one service that displays user profiles and another that displays their avatars. We want the profiles service to be able to talk to the avatars service directly. In short, both services need to be able to find and talk to each other.
In our old system, the profiles service would be started on, say, port 8000 and the avatars service would be on port 8001. We’d set an environment variable in the profiles service to point to where you could hit the avatars service: “192.168.99.100:8001.” And we’d also have to make sure those ports never conflicted with another service. It was a nightmare!
Now, with Tugboat, we can register each service with a name, which becomes that server’s virtual host. So instead of pointing to some value—like “192.168.99.100:8001”—it’s simply “avatars.tugboat.zone.” That hostname will always work as long as the service is running. A lot simpler, right?
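To make that concrete, here's roughly what the two services look like side by side in a compose file. The service names, SERVICE_NAME variable, and AVATARS_URL variable are all illustrative:

```yaml
# Illustrative sketch — variable names are assumptions, not Tugboat's API.
services:
  profiles:
    build: ./profiles
    environment:
      - SERVICE_NAME=profiles                    # → profiles.tugboat.zone
      - AVATARS_URL=http://avatars.tugboat.zone  # stable hostname, no port juggling
  avatars:
    build: ./avatars
    environment:
      - SERVICE_NAME=avatars                     # → avatars.tugboat.zone
```

The hostname stays the same no matter which random port the avatars container lands on, so the profiles service's configuration never has to change.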
Another benefit is that you don’t need to handle Transport Layer Security (TLS) at the service level. Instead, you can terminate TLS on the Nginx reverse proxy.
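A TLS-terminating server block might look roughly like this. The certificate paths and the upstream port are placeholders, not Tugboat's actual configuration:

```nginx
server {
    listen 443 ssl;
    server_name *.tugboat.zone;

    # Wildcard certificate covering *.tugboat.zone (paths are placeholders)
    ssl_certificate     /etc/nginx/certs/tugboat.zone.crt;
    ssl_certificate_key /etc/nginx/certs/tugboat.zone.key;

    location / {
        # Services behind the proxy speak plain HTTP
        proxy_pass http://127.0.0.1:9999;
        proxy_set_header Host $host;
    }
}
```

Each service stays a simple HTTP app; encryption is handled once, at the edge.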
Since Tugboat’s initial release, we’ve made some additional improvements, such as adding multiple domain support and support for Docker on Mac, Windows, and Linux.
Plus, Tugboat no longer uses Nginx as a reverse proxy. Instead, it uses Fabio. That eliminates a lot of complexity since Fabio supports multiple SSL certificates, automatic setup from Consul, and a ton of extra features.
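For reference, Fabio builds its routing table from Consul service tags that start with urlprefix-. A Consul registration along these lines (service name, port, and health-check endpoint are illustrative) routes the virtual host to the service:

```json
{
  "service": {
    "name": "avatars",
    "port": 3000,
    "tags": ["urlprefix-avatars.tugboat.zone/"],
    "check": {
      "http": "http://localhost:3000/health",
      "interval": "10s"
    }
  }
}
```

Fabio only routes to services with a passing health check, so a crashed container drops out of the routing table automatically.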
What If We Need Custom Domains or TLS?
We’ve also created a child project called tugboat-bootstrapper that allows you to customize the domain/SSL/config for your company. This project lets you use a domain that’s different from *.tugboat.zone. Because it’s a separate repository, you can maintain a private fork for your company that allows developers to use consistent naming. There should be very little churn on that repository since there’s not much in it! You can check it out here.
If you think you could use Tugboat in your next project, head on over to the git repository and see the README file to learn more about it. Tugboat is open source under the MIT license. Pull requests are encouraged, and I can’t wait to see what you’ll do with the technology!