The tools we use to operate backend systems have been evolving incredibly fast for the last few years. It’s an exciting time. Docker, Kubernetes, serverless — there’s a lot of new jazz being played by a lot of new bands.
So why do our dev workflows feel less advanced than they did in the age of the monolith? And what can we do about it?
My first web app was made with Ruby on Rails. It was the summer of 2011. I was fresh out of uni, and having a great time learning the ropes.
Our architecture was a typical Rails monolith. There was the main web server process, a worker process for background tasks and another for scheduled tasks — each was an instance of the same Rails app, just run with different entry points. These sat on top of a Postgres database, a persistent job queue and an in-memory key-value cache.
We only had a single production server, so deployments and rollbacks were straightforward. Docker didn’t exist yet.
The dev environment was very similar to the CI and prod environments, and we didn’t rely as heavily on CI as we tend to do when working on multi-service backends. More bugs were caught by the faster parts of the test battery, which meant less waiting around.
I could focus on the application logic continuously, rarely needing to break the flow to deal with plumbing, configuration, and other chores. Happy times.
The pieces just fit together effortlessly. Very little configuration was needed, and, most importantly, all the tools were naturally aware of the application’s entire context. Starting your app, reloading it when code changes, running tests — you only needed one command for each.
Thanks to Rails, web development made a very positive first impression on me.
Things Glide Apart
In the years since, this picture has become more complicated. Monoliths are still a viable approach for many web apps, but more commonly, new services now have to fit themselves into a constellation of interdependent containerized services.
And suddenly, the old workflows aren’t available anymore, even if all the services are made with Rails. Live reloading, database migrations, integration tests — never mind just getting the whole stack up and running with one command — these are now out of scope for Rails alone.
For example, if your Rails app needs to communicate with an analytics service written in Python by the data science team, how do you make sure you’re working against an up-to-date version of it (and that its database has been migrated to the latest schema version)?
You’ll have to look into how that service does migrations, how it’s built and deployed, how its test suite is run, what runtime dependencies it has, the order in which all this needs to happen, etc. — stuff you really shouldn’t have to get into or know about in detail.
Put simply, there’s a loss of connectivity here. When we partition a monolith into multiple services, we maintain the dependency relationships within the boundaries of the new services, but tend to lose the relationships that now reach between services.
Reconnecting the Dots
This is why we built Garden: to connect the dots again. Garden incorporates the relationships between the system’s services into a complete representation of how things fit together.
Because it holds the big picture, it gives us back our good old workflows and improves on them:
- Start everything with one command.
- Get dependency-aware live reloads and re-tests, without doing unnecessary work.
- Run cross-service database migrations and tasks (think rake, but for everything).
- Run the same tests during development and in CI.
There’s lots more to cover, but let’s look at some code!
Rails Dev Workflows in Kubernetes
We’ve set up an example project to demonstrate how Garden facilitates some of the core Rails workflows when the app runs inside a Kubernetes cluster. The source code is here. For this article, we’ve kept it minimal in structure and function: just a Rails service and a Postgres database, two services in total.
To summarize, we’ll start up a hot-reload-enabled, multi-service development environment for a Rails app. We’ll do this by writing three simple configuration files, and then running a single Garden command in the terminal.
The system is a simple Rails app where the user votes for cat or doge.
The project consists of two services: a Rails app (<span class="p-color-bg">frontend</span>) and a Postgres database (<span class="p-color-bg">postgres</span>). If you’d like to follow along on your own machine, you’ll need to have Garden and a local Kubernetes cluster set up — here are the instructions: https://docs.garden.io/basics/installation.
To start the whole thing with live reloading enabled for the frontend service, we run <span class="p-color-bg">garden dev --hot=frontend</span> inside the project’s directory:
If we visit http://vote-rails.local.app.garden, we’ll see the Rails app, running locally. (I’m running this with Docker for Mac; with Minikube or on Linux, you’ll need to add your hostnames to <span class="p-color-bg">/etc/hosts</span>.)
Let’s break down what just happened here:
- Garden built the Docker image for the Rails app, and fetched the image for the Postgres service (this step takes longer on the first run, of course).
- Garden created a Kubernetes namespace for the project, and deployed an ingress controller into it (along with some other configuration steps).
- Before the Rails app can be deployed to the cluster, the database has to be deployed and accepting connections, and migrated to the newest schema version.
- Garden has been informed of this through the services’ configuration (we’ll get into the details below), so it starts by deploying the <span class="p-color-bg">postgres</span> service, waits until it’s up and running, runs the <span class="p-color-bg">db-migrate</span> task, and only then deploys <span class="p-color-bg">frontend</span> (the Rails app).
- Because we passed the <span class="p-color-bg">--hot=frontend</span> option, the frontend service will live-reload when we make changes to its source code.
From the garden.yml configuration files for the Rails app and the Postgres service, Garden knows what to do, in what order, and which steps can be performed concurrently.
Let’s have a closer look. First off, here’s the project-level configuration, which amounts to little more than the project name and a declaration that we’re using a local Kubernetes cluster to host our containers:
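As a sketch (the exact schema depends on your Garden version, so treat the linked example project as authoritative), it looks something like this:

```yaml
# garden.yml at the project root (illustrative sketch)
kind: Project
name: vote-rails
environments:
  - name: local
    providers:
      # Use the Kubernetes cluster on this machine (Docker for Mac, Minikube, etc.)
      - name: local-kubernetes
```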
Here’s a link to the full configuration for <span class="p-color-bg">frontend</span>. First, we specify that this is a container, we name it <span class="p-color-bg">frontend</span> and add a brief description:
The <span class="p-color-bg">hotReload</span> field configures which local directories to sync to which directories inside the running container. (This only applies when the service is deployed with hot reloading enabled.)
Here, we map port 80 to port 3000 on the container and make http://vote-rails.local.app.garden route to it. With the <span class="p-color-bg">dependencies</span> field, we specify that the <span class="p-color-bg">db-migrate</span> task has to be run before this service is deployed.
The <span class="p-color-bg">db-migrate</span> task essentially wraps <span class="p-color-bg">rake db:migrate</span>, running it in the context of this container. We also specify that the <span class="p-color-bg">postgres</span> service has to be up and running before this task runs.
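Putting those pieces together, the frontend module’s garden.yml looks roughly like this. This is an illustrative sketch: the sync paths, port names, and exact field layout are assumptions, and the linked example project has the authoritative version.

```yaml
# garden.yml for the frontend module (sketch; check the example project
# for the exact schema used by your Garden version)
kind: Module
type: container
name: frontend
description: The Rails app where users vote for cat or doge
hotReload:
  sync:
    # Sync the module's source directory into the running container
    - target: /app
services:
  - name: frontend
    ports:
      # Map service port 80 to Rails' port 3000 in the container
      - name: http
        containerPort: 3000
        servicePort: 80
    ingresses:
      - path: /
        port: http
    dependencies:
      # Run migrations before this service is deployed
      - db-migrate
tasks:
  - name: db-migrate
    # Wraps rake db:migrate, run inside this module's container
    command: [bundle, exec, rake, db:migrate]
    dependencies:
      # The database must be up and accepting connections first
      - postgres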
And the configuration for <span class="p-color-bg">postgres</span>:
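In sketch form (the image tag and credentials here are illustrative placeholders):

```yaml
# garden.yml for the postgres module (sketch)
kind: Module
type: container
name: postgres
description: Postgres database for the voting app
image: postgres:11-alpine
services:
  - name: postgres
    ports:
      - name: db
        containerPort: 5432
    env:
      POSTGRES_PASSWORD: postgres
```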
That’s all — Garden takes care of the rest.
To tail the application log, we run <span class="p-color-bg">garden logs frontend -f</span>. To make this work, we configured the Rails app to emit its log output to <span class="p-color-bg">stdout</span> (so it gets picked up by Kubernetes’ logging mechanisms, which Garden uses).
For a Rails console in the context of the Rails app’s container:
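A sketch of what that looks like (the exec syntax can vary slightly between Garden versions):

```shell
# Open a shell inside the running frontend container...
garden exec frontend /bin/sh

# ...then, inside the container, start the Rails console
bundle exec rails console
```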
Since we’ve already wrapped <span class="p-color-bg">rake db:migrate</span> in the <span class="p-color-bg">db-migrate</span> task we used above as part of the deployment pipeline, we can migrate the database by running that task directly:
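For example (command syntax as of the Garden version used here; check `garden --help` for yours):

```shell
# Run the db-migrate task on demand, in the frontend container's context
garden run task db-migrate
```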
In a real-world project using Kubernetes in production, there are often several services, commonly written in many languages.
That shouldn’t mean we have to compromise away our favorite tools and workflows in pursuit of a lowest common denominator.
With Garden, you don’t really have to know in detail how that Python analytics service needs to be deployed or pre-populated or what other services it depends on — everyone just describes their own components and uses their favorite technologies.
Garden takes care of tying it all together, and automates away the drudgery and uncertainty. Which, in turn, leaves more time for the fun parts of writing software.
Coda
We at Garden share some key values with Rails:
- Convention over configuration, while retaining flexibility.
- Designing something integrated that holds all the threads, but is pluggable and extensible.
- And developer experience as the top priority.
We hope Garden can help make Rails shine in the containerized world — at any rate, I think the two of them would hit it off if they met at a party.
Check out our project on GitHub. The Garden orchestrator is free and open-source, and we’d love to hear your feedback!