I was once a skeptic. I thought Kubernetes would never be as simple and as easy as <span class="p-color-bg">docker compose up</span>. But you can make your inner loop as smooth as a duck’s butt with the Compose files you already have.
It’s time to adopt Kubernetes not just for production but everywhere across the entire lifecycle, wherever you, a developer, work. You never have to wonder whether the problems that crop up when your application leaps from Docker to Kubernetes are a Docker problem or a Kubernetes problem, because it’s all one architecture, one problem space to solve.
In this article, I will explain why you should move to K8s if you're still using Compose, and turn a classic microservices example written in Compose into ready-to-rock K8s code. This is the first in a series of articles on architecting for cloud native, written for anyone thinking about or already starting their journey to Kubernetes, whether local, remote, “remocal,” or anywhere in between.

Docker Compose vs. Kubernetes: Why Kubernetes? Why now?
Tell me you haven't seen this before:
- Your developer application stack runs locally. The bigger the application, the more resources it consumes, until a developer’s laptop runs so hot that memory gets swapped to disk and work slows to a crawl.
- Your container gets pushed to production, but production doesn't run Docker, and there are bugs you haven't accounted for that only surface when it runs over there.
Kubernetes, despite its complexity, becomes the simplest tool when it’s one fewer tool in your toolbox and the other tools that remain are built to support it.
Mirroring production is vital to reducing stress and cognitive load, and to avoiding unforeseen bugs, quirks, and surprises around networking and orchestration.
Setting Up a Kubernetes Cluster with a Dev Container
For the journey I’ve brought with me a code sample that covers a typical application stack bundling an app and a database, plus the Docker Compose files that support a team using Compose in both development and production. If you want to follow along, have ready:
- A Kubernetes cluster, either remote or local
- Docker Engine and Docker Compose
- VS Code
- The Dev Containers extension
While you can use a local Kubernetes cluster provisioned by Rancher Desktop or Docker Desktop, the real power comes from liberating the developer from the Hellfire of needing to run a local cluster. We’ll be using a remote cluster powered by Civo because they’re dirt cheap, super-fast to launch, and I don’t need to delve deep into the Mines of Moria to get cluster credentials.
Because I’ve shipped my code sample in a container of its own, you won’t need anything else to get started: my container ships with all the tools needed to build and ship my application stack. Clone my repo, open it in VS Code, and launch straight from the editor with everything you need to follow along.
Using Docker Compose to Set Up a Scalable Python Application with Flask and CockroachDB
Our code sample is a Python application that creates and seeds a CockroachDB database. When we visit http://localhost:5000 in our browser, our application returns our test data as a JSON object.

CockroachDB is Postgres wire-compatible and supports most of Postgres’ SQL syntax. As a distributed database, CockroachDB is the most elegant choice to demonstrate scaling from a locally driven workflow to a mirror of production with a minimum of changes.
💡If you’re using Postgres, consider these scalable flavors as an alternative to CockroachDB
Using Multiple Docker Compose Files for Development and Production Workflows
Because we are a team following Compose best practices, we use 3 <span class="p-color-bg">docker-compose*.yml</span> files:
- <span class="p-color-bg">docker-compose.yml</span>, our base, canonical, file
- <span class="p-color-bg">docker-compose.override.yml</span> which automatically overlays our base file with everything we need for development when we run <span class="p-color-bg">docker compose up -d</span>
- <span class="p-color-bg">docker-compose.prod.yml</span> which ignores our development overlay and only merges base and prod with <span class="p-color-bg">docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d</span>.
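As a minimal sketch of that pattern (the service names and contents here are illustrative, not the repo's actual files), the base and override might look like:

```yaml
# docker-compose.yml (sketch): the base file every environment shares
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    ports:
      - "5000:5000"
---
# docker-compose.override.yml (sketch): development additions, merged automatically
services:
  web:
    build:
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app            # mount source for live reload
    environment:
      FLASK_DEBUG: "1"
```

You can preview the merged result without starting anything by running <span class="p-color-bg">docker compose config</span>.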
A new database needs to be seeded, the database cluster initialized, and other day 1 operations performed. I’ve included two simple scripts, <span class="p-color-bg">up.dev.sh</span> and <span class="p-color-bg">up.prod.sh</span>, for your reference and use. <span class="p-color-bg">up.dev.sh</span> builds our image, creates a database and seeds it with our data, typically done when developing your app. <span class="p-color-bg">up.prod.sh</span> instantiates the production environment of our app and database. It does not create and seed our database because we assume you already have a production database with production data.
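The scripts themselves stay small. A sketch of what <span class="p-color-bg">up.dev.sh</span> might contain (the wait and the seeding endpoint here are hypothetical, not the repo's actual script):

```shell
#!/usr/bin/env sh
# up.dev.sh (sketch): build the image, start the dev stack, then seed the database
set -e
docker compose build
docker compose up -d                  # base + override merged automatically
sleep 5                               # crude wait for CockroachDB to accept connections
curl -s http://localhost:5000/seed    # hypothetical Flask endpoint that creates and seeds the db
```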
Dockerfiles for Development and Production Deployments
The smallest atomic unit of containers we’ll be discussing is the humble Dockerfile, a manifest for everything that goes into our application containers:
<span class="p-color-bg">Dockerfile.dev</span>
Our <span class="p-color-bg">Dockerfile.dev</span> is not clean or lean but it doesn't need to be. Our production <span class="p-color-bg">Dockerfile.prod</span> is a multi-stage build that contains only our application and the dependencies required to run it.
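As a sketch of the multi-stage pattern (the base images and paths are assumptions, not the repo's actual file):

```dockerfile
# Dockerfile.prod (sketch): stage 1 installs dependencies with the full toolchain,
# stage 2 copies only the results into a slim runtime image.
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```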
Our production <span class="p-color-bg">docker-compose.prod.yml</span> adds a load balancer and secures our database cluster with TLS certificates to encrypt network traffic between the distributed database’s 3 nodes.

With these four files, <span class="p-color-bg">Dockerfile.dev</span>, <span class="p-color-bg">Dockerfile.prod</span>, <span class="p-color-bg">docker-compose.yml</span> and <span class="p-color-bg">docker-compose.prod.yml</span>, plus my Python source code, I have everything I need to start transforming our Compose microservice stack into Kubernetes.
Migrating from Docker Compose to Kubernetes with Kompose
Both Kubernetes manifests and Compose files are written in YAML, but Kubernetes uses its own specification for defining Kubernetes objects. Our Compose files aren't compatible with Kubernetes as-is.
Instead, we'll use Kompose, the official Kubernetes tool for converting Compose stacks. Kompose takes our development stack (remember it’s our base <span class="p-color-bg">docker-compose.yml</span> with our <span class="p-color-bg">docker-compose.override.yml</span> overlay) and outputs Kubernetes-native manifests.
Here's our <span class="p-color-bg">docker-compose.yml</span>:
And <span class="p-color-bg">docker-compose.override.yml</span>:
Merge these together and generate Kubernetes manifests with <span class="p-color-bg">kompose -f docker-compose.yml -f docker-compose.override.yml convert</span>. If you're using your own Compose files, ensure you've added your version of the Compose Spec to the top of both <span class="p-color-bg">docker-compose.yml</span> and <span class="p-color-bg">docker-compose.override.yml</span> for Kompose to run successfully. For my example I've used <span class="p-color-bg">version: '3.3'</span>. The single quotes are important.
I’ve prepared two example Compose files containing just our Flask app. These are <span class="p-color-bg">compose-roachless.yml</span> and <span class="p-color-bg">compose-roachless.override.yml</span>. If we run these through Kompose with <span class="p-color-bg">kompose convert -f compose-roachless.yml -f compose-roachless.override.yml</span> we have ourselves a very minimal set of 3 Kubernetes manifests. We can generate a single file with the <span class="p-color-bg">-o</span> flag: <span class="p-color-bg">kompose -f docker-compose.yml -f docker-compose.override.yml convert -o kompose.yml</span> puts our entire application declaration in <span class="p-color-bg">kompose.yml</span>:
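For orientation, the Deployment Kompose generates for our web service has roughly this shape (a hand-edited sketch, not verbatim tool output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    io.kompose.service: web   # Kompose labels objects with the source Compose service
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  template:
    metadata:
      labels:
        io.kompose.service: web
    spec:
      containers:
        - name: web
          image: web
          ports:
            - containerPort: 5000
```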
So where is CockroachDB? We've removed it, because one of the benefits of going to Kubernetes is access to a rich ecosystem of packages created by vendors and users. Just like the software you'd install on your local machine, we can install cloud native software on our Kubernetes cluster more simply than by defining it as a Compose service. These cloud native software packages are called Helm Charts.
Using Kompose to Create Helm Charts
Helm is a package manager for Kubernetes created by Deis Labs in 2015, now under the umbrella of the Cloud Native Computing Foundation. We've deleted our homegrown CockroachDB service because Helm already has it packaged for us in a Helm Chart.
Instead of using our single-file manifest created in the last section, we'll use Kompose to create a Helm chart for our Python application. Then we'll deploy both our own Chart and the official CockroachDB chart together.
First, generate a Helm Chart for our Flask app with <span class="p-color-bg">kompose convert -f compose-roachless.yml -f compose-roachless.override.yml -c -o flask-app</span> which creates the folder, <span class="p-color-bg">flask-app</span> to home our new artifact:
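The generated chart is a small folder whose layout looks roughly like this:

```
flask-app/
├── Chart.yaml              # chart metadata: name, description, version
├── README.md
└── templates/
    ├── web-deployment.yaml
    └── web-service.yaml
```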
Deploying Applications to Kubernetes with Helm Charts
The <span class="p-color-bg">-c</span> flag at the end instructs Kompose to produce a Chart. To deploy our Chart, run <span class="p-color-bg">helm install my-flask-app ./flask-app</span>. This installs our app as a Helm Chart with the Release name my-flask-app. If the Release name confuses you, think of it as a way of identifying the Chart you've installed on your Kubernetes cluster, should other instances of the same Chart exist in the same namespace. As an example, if you installed one instance of CockroachDB for development and another for production on the same cluster, giving them suitable Release names would help you identify which is which. We can see that Helm returns to tell us our Chart was installed successfully:
To install our CockroachDB Chart, we first add the Chart Repository (no centralized Charts repository exists, so we must first add where our package is hosted before installing) with <span class="p-color-bg">helm repo add cockroachdb https://charts.cockroachdb.com/</span>, then install it with <span class="p-color-bg">helm install my-cockroachdb cockroachdb/cockroachdb --version 10.0.0</span>. Your command prompt won't be returned to you until Helm finishes installing the CockroachDB Helm Chart to your cluster, which can take anywhere from 1 to 3 minutes to complete.
Helm should succeed and print out some very helpful details on our installation:
Troubleshooting the Deployment of a CockroachDB Database on Kubernetes
You can display the pods (the number of instances of CockroachDB the Helm Chart has launched, equivalent to <span class="p-color-bg">roach-0</span>, <span class="p-color-bg">roach-1</span>, <span class="p-color-bg">roach-2</span>... in our <span class="p-color-bg">docker-compose*.yml</span> files) with <span class="p-color-bg">kubectl get pods</span>. If any continue to show as Pending, it's likely you've exceeded your cloud provider's storage quota, as I did when I launched our Helm Chart with default values. Let's uninstall and try again with <span class="p-color-bg">helm uninstall my-cockroachdb</span>.
You'll additionally need to delete the volumes created to store your database. Let's test deletion before committing with the <span class="p-color-bg">--dry-run</span> flag: <span class="p-color-bg">kubectl delete pvc --dry-run=client -l app.kubernetes.io/name=cockroachdb</span>:
And then really delete those volumes with <span class="p-color-bg">kubectl delete pvc -l app.kubernetes.io/name=cockroachdb</span>. Now we reinstall with saner storage defaults: <span class="p-color-bg">helm install --set storage.persistentVolume.size="25Gi" --generate-name cockroachdb/cockroachdb</span>.
If we run <span class="p-color-bg">kubectl get pods</span>, after a few minutes we should see that all pods are now Running or Completed!
We'll need to set a username and password to connect to our database. To do that, first create a database and seed it with data with the following command:
If you run this command you'll receive an error:
Why is that? It's because our database is reachable by another name! Remember the earlier output?
Helm has told us the new domain name our database is reachable at. To resolve our error, we'll need to update the <span class="p-color-bg">DATABASE_URL</span> in our Flask application's chart. We'll do this by introducing another handy Helm-ism: template values. The chart we generated imported our environment values as they were set at generation time. If we peek into the folder containing our chart, we'll find <span class="p-color-bg">web-deployment.yaml</span> and <span class="p-color-bg">web-service.yaml</span> under the <span class="p-color-bg">templates</span> folder. The <span class="p-color-bg">DATABASE_URL</span> there is a hostname resolvable from inside a Docker network, not a Kubernetes network.
Using Templated Values with Helm Charts to Connect to CockroachDB in Kubernetes
In Docker Compose, a container is accessible by its hostname, identical to the name of the container itself. This name is specified in the <span class="p-color-bg">docker-compose.yml</span> file and is used to reference the container within the Docker Compose environment. For example, our CockroachDB cluster is composed of 3 containers, which we've named <span class="p-color-bg">roach-0</span>, <span class="p-color-bg">roach-1</span>, and <span class="p-color-bg">roach-2</span>. From within this Docker network, our Flask app can connect to the CockroachDB cluster by any of their container names; in this case, <span class="p-color-bg">roach-0</span>.
In Kubernetes, a container's fully qualified domain name (FQDN) is more complex; it's automatically generated from the name of the Service exposing it and the namespace it runs in. Let's look at an illustrated example of the FQDN returned on successful install of our CockroachDB Helm chart:

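That naming rule is mechanical enough to sketch in a few lines of Python (the Service name passed in below is a hypothetical example; check Helm's output for your actual hostname):

```python
# Sketch: how Kubernetes DNS assembles a Service's FQDN:
#   <service>.<namespace>.svc.<cluster domain>
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Hypothetical public Service created by a CockroachDB release:
print(service_fqdn("my-cockroachdb-public"))
# → my-cockroachdb-public.default.svc.cluster.local
```

Because the release name and namespace both feed into this hostname, hardcoding it into our chart would break the moment either changes.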
Because both the release name and namespace are liable to change depending on your target environment, we'll use templated values to expose them to the user. To do so, open <span class="p-color-bg">flask-app/templates/web-deployment.yaml</span> in VS Code and change the file to look like the following:
We've replaced the <span class="p-color-bg">args</span> key for a <span class="p-color-bg">command</span> key. This will be useful in part 2 but otherwise has makes no difference here.
In Helm, a templated value is a variable that can be used in the configuration of a Helm chart. Templated values can be defined in the <span class="p-color-bg">values.yaml</span> file of a Helm chart and can be referenced in the chart's templates using the <span class="p-color-bg">{{ }}</span> syntax.
There are benefits to using templated values in Helm charts:
- Reusability: Templated values allow you to define values that can be used in multiple places within a chart, making it easier to reuse the same configuration across different deployments.
- Customization: Templated values allow you to customize the configuration of a chart for different environments or use cases. For example, you could define a templated value for the number of replicas of a deployment and use different values for development, staging, and production environments.
The range loop is useful because it allows you to iterate over a map of templated values and generate output for each key-value pair. In this case, the output is a list of environment variables that will be passed to a Kubernetes pod. We'll make use of this loop in part 2 to pass in variables like <span class="p-color-bg">FLASK_DEBUG</span> to set variables specific to our environment (dev or prod).
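As a sketch (the <span class="p-color-bg">env</span> key name is illustrative), the backing values and the loop that consumes them look like:

```yaml
# values.yaml (sketch)
env:
  FLASK_DEBUG: "1"
---
# templates/web-deployment.yaml (excerpt, sketch); Helm renders the {{ }}
# syntax before Kubernetes ever sees the manifest
env:
  {{- range $key, $value := .Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
```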
Now create a <span class="p-color-bg">values.yaml</span> file inside the <span class="p-color-bg">flask-app</span> folder and fill it like so:
Helm will automatically substitute these default values into the double-bracket placeholders. To upgrade our deployed Flask chart with the new values, run <span class="p-color-bg">helm upgrade my-flask-app ./flask-app</span>. We can confirm our new values have been set with <span class="p-color-bg">kubectl get pod -l io.kompose.service=web -o jsonpath='{.items[].spec.containers[].env[1].value}'</span>, which will echo our new database connection string to our terminal. If we wanted to pass in an overrides file, we could create a new <span class="p-color-bg">values.yaml</span> file outside the chart folder and pass it in with <span class="p-color-bg">helm upgrade -f values.yaml my-flask-app ./flask-app</span>.
Now run my helper shell script, <span class="p-color-bg">up.kube.sh</span>, which, like my other helper scripts, just creates our database user and assigns it a password. This is needed to access our secure CockroachDB cluster. Then re-run our request to Flask to create and seed our database:
Congratulations, you've gone from your Compose stack to Kubernetes in a little less than an hour! And with that you're ready to adopt cloud native tooling like Garden to write and deploy faster than you ever did with Docker Compose. Join us for the next installment, where I'll show you how to take the Helm Chart you've just made, deploy it with Garden, and see how all this comes together to accelerate your inner development loop.
Interested in harmonizing your own stack? There's more content to read and watch from your Developer Advocate, Tao Hansen:
- Python that runs anywhere with Garden, Docker, and Kubernetes - YouTube
- Whole-Body DevOps: Against the post-mortem of a living thing
- The first issue of Tao's newsletter, The Inner Loop #1
Join our community on Discord to ask questions, share your experiences, and get help from the Garden team and other Garden users. And there's more in The Inner Loop, my monthly newsletter on free and open source software. It's full of only good things I love and use day to day. You can also follow me on Mastodon.
[Heading image: Anonymous, section detail of “Geigyo Hinshu Zukan” (Fourteen Varieties of Whales) (1760). Credit: New Bedford Whaling Museum]