
Migrating Docker Compose to Kubernetes with Cloud Native Tools Pt. 1

Written by Tao Hansen
November 29, 2022
This is the first in a series of articles on architecting for cloud native, written for users who may be thinking about, or have already started, their journey to Kubernetes, whether local, remote, "remocal", or anywhere in between.

I was once a skeptic. I thought Kubernetes would never be as simple and as easy as <span class="p-color-bg">docker compose up</span>. But you can make your inner loop as smooth as a duck’s butt with the Compose files you already have.

It’s time to adopt Kubernetes not just for production but everywhere across the entire lifecycle, wherever you, a developer, work. That way you never have to wonder whether the problems that crop up when your application leaps from Docker to Kubernetes are a Docker problem or a Kubernetes problem, because it’s all one architecture, one problem space to solve.

In this article, I'll explain why you should move to K8s if you're still using Compose, and turn a classic microservices example written in Compose into ready-to-rock K8s code.

A soap shaped as a duck presenting its rump

Docker Compose vs. Kubernetes: Why Kubernetes? Why now?

Tell me you haven't seen this before:

  • Your application stack runs locally. The bigger the application, the more resources it consumes, until a developer’s laptop runs so hot that memory gets swapped to disk and work slows to a crawl.
  • Your container gets pushed to production, but production doesn't run Docker, and there are bugs you haven't accounted for that only show up over there.

Kubernetes, despite its complexity, becomes the simplest tool when it’s one fewer tool in your toolbox and the other tools in your toolbox? They’re built to support it.

Mirroring production is vital to reducing stress and cognitive load, and to avoiding unforeseen bugs, quirks, and conventions around networking and orchestration.

Setting Up a Kubernetes Cluster with a Dev Container

For the journey I’ve brought with me a code sample that covers a typical application stack bundling an app and a database, plus the Docker Compose files that support a team using Compose in both development and production. If you want to follow along, have ready:

  • A Kubernetes cluster, either remote or local
  • Docker Engine and Docker Compose
  • VS Code
  • The Dev Containers extension

While you can use a local Kubernetes cluster provisioned by Rancher Desktop or Docker Desktop, the real power comes from liberating the developer from the Hellfire of needing to run a local cluster. We’ll be using a remote cluster powered by Civo because they’re dirt cheap, super-fast to launch, and I don’t need to delve deep into the Mines of Moria to get cluster credentials.

Because I’ve shipped my code sample in a container of its own, you won’t need anything else to get started: my container ships with all the tools to build and ship my application stack. Clone my repo, open it in VS Code, and launch straight from the editor with everything you need to follow along.

Using Docker Compose to Set Up a Scalable Python Application with Flask and CockroachDB

Our code sample is a Python application that creates and seeds a CockroachDB database. When we visit http://localhost:5000 in our browser, our application returns our test data as a JSON object.

Technical drawing of the sample application for local development

CockroachDB is Postgres wire-compatible and supports most of Postgres’ SQL syntax. As a distributed database, CockroachDB is the most elegant choice to demonstrate scaling from a locally driven workflow to a mirror of production with a minimum of changes.
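To illustrate that wire compatibility: assuming a local node started with <span class="p-color-bg">--insecure</span> (as in our development Compose setup), the stock Postgres client connects just fine. The exact connection string is an example, not something from the repo:

```shell
# Connect to a local insecure CockroachDB node with the standard Postgres client.
# CockroachDB listens on 26257 instead of Postgres' 5432.
psql "postgresql://root@localhost:26257/defaultdb?sslmode=disable" -c "SELECT version();"
```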

💡If you’re using Postgres, consider these scalable flavors as an alternative to CockroachDB

Using Multiple Docker Compose Files for Development and Production Workflows

Because we're a team following Compose best practices, we use three <span class="p-color-bg">docker-compose*.yml</span> files:

  • <span class="p-color-bg">docker-compose.yml</span>, our base, canonical, file
  • <span class="p-color-bg">docker-compose.override.yml</span> which automatically overlays our base file with everything we need for development when we run <span class="p-color-bg">docker compose up -d</span>
  • <span class="p-color-bg">docker-compose.prod.yml</span> which ignores our development overlay and only merges base and prod with <span class="p-color-bg">docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d</span>.
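If you ever want to see exactly what a given combination of files resolves to, <span class="p-color-bg">docker compose config</span> prints the merged result without starting anything:

```shell
# Preview the merged development configuration (base + override):
docker compose -f docker-compose.yml -f docker-compose.override.yml config

# Preview the merged production configuration (base + prod):
docker compose -f docker-compose.yml -f docker-compose.prod.yml config
```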

A new database needs to be seeded, the database cluster initialized, and other day 1 operations performed. I’ve included two simple scripts, <span class="p-color-bg">up.dev.sh</span> and <span class="p-color-bg">up.prod.sh</span>, for your reference and use. <span class="p-color-bg">up.dev.sh</span> builds our image, creates a database, and seeds it with our data, typically done when developing your app. <span class="p-color-bg">up.prod.sh</span> instantiates the production environment of our app and database. It does not create and seed our database because we assume you already have a production database with production data.
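I won’t reproduce the scripts verbatim here, but a minimal sketch of what <span class="p-color-bg">up.dev.sh</span> does might look like this (the exact commands in the repo may differ):

```shell
#!/usr/bin/env sh
set -e

# Build images and start the development stack (base + override merge).
docker compose up -d --build

# Create and seed the database from inside the running web container.
docker compose exec web python manage.py create_db
```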

Dockerfiles for Development and Production Deployments

The smallest atomic unit of containers we’ll discuss is the humble Dockerfile, a manifest for everything that goes into our application containers:

<span class="p-color-bg">Dockerfile.dev</span>


# pull official base image
FROM python:3.10-slim
EXPOSE 5000
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install system dependencies
RUN apt-get update && apt-get install -y \
    netcat \
  && rm -rf /var/lib/apt/lists/*
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /usr/src/app
USER appuser
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]

Our <span class="p-color-bg">Dockerfile.dev</span> is not clean or lean but it doesn't need to be. Our production <span class="p-color-bg">Dockerfile.prod</span> is a multi-stage build that contains only our application and the dependencies required to run it.
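The repo has the real <span class="p-color-bg">Dockerfile.prod</span>; the general shape of such a multi-stage build looks like this (stage names and commands here are illustrative, not copied from the repo):

```dockerfile
# --- builder stage: build wheels with tooling we won't ship ---
FROM python:3.10-slim AS builder
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /usr/src/app/wheels -r requirements.txt

# --- final stage: only the app and its runtime dependencies ---
FROM python:3.10-slim
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/wheels /wheels
RUN pip install --no-cache-dir /wheels/*
COPY . .
RUN adduser --disabled-password --gecos "" appuser && chown -R appuser /usr/src/app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "manage:app"]
```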

Our production <span class="p-color-bg">docker-compose.prod.yml</span> adds a load balancer and secures our database cluster with TLS certificates to encrypt network traffic between the distributed database’s 3 nodes.

Technical drawing of the sample application with an added load balancer and TLS encryption between database nodes

With these four files, <span class="p-color-bg">Dockerfile.dev</span>, <span class="p-color-bg">Dockerfile.prod</span>, <span class="p-color-bg">docker-compose.yml</span> and <span class="p-color-bg">docker-compose.prod.yml</span>, plus my Python source code, I have everything I need to get started transforming our Compose microservice stack to Kubernetes.

Migrating from Docker Compose to Kubernetes with Kompose

Both Kubernetes manifests and Compose files are written in YAML, but Kubernetes uses its own specification for defining objects. Our Compose files are not compatible with Kubernetes as-is.

Instead, we'll use Kompose, the official Kubernetes tool for converting Compose stacks. Kompose takes our development stack (remember it’s our base <span class="p-color-bg">docker-compose.yml</span> with our <span class="p-color-bg">docker-compose.override.yml</span> overlay) and outputs Kubernetes-native manifests.

Here's our <span class="p-color-bg">docker-compose.yml</span>:


services:
   web:
     container_name: web
     image: worldofgeese/web:v0.1.0
   roach-0:
     image: cockroachdb/cockroach:v22.1.10
     container_name: roach-0
 
   roach-1:
     container_name: roach-1
     image: cockroachdb/cockroach:v22.1.10
     depends_on: 
       - roach-0
 
   roach-2:
     container_name: roach-2
     image: cockroachdb/cockroach:v22.1.10
     depends_on:
       - roach-0

And <span class="p-color-bg">docker-compose.override.yaml</span>:


services:
  web:
    build: 
      context: ./web
      dockerfile: Dockerfile.dev
    command: python manage.py run -h 0.0.0.0
    volumes:
      - ./web/:/app/
    ports:
      - 5000:5000
    environment:
      - FLASK_APP=project/__init__.py
      - FLASK_DEBUG=1
      - DATABASE_URL=cockroachdb://root@roach-0:26257/defaultdb
      - SQL_HOST=roach-0
      - SQL_PORT=26257
      - DATABASE=cockroachdb

  roach-0:
    command: start --insecure --join=roach-0,roach-1,roach-2 
    volumes:
      - "cockroach_data:/cockroach/cockroach-data"
    ports:
      - "26257:26257"
      - "8080:8080"
 
  roach-1:
    command: start --insecure --join=roach-0,roach-1,roach-2
    volumes:
      - "cockroach_data:/cockroach/cockroach-data"
 
  roach-2:
    command: start --insecure --join=roach-0,roach-1,roach-2
    volumes:
      - "cockroach_data:/cockroach/cockroach-data"

  init:
    container_name: init
    image: cockroachdb/cockroach:latest-v22.1
    command: init --host=roach-0 --insecure
    depends_on:
      - roach-0

volumes:
  cockroach_data:

Merge these together and generate Kubernetes manifests with <span class="p-color-bg">kompose -f docker-compose.yml -f docker-compose.override.yml convert</span>. If you're using your own Compose files, ensure you've added your version of the Compose Spec to the top of both <span class="p-color-bg">docker-compose.yml</span> and <span class="p-color-bg">docker-compose.override.yml</span> for Kompose to run successfully. For my example I've used <span class="p-color-bg">version: '3.3'</span>. The single quotes are important.

I’ve prepared two example Compose files containing just our Flask app. These are <span class="p-color-bg">compose-roachless.yml</span> and <span class="p-color-bg">compose-roachless.override.yml</span>. If we run these through Kompose with <span class="p-color-bg">kompose convert -f compose-roachless.yml -f compose-roachless.override.yml</span> we have ourselves a very minimal set of 3 Kubernetes manifests. We can generate a single file with the <span class="p-color-bg">-o</span> flag: <span class="p-color-bg">kompose -f docker-compose.yml -f docker-compose.override.yml convert -o kompose.yml</span> puts our entire application declaration in <span class="p-color-bg">kompose.yml</span>:


apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose -f compose-roachless.yml -f compose-roachless.override.yml convert -o kompose.yml
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: web
      name: web
    spec:
      ports:
        - name: "5000"
          port: 5000
          targetPort: 5000
      selector:
        io.kompose.service: web
    status:
      loadBalancer: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose -f compose-roachless.yml -f compose-roachless.override.yml convert -o kompose.yml
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: web
      name: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: web
      strategy: {}
      template:
        metadata:
          annotations:
            kompose.cmd: kompose -f compose-roachless.yml -f compose-roachless.override.yml convert -o kompose.yml
            kompose.version: 1.27.0 (b0ed6a2c9)
          creationTimestamp: null
          labels:
            io.kompose.service: web
        spec:
          containers:
            - args:
                - gunicorn
                - --bind
                - 0.0.0.0:5000
                - manage:app
              env:
                - name: DATABASE
                  value: cockroachdb
                - name: DATABASE_URL
                  value: cockroachdb://roach:roach@roach-0:26257/defaultdb
                - name: FLASK_APP
                  value: project/__init__.py
                - name: FLASK_DEBUG
                  value: "0"
                - name: SQL_HOST
                  value: roach-0
                - name: SQL_PORT
                  value: "26257"
              image: worldofgeese/flask-cockroachdb-example:v0.1.0
              name: web
              ports:
                - containerPort: 5000
              resources: {}
          restartPolicy: Always
    status: {}
kind: List
metadata: {}

So where is CockroachDB? We've removed it, because one of the benefits of going Kubernetes is access to a rich ecosystem of packages created by vendors and users. Just like the software you'd install on your local machine, we can install cloud native software on our Kubernetes cluster more simply than defining it as a Compose service. These cloud native software packages are called Helm Charts.

Using Kompose to Create Helm Charts

Helm is a package manager for Kubernetes created by Deis Labs in 2015, now under the umbrella of the Cloud Native Computing Foundation. We've deleted our homegrown CockroachDB service because Helm already has it packaged for us in a Helm Chart.

Instead of using our single-file manifest created in the last section, we'll use Kompose to create a Helm chart for our Python application. Then we'll deploy both our own Chart and the official CockroachDB chart together.

First, generate a Helm Chart for our Flask app with <span class="p-color-bg">kompose convert -f compose-roachless.yml -f compose-roachless.override.yml -c -o flask-app</span>, which creates the folder <span class="p-color-bg">flask-app</span> to house our new artifact:


$ tree flask-app/
flask-app
├── Chart.yaml
├── README.md
└── templates
    ├── web-deployment.yaml
    └── web-service.yaml

Deploying Applications to Kubernetes with Helm Charts

The <span class="p-color-bg">-c</span> flag at the end instructs Kompose to produce a Chart. To deploy our Chart, run <span class="p-color-bg">helm install my-flask-app ./flask-app</span>. This installs our app as a Helm Chart with the Release name my-flask-app. If the Release name confuses you, think of it as a way of identifying a particular installation of a Chart on your Kubernetes cluster, should other instances of the same Chart exist in the same namespace. For example, if you installed one instance of CockroachDB for development and another for production on the same cluster, suitable release names would help you tell which is which. Helm's output confirms our Chart was installed successfully:


NAME: my-flask-app
LAST DEPLOYED: Tue Nov 29 15:08:03 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
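To make release names concrete, here's a hypothetical pair of installs of the same chart in one namespace, told apart purely by their release names (the names crdb-dev and crdb-prod are examples, not from the article's repo):

```shell
# Two independent releases of the same chart, side by side:
helm install crdb-dev cockroachdb/cockroachdb
helm install crdb-prod cockroachdb/cockroachdb

# List releases in the current namespace to tell them apart:
helm list
```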

To install our CockroachDB Chart, we first add its Chart repository (no centralized Charts repository exists, so we must tell Helm where our package is hosted before installing) with <span class="p-color-bg">helm repo add cockroachdb https://charts.cockroachdb.com/</span>, then install it with <span class="p-color-bg">helm install my-cockroachdb cockroachdb/cockroachdb --version 10.0.0</span>. Your command prompt won't be returned until Helm finishes installing the CockroachDB Helm Chart to your cluster, which can take anywhere from one to three minutes.

Helm should succeed and print out some very helpful details on our installation:


NAME: my-cockroachdb
LAST DEPLOYED: Wed Nov 30 14:22:41 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
CockroachDB can be accessed via port 26257 at the
following DNS name from within your cluster:

my-cockroachdb-public.default.svc.cluster.local

Because CockroachDB supports the PostgreSQL wire protocol, you can connect to
the cluster using any available PostgreSQL client.

Note that because the cluster is running in secure mode, any client application
that you attempt to connect will either need to have a valid client certificate
or a valid username and password.

Finally, to open up the CockroachDB admin UI, you can port-forward from your
local machine into one of the instances in the cluster:

    kubectl port-forward my-cockroachdb-0 8080

Then you can access the admin UI at https://localhost:8080/ in your web browser.

For more information on using CockroachDB, please see the project's docs at:
https://www.cockroachlabs.com/docs/

Troubleshooting the Deployment of a CockroachDB Database on Kubernetes

You can display the pods (the number of instances of CockroachDB the Helm Chart has launched, equivalent to  <span class="p-color-bg">roach-0</span>,  <span class="p-color-bg">roach-1</span>,  <span class="p-color-bg">roach-2</span>... in our <span class="p-color-bg">docker-compose*.yml</span> files) with <span class="p-color-bg">kubectl get pods</span>. If any continue to show as Pending, it's likely you've exceeded your cloud provider's storage quota, as I did when I launched our Helm Chart with default values. Let's uninstall and try again with <span class="p-color-bg">helm uninstall my-cockroachdb</span>.
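If you want to confirm the diagnosis first, <span class="p-color-bg">kubectl describe</span> surfaces the scheduler's reasoning in its Events section, and you can inspect the storage claims directly:

```shell
# Show pod status; look for Pending entries.
kubectl get pods

# The Events section at the bottom explains why scheduling failed,
# e.g. an unbound PersistentVolumeClaim or an exceeded storage quota.
kubectl describe pod my-cockroachdb-0

# Check whether the claims ever got bound to volumes.
kubectl get pvc -l app.kubernetes.io/name=cockroachdb
```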

You'll additionally need to delete the volumes created to store your database. Let's test deletion before committing with the <span class="p-color-bg">--dry-run</span> flag: <span class="p-color-bg">kubectl delete pvc --dry-run=client -l app.kubernetes.io/name=cockroachdb</span>:


persistentvolumeclaim "datadir-my-cockroachdb-2" deleted (dry run)
persistentvolumeclaim "datadir-my-cockroachdb-1" deleted (dry run)
persistentvolumeclaim "datadir-my-cockroachdb-0" deleted (dry run)

And then really delete those volumes with <span class="p-color-bg">kubectl delete pvc -l app.kubernetes.io/name=cockroachdb</span>. Now we reinstall with saner storage defaults, reusing our release name: <span class="p-color-bg">helm install my-cockroachdb --set storage.persistentVolume.size="25Gi" cockroachdb/cockroachdb</span>.

If we run <span class="p-color-bg">kubectl get pods</span>, after a few minutes we should see that all pods are now Running or Completed!

We'll need to set a username and password to connect to our database. To do that, first create the database and seed it with data with the following commands:


kubectl exec -it deployment/web -- \
 python manage.py create_db

kubectl exec -it deployment/web -- \
 python manage.py seed_db

Run these and you'll receive an error:


Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 320, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 884, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 486, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 145, in _do_get
    with util.safe_reraise():
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
    raise exception
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 143, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 266, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 381, in __init__
    self.__connect()
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 677, in __connect
    with util.safe_reraise():
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
    raise exception
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 673, in __connect
    self.dbapi_connection = connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 578, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy_cockroachdb/base.py", line 104, in connect
    return super().connect(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 598, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "roach-0" to address: Name or service not known

Why is that? It's because our database is reachable by another name! Remember the earlier output?


CockroachDB can be accessed via port 26257 at the
following DNS name from within your cluster:

my-cockroachdb-public.default.svc.cluster.local

Helm has told us the new domain name our database is reachable at. To resolve our error, we'll need to update the <span class="p-color-bg">DATABASE_URL</span> in our Flask application's chart. We'll do this by introducing another handy Helm-ism: template values. The chart we generated imported our environment values as they were set at generation time. If we peek into the folder containing our chart, we'll find <span class="p-color-bg">web-deployment.yaml</span> and <span class="p-color-bg">web-service.yaml</span> under the <span class="p-color-bg">templates</span> folder. The <span class="p-color-bg">DATABASE_URL</span> there still points at a hostname that's only resolvable inside a Docker network, not a Kubernetes network.

Using Templated Values with Helm Charts to Connect to CockroachDB in Kubernetes

In Docker Compose, a container is accessible by its hostname, identical to the name of the container itself. This name is specified in the <span class="p-color-bg">docker-compose.yml</span> file and is used to reference the container within the Docker Compose environment. For example, our CockroachDB cluster is composed of three containers, which we've named <span class="p-color-bg">roach-0</span>, <span class="p-color-bg">roach-1</span>, and <span class="p-color-bg">roach-2</span>. From within this Docker network, our Flask app can connect to the CockroachDB cluster by any of their container names: in this case, <span class="p-color-bg">roach-0</span>.

In Kubernetes, a container's fully qualified domain name (FQDN) is more complex: it's automatically generated from the name of the Service and the namespace it runs in. Let's break down the FQDN returned on successful install of our CockroachDB Helm chart:
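The pieces below match our example install, where my-cockroachdb is the Helm release name and default is the namespace:

```shell
# Assemble the in-cluster service FQDN from its parts:
RELEASE="my-cockroachdb"          # Helm release name
SERVICE="${RELEASE}-public"       # the public Service the chart creates
NAMESPACE="default"               # namespace the chart was installed into
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "${FQDN}"   # prints my-cockroachdb-public.default.svc.cluster.local
```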

Because both the release name and namespace are liable to change depending on your target environment, we'll use templated values to expose them to the user. To do so, open <span class="p-color-bg">flask-app/templates/web-deployment.yaml</span> in VS Code and change the file to look like the following:


        - command: ["gunicorn", "--bind", "0.0.0.0:5000", "manage:app"]
        # - args:
        #     - gunicorn
        #     - --bind
        #     - 0.0.0.0:5000
        #     - manage:app
          env:
          {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
          {{- end }}
            - name: DATABASE
              value: cockroachdb
            - name: DATABASE_URL
              value: cockroachdb://{{ .Values.databaseUser}}{{ if .Values.secureCluster}}:{{ .Values.databasePassword}}{{ end }}@{{ .Values.databaseName}}-public.{{ .Values.namespace}}.svc.cluster.local:26257/defaultdb
            - name: FLASK_APP
              value: project/__init__.py
            - name: SQL_HOST
              value: {{ .Values.databaseName}}
            - name: SQL_PORT
              value: "26257"
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

We've replaced the <span class="p-color-bg">args</span> key with a <span class="p-color-bg">command</span> key. This will be useful in part 2 but otherwise makes no difference here.

In Helm, a templated value is a variable that can be used in the configuration of a Helm chart. Templated values can be defined in the  <span class="p-color-bg">values.yaml</span> file of a Helm chart and can be referenced in the chart's templates using the  <span class="p-color-bg">{{ }}</span> syntax.

There are benefits to using templated values in Helm charts:

  1. Reusability: Templated values allow you to define values that can be used in multiple places within a chart, making it easier to reuse the same configuration across different deployments.
  2. Customization: Templated values allow you to customize the configuration of a chart for different environments or use cases. For example, you could define a templated value for the number of replicas of a deployment and use different values for development, staging, and production environments.

The range loop is useful because it allows you to iterate over a map of templated values and generate output for each key-value pair. In this case, the output is a list of environment variables that will be passed to a Kubernetes pod. We'll make use of this loop in part 2 to pass in variables like <span class="p-color-bg">FLASK_DEBUG</span> to set variables specific to our environment (dev or prod).
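To make the loop concrete: given a hypothetical <span class="p-color-bg">env</span> map in <span class="p-color-bg">values.yaml</span> (these entries are examples, not part of the chart we generated), the range block renders one name/value pair per entry:

```yaml
# In values.yaml (hypothetical):
env:
  FLASK_APP: project/__init__.py
  FLASK_DEBUG: "1"

# What the {{- range $key, $value := .Values.env }} block renders:
#   - name: FLASK_APP
#     value: "project/__init__.py"
#   - name: FLASK_DEBUG
#     value: "1"
```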

Now create a <span class="p-color-bg">values.yaml</span> file inside the <span class="p-color-bg">flask-app</span> folder and fill it like so:


databaseUser: roach
databasePassword: roach
databaseName: my-cockroachdb
namespace: default
secureCluster: false
image:
  repository: docker.io/worldofgeese/flask-app
  tag: latest

These defaults will automatically fill in the double-bracketed templates. To upgrade our deployed Flask chart with these new values, run <span class="p-color-bg">helm upgrade my-flask-app ./flask-app</span>. We can confirm the new values have been set with <span class="p-color-bg">kubectl get pod -l io.kompose.service=web -o jsonpath='{.items[].spec.containers[].env[1].value}'</span>, which echoes our new database connection string to the terminal. If we wanted to pass in an overrides file, we could create a new <span class="p-color-bg">values.yaml</span> file outside the chart folder and pass it in with <span class="p-color-bg">helm upgrade -f values.yaml my-flask-app ./flask-app</span>.

Now run my helper shell script, <span class="p-color-bg">up.kube.sh</span>, which, like my other helper scripts, just creates our database user and assigns it a password; this is needed to access our secure CockroachDB cluster. Then re-run the commands to create and seed our database:


kubectl exec -it deployment/web -- \
 python manage.py create_db

kubectl exec -it deployment/web -- \
 python manage.py seed_db

Congratulations, you've gone from your Compose stack to Kubernetes in a little less than an hour! With that, you're ready to adopt cloud native tooling like Garden to write and deploy faster than you ever did with Docker Compose. Join us for the next installment, where I'll show you how to take the Helm Chart you've just made, deploy it with Garden, and see how it all comes together to accelerate your inner development loop.

Interested in harmonizing your own stack? There's more content to read and watch from your Developer Advocate, Tao Hansen.

Join our community on Discord to ask questions, share your experiences, and get help from the Garden team and other Garden users. And there's more in The Inner Loop, a monthly newsletter on free and open source software. It's full of only good things I love and use in my day to day. You can also follow me on Mastodon.

[Heading image: Anonymous, section detail of “Geigyo Hinshu Zukan” (Fourteen Varieties of Whales) (1760). Credit: New Bedford Whaling Museum]
