
GKE & Cloud SQL: A Step-by-Step Guide with Garden and Terraform

By Eythor Magnusson
March 05, 2020


In this guide, we’ll see Garden in action:

We’re going to provision a Google Kubernetes Engine (GKE) cluster and a Cloud SQL database, and set up a development and staging environment in just a handful of commands, using Garden.

Garden is a developer tool that automates your workflows and makes developing and testing Kubernetes applications faster and easier than ever.

For those who don’t want to read through the post, here’s a video that shows the entire process.

From Scratch

In what follows, we’ll:

Create a GKE cluster from scratch, using Garden’s Terraform plugin.
Deploy the application to our development namespace in the cluster, alongside a development database that we can easily spin up and tear down.
Get a feel for what it’s like to develop against a remote cluster (it’s fast!).
See how we can easily run integration tests as we develop, now that our environment is fully remote.
Provision a persistent Cloud SQL database for our staging environment (again using Terraform) and deploy our app there.

Note that we took a few shortcuts to limit the scope of this blog post:

An environment is really just a namespace in the cluster, and everything is under the same GCP project. The recommended approach is to have an individual GCP project per environment.
We store the Terraform state locally and auto apply the stack when initialising the cluster. The recommended approach is to store the state remotely and turn auto-apply off.
We’re not using TLS certificates to secure our ingresses. See here for setting up TLS in a Garden project.
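For reference, switching to remote state (as recommended above) would mean adding a backend block to the Terraform configuration. A minimal sketch using a GCS bucket — the bucket name here is a placeholder, not something from the project:

```hcl
terraform {
  backend "gcs" {
    # Placeholder: create this bucket yourself before running terraform init
    bucket = "my-terraform-state-bucket"
    prefix = "cloud-sql"
  }
}
```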

You’ll find the project source code here.

Project Structure

The app itself is very simple. It contains a single backend service written in Node.js that fetches an entry from a database table.

We also have two database modules:

a Postgres Helm chart that we deploy in the development environment.
a persistent Cloud SQL database that we provision via Terraform for our staging environment.

It looks something like this:

Project structure for this project

The db-dev directory contains the Garden module for the Postgres Helm chart.

The cluster-dev and db-staging directories contain the entrypoints to the GKE and the Cloud SQL Terraform modules respectively. The modules themselves are in the shared directory.

To keep things simple, we deploy the staging environment to the dev cluster, which is why there's currently no cluster-staging directory. But we've set things up so that you can easily add more environments that still re-use the same shared modules: for example, cluster-staging, cluster-prod, and db-prod.
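Putting the pieces together, the layout looks roughly like this (the directory names come from the descriptions above; the exact file placement is an assumption):

```
cloud-sql/
├── garden.yml          # project-level config
├── backend/            # Node.js service
│   ├── app.js
│   └── garden.yml
├── db-dev/             # Postgres Helm chart module (dev only)
└── infra/
    ├── cluster-dev/    # entrypoint for the GKE Terraform module
    ├── db-staging/     # entrypoint for the Cloud SQL Terraform module
    └── shared/         # the Terraform modules themselves
```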

Before You Start

Step 1 — Install Garden

You need to have Garden installed to follow this guide. You can get the latest version from our GitHub release page. You’ll also find a more detailed installation guide here.

Note that you don’t need to have Kubernetes or Docker installed.

Step 2 — Install the Google Cloud SDK and authenticate

You will also need access to the Google Cloud Platform. If you’re a first-time user, you can sign up and get a $300 credit for free (as of March 2020).

Once you have a GCP account, you’ll need to install the gcloud command line tool (if you haven't already). Follow the instructions here to install it, and authenticate with GCP:

gcloud auth application-default login

Step 3 — Set up a GCP project

Choose a project ID for this project and run the following (skip individual steps as appropriate):

export PROJECT_ID=<id>

# (Skip if you already have a project)
gcloud projects create $PROJECT_ID

# If you haven't already, enable billing for the project (required for the APIs below).
# You need an account ID (of the form 0X0X0X-0X0X0X-0X0X0X) to use for billing.
gcloud alpha billing projects link $PROJECT_ID --billing-account=<account ID>

# Enable the required APIs (this can sometimes take a while).
gcloud services enable --project $PROJECT_ID

Deploying the Application

With gcloud installed and a GCP project set up, it's time to get down to business.

Step 1 — Clone the project and replace the default variables

First, clone the repo and change into the project directory:

git clone
cd cloud-sql

Next, replace the default variables in the project level garden.yml file. You will need to set your own GCP project ID in the gcp_project_id field under the variables key.

kind: Project
name: cloud-sql
variables:
  gcp_project_id: my-gcp-project # <---- Replace this with your GCP project ID
  gcp_region: europe-west1 # <----- And optionally this...
  gcp_zone: europe-west1-b # <----- ... and this

Step 2 — Initialize the cluster

Now we can initialize the cluster with:

garden plugins kubernetes cluster-init

This will trigger a few things:

First, the terraform provider will apply the stack that the initRoot field in the project level garden.yml points to. In this case, it's the ./infra/cluster-dev directory. We add the value for the initRoot field via a template string so that we can easily add more environments to this project:

kind: Project
name: cloud-sql
variables:
  terraformInitRoot: "./infra/cluster-dev"
  # ...
providers:
  - name: terraform
    initRoot: "${var.terraformInitRoot}"


The Terraform provider defines a kubeconfig.yaml output that the kubernetes provider consumes, via:

kind: Project
name: cloud-sql
variables:
  terraformInitRoot: "./infra/cluster-dev"
  # ...
providers:
  - name: terraform
    # ...
  - name: kubernetes
    kubeconfig: ${var.terraformInitRoot}/${providers.terraform.outputs.kubeconfig_path}


This is how Garden knows to deploy the stack to that particular cluster.

Next, the kubernetes provider will install the system services to the garden-system namespace.

In this project, we have buildMode set to cluster-docker. This means that Garden builds all your images in-cluster, so the heavy lifting happens there rather than on your local machine.

This whole process can take a few minutes.

Step 3 — Start developing!

In the project level garden.yml file, we've set the default environment to dev so that we can simply run:

garden dev --hot-reload backend

Since this is the first time we’re deploying the project, Garden will have to:

build the backend container image (in the cluster)
deploy the backend
deploy the Postgres Helm chart
run the tasks we’ve defined to initialize the database.

Subsequent runs will be much faster since in most cases the Helm chart will already be deployed, and Garden can leverage build caches for the backend service.

At this point, your entire stack should be deployed and Garden should be watching your code for changes.

Garden project being deployed

The garden dev command does it all: builds, deploys, tests, and runs tasks.

If you now make changes to the ./backend/app.js file, you'll notice that Garden hot reloads the backend module.

However, we’re still not able to call the backend service. For that, we need to expose it to the outside world.

Step 4 — Add the external cluster IP address to your DNS provider

To get the external IP address of your cluster, run:

kubectl get svc garden-nginx-nginx-ingress-controller -n garden-system

You should get an output like:

NAME                                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
garden-nginx-nginx-ingress-controller   LoadBalancer   <cluster IP>   <external IP>   80:32199/TCP,443:30117/TCP   7d5h

You’ll need to add the value under the EXTERNAL-IP column to your DNS provider. We recommend creating a wildcard subdomain so that each developer can have their own development hostname.

How you do this will depend on how you manage DNS in general and is outside the scope of this post. See here for information on configuring ingress controllers and setting up TLS with Garden.
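As a concrete, hypothetical example: if you manage the domain example.com and the external IP above is 203.0.113.10, a wildcard A record along these lines would give every developer their own hostname under dev.example.com:

```
*.dev.example.com.    300    IN    A    203.0.113.10
```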

Once you’ve configured your DNS, you need to edit the defaultHostname field in the project level garden.yml file:

kind: Project
name: cloud-sql
environments:
  - name: dev
    variables:
      defaultHostname: ${local.env.USER || local.username} # <--- Replace this with your own hostname
  - name: staging
    variables:
      defaultHostname: staging-cloud-sql.${var.hostname} # <--- Replace this with your own hostname

If the dev command is still running, Garden will re-deploy the stack with the correct hostname set.

The project is configured so that each user gets their own hostname in the development environment, derived from their username via the defaultHostname variable shown above.

For staging, we simply use the staging-cloud-sql subdomain.

Step 5 — Test the endpoints

Now that we’ve configured DNS, we can connect to our app from the outside.

The app is a simple Node.js webserver that has a /hello endpoint that returns entries from the database.
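For illustration, the handler might look something like the sketch below. This is not the actual code from the repo: the users table, the response format, and the helloHandler name are all assumptions, and a stub stands in for the real pg client so the snippet is self-contained.

```javascript
// Sketch of a /hello handler: fetch one entry from the (assumed) users
// table and greet that user. A real version would use a `pg` client
// configured from the DB_* environment variables shown later in this post.
async function helloHandler(query) {
  const rows = await query('SELECT name FROM users LIMIT 1');
  return `Hello ${rows[0].name}!`;
}

// Stub standing in for the database client in this sketch:
const fakeQuery = async () => [{ name: 'Dev' }];

helloHandler(fakeQuery).then((msg) => console.log(msg)); // prints "Hello Dev!"
```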

A simple way to test the endpoint is to use the Garden call command:

garden call backend/hello

The output should look like this:


Calling the hello endpoint

You can also go to the Garden dashboard by opening http://localhost:9777 in a browser when Garden is in watch mode. On the Overview page you can click the endpoint and view the result inline.


The Garden dashboard

In the module level garden.yml configuration for the backend service, we've defined an integration test that checks whether the backend is able to read from the database. You can enable it by uncommenting it:

# backend/garden.yml
kind: Module
name: backend
services:
  - name: backend
    # ...
    dependencies: [db, db-init-dev]
# tests: # <--- Uncomment this to enable tests
#   - name: integ
#     command: [npm, run, test]
#     env: *env-vars
#     dependencies: [backend]

Notice that the test depends on the backend which in turn depends on the db-init-dev task. This means that Garden will ensure that the development database is running and initialized, and that the backend is running, before running the test.

If the dev command is still running, Garden will run the test after you uncomment the lines. You can also run it manually with the garden test command.

This way, your integration tests run as you develop the application.

Of course this is a very simple example, but for more complex applications, this kind of feedback is incredibly valuable. Instead of waiting until CI to find out that your changes broke a downstream service, you’ll know right away.

Step 6 — Deploy to staging

Once our test is passing, we can confidently deploy to the staging environment by running:

garden deploy --env staging

This time, Garden will ignore the Postgres Helm chart since it’s only enabled in the dev environment. Instead, Garden will use the Terraform module from the db-staging directory.

It will apply the stack and create the Cloud SQL database instance. This can take a few minutes.

Once that’s done, it’ll deploy the backend service to the staging environment with the environment variables needed to connect to the Cloud SQL database.

Step 7 — Initialise the Cloud SQL database

Since the Cloud SQL database has a private IP and is on the same network as the cluster, we can connect to it directly from the backend service.

First, let’s initialize it with the run task command:

garden run task db-init-staging --env staging

This task will create a user table and populate it with a user named 'Staging'. (Note that for the development environment we ran the dev command which automatically runs tasks. Here we're doing it manually.)
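For context, db-init-staging is just a Garden task. A rough sketch of what such a task's configuration might look like — the command, SQL, and column names here are assumptions, not the repo's actual configuration:

```yaml
# Hypothetical sketch of a database initialization task
tasks:
  - name: db-init-staging
    command:
      - psql
      - -c
      - "CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT); INSERT INTO users (name) VALUES ('Staging');"
```

In practice the task would also need the same connection details (host, user, password) as the backend service, passed via environment variables.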

The environment variables needed to connect to the database are set in the garden.yml configuration for the backend service:

kind: Module
name: backend
services:
  - name: backend
    # ...
    env:
      DB_HOST: "${environment.name == 'dev' ? 'postgres' : <db module output>}"
      DB_PORT: "${environment.name == 'dev' ? '5432' : <db module output>}"
      DB_USER: "${environment.name == 'dev' ? 'postgres' : <db module output>}"
      DB_PASSWORD: "${environment.name == 'dev' ? 'postgres' : <db module output>}"


Notice how the Terraform db module returns the actual private IP address of the Cloud SQL database after creating it.

This means that the backend service can connect to the persistent Cloud SQL database from the staging environment without us having to change a line of code.

Let’s give it a try:

garden call backend/hello --env staging


Calling the hello endpoint in the staging environment

And that’s it, our application is now running in the staging environment and reading from the Cloud SQL database.


To clean up, simply delete your GCP project and the Terraform state:

gcloud projects delete $PROJECT_ID
rm -rf .terraform terraform.tfstate


To briefly recap, we’ve:

Created a GKE cluster with a development namespace for each user, and a single staging namespace.
Deployed the application to the development namespace and seen how fast Garden updates it on changes.
Added the external cluster IP to our DNS provider and tested the /hello endpoint.
Added an integration test that runs as we develop.
Provisioned a persistent Cloud SQL database for our staging environment and deployed our application there.
Initialized the Cloud SQL database and verified that our app works in both environments.

I really hope that you’ve found this guide useful. Everything I’ve shown you is open-source and available on our GitHub page.

For more information on using Garden, check out our docs. And if you have any questions whatsoever, head to our community channel on the Kubernetes Slack, we’d love to hear from you!
