Running Concourse CI on Kubernetes on Google Cloud Platform

Helm is the Kubernetes (k8s) package manager. Thanks to the Concourse Helm Chart, installing Concourse on k8s is a matter of running a few Helm commands and configuring port-forwarding for local access, or creating an Ingress for access through the Internet.

Prerequisites

We will need to install the Google Cloud SDK (gcloud), the Kubernetes command-line interface (kubectl) and the Helm package manager (helm).

On Mac OS X, I use Homebrew:

➜ brew install caskroom/cask/google-cloud-sdk
➜ brew install kubernetes-cli
➜ brew install kubernetes-helm
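
If some of these were already installed, it is worth confirming the versions before continuing. These commands are a quick sanity check; the exact output varies with the installed releases:

➜ gcloud version
➜ kubectl version --client
➜ helm version --client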

Next, we initialise our Google Cloud SDK environment. This redirects us to an OAuth 2 login page to connect to our Google Cloud Platform accounts.

➜ gcloud init

Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic (1/1 checks) passed.

You must log in to continue. Would you like to log in (Y/n)?

...

Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use <email> by default
* Commands will reference project <project> by default
* Compute Engine commands will use region <region> by default
* Compute Engine commands will use zone <zone> by default

...

Then, create a cluster and fetch its credentials and endpoint information so that kubectl can point to it.

➜ gcloud container clusters create my-cluster

Creating cluster my-cluster...done.
Created [https://container.googleapis.com/v1/projects/rc-kube/zones/asia-southeast1-b/clusters/my-cluster].
kubeconfig entry generated for my-cluster.
NAME        ZONE               MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
my-cluster  asia-southeast1-b  1.7.8-gke.0     35.198.253.39  n1-standard-1  1.7.8-gke.0   3          RUNNING

➜ gcloud container clusters get-credentials my-cluster

Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-cluster
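
At this point kubectl should be pointing at the new cluster, which we can sanity-check by listing its nodes:

➜ kubectl get nodes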

Next, initialise Helm, which installs Tiller, its server-side component, into the cluster.

➜ helm init

Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/nicluo/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Finally, install Concourse as a named release on the cluster.

➜ helm install --name my-concourse stable/concourse

NAME:   my-concourse
LAST DEPLOYED: Fri Nov 24 22:30:53 2017
NAMESPACE: default
STATUS: DEPLOYED

... 

The pods take time to become ready before we can start using Concourse. This takes about three minutes, and we can check on them periodically with the helm status command.

➜ helm status my-concourse

...

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
my-concourse-postgresql-1408554648-8fcfb  1/1    Running  0         3m
my-concourse-web-3913604889-rs5kq         1/1    Running  0         3m

...

If all goes well so far, we have a Concourse distribution running! All we need now is a way to access it with the Fly CLI.

Local Port Forwarding

Port forwarding allows us to forward one or more ports on the local workstation to a pod.

Looking at the status, the service my-concourse-web exposes ports 8080 and 2222. Without an external IP, one way to reach it is to port forward. Note that although we want to access the my-concourse-web service, the port-forward command targets the pod behind it, my-concourse-web-3913604889-rs5kq.

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
my-concourse-postgresql-1408554648-8fcfb  1/1    Running  0         3m
my-concourse-web-3913604889-rs5kq         1/1    Running  0         3m

...

==> v1/Service
NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)            AGE
my-concourse-postgresql  ClusterIP  10.47.252.138  <none>       5432/TCP           3m
my-concourse-web         ClusterIP  10.47.248.49   <none>       8080/TCP,2222/TCP  3m
my-concourse-worker      ClusterIP  None           <none>       <none>             3m

We can run the following commands to forward port 8080 on the local workstation to port 8080 of the pod backing the my-concourse-web service.

➜ export POD_NAME=$(kubectl get pods --namespace default -l "app=my-concourse-web" -o jsonpath="{.items[0].metadata.name}")
➜ kubectl port-forward --namespace default $POD_NAME 8080:8080

The Fly CLI should now work by logging in to 127.0.0.1:8080, which connects to our service. We can also visit http://127.0.0.1:8080 to access the web UI.

➜ fly --target k8s-concourse login --team-name main --concourse-url http://127.0.0.1:8080
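
To confirm the connection end to end, we can push a minimal pipeline. This hello-world pipeline is a hypothetical example of my own, not part of the Concourse chart; save it as hello.yml:

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run:
        path: echo
        args: ["Hello, world!"]

➜ fly --target k8s-concourse set-pipeline --pipeline hello --config hello.yml
➜ fly --target k8s-concourse unpause-pipeline --pipeline hello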

Configuring Ingress

In order to access Concourse through the Internet, we’ll need to assign it an external IP.

On GCE, reserve a global static external IP address named concourse-static-ip:

➜ gcloud compute addresses create concourse-static-ip --global
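
We can confirm the reservation and note down the address, since we will need it for a DNS record later:

➜ gcloud compute addresses describe concourse-static-ip --global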

We’re going to do a lot more configuration to the Concourse Helm chart this time. Create a concourse-helm.yaml file to customise the chart parameters.

There are three parts to our change. First, we change the Concourse web UI to listen on port 80. Second, we set the web service to the NodePort type, which creating an Ingress on GKE requires. Lastly, we assign the reserved IP address we generated earlier and the hostname we expect to serve.

concourse:
  atcPort: 80
web:
  service:
    type: NodePort
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.global-static-ip-name: concourse-static-ip
    hosts:
      - concourse.example.com

After saving the file, run helm upgrade to update our deployment.

➜ helm upgrade -f concourse-helm.yaml my-concourse stable/concourse
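
GCE load balancers take a few minutes to provision, and we also need a DNS A record pointing concourse.example.com at the reserved static IP. We can watch the Ingress until it reports an address:

➜ kubectl get ingress --namespace default --watch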

We can start using Concourse by visiting our site at http://concourse.example.com, or we can continue and set up TLS certificates with Let’s Encrypt.

Adding TLS to Ingress

kube-lego is a service that automatically requests certificates for Kubernetes Ingress resources from Let’s Encrypt. We can install it into our cluster using Helm.

Create a configuration file called kube-lego-helm.yaml to provide our email address for requesting certificates. We also set LEGO_URL to point to the production Let’s Encrypt server, since it defaults to the staging environment.

config:
  LEGO_URL: https://acme-v01.api.letsencrypt.org/directory
  LEGO_EMAIL: info@example.com

Install kube-lego into our cluster:

➜ helm install -f kube-lego-helm.yaml stable/kube-lego
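
Before moving on, we can check that the kube-lego pod came up. The label selector below assumes the chart's default app label:

➜ kubectl get pods -l app=kube-lego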

For kube-lego to properly detect Ingress resources that should have TLS certificates, we need to update our Concourse configuration. Extend concourse-helm.yaml so that it includes these extra rows:

concourse:
  atcPort: 80
web:
  service:
    type: NodePort
  ingress:
    enabled: true
    annotations:
      kubernetes.io/tls-acme: "true"
      kubernetes.io/ingress.class: gce
      kubernetes.io/ingress.global-static-ip-name: concourse-static-ip
    hosts:
      - concourse.example.com
    tls:
      - secretName: concourse-tls
        hosts:
          - concourse.example.com

Again, update our deployment, and we should be able to visit https://concourse.example.com shortly.

➜ helm upgrade -f concourse-helm.yaml my-concourse stable/concourse
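
Certificate issuance is not instantaneous: kube-lego has to answer the ACME challenge before the concourse-tls secret appears. We can check on its progress with:

➜ kubectl get secret concourse-tls --namespace default
➜ kubectl describe ingress --namespace default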

This time, we can finally configure our Fly CLI without port forwarding.

➜ fly --target k8s-concourse login --team-name main --concourse-url https://concourse.example.com

Hooray! :)