Running Kubernetes on Gitlab CI

Running a Kubernetes cluster in your Gitlab CI jobs

kubernetes gitlab ci-cd featured
2021-07-31
Thomas Kooi

When you work on infrastructure, develop Helm charts or simply want to run your tests in a more production-like environment, running Kubernetes on your Gitlab CI may be a good fit for you. Luckily, it only takes a little bit of configuration to set up!

Using kind

Your first option is kind. From their website:

kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

Requirements

  • CI Runner with Docker support enabled (essentially using --docker-privileged, see docs.gitlab.com)
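
If you manage the runner yourself with the docker executor, privileged mode is typically enabled in the runner's config.toml; a minimal sketch (the runner name is a placeholder):

[[runners]]
  name = "docker-privileged-runner"   # hypothetical runner name
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true                 # required for docker-in-docker
    volumes = ["/cache"]

With that in place, the following .gitlab-ci.yml spins up a kind cluster inside a test job:
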
stages:
- test

variables:
  # When using dind service, we need to instruct docker, to talk with
  # the daemon started inside of the service. The daemon is available
  # with a network connection instead of the default
  # /var/run/docker.sock socket. docker:19.03.1 does this automatically
  # by setting the DOCKER_HOST in
  # https://github.com/docker-library/docker/blob/d45051476babc297257df490d22cbd806f1b11e4/19.03.1/docker-entrypoint.sh#L23-L29
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services.
  #
  # Note that if you're using the Kubernetes executor, the variable
  # should be set to tcp://localhost:2375 (not 2376, since TLS is
  # disabled below) because of how the Kubernetes executor connects
  # services to the job container
  DOCKER_HOST: tcp://localhost:2375
  #
  # Setting DOCKER_TLS_CERTDIR to an empty string disables TLS between
  # the job container and the Docker daemon; the daemon then listens
  # on plain TCP port 2375 and no client certificates are required
  DOCKER_TLS_CERTDIR: ""
  DOCKER_DRIVER: overlay2

kind:test-cluster:
  image: docker:stable
  variables:
    KUBECTL: v1.21.1
    KIND: v0.11.1
  services:
    - docker:stable-dind
  stage: test
  before_script:
    - apk add -U wget
    - wget -O /usr/local/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/${KIND}/kind-linux-amd64
    - chmod +x /usr/local/bin/kind
    - wget -O /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL}/bin/linux/amd64/kubectl
    - chmod +x /usr/local/bin/kubectl
    # Set up the cluster
    - kind create cluster --config=./kind-config.yaml
    # Rewrite the kubeconfig to use the 'docker' service alias (this
    # matches the extra certSAN in kind-config.yaml); needed when
    # running on gitlab-ci runners with the Kubernetes executor
    - sed -i -E -e 's/localhost|0\.0\.0\.0/docker/g' "$HOME/.kube/config"
  script:
    # Display initial pods, etc.
    - kubectl get nodes -o wide
    - kubectl get pods --all-namespaces -o wide
    - kubectl get services --all-namespaces -o wide

You need to provide the following config file (kind-config.yaml):

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
networking:
  apiServerAddress: "0.0.0.0"

# add to the apiServer certSANs the name of the docker (dind) service in order to be able to reach the cluster through it
kubeadmConfigPatchesJSON6902:
  - group: kubeadm.k8s.io
    version: v1beta2
    kind: ClusterConfiguration
    patch: |
      - op: add
        path: /apiServer/certSANs/-
        value: docker
nodes:
  - role: control-plane

When you wish to expose an application outside of your cluster, you need to add some port mappings to a node:

nodes:
  - role: control-plane
    extraPortMappings:
    - containerPort: 30000
      hostPort: 30000

You can now use a NodePort service within the cluster, and it will be reachable from the CI build job on port 30000.
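
As an illustration, a minimal NodePort service using that port could look like this (the app name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app           # hypothetical application name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80           # port of the service inside the cluster
      targetPort: 8080   # port the application container listens on
      nodePort: 30000    # matches the extraPortMappings above

From the build job, the application is then reachable through the dind service (e.g. docker:30000, or localhost:30000 with the Kubernetes executor).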


Using k3d

Alternatively, you can run on k3s using k3d. This works almost identically:

stages:
- test

variables:
  # Same docker-in-docker configuration as in the kind example above;
  # see the comments there for details
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  DOCKER_DRIVER: overlay2

k3d:test-cluster:
  image: docker:stable
  variables:
    KUBECTL: v1.21.3
  stage: test
  services:
    - docker:stable-dind
  before_script:
    - apk add -U wget bash curl
    # install kubectl
    - wget -O /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL}/bin/linux/amd64/kubectl
    - chmod +x /usr/local/bin/kubectl
    # install k3d
    - wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
    - k3d help
    - k3d cluster create testgitlabci --agents 1 --wait -p "30000:30000@agent[0]"

    - kubectl cluster-info
  script:

    # Display initial pods, etc.
    - kubectl get nodes -o wide
    - kubectl get pods --all-namespaces -o wide
    - kubectl get services --all-namespaces -o wide
  after_script: ['k3d cluster delete testgitlabci']

Note the -p "30000:30000@agent[0]" flag, which maps port 30000 of the agent node onto the dind service so that it is reachable from your build job.

Using the cluster

You can use this cluster to test your Helm charts, or do various other tasks. What I ended up doing was creating a custom Docker image that comes with all the necessary tooling and languages pre-installed, which speeds up the build pipelines by skipping the install step.

For example, when using golang with k3d:

FROM docker.io/circleci/golang

ARG KUBECTL=v1.21.1

USER root
RUN apt-get update && apt-get install -y wget bash curl

RUN wget -O /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL}/bin/linux/amd64/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash \
    && curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash \
    && k3d help

This allows me to run integration tests written in Go against a Kubernetes cluster.
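
A test job based on such an image can then skip the install step entirely. A sketch, assuming the image was pushed as registry.example.com/ci/golang-k3d (a hypothetical name), the tests live under ./integration, and the global dind variables from the k3d example above are reused:

go:integration-test:
  image: registry.example.com/ci/golang-k3d:latest  # hypothetical pre-built image
  stage: test
  services:
    - docker:stable-dind
  before_script:
    - k3d cluster create testgitlabci --agents 1 --wait -p "30000:30000@agent[0]"
    - kubectl cluster-info
  script:
    # run the Go integration tests against the freshly created cluster
    - go test -v ./integration/...
  after_script: ['k3d cluster delete testgitlabci']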

Enabling network policies (kind)

By default, kind does not support network policies; its default CNI, kindnetd, does not implement them.

In order to enable network policy support, you must disable the default CNI installation provided by kind. This can be done by adding the following to the kind-config.yaml file:

networking:
  # the default CNI will not be installed
  disableDefaultCNI: true

After creating the cluster, run the following command:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
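
The calico pods take a moment to come up; you may want to wait until they are ready before running any tests, for instance:

# wait until all calico-node pods report Ready
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=120s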

Example:

kind:test-cluster:
  image: docker:stable
  variables:
    KUBECTL: v1.21.1
    KIND: v0.11.1
  services:
    - docker:stable-dind
  stage: test
  before_script:
    - apk add -U wget
    - wget -O /usr/local/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/${KIND}/kind-linux-amd64
    - chmod +x /usr/local/bin/kind
    - wget -O /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL}/bin/linux/amd64/kubectl
    - chmod +x /usr/local/bin/kubectl
    # Set up the cluster
    - kind create cluster --config=./kind-config.yaml
    # Rewrite the kubeconfig to use the 'docker' service alias (this
    # matches the extra certSAN in kind-config.yaml); needed when
    # running on gitlab-ci runners with the Kubernetes executor
    - sed -i -E -e 's/localhost|0\.0\.0\.0/docker/g' "$HOME/.kube/config"
  script:
    - kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

    # Display initial pods, etc.
    - kubectl get nodes -o wide
    - kubectl get pods --all-namespaces -o wide
    - kubectl get services --all-namespaces -o wide

With k3d, you can use the same step to install calico, except you must first disable flannel (the default networking backend). This is done by passing the following flag to k3d when creating the cluster: --k3s-server-arg '--flannel-backend=none'.
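
For example (a sketch, using the same k3d v4 syntax as above):

k3d cluster create testgitlabci --agents 1 --wait \
  --k3s-server-arg '--flannel-backend=none' \
  -p "30000:30000@agent[0]"
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml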

Once you have installed calico, you should have support for network policies.
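
To check that policies are actually enforced, you could apply a default-deny policy (a minimal sketch) and verify that traffic between pods is blocked:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules defined, so all ingress traffic is denied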

