
Notes: getting started with Cluster-API (CAPI)! #28

July 26, 2021

Last Friday, I attended a Kubernetes #in-dev community session around the Cluster API (CAPI) project.

This was my very first time learning what the CAPI project is about (although I’ve come across the name quite a lot of times before). And well, I found it to be a very interesting one for me, the reason being that CAPI is a project, a tooling, that helps with managing cluster infrastructure as declarative configuration. And this is an area that I’m concerned with on a regular basis.

This blog is just me taking notes as I try to understand the concepts of the CAPI project & play around with it while setting it up locally.

Before I start with the commands, here is the definition of CAPI (from the Cluster API Book):

Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.

Started by the Kubernetes Special Interest Group (SIG) Cluster Lifecycle, the Cluster API project uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators. The supporting infrastructure, like virtual machines, networks, load balancers, and VPCs, as well as the Kubernetes cluster configuration are all defined in the same way that application developers operate deploying and managing their workloads. This enables consistent and repeatable cluster deployments across a wide variety of infrastructure environments.

With that, let’s get started now!

It is required to have the following installed & set up on the machine (as pre-requisites):

  • Docker (the Docker infrastructure provider used later needs access to the Docker daemon)
  • kind (to create the local Kubernetes cluster)
  • kubectl (to interact with the clusters)

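  • A quick sanity check that these are in place (just a rough sketch; the exact versions don’t matter much as long as the tools are reasonably recent):

    # confirm the pre-requisites are installed & the Docker daemon is up
    docker version
    kind version
    kubectl version --client
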
STEP-1: Once the above pre-requisites are set up, the first step is creating a local Kubernetes cluster. (I’ll do this with kind)

  • Later in the post, I’ll be using the Docker infrastructure provider (while playing around with CAPI to generate a workload cluster, something we will discuss below)

    Think of an infrastructure provider as any of the cloud providers like AWS, GCP, Azure, etc. that provide the compute and resources required in order to spin up a cluster. Because I am going to do it locally, I’ll be using Docker containers as my infrastructure provider (i.e. use Docker containers as compute resources) to create a cluster using CAPI.

    I need to create a custom kind config file like the following (to allow the Docker provider to access Docker on the host):

    cat > kind-cluster-with-extramounts.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      extraMounts:
        - hostPath: /var/run/docker.sock
          containerPath: /var/run/docker.sock
    EOF
    
  • And then I create a kind cluster with the above config file.

    kind create cluster --config kind-cluster-with-extramounts.yaml
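
  • Optionally, a quick sanity check (assuming kind’s default cluster name, kind, since the config above doesn’t set one):

    # list the kind clusters & confirm the new cluster's API server is reachable
    kind get clusters
    kubectl cluster-info --context kind-kind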


STEP-2: Installing the clusterctl CLI tool. It handles the lifecycle of a Cluster API management cluster.

  • Since I’m on a Linux OS, the following are the commands I’m using:

    $ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/clusterctl-linux-amd64 -o clusterctl
    $ chmod +x ./clusterctl
    $ sudo mv ./clusterctl /usr/local/bin/clusterctl
    $ clusterctl version
    

STEP-3: At this step, I’ll use the clusterctl tool to transform the kind cluster (the one I spun up above) into a management cluster by using clusterctl init. The command accepts as input a list of providers to install.

A Management cluster is a Kubernetes cluster that manages the lifecycle of Workload Clusters. A Management Cluster is also where one or more Infrastructure Providers run, and where resources such as Machines are stored.

  • Since I’m using the Docker infrastructure provider, I will add the --infrastructure docker flag to my command.

    clusterctl init --infrastructure docker

    (screenshot: output of clusterctl init)

  • The above command actually created the following CRDs (custom resource definitions) & CRs (custom resources) in the kind cluster (specific to the infrastructure provider I mentioned above):

    (screenshot: CRDs & CRs created in the kind cluster)
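
  • To poke at what clusterctl init actually installed, plain kubectl is enough (a rough check; the grep pattern just matches the *.x-k8s.io API groups that the CAPI & provider CRDs use):

    # list the CAPI & Docker-provider CRDs
    kubectl get crds | grep x-k8s.io

    # the provider controllers run as pods in their own namespaces (capi-system, capd-system, etc.)
    kubectl get pods -A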


STEP-4: Now, next step is to create a Workload Cluster.

A workload cluster is a cluster created by a ClusterAPI controller, which is not a bootstrap cluster, and is meant to be used by end-users, as opposed to by CAPI tooling.

  • clusterctl generate cluster with input flags is used to generate a YAML template for creating a workload cluster. In my case, since I’m using Docker as the infrastructure provider, I’ll use the following command:

    clusterctl generate cluster test-workload-cluster --flavor development \
    --kubernetes-version v1.21.2 \
    --control-plane-machine-count=3 \
    --worker-machine-count=3 \
    > test-workload-cluster.yaml
    

    This command will populate the test-workload-cluster.yaml file with a predefined list of Cluster API objects (Cluster, Machines, Machine Deployments, etc.) required to create a workload cluster with the (⬆️) listed control plane & worker machine configs. (A quick way to peek into this file is shown at the end of this step.)

  • Apply the test-workload-cluster.yaml file to create the workload cluster.

    kubectl apply -f test-workload-cluster.yaml

    (screenshot: output of kubectl apply)
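
  • And the quick peek I mentioned above — a rough way to see which object kinds ended up in the generated template (plain shell, nothing CAPI-specific; with the Docker development flavor I’d expect kinds like Cluster, DockerCluster, KubeadmControlPlane & MachineDeployment):

    # count each top-level "kind:" entry in the generated template
    grep -E '^kind:' test-workload-cluster.yaml | sort | uniq -c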


STEP-5: Now is the time to verify the workload cluster deployment & actually access it.

  • Check the status of the cluster

    kubectl get cluster

  • Use clusterctl to get a view of the cluster and its resources

    clusterctl describe cluster test-workload-cluster

  • Check the status of the control plane

    kubectl get kubeadmcontrolplane

    Note: The control plane won’t be Ready until I install a CNI (container network interface) solution in the following step.
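
  • Another handy check while waiting: the Machine objects are just CRs in the management cluster, so plain kubectl works (-w keeps watching as the control plane & worker machines get provisioned):

    # watch the machines being provisioned
    kubectl get machines -w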


STEP-6: Deploy a CNI solution.

  • First, retrieve the workload cluster kubeconfig:

    clusterctl get kubeconfig test-workload-cluster > test-workload-cluster.kubeconfig

  • Here, I’ll use Calico as an example.

    kubectl --kubeconfig=./test-workload-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml
    
  • After a short while, the nodes should be up & running

    kubectl --kubeconfig=./test-workload-cluster.kubeconfig get nodes
    

    Note that in the above commands, we are using the workload cluster kubeconfig, so we are accessing the workload cluster, not the kind cluster (the management cluster) we spun up at the beginning.
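
  • One more check against the workload cluster (Calico’s manifest deploys its pods into kube-system by default, so this just confirms the CNI pods came up):

    # the calico pods should eventually show up as Running in kube-system
    kubectl --kubeconfig=./test-workload-cluster.kubeconfig get pods -n kube-system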


STEP-7: Finally, the last step is to delete the workload cluster & the management cluster

  • Delete the workload cluster

    kubectl delete cluster test-workload-cluster

  • Delete the management cluster

    kind delete cluster
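
  • And to clean up the files this walkthrough generated (kind get clusters should come back empty at this point):

    # confirm no kind clusters are left & remove the generated config, template & kubeconfig files
    kind get clusters
    rm -f kind-cluster-with-extramounts.yaml test-workload-cluster.yaml test-workload-cluster.kubeconfig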


References