Killercoda CKA by Chad M. Crowell


CKA Preparation

1. Single Node Cluster


2. Two Node Cluster


3. Quick SSH: Check and Restart kubelet

Practiced:

  • SSH into a worker node (node01)
  • Checking kubelet status
  • Restarting kubelet to restore node health
ssh node01
sudo systemctl status kubelet -n 20
sudo systemctl start kubelet
sudo systemctl status kubelet -n 20

- Lesson learned

  • check the kubelet logs with sudo systemctl status kubelet -n 20 or journalctl -u kubelet

4. Kubernetes PKI Essentials

Practiced:

  • Explore the /etc/kubernetes/pki directory and understand its contents.
  • See exactly how the API server depends on those files.
  • Temporarily “break” it, watch the cluster complain, and then bring it back to life.

- Lesson learned

  • /etc/kubernetes/pki (crown jewels)
controlplane:~$ ls -al /etc/kubernetes/pki
total 68
drwxr-xr-x 3 root root 4096 Sep 17 16:50 .
drwxrwxr-x 4 root root 4096 Aug 19 09:03 ..
-rw-r--r-- 1 root root 1123 Aug 19 09:03 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Aug 19 09:03 apiserver-etcd-client.key
-rw-r--r-- 1 root root 1176 Aug 19 09:03 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Aug 19 09:03 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1289 Aug 19 09:03 apiserver.crt
-rw------- 1 root root 1679 Aug 19 09:03 apiserver.key
-rw-r--r-- 1 root root 1107 Aug 19 09:03 ca.crt
-rw------- 1 root root 1675 Aug 19 09:03 ca.key
drwxr-xr-x 2 root root 4096 Aug 19 09:03 etcd
-rw-r--r-- 1 root root 1123 Aug 19 09:03 front-proxy-ca.crt
-rw------- 1 root root 1679 Aug 19 09:03 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Aug 19 09:03 front-proxy-client.crt
-rw------- 1 root root 1679 Aug 19 09:03 front-proxy-client.key
-rw------- 1 root root 1679 Aug 19 09:03 sa.key
-rw------- 1 root root  451 Aug 19 09:03 sa.pub
  • for checking that the API server is up and kubectl works
kubectl get --raw='/readyz?verbose' | head
kubectl get nodes
  • admin.conf references the CA and client cert/key, letting kubectl authenticate.
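Any of the certificates under /etc/kubernetes/pki can be inspected with openssl. A sketch — here a throwaway self-signed cert stands in, so the commands run anywhere; on a real control plane you would point -in at /etc/kubernetes/pki/apiserver.crt:

```shell
# Generate a throwaway self-signed cert just to have something to inspect
# (on a real node, skip this and use /etc/kubernetes/pki/apiserver.crt).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.crt -days 1 \
  -subj "/CN=kube-apiserver-demo" 2>/dev/null

# Print the subject and expiry date — the two fields you most often check
openssl x509 -in demo.crt -noout -subject -enddate
```

Checking -enddate this way is also how you spot expired control-plane certs during troubleshooting.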

5. Create a Gateway and HTTPRoute

Practiced:

  • Creating a Gateway and HTTPRoute using the Gateway API in Kubernetes!
  1. Check API CRDs
k get crds | grep gateway
  2. Create a Basic Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  3. Create a Deployment and Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  4. Create Path-Based HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: default
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: "/"
    backendRefs:
    - name: web
      port: 80
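For comparison, a sketch of a route that also matches on hostname and sends /api to a second backend. The api Service and the example.com hostname are hypothetical, added only to illustrate multiple rules:

```yaml
# Sketch: hostname- plus path-based routing (the "api" Service is hypothetical)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route-v2
  namespace: default
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - "example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api        # hypothetical second Service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```

Rules are evaluated most-specific first, so /api traffic never falls through to the catch-all / rule.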

- Lesson learned

  • Check the Gateway API CRDs
k get crds | grep gateway
  • Gateway definition: a central entry point (e.g. NGINX, Istio, Cilium) that manages traffic into a cluster

6. Prepare for Kubeadm Install

Preparing the control plane node for installing a kubeadm Kubernetes cluster

  1. Download containerd: wget https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz

  2. Allow IPv4 packets to be routed between interfaces

vim /etc/sysctl.conf #modify the config value
sysctl -p #reload the config
  3. Check the values
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
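Steps 2-3 come down to two kernel parameters. A sketch of the config lines (the lab edits /etc/sysctl.conf directly; a drop-in under /etc/sysctl.d/ works the same way). Note that net.bridge.bridge-nf-call-iptables requires the br_netfilter module to be loaded first (modprobe br_netfilter):

```ini
# kubeadm prerequisites: route IPv4 between interfaces and let
# iptables see bridged traffic (requires: modprobe br_netfilter)
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
```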

7. Install a Database Operator (redo — not sure what went wrong)

Practice installing a PostgreSQL database operator in Kubernetes!

  1. Install Operator
# Cloning to local
git clone --depth 1 "https://github.com/CrunchyData/postgres-operator-examples.git"
# Install
kubectl apply -k kustomize/install/namespace
kubectl apply --server-side -k kustomize/install/default

# Check the status
k -n postgres-operator get po -w
k get crds | grep postgres
  2. Create the Database Custom Resource
# Install hippo
k apply -k kustomize/postgres
# Track the progress
k -n postgres-operator get postgresclusters
k -n postgres-operator describe postgresclusters hippo
k -n postgres-operator get postgresclusters
# Start PostgreSQL service
export PG_CLUSTER_PRIMARY_POD=$(kubectl get pod -n postgres-operator -o name -l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master)
kubectl -n postgres-operator port-forward "${PG_CLUSTER_PRIMARY_POD}" 5432:5432
# Connect to PostgreSQL
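The connect step can be sketched as follows. The secret name hippo-pguser-hippo follows the Crunchy examples' naming convention and is an assumption here; the port-forward above must be running in another terminal:

```shell
# Pull the generated password from the user secret (assumed name:
# hippo-pguser-hippo, per the Crunchy naming convention) and connect.
PG_PASSWORD=$(kubectl -n postgres-operator get secret hippo-pguser-hippo \
  -o jsonpath='{.data.password}' | base64 -d)
PGPASSWORD="$PG_PASSWORD" psql -h localhost -p 5432 -U hippo hippo
```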

8. Priority Class

To practice with Pod Priority and Preemption in Kubernetes

  1. Get the priorityclass
k get pc
  2. Use kubectl to create a new PriorityClass named high-priority with a value of 1000000
k create pc high-priority --value=1000000
  3. Create a deployment named low-prio that has 3 pod replicas. Use the polinux/stress image with the command ["stress"] and the args ["--vm", "1", "--vm-bytes", "400M", "--timeout", "600s"]. The pod should request 500 mebibytes of memory and 100 millicores of CPU.
# Create a low-priority deployment YAML
kubectl create deployment low-prio \
  --image=polinux/stress \
  --replicas=3 \
  --dry-run=client -o yaml > low-prio.yaml
# generated by the command above, then edited to add command, args, and resources
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: low-prio
  name: low-prio
spec:
  replicas: 3
  selector:
    matchLabels:
      app: low-prio
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: low-prio
    spec:
      containers:
      - image: polinux/stress
        name: stress
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "400M", "--timeout", "600s"]
        resources:
          requests:
            memory: "500Mi"
            cpu: "100m"
  4. Create a pod that uses the high-priority priority class created in a previous step. Name the pod high-prio and use the polinux/stress image with the command ["stress"] and the args ["--cpu", "1", "--vm", "1", "--vm-bytes", "512M", "--timeout", "300s"]. The pod should request 200 mebibytes of memory and 200 millicores of CPU.
apiVersion: v1
kind: Pod
metadata:
  name: high-prio
spec:
  priorityClassName: high-priority
  containers:
  - name: stress
    image: polinux/stress
    command: ["stress"]
    args: ["--cpu", "1", "--vm", "1", "--vm-bytes", "512M", "--timeout", "300s"]
    resources:
      requests:
        memory: "200Mi"
        cpu: "200m"
  5. Test Preemption
# request additional memory
sed -i 's/200Mi/600Mi/' high-prio.yaml
# restart the pod
kubectl replace -f high-prio.yaml --force
k get po -w

- Lesson learned

  1. dry-run: validates the request; nothing is created or changed
  2. --dry-run=client: validated locally by kubectl
  3. --dry-run=server: validated by the API server (catches admission and field errors)

9. Debug a Go App in Kubernetes

Debug Kubernetes apps

  1. Debugging
journalctl
k get po
k logs goapp-deployment-dxxxx-asfdsf

Logs reveal the error: PORT environment variable not set

  2. Set the PORT environment variable to 8080, as expected in main.go
k edit deployments.apps goapp-deployment

10. List API Resources

  1. List all cluster resources
kubectl api-resources > resources.csv

- Lesson learned

  1. List all cluster resources
kubectl api-resources > resources.csv
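One caveat: kubectl api-resources prints space-aligned columns, not real CSV. A rough awk conversion is sketched below (it assumes no cell contains spaces; rows with an empty SHORTNAMES column will shift fields, so treat this as a sketch). The printf sample stands in for `kubectl api-resources --no-headers`:

```shell
# Sample rows standing in for `kubectl api-resources --no-headers`
printf 'pods   po    v1    true   Pod\nservices   svc   v1   true   Service\n' |
  awk '{ $1=$1; gsub(/ /, ","); print }' > resources.csv
# $1=$1 collapses the aligned columns to single spaces, gsub swaps them for commas
cat resources.csv
```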

11. Linux System Services

  1. List all k8s related service on linux system
sudo systemctl list-unit-files --type service --all | grep kube > services.csv

12. Kubelet Status

  1. Get Kubelet Status
# get the status of kubelet using systemctl and save to '/tmp/kubelet-status.txt'
sudo systemctl status kubelet > /tmp/kubelet-status.txt

13. Create a Pod Declaratively

  1. Create a Pod Yaml with dry-run
# use kubectl to create a dry run of a pod, output to YAML, and save it to the file 'chap1-pod.yaml' 
kubectl run pod --image nginx --dry-run=client -o yaml > chap1-pod.yaml

# create the pod from YAML
kubectl create -f chap1-pod.yaml

14. List All Kubernetes Services

List all services created in k8s

k get svc -A

15. List the Pods and Their IP Addresses

Show the pod IP addresses

kubectl -n kube-system get po -o wide

16. View the Kubelet Client Certificate

# save the config that kubelet uses to authenticate to the Kubernetes API
cat /etc/kubernetes/kubelet.conf > kubelet-config.txt

# view the certificate using openssl. Get the certificate file location from the 'kubelet.conf' file above. 
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout

17. Create a Role and Role Binding

Create a new role named “sa-creator” that will allow creating service accounts in the default namespace.

  1. Create a new role named “sa-creator” that will allow creating service accounts in the default namespace.
# create a role named 'sa-creator' and add the verb 'create' and resource 'sa' (short for serviceaccounts) 
kubectl create role sa-creator --verb=create --resource=sa
# view the newly created role
kubectl get role
  2. Create a new role binding named sa-creator-binding that will bind to the sa-creator role and apply to a user named Sandra in the default namespace.
k create rolebinding sa-creator-binding --role=sa-creator --user=Sandra
k get role,rolebinding
  3. Create a service account named dev, then create a role binding that binds the view cluster role to the dev service account in the default namespace.
k create sa dev
kubectl create rolebinding dev-view-binding --clusterrole=view --serviceaccount=default:dev --namespace=default
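A quick way to confirm the bindings work is kubectl auth can-i with impersonation — it makes no changes to the cluster:

```shell
# Verify RBAC with impersonation (read-only checks)
kubectl auth can-i create serviceaccounts --as Sandra                 # expect: yes
kubectl auth can-i list pods --as system:serviceaccount:default:dev   # expect: yes
kubectl auth can-i delete pods --as Sandra                            # expect: no
```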

18. Create a Cluster Role and Role Binding

  1. Create a new cluster role named “acme-corp-clusterrole” that can create deployments, replicasets and daemonsets.
k create clusterrole acme-corp-clusterrole --verb=create --resource=deployments,replicasets,daemonsets
  2. Bind the cluster role 'acme-corp-clusterrole' to the service account 'secure-sa', making sure the 'secure-sa' service account can only create the assigned resources within the default namespace and nowhere else.
# a clusterrolebinding would grant access cluster-wide — not what the task asks
# a rolebinding that references the ClusterRole limits it to the default namespace:
k -n default create rolebinding acme-corp-default --clusterrole=acme-corp-clusterrole --serviceaccount=default:secure-sa

- Lesson learned

  • A clusterrolebinding applies to the whole cluster; to limit access to a single namespace, use a rolebinding

19. Upgrading Kubernetes

Upgrade the Kubernetes control plane components using kubeadm

  1. Check the current version and the target version
kubeadm upgrade plan
  1. Upgrade kubeadm
# upgrade from v1.33.2 to v1.33.5
apt update
apt search kubeadm
apt install kubeadm=1.33.5-1.1
kubeadm upgrade apply v1.33.5

- Lesson learned

  • display JSON: kubeadm version -o json | jq

20. Create Service Account For a Pod

Creating a pod that uses a service account, but not exposing the service account token to the pod.

  1. Create a new service account named 'secure-sa' in the default namespace that will not automatically mount the service account token.
k -n default create sa secure-sa --dry-run=client -o yaml > sa.yaml

Edit the YAML file to disable token auto-mount:

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: secure-sa
  namespace: default
automountServiceAccountToken: false
k apply -f sa.yaml
  2. Create Pod using Service Account
#pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  serviceAccountName: secure-sa
  containers:
  - image: nginx
    name: secure-pod
k apply -f pod.yaml
# get a shell to the pod and output the token (if mounted)
controlplane:~$ kubectl exec secure-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
command terminated with exit code 1

- Lesson learned

  • Disabling auto-mount is thus useful for security-sensitive or minimal pods that do not need Kubernetes API access, following least privilege principles

21. Taints and Tolerations

Apply a toleration to a pod, in order to match a taint on the node

  1. List the taints for node01
controlplane:~$ k describe nodes node01 | grep Taints
Taints:             dedicated=special-user:NoSchedule
  1. Apply the tolerations to the pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "special-user"
    effect: "NoSchedule"
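For reference, a taint like the one on node01 is applied and removed with kubectl taint; the trailing dash removes it:

```shell
# Apply the taint seen on node01 (likely how the lab set it up):
kubectl taint nodes node01 dedicated=special-user:NoSchedule
# Remove it again — note the trailing "-":
kubectl taint nodes node01 dedicated=special-user:NoSchedule-
```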

22. Backup etcd

Backup the Kubernetes cluster configuration by taking a snapshot of the etcd datastore.

  1. Take a snapshot of the etcd datastore
export ETCDCTL_API=3
etcdctl snapshot save snapshot --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key

#Check the status of your snapshot and write the output to a table using this command
etcdctl snapshot status snapshot --write-out=table
  2. Modify the Kubernetes cluster state (a disaster happens)
k delete ds kube-proxy -n kube-system
k get ds -A
  3. Restore the cluster state
etcdctl snapshot restore snapshot --data-dir /var/lib/etcd-restore
k get ds -A

The Kubernetes API will be unavailable for a few minutes (~3 min) while etcd comes back up.
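Restoring into a new --data-dir only writes the files; the etcd static pod must also be pointed at that directory. A sketch of the edit to /etc/kubernetes/manifests/etcd.yaml (the volume name etcd-data matches kubeadm defaults):

```yaml
# /etc/kubernetes/manifests/etcd.yaml (excerpt)
# kubelet restarts the static pod automatically when this file changes
volumes:
- hostPath:
    path: /var/lib/etcd-restore   # was /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data
```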


23. Create New User (re-do once)

Add a new user and assign them permissions via RBAC in Kubernetes.

  1. In order to create a Role and RoleBinding in a namespace, we have to create the namespace “web” using the command
k create ns web
  2. Now, let's create the role that will allow our new user to "get" and "list" pods in the web namespace
k -n web create role pod-reader --verb=get,list --resource=pods
  3. Now that we've created a role, let's assign this role to our new user named 'carlton'
k -n web create rolebinding pod-reader-binding --role=pod-reader --user=carlton
  4. Create a pod for user permission testing
k -n web run pod1 --image=nginx
  5. Create a certificate signing request
openssl genrsa -out carlton.key 2048
openssl req -new -key carlton.key -subj "/CN=carlton" -out carlton.csr
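Before submitting, it's worth verifying the CSR's subject and self-signature locally:

```shell
# Generate the key and CSR as above, then inspect before submitting
openssl genrsa -out carlton.key 2048 2>/dev/null
openssl req -new -key carlton.key -subj "/CN=carlton" -out carlton.csr
# Print the subject and check the CSR's self-signature
openssl req -in carlton.csr -noout -subject -verify
```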
  6. Submit the CSR to the Kubernetes API
export REQUEST=$(cat carlton.csr | base64 -w 0)

controlplane:~$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: carlton
spec:
  groups:
  - system:authenticated
  request: $REQUEST
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
certificatesigningrequest.certificates.k8s.io/carlton created
controlplane:~$ k get csr
NAME      AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
carlton   3s    kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Pending
controlplane:~$ k certificate approve carlton
certificatesigningrequest.certificates.k8s.io/carlton approved
controlplane:~$ k get csr
NAME      AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
carlton   14s   kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Approved,Issued

#Extract the client certificate
k get csr carlton -o jsonpath='{.status.certificate}' | base64 -d > carlton.crt


24. Apply node affinity to a pod

Apply node affinity to a pod in Kubernetes!

  1. In the namespace named 012963bd , create a pod named az1-pod which uses the nginx:1.24.0 image. This pod should use node affinity, and prefer during scheduling to be placed on the node with the label availability-zone=zone1 with a weight of 80. Also, have that same pod prefer to be scheduled to a node with the label availability-zone=zone2 with a weight of 20.
k -n 012963bd run az1-pod --image=nginx:1.24.0 --dry-run=client -o yaml > abc.yaml
#abc.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: az1-pod
  name: az1-pod
  namespace: 012963bd
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        preference:
          matchExpressions:
          - key: availability-zone
            operator: In
            values:
            - zone1
      - weight: 20
        preference:
          matchExpressions:
          - key: availability-zone
            operator: In
            values:
            - zone2
  containers:
  - image: nginx:1.24.0
    name: az1-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k apply -f abc.yaml

25. Scheduling a pod to a specific node

Schedule a pod to a node by the node’s name in a Kubernetes cluster!

  1. In the namespace named 012963bd , create a pod named ctrl-pod which uses the nginx image. This pod should be scheduled to the control plane node.
k -n 012963bd run ctrl-pod --image=nginx --dry-run=client -o yaml > pod.yaml

Use nodeName to pin the pod to the control plane node:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ctrl-pod
  name: ctrl-pod
  namespace: 012963bd
spec:
  containers:
  - image: nginx
    name: ctrl-pod
    resources: {}
  nodeName: controlplane
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

- Lesson learned

  • Ways to assign a Pod to a Node (label selectors are preferred; the others are for more specific cases):
    • nodeSelector field matching against node labels
    • Affinity and anti-affinity
    • nodeName field
    • Pod topology spread constraints

27. Upgrade Kubelet (something errored — redo)


28. Logging in Kubernetes

Create a pod in two different ways, followed by viewing their logs in Kubernetes!

  1. Create a pod that the container within will log to STDOUT
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
  2. Create a pod with an additional sidecar container
apiVersion: v1
kind: Pod
metadata:
  name: pod-logging-sidecar
spec:
  containers:
  - image: busybox
    name: main
    args: [ 'sh', '-c', 'while true; do echo "$(date)" >> /var/log/main-container.log; sleep 5; done' ]
    volumeMounts:
      - name: varlog
        mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [ /bin/sh, -c, 'tail -f /var/log/main-container.log' ]
    volumeMounts:
      - name: varlog
        mountPath: /var/log
  volumes:
    - name: varlog
      emptyDir: {}
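Since Kubernetes 1.29, the same pattern can also be written as a native sidecar: an init container with restartPolicy: Always, which starts before the main container and keeps running for the pod's lifetime. A sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-logging-native-sidecar
spec:
  initContainers:
  - name: sidecar
    image: busybox
    restartPolicy: Always        # this field makes it a native sidecar (K8s >= 1.29)
    args: [ /bin/sh, -c, 'tail -F /var/log/main-container.log' ]
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  containers:
  - name: main
    image: busybox
    args: [ /bin/sh, -c, 'while true; do date >> /var/log/main-container.log; sleep 5; done' ]
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```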
  3. Create a deployment of mysql
controlplane:~$ k logs mysql-54644cb8b9-ncb6d
2025-09-24 02:52:31+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.4.6-1.el9 started.
2025-09-24 02:52:31+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2025-09-24 02:52:31+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.4.6-1.el9 started.
2025-09-24 02:52:31+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
    You need to specify one of the following as an environment variable:
    - MYSQL_ROOT_PASSWORD
    - MYSQL_ALLOW_EMPTY_PASSWORD
    - MYSQL_RANDOM_ROOT_PASSWORD

Find the error in the logs, then add the missing environment variable to the deployment:

k edit deploy mysql
...
containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        image: mysql:8
        imagePullPolicy: IfNotPresent
        name: mysql
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
...

29. ConfigMap in Kubernetes

Create a configMap for a pod in Kubernetes

  1. Create a configmap named redis-config . Within the configMap, use the key maxmemory with value 2mb and key maxmemory-policy with value allkeys-lru .
k create cm redis-config --from-literal=maxmemory=2mb --from-literal=maxmemory-policy=allkeys-lru
  2. Create a pod named redis-pod that uses the image redis:7 and exposes port 6379. Use the command redis-server /redis-master/redis.conf, and store redis data in an emptyDir volume.

Mount the redis-config configmap as a volume to the pod for use within the container.

k run redis-pod --image=redis:7 --expose=true --port=6379 --dry-run=client -o yaml > abc.yaml
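A sketch of one plausible completion of the exercise. Note a caveat: --from-literal keys are mounted as individual files (maxmemory, maxmemory-policy) under the mount path, so for redis-server to read a single /redis-master/redis.conf you would normally store the whole config file under one key; the exact wiring depends on the lab's grader:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-pod
spec:
  containers:
  - name: redis
    image: redis:7
    command: ["redis-server", "/redis-master/redis.conf"]
    ports:
    - containerPort: 6379
    volumeMounts:
    - name: data                 # emptyDir for redis data
      mountPath: /redis-master-data
    - name: config               # ConfigMap mounted for the container
      mountPath: /redis-master
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-config
```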