Killercoda CKA by Alexis Carbillet


CKA Preparation

1. CKA Practice: ConfigMaps and Secrets

  1. Create a ConfigMap and Secret
  • Your application needs a database configuration and a password stored securely.
    • Create a ConfigMap named app-config with:
      • DB_HOST=localhost
      • DB_PORT=3306
    • Create a Secret named db-secret with:
      • DB_PASSWORD=supersecret (base64 encoded automatically by kubectl)
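Both objects can also be created imperatively; this is what makes kubectl handle the base64 encoding automatically (a quick sketch):
kubectl create configmap app-config --from-literal=DB_HOST=localhost --from-literal=DB_PORT=3306
kubectl create secret generic db-secret --from-literal=DB_PASSWORD=supersecret
The declarative equivalents: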
#configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "localhost"
  DB_PORT: "3306"
#secret
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
data:
  DB_PASSWORD: "c3VwZXJzZWNyZXQ=" #base64 encoded value 'supersecret' (no trailing newline)
  2. Use ConfigMap and Secret in a Pod
  • Create a Pod named app-pod using the nginx image.
  • Inject the ConfigMap values as environment variables.
  • Inject the Secret value as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec: 
  containers: 
  - name: app-pod
    image: nginx
    env:
      - name: DB_HOST
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: DB_HOST
      - name: DB_PORT
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: DB_PORT
      - name: DB_SECRET
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: DB_PASSWORD

- Lesson learned

controlplane:/var/log$ k get pods    
NAME      READY   STATUS                       RESTARTS   AGE
app-pod   0/1     CreateContainerConfigError   0          4m43s

To debug this ‘CreateContainerConfigError’, use k describe pod app-pod:

controlplane:/var/log$ k describe pod app-pod
...
...
  Warning  Failed     11s (x26 over 5m35s)   kubelet            Error: couldn't find key DB_HOSTNAME in ConfigMap default/app-config
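
A quick way to confirm which keys actually exist in the ConfigMap and Secret (this is how the DB_HOSTNAME vs DB_HOST mismatch shows up):
kubectl get configmap app-config -o yaml
kubectl get secret db-secret -o jsonpath='{.data}'
# decode the stored password
kubectl get secret db-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 -d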

2. CKA Practice: Horizontal Pod Autoscaler (HPA)

  1. Deploy a Sample App
#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpa-demo
  template:
    metadata:
      labels:
        app: hpa-demo
    spec:
      containers:
        - name: hpa-demo
          image: k8s.gcr.io/hpa-example
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
  2. Expose the App as a Service
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hpa-demo
spec:
  type: ClusterIP
  selector:
    app: hpa-demo
  ports:
    - port: 80
      targetPort: 80
  3. Create the HPA
kubectl autoscale deployment hpa-demo --cpu-percent=50 --min=1 --max=5
kubectl get hpa
  4. Generate Load
kubectl run -i --tty load-generator5 --image=busybox -- /bin/sh
# run the loop below inside the busybox shell:
while true; do wget -q -O- http://hpa-demo; done
  5. Observe Scaling
kubectl get hpa -w
  6. Clean Up
kubectl delete deployment hpa-demo
kubectl delete service hpa-demo
kubectl delete hpa hpa-demo

- Lesson learned

  • The HPA requires the metrics server.
  • The lab environment is missing the metrics server, so k top node does not work either. Through steps 1-5 the HPA never scaled; a single hpa-demo pod sat there for 10 minutes. Checking the pod's CPU usage with k top pod finally showed the error message error: Metrics API not available, which is how the missing metrics server was discovered (see the install sketch below).
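
If the environment allows it, the metrics server can usually be installed from the upstream manifest (a sketch; on lab clusters the metrics-server deployment often also needs the --kubelet-insecure-tls argument):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# once the metrics-server pod is Ready, these should start returning data
kubectl top node
kubectl top pod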

3. CKA Practice: Network Policies (not yet done)

  1. Create a Test Environment
k create ns frontend
k create ns backend
# Namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
---
apiVersion: v1
kind: Namespace
metadata:
  name: backend
---
# NGINX Pod in frontend
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: frontend
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
---
# Service exposing NGINX Pod in frontend
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: frontend
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
---
# busybox Pod in backend
apiVersion: v1
kind: Pod
metadata:
  name: tester
  namespace: backend
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "36000"]
---
# busybox Pod in frontend
apiVersion: v1
kind: Pod
metadata:
  name: neighbour
  namespace: frontend
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "36000"]

Validation

kubectl exec -n backend tester -- wget -qO- http://web.frontend.svc.cluster.local
kubectl exec -n frontend neighbour -- wget -qO- http://web.frontend.svc.cluster.local
  2. Restrict Traffic with a NetworkPolicy
# First attempt: created in the default namespace, so it does not affect the web pod,
# which lives in the frontend namespace (podSelector is also a required field and was missing)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
---
# Corrected policy in the frontend namespace, targeting the web pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}       # Allows ingress from any pod in the same namespace

Validation

# From backend namespace (should fail)
kubectl exec -n backend tester -- wget -qO- --timeout=5 http://web.frontend.svc.cluster.local
kubectl exec -n backend tester -- wget -qO- --timeout=5 http://web

# From frontend namespace (should succeed)
kubectl exec -n frontend neighbour -- wget -qO- --timeout=5 http://web.frontend.svc.cluster.local
kubectl exec -n frontend neighbour -- wget -qO- --timeout=5 http://web
kubectl run temp --rm -i --image=busybox -n frontend -- wget -qO- http://web

- Lesson learned

  • The Service's metadata.name matters, as it is part of the FQDN (<service>.<namespace>.svc.cluster.local); the DNS check below makes this visible.
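
A quick DNS check (assuming the tester pod from the setup above is still running):
kubectl exec -n backend tester -- nslookup web.frontend.svc.cluster.local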

4. CKA Practice: Node Maintenance

  1. Drain a Node
# Check all nodes
kubectl get nodes

# Drain node01 while ignoring DaemonSets
kubectl drain node01 --ignore-daemonsets

Result

kubectl get nodes

NAME           STATUS                     ROLES           AGE   VERSION
controlplane   Ready                      control-plane   25d   v1.33.2
node01         Ready,SchedulingDisabled   <none>          25d   v1.33.2
  2. Uncordon a Node
kubectl uncordon node01

NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   25d   v1.33.2
node01         Ready    <none>          25d   v1.33.2

- Lesson learned

  • Drain a node for the following purposes:
    • Planned maintenance or upgrades: Kernel, OS, or Kubernetes version upgrades require the node to be rebooted or restarted.
    • Decommissioning/removing node: Safely relocate workloads before taking a node out of service.
    • Troubleshooting: Isolate nodes to debug hardware or software issues without impacting active workloads.
    • Load balancing: Move workloads away from overloaded nodes for rebalancing.
    • Scaling down: Evict pods as part of a cluster scale-down process.
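
For comparison, a short cordon/drain sketch with flags that are often needed in practice (assuming some pods use emptyDir volumes or run without a controller):
# cordon only marks the node unschedulable; running pods stay where they are
kubectl cordon node01
# drain additionally evicts the pods
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data --force
# re-enable scheduling once maintenance is done
kubectl uncordon node01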

5. CKA Practice: Persistent Volumes and PVC

  1. Create PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
  2. Create PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  3. Create Pod with PVC
#pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-test-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: "/data"
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
  4. Write Data to PV
kubectl exec -it pv-test-pod -- sh -c "echo 'Hello from PV!' > /data/hello.txt"
  5. Delete and Recreate Pod
k delete pod pv-test-pod
k apply -f pod.yaml
  6. Read Data from PV
kubectl exec -it pv-test-pod -- cat /data/hello.txt

- Lesson learned

  • To list the contents of /mnt/data (mounted by your PersistentVolume), you need to attach that volume to a pod and then run a shell or command inside the pod.
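
Two ways to check the data (a sketch; direct node access only works if the lab provides a shell on the node that hosts /mnt/data):
# through the pod that mounts the PVC
kubectl exec pv-test-pod -- ls -l /data
kubectl exec pv-test-pod -- cat /data/hello.txt
# or directly on the node, since the PV is a hostPath volume
ls -l /mnt/data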

6. CKA Practice: Pod Debugging

  1. Identify the Issue
# List pods and check their status
kubectl get pods

# Describe the problematic pod
kubectl describe pod broken-pod

# View container logs (if the container started at all)
kubectl logs broken-pod

  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
........
nonexistent-image:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed     20s (x2 over 34s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x2 over 33s)   kubelet            Back-off pulling image "nonexistent-image:latest"
  Warning  Failed     5s (x2 over 33s)   kubelet            Error: ImagePullBackOff
  2. Fix the Pod
k edit pod broken-pod #change the image name to nginx:1.21

# OR if it's part of a deployment, update the deployment
kubectl set image deployment/my-deployment my-container=nginx:1.21

- Lesson learned

  • For debugging pod issues, start with k describe and k logs.
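
Two more quick checks that complement describe and logs (a sketch, using the broken-pod from this exercise):
# show the image the pod spec references
kubectl get pod broken-pod -o jsonpath='{.spec.containers[*].image}'
# list only the events related to this pod
kubectl get events --field-selector involvedObject.name=broken-pod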

7. CKA Practice: RBAC (Role-Based Access Control)

RBAC in Kubernetes

In this scenario, you will learn how to:

  • Create a namespace for isolation.
  • Create a ServiceAccount.
  • Create a Role with limited permissions (e.g., get and list pods only).
  • Bind the Role to the ServiceAccount.
  • Test allowed and forbidden actions.
  • This is essential for controlling access and following the principle of least privilege.
  1. Create a Namespace
k create ns rbac-demo
  2. Create a ServiceAccount
k create sa limited-sa -n rbac-demo
  3. Create a Role with Limited Permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: rbac-demo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
  4. Bind the Role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: rbac-demo
subjects:
- kind: ServiceAccount
  name: limited-sa
  namespace: rbac-demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
  5. Test Allowed and Forbidden Actions
#yes
kubectl auth can-i get pods --as=system:serviceaccount:rbac-demo:limited-sa -n rbac-demo
kubectl auth can-i list pods --as=system:serviceaccount:rbac-demo:limited-sa -n rbac-demo
#no
kubectl auth can-i create pods --as=system:serviceaccount:rbac-demo:limited-sa -n rbac-demo
kubectl auth can-i delete pods --as=system:serviceaccount:rbac-demo:limited-sa -n rbac-demo
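
To see every permission the ServiceAccount has in the namespace at once (a sketch using the same impersonation flag):
kubectl auth can-i --list --as=system:serviceaccount:rbac-demo:limited-sa -n rbac-demo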

8. CKA Practice: Rolling Updates & Rollbacks

  1. Create Initial Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
          ports:
            - containerPort: 80
  2. Verify Deployment
kubectl get deployments
kubectl get pods -l app=webapp

kubectl exec -it <pod-name> -- nginx -v
  3. Perform a Rolling Update
kubectl set image deployment/webapp nginx=nginx:1.21 --record
  4. Observe Update Progress
kubectl rollout status deployment/webapp
kubectl rollout history deployment/webapp
  5. Roll Back Deployment
kubectl rollout undo deployment/webapp
kubectl get pods -l app=webapp
kubectl exec -it <pod-name> -- nginx -v
  6. Clean Up
k delete deployment webapp
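
Two related commands worth knowing (a sketch): --record is deprecated in favour of the kubernetes.io/change-cause annotation, and rollbacks can target a specific revision.
# record the change cause explicitly instead of relying on the deprecated --record flag
kubectl annotate deployment/webapp kubernetes.io/change-cause="update nginx to 1.21"
# roll back to a specific revision from the rollout history
kubectl rollout undo deployment/webapp --to-revision=1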

9. CKA Practice: Service Not Routing Traffic

  1. Troubleshoot and Fix the Service
#Check if the Service selector matches the labels of the pods.
kubectl get svc web-service -o yaml
kubectl get pods --show-labels

#If the selector is wrong, patch the Service or edit it:
kubectl edit svc web-service

#Verify that the Service routes traffic:
kubectl port-forward svc/web-service 8080:80
curl http://localhost:8080

Here the selector was wrong; change it to web-deployment so that it matches the pods' label.
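
A one-liner alternative to kubectl edit (a sketch; the label key app is an assumption about how the pods are labelled):
# patch the Service selector to match the pod labels (label key 'app' assumed)
kubectl patch svc web-service -p '{"spec":{"selector":{"app":"web-deployment"}}}'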


10. CKA Practice: Troubleshooting a CrashLoopBackOff Pod

  1. Identify the Problem
#Check the Pods in the default namespace:
kubectl get pods

#Find the Pod that is in CrashLoopBackOff.

#View detailed information about the Pod:
kubectl describe pod <pod-name>

#Check the logs for the Pod:
kubectl logs <pod-name>
  2. Fix the Issue
k edit deployments.apps broken-app
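
For a CrashLoopBackOff pod, the logs of the previous (crashed) container instance are usually the most informative addition to the steps above:
# logs from the last terminated container instance
kubectl logs <pod-name> --previous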

11. CKA Practice: ResourceQuotas & LimitRanges

  1. Create a Namespace
k create ns quota-lab
  2. Apply a ResourceQuota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: quota-lab
spec:
  hard:
    pods: "3"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
  3. Apply a LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: quota-lab
spec:
  limits:
  - default:
      cpu: "500m"
      memory: 512Mi
    defaultRequest:
      cpu: "200m"
      memory: 256Mi
    type: Container
  4. Testing with a Failing Pod
apiVersion: v1
kind: Pod
metadata:
  name: pod-noresources
  namespace: quota-lab
spec:
  containers:
  - name: nginx
    image: nginx
  5. Fix the Pod
apiVersion: v1
kind: Pod
metadata:
  name: pod-large
  namespace: quota-lab
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "250m"
        memory: 64Mi
      limits:
        cpu: "500m"
        memory: 128Mi
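
To verify how the quota and defaults are applied (a quick check, assuming the objects above were created):
# current usage vs. the hard limits of the quota
kubectl describe resourcequota compute-quota -n quota-lab
# the default requests/limits the LimitRange injects into new containers
kubectl describe limitrange mem-limit-range -n quota-lab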