Killercoda CKA by Kim Wüstkamp


CKA Preparation

1. Playground

This playground always runs the same Kubernetes version as the current Linux Foundation exam.


2. Vim Setup

Using vim to edit files
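A common setup for editing YAML (a minimal sketch; 2-space indentation is convention, not an exam requirement):

cat <<'EOF' >> ~/.vimrc
set tabstop=2     " width of a tab character
set shiftwidth=2  " width used by autoindent
set expandtab     " insert spaces instead of tabs
EOF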


3. Apiserver Crash

Configure a wrong argument

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*

Solution

  1. Locating & Fixing kube-apiserver errors
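A sketch of the locate-and-fix loop on a kubeadm cluster with containerd (the container ID is a placeholder):

vim /etc/kubernetes/manifests/kube-apiserver.yaml # the kubelet picks up manifest changes automatically
watch crictl ps                                   # wait for the apiserver container to disappear/reappear
crictl ps -a | grep kube-apiserver                # find the container, even if it exited
crictl logs <container-id>                        # read the actual startup error
tail /var/log/pods/kube-system_kube-apiserver*/kube-apiserver/*.log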

4. Apiserver Misconfigured

The Apiserver manifest contains an error.

The following error means the API server is not running:

controlplane:~$ k -n kube-system get pods
The connection to the server 172.30.1.2:6443 was refused - did you specify the right host or port?

Solution

  1. Locating & Fixing kube-apiserver errors
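kubectl is useless while the apiserver is down, so work at the node level; a sketch:

vim /etc/kubernetes/manifests/kube-apiserver.yaml # fix the wrong line
crictl ps -a | grep kube-apiserver                # check whether the container comes up
journalctl -u kubelet | tail                      # the kubelet logs manifest/YAML parse errors here
k -n kube-system get pods                         # works again once the apiserver is back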

5. Kube Controller Manager Misconfigured

It is crashing, fix it

Solution

  1. Locating & Fixing kube-controller-manager errors
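Same pattern as the apiserver, only the manifest differs; a sketch (the Pod name suffix assumes the node is called controlplane):

k -n kube-system get pods | grep controller-manager # look for CrashLoopBackOff / Error
k -n kube-system logs kube-controller-manager-controlplane
vim /etc/kubernetes/manifests/kube-controller-manager.yaml # correct the bad argument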

6. Kubelet Misconfigured

Someone tried to improve the Kubelet, but broke it instead

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env

Solution

  1. Found the error was caused by a kubelet flag; remove the offending part and restart the kubelet.
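A sketch of the fix, assuming the broken flag was added in kubeadm-flags.env:

journalctl -u kubelet | tail -30       # the failing flag shows up in the error output
vim /var/lib/kubelet/kubeadm-flags.env # remove the offending flag
systemctl restart kubelet
systemctl status kubelet               # confirm it stays active (running)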

7. Application Misconfigured 1

Deployment is not coming up, find the error and fix it

There is a Deployment in Namespace application1 which seems to have issues and is not getting ready. Fix it by only editing the Deployment itself and no other resources.

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env
  • Deployment debug
    • k -n application1 get deploy
    • k -n application1 logs deploy/api
    • k -n application1 describe deploy api
    • k -n application1 get cm
    • k -n application1 get pod/xxxxxxx

Solution

  1. Check the Deployment's logs and events for the error
  2. Fix the Deployment spec in place (k edit)
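A sketch of that loop (the Deployment name api comes from the debug commands above; the Pod name is a placeholder):

k -n application1 get deploy,pod          # identify the Pods that never get ready
k -n application1 describe pod <pod-name> # the events usually name the exact problem
k -n application1 edit deploy api         # fix it in place; only the Deployment may be changed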

8. Application Misconfigured 2

Pods are not running, find the error and fix it

A Deployment has been imported from another Kubernetes cluster. But it seems the Pods are not running. Fix the Deployment so that it would work in any Kubernetes cluster.

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env
  • Deployment debug
    • k -n application1 get deploy
    • k -n application1 logs deploy/api
    • k -n application1 describe deploy api
    • k -n application1 get cm
    • k -n application1 get pod/xxxxxxx

Solution

  1. Checking the logs/events shows the Pods are pinned via nodeName: staging1-node, a node that does not exist in this cluster
  2. k -n default edit deploy management-frontend # remove the nodeName line so the scheduler can assign a node

9. Application Multi Container Issue

Gather logs

There is a multi-container Deployment in Namespace management which seems to have issues and is not getting ready. Write the logs of all containers to /root/logs.log. Can you see the reason for the failure?

Fix the Deployment

Fix the Deployment in Namespace management where both containers try to listen on port 80. Remove one container.

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env
  • Deployment debug
    • k -n application1 get deploy
    • k -n application1 logs deploy/api
    • k -n application1 describe deploy api
    • k -n application1 get cm
    • k -n application1 get pod/xxxxxxx

Solution

  1. k -n management logs deploy/collect-data
  2. k -n management logs --all-containers deploy/collect-data
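To write all container logs to the requested file and then drop the extra container (a sketch; the exact error string to grep for is an assumption):

k -n management logs deploy/collect-data --all-containers > /root/logs.log
grep -i "address already in use" /root/logs.log # both containers try to bind port 80
k -n management edit deploy collect-data        # delete one of the two container entries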

10. ConfigMap Access in Pods

Create ConfigMaps

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env
  • Deployment debug
    • k -n application1 get deploy
    • k -n application1 logs deploy/api
    • k -n application1 describe deploy api
    • k -n application1 get cm
    • k -n application1 get pod/xxxxxxx

Solution

  1. k create cm trauerweide --from-literal=tree=trauerweide
  2. k create -f cm.yaml
  3. k create -f pod1.yaml
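cm.yaml and pod1.yaml are not shown above; a minimal sketch, assuming the second ConfigMap is named birke and consumed as a volume while trauerweide feeds an env var (the names birke, TREE1, the key tree, and the mount path are all assumptions):

#cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: birke
data:
  tree: birke
#pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - image: nginx:alpine
    name: pod1
    env:
    - name: TREE1
      valueFrom:
        configMapKeyRef:
          name: trauerweide
          key: tree
    volumeMounts:
    - name: birke
      mountPath: /etc/birke
  volumes:
  - name: birke
    configMap:
      name: birke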

11. Ingress Create

Create Services for existing Deployments

knowledge

  • Check log
    • /var/log/pods
    • /var/log/containers
    • crictl ps + crictl logs
    • docker ps + docker logs
    • kubelet logs: /var/log/syslog or journalctl
  • static Pod manifest directory
    • /etc/kubernetes/manifests/*
  • kubelet flag location
    • /var/lib/kubelet/kubeadm-flags.env
  • Deployment debug
    • k -n application1 get deploy
    • k -n application1 logs deploy/api
    • k -n application1 describe deploy api
    • k -n application1 get cm
    • k -n application1 get pod/xxxxxxx
  • Ingress Service
    • k -n ingress-nginx get svc ingress-nginx-controller

Solution

  1. k -n world get deploy
  2. k -n world expose deploy asia --port 80
  3. k -n world expose deploy europe --port 80
  4. k create -f test.yaml
#test.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world
  namespace: world
  annotations:
    # this annotation removes the need for a trailing slash when calling urls
    # but it is not necessary for solving this scenario
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx # k get ingressclass
  rules:
  - host: "world.universe.mine"
    http:
      paths:
      - path: /europe
        pathType: Prefix
        backend:
          service:
            name: europe
            port:
              number: 80
  - host: "world.universe.mine"
    http:
      paths:
      - path: /asia
        pathType: Prefix
        backend:
          service:
            name: asia
            port:
              number: 80
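To verify, curl through the ingress controller NodePort (port 30080 and the /etc/hosts mapping of world.universe.mine are playground assumptions):

k -n ingress-nginx get svc ingress-nginx-controller # note the NodePort mapped to port 80
curl http://world.universe.mine:30080/europe/
curl http://world.universe.mine:30080/asia/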

12. NetworkPolicy Namespace Selector

Solution

  1. space1.yaml
#space1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space2
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53 
  2. space2.yaml
#space2.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space2
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space1
  3. k create -f space1.yaml
  4. k create -f space2.yaml
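A quick check (Pod and Service names are placeholders, and the image must ship curl); DNS keeps working thanks to the port-53 egress rule:

k -n space1 exec <pod> -- curl -m 1 <service>.space2.svc.cluster.local  # allowed: egress to space2
k -n space1 exec <pod> -- curl -m 1 <service>.default.svc.cluster.local # times out: egress blocked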

13. NetworkPolicy Misconfigured

Solution

  1. k get ns --show-labels
  2. k describe networkpolicy np-100x
  3. Found the error: the policy selects label level-1000 where it should be level-1001
  4. k edit networkpolicy np-100x # correct the label

14. RBAC ServiceAccount Permissions

Solution

create SAs

  1. k -n ns1 create sa pipeline
  2. k -n ns2 create sa pipeline

use ClusterRole view

  1. k get clusterrole view # there is a default one
  2. k create clusterrolebinding pipeline-view --clusterrole view --serviceaccount ns1:pipeline --serviceaccount ns2:pipeline

manage Deployments in both Namespaces

  1. k create clusterrole -h # examples
  2. k create clusterrole pipeline-deployment-manager --verb create,delete --resource deployments

instead of one ClusterRole we could also create the same Role in both Namespaces

  1. k -n ns1 create rolebinding pipeline-deployment-manager --clusterrole pipeline-deployment-manager --serviceaccount ns1:pipeline
  2. k -n ns2 create rolebinding pipeline-deployment-manager --clusterrole pipeline-deployment-manager --serviceaccount ns2:pipeline
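k auth can-i confirms the setup without guessing (the expected answers follow from the bindings above):

k auth can-i create deployments --as system:serviceaccount:ns1:pipeline -n ns1 # yes
k auth can-i delete deployments --as system:serviceaccount:ns2:pipeline -n ns2 # yes
k auth can-i update deployments --as system:serviceaccount:ns1:pipeline -n ns1 # no, only create/delete
k auth can-i list pods --as system:serviceaccount:ns1:pipeline -n ns2          # yes, via ClusterRole view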

15. RBAC ServiceAccount Permissions (re-do)


16. RBAC User Permissions (re-do)

Solution

k -n applications create role smoke --verb create,delete --resource pods,deployments,sts
k -n applications create rolebinding smoke --role smoke --user smoke

# allow view in other namespaces
k get ns # get all namespaces
k -n applications create rolebinding smoke-view --clusterrole view --user smoke
k -n default create rolebinding smoke-view --clusterrole view --user smoke
k -n kube-node-lease create rolebinding smoke-view --clusterrole view --user smoke
k -n kube-public create rolebinding smoke-view --clusterrole view --user smoke


17. Scheduling Priority (re-do)

Solution

k -n management get pod -oyaml # compare the Pods' .spec.priority values
300000 > 200000, i.e. the larger number means higher priority
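The priority value comes from a PriorityClass referenced in the Pod spec; a minimal sketch (the class name level3 is an assumption):

k get priorityclass # list existing classes and their values
#pod.yaml (excerpt)
spec:
  priorityClassName: level3 # pick an existing class with the required value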


18. Scheduling Pod Affinity (re-do)

Solution

#hobby.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    level: hobby
  name: hobby-project
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: level
              operator: In
              values:
              - restricted
          topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx:alpine
    name: c

19. Scheduling Pod Anti Affinity

Solution

#hobby.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    level: hobby
  name: hobby-project
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: level
            operator: In
            values:
            - restricted
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx:alpine
    name: c

20. DaemonSet HostPath Configurator

Solution

#test.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: configurator
  namespace: configurator
spec:
  selector:
    matchLabels:
      name: configurator
  template:
    metadata:
      labels:
        name: configurator
    spec:
      containers:
      - name: configurator
        image: bash
        command: ["sh", "-c", "echo 'aaba997ac-1c89-4d64' > /configurator/config && sleep 1d"]
        volumeMounts:
        - name: mount-conf
          mountPath: /configurator
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      volumes:
      - name: mount-conf
        hostPath:
          path: /configurator
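To verify, every node should run one Pod and carry the file (the node name node01 is a playground assumption):

k -n configurator get ds,pod -owide # one Running Pod per node
cat /configurator/config            # on the controlplane
ssh node01 cat /configurator/config # on the worker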

21. Cluster Setup

Solution

kubeadm init --ignore-preflight-errors="NumCPU,Mem" --v=5 --kubernetes-version v1.33.3 --pod-network-cidr 192.168.0.0/16
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ssh summer-node
sudo kubeadm join --token "qtv56v.okih1tibvxoq05ma" controlplane:6443 --discovery-token-unsafe-skip-ca-verification


22. Cluster Upgrade

Solution

Upgrade controlplane:

kubeadm upgrade plan # see possible versions
apt-cache show kubeadm # show available versions
apt-get install kubeadm=1.33.5-1.1 # can be different for you
kubeadm upgrade apply v1.33.5 # could be a different version for you, can take a while to finish
apt-get install kubectl=1.33.5-1.1 kubelet=1.33.5-1.1 # next, update kubectl and kubelet
service kubelet restart

Upgrade worker node:

ssh node01
apt-get install kubeadm=1.33.5-1.1 # can be a different version for you
kubeadm upgrade node
apt-get install kubelet=1.33.5-1.1 kubectl=1.33.5-1.1
service kubelet restart
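Afterwards both nodes should report the new version:

k get node # VERSION column shows v1.33.5 on controlplane and node01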


23. Cluster Node Join

Solution

ssh summer-node
sudo kubeadm join --token "qtv56v.okih1tibvxoq05ma" controlplane:6443 --discovery-token-unsafe-skip-ca-verification
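If no token is at hand, generate a complete join command on the controlplane:

kubeadm token create --print-join-command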


24. Cluster Certificate Management

Solution

kubeadm certs check-expiration > /root/apiserver-expiration # check expiration
kubeadm certs renew apiserver
kubeadm certs renew scheduler.conf
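The expiry date can also be read straight off a certificate with openssl:

openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt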


25. Static Pod move

Knowledge: Pods whose manifests live in /etc/kubernetes/manifests/ are static Pods, managed directly by the kubelet.

Solution

scp node01:/etc/kubernetes/manifests/resource-reserver.yaml .
mv resource-reserver.yaml /etc/kubernetes/manifests/resource-reserver.yaml
ssh node01 -- rm /etc/kubernetes/manifests/resource-reserver.yaml
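Static Pod names get the node name as a suffix, which makes the move easy to verify:

k get pod -A -owide | grep resource-reserver # suffix should now be -controlplane, not -node01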