The name Kubernetes originates from Greek, meaning helmsman or pilot.

Terminology

k8s boils down to two concepts: the control plane and nodes.

  • Control plane management components that mother-hen nodes and pods
    • kube-controller-manager runs the built-in controllers
    • kube-apiserver the frontend API that ties everything together
    • kube-scheduler figures out which nodes to run pods on
    • etcd the underpinning distributed key/value store
  • Node a worker machine (VM or physical) that hosts pods
    • kubelet agent that talks with the control plane
    • kube-proxy a network proxy
    • supervisord process supervisor that keeps the kubelet (and container runtime) running
    • weave a software defined network (SDN)
    • fluentd unified logging agent
    • containerd a container runtime of some sort
  • Pod a set of containers (spacesuit)
  • ReplicaSet manages replicas of a pod (ex: 3 nginx pods)
  • Deployment transitions actual state to desired state
  • Service exposes an application that may be running on many pods, to consumers inside or outside the k8s cluster

Essentials

Help

  • Local offline CLI-based help is taken care of by the kubectl explain command. What's a pod and how do you define it again? kubectl explain pods (example below)
  • kubectl Cheat Sheet
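
explain happily drills into nested fields too, for example to recall how liveness probes hang together:

kubectl explain pods.spec.containers.livenessProbe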

Bash kubectl completion

  1. Register the bash completion script either per user echo 'source <(kubectl completion bash)' >>~/.bashrc or at machine level kubectl completion bash >/etc/bash_completion.d/kubectl
  2. To work with aliases echo 'alias k=kubectl' >>~/.bashrc then echo 'complete -F __start_kubectl k' >>~/.bashrc

Web UI dashboard

  1. Follow the Web UI docs.
  2. By default you will need a token to authenticate. Use kubectl describe secret -n kube-system and find the token called attachdetach-controller-token-dhx2s.
  3. Establish localhost (node) level connectivity to the kube API with kubectl proxy (or microk8s dashboard-proxy)
  4. For external cluster web UI access on port 10443 sudo microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 --address 0.0.0.0

Pods

  • a pod always lives on a single node (i.e. can't span nodes)
  • pods are allocated unique IPs within the k8s cluster (i.e. cluster IPs, which are not externally accessible; see below)
  • containers within a pod share the same network namespace (i.e. can communicate via loopback)
  • container processes within the same pod need to bind to different ports (e.g. can't have multiple port 80 processes in the same pod)
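
A quick way to see those cluster IPs (and which node each pod landed on) is the wide output format:

kubectl get pods -o wide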

Creating a pod

Option 1: Imperatively with the CLI

With kubectl run my-frontend-pod --image=nginx:alpine

Option 2: Declaratively with YAML

Using kubectl create/apply.

First you need to articulate the various object settings as YAML.

Luckily these are well documented, such as the format for a Pod or a Deployment and so on.

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: nginx
    rel: stable
spec:
  containers:
    - name: my-nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      resources:

  1. Author the YAML (as above)
  2. Validate it kubectl create -f my-nginx-pod.yml --dry-run=client --validate=true
  3. Run it kubectl create -f my-nginx-pod.yml --save-config

Highlights:

  • kubectl create will error if an object already exists
  • kubectl apply on the other hand is more accommodating, and will update existing objects if needed
  • kubectl create has a --save-config option, which stores the configuration used at creation time as an annotation in metadata, so later kubectl apply calls can diff against it
  • kubectl delete -f my-nginx-pod.yml deletes the objects defined in the manifest
  • YAML can be interactively edited with kubectl edit or patched with kubectl patch (see the sketch below)
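
For example, a label on the earlier Pod manifest could be tweaked in place with a strategic merge patch (a quick sketch; rel is the label defined in that manifest):

kubectl patch pod my-nginx -p '{"metadata":{"labels":{"rel":"canary"}}}'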

Port forwarding

After the pod is made, it needs to be exposed in some way, so the outside world can get in. One option is simple port forwarding with kubectl port-forward my-frontend-pod 8080:80 (8080 = external, 80 = internal)

  1. kubectl run my-nginx --image=nginx:alpine
  2. kubectl get pods -o yaml
  3. kubectl port-forward my-nginx 8080:80 --address 0.0.0.0

Managing pods

  • kubectl get pods -o yaml get detailed pod information as YAML
  • kubectl describe pods my-nginx awesome details about a pod and its containers, plus the full event log
  • kubectl delete pod my-frontend-pod be aware, if made with a deployment, k8s will automatically recreate the pod (i.e. you need to delete the deployment first)
  • kubectl exec -it my-nginx -- sh get an interactive (i.e. stdin and stdout) TTY

Pod Health

The kubelet uses probes to know when to bounce a container.

  • liveness probe when should a container be restarted?
  • readiness probe sometimes apps are temporarily unfit to serve traffic (e.g. sparking up a JVM is often slow, but fast after load)
  • startup probe for legacy apps that take a long time to start, you can define a generous startup probe threshold (ex: 300s) which must succeed before the liveness probe kicks in (sketch below)
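
A minimal startup probe sketch, assuming the app serves an HTTP health endpoint at /healthz (30 failures x 10s period = up to 300s of grace before the container is killed):

      startupProbe:
        httpGet:
          path: /healthz
          port: 80
        failureThreshold: 30
        periodSeconds: 10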

A probe can be one of:

  • ExecAction run some shell in the container
  • TCPSocketAction TCP request
  • HTTPGetAction HTTP GET request

For example an HTTP GET probe:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
    - name: my-nginx
      image: nginx:alpine
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 2
        periodSeconds: 5
        failureThreshold: 1

Versus an exec probe:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5

If a container fails its liveness probe, by default it will be restarted (restartPolicy: Always).
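
Apply the exec probe manifest above and the restarts become visible (the file name here is assumed):

kubectl apply -f liveness-exec.pod.yml
kubectl describe pod liveness-exec   # events list the probe failures
kubectl get pod liveness-exec        # the RESTARTS column ticks upwards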

Deployments and ReplicaSets

A neat way of managing the rollout of Pods. Deployments lean on the concept of a ReplicaSet, which ensures a specified number of instances of a Pod are running.

Deploying Pods by hand is unusual; always use Deployments.

ReplicaSet

  • A ReplicaSet can be thought of as a Pod controller
  • Uses a pod template (YAML) to spool up new pods when needed

Deployment

Deployments were introduced after ReplicaSets, and add even more abstraction and convenience on top of them, such as zero downtime deploys.

  • Wrap ReplicaSets
  • Facilitate zero downtime rolling updates, by carefully creating and destroying ReplicaSets
  • Provide rollbacks, if bugs are discovered in the latest release
  • To manage the various ReplicaSets and Pods that get created and killed off, a Deployment assigns unique labels
  • A Deployment spec in YAML is almost identical to a ReplicaSet: a Pod template is defined, along with a selector that ties the Deployment to the Pods it manages (separate workloads such as nginx, postgres or redis each get their own Deployment)
  • Labels play a huge role in tying the various Pod workloads together as needed

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 3
  minReadySeconds: 10 #dont kill my pod, wait at least 10s pls
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "128Mi" #128 MB
              cpu: "100m" #100 millicpu (.1 cpu or 10% of the cpu)

Creating this deployment will create 3 Pods, 1 ReplicaSet and 1 Deployment. Note the unique identifier added to the name of the ReplicaSet matches up with the Pods. This simple scheme is exactly how k8s knows which Pods relate to which ReplicaSets.

# kubectl create -f nginx.deployment.yml --save-config
deployment.apps/my-nginx created

# kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-5bb9b897c8-hx2w5   1/1     Running   0          5s
pod/my-nginx-5bb9b897c8-ql6nq   1/1     Running   0          5s
pod/my-nginx-5bb9b897c8-j65xb   1/1     Running   0          5s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   26h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   3/3     3            3           5s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-5bb9b897c8   3         3         3       5s

Deployments with kubectl

  • kubectl apply -f my-deployment.yml create deployment (or update it as necessary)
  • kubectl create -f nginx.deployment.yml --save-config as with other objects, create works too
  • kubectl get deployment --show-labels show deployment labels
  • kubectl get deployment -l tier=backend filter deployments on label match
  • kubectl delete deployment my-deployment blow it away
  • kubectl scale deployment my-deployment --replicas=5 awesome!

Deployment Options

Rolling updates

When deploying a new version of the app, this mode will replace a single v1 pod at a time with a v2 pod, only once it is ready to serve traffic (readiness probe). If successful, it continues in the same manner until all v1 pods are gradually replaced.
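
The rollout behaviour is tunable on the Deployment spec; a sketch that trades one surge pod for zero lost capacity:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod over the desired replica count during rollout
      maxUnavailable: 0  # never dip below the desired replica count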

Blue Green

When you have two concurrent releases of an app running live in production. Once the new (green) version is verified, traffic is cut over from the old (blue) version to it.
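
One way to realise the cutover in k8s is to run the blue and green Deployments side by side and flip the Service selector; a sketch, assuming the Pods carry a release label:

kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","release":"green"}}}'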

Canary

Deploying a canary involves deploying the new app side by side with the old version, and allocating a controlled subset (ex: 1 in 10 requests) of traffic to it, before fully rolling it out.
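
A crude approximation of a 1 in 10 split: two Deployments share the app label the Service selects on, with replicas skewed 9:1 (the names and track label here are assumptions):

# stable: labels app=my-app, track=stable; canary: labels app=my-app, track=canary
# the Service selects only on app=my-app, so it balances across all 10 pods
kubectl scale deployment my-app-stable --replicas=9
kubectl scale deployment my-app-canary --replicas=1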

Rollbacks

Reinstate the previous version of the deployment.
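
Deployments keep a revision history, which makes this a one liner:

kubectl rollout history deployment my-nginx   # list revisions
kubectl rollout undo deployment my-nginx      # back to the previous revision
kubectl rollout undo deployment my-nginx --to-revision=1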

StatefulSets

Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

Deployment or ReplicaSet workload types are stateless, in that Pods, when provisioned or rescheduled, are:

  • not assigned a stable identifier
  • not deployed in any particular order
  • not deleted in any particular order
  • not scaled in any particular order

A StatefulSet is almost identical to a Deployment (i.e. based on the same container spec), however it:

  • maintains a sticky identity for each Pod
  • provides ordered deployment, deletion or scaling

Use cases:

  • Stable, unique network identifiers (example below)
  • Stable, persistent storage
  • Ordered, graceful deployment and scaling
  • Ordered, automated rolling updates
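
The sticky identity shows up in the naming: Pods are suffixed with an ordinal ($(statefulset-name)-$(ordinal)) and, given a headless Service, get a stable DNS entry. For example a 3 replica StatefulSet named mongo (assuming a headless Service also named mongo in the default namespace) would yield:

mongo-0, mongo-1, mongo-2
mongo-0.mongo.default.svc.cluster.local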

Services

An abstraction to expose an app running on a set of Pods as a network service.

Using Pod IP addresses directly simply doesn't scale, as Pods, and hence their IPs, are ephemeral (i.e. can be killed off), and they can also be dynamically provisioned.

  • Services decouple consumers from Pods
  • A Service is assigned a fixed virtual IP (the ClusterIP) and can load balance requests over to the Pods
  • Labels play a key role, in allowing the Service to marry up to particular Pods
  • Services are not ephemeral
  • Pods in turn can (and should) address other Pods through Services
  • A Service can map any incoming port to a targetPort
  • A Service can have multiple port definitions if needed
  • When a Service is given a .metadata.name, it is registered into the internal DNS within the cluster automatically (i.e. within the cluster you can just refer to its friendly name, e.g. a frontend Pod can just access backend:8080, as shown below)
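
A quick way to verify the DNS registration, assuming a frontend Pod and a backend Service by those names:

kubectl exec -it my-frontend-pod -- sh
wget -qO- http://backend:8080   # the Service name resolves via cluster DNS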

Service Types

Different ways to network up Services (ex: exposing a frontend app on an external IP for use by web browsers).

Types (as of late 2020) include:

  1. ClusterIP exposes the Service on a cluster-internal IP (only reachable from within the cluster)
  2. NodePort exposes the Service on each Node’s IP at a static port. A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>
  3. LoadBalancer exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, are automatically created
  4. ExternalName just like an alias or proxy to an external service that Pods connect with. This will map the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.

Port forwarding take 2

As seen with Pods you can port forward to them directly kubectl port-forward my-frontend-pod 8080:80

This can be applied to the high level constructs of Deployments and Services:

kubectl port-forward deployment/my-sik-deployment 8080:80
kubectl port-forward service/my-sik-service 8080:80

Services YAML

Based on ServiceSpec v1 core.

Basic blueprint:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type:
  selector:
  ports:

  • type is one of ClusterIP, NodePort, LoadBalancer or ExternalName
  • selector selects the Pods this Service applies to
  • port the externally exposed port
  • targetPort the Pod port to forward onto

NodePort example

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31000 #normally dynamically generated

ExternalName example

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: api.bencode.net
  ports:
    - port: 9000

Pods wanting to consume the externally hosted API https://api.bencode.net, would instead target external-service:9000.

Testing Service and Pod with curl

Shell into a Pod and test a URL. Note, you'll need to add -c [container-name] if a Pod is housing multiple containers.

kubectl exec [pod-name] -- curl -s http://[pod-ip]

curl is a luxury item when it comes to lean containers, and will need to be installed (ex: alpine) over an interactive TTY like so:

kubectl exec -it [pod-name] -- sh
apk add curl
curl -s http://[pod-ip]

Storage

Volumes

Volumes are used to preserve state for Pods and containers.

  • Volumes can be attached to Pods
  • Containers rely on mountPath to get to the Volume
  • Volumes can outlive the lifetime of Pods

Volume Types

There are many, some common options:

  • emptyDir first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. Useful for housing temporary scratch files.
  • hostPath Pod mounts into the Node's file system (sketch below)
  • nfs literally an NFS backed file share mounted into the Pod
  • configMap a way to inject configuration data into pods. The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod.
  • persistentVolumeClaim gives Pods more persistent storage
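
A minimal hostPath sketch (the path is just an example); note it ties the data to whichever Node the Pod is scheduled on:

  volumes:
    - name: node-data
      hostPath:
        path: /var/local/data     # directory on the Node's file system
        type: DirectoryOrCreate   # create it if missing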

Viewing a Pod's volumes

Both get and describe commands on the Pod object expose volumes:

  • kubectl describe pod [pod-name]
  • kubectl get pod [pod-name] -o yaml

emptyDir volume example

apiVersion: v1
kind: Pod
metadata:
  name: nginx-alpine-volume
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      resources:
    - name: html-updater
      image: alpine
      command: ["/bin/sh", "-c"]
      args:
        - while true; do date >> /html/index.html;sleep 10; done
      resources:
      volumeMounts:
        - name: html
          mountPath: /html
  volumes:
    - name: html
      emptyDir: {} #lifecycle tied to Pod

# kubectl apply -f nginx-alpine-emptyDir.pod.yml
# kubectl port-forward nginx-alpine-volume 8080:80 --address 0.0.0.0

PersistentVolumes and PersistentVolumeClaims

A PersistentVolume is provisioned by an administrator (i.e. not dynamically as part of a Deployment), is a cluster-wide storage unit, and has a lifecycle independent from a Pod.

A PersistentVolumeClaim is simply a request to make use of a particular PersistentVolume.

  • A PersistentVolume is available to a Pod, even if reallocated to another Node
  • Relies on an underlying storage provider (GlusterFS, Ceph, NFS, cloud storage, etc)
  • A Pod binds to a PersistentVolume by issuing a PersistentVolumeClaim

StorageClasses

A way to manage storage as “profiles” (ex: a backup profile vs a low latency profile).

  • Can dynamically provision storage as needed (unlike a PersistentVolume)
  • Acts as a storage template
  • If enabled, admins don't have to get involved to create PersistentVolumes in advance

StorageClass workflow:

  1. Define a StorageClass (YAML)
  2. Create a PersistentVolumeClaim that references the StorageClass
  3. The StorageClass provisioner will create a PersistentVolume
  4. After the actual storage is created for the PersistentVolume, the PersistentVolume is connected up to the original PersistentVolumeClaim (from step 2)

A complete worked example for a MongoDB instance follows: a ConfigMap, StorageClass, PersistentVolume, PersistentVolumeClaim, Service and StatefulSet.

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: mongo-env
  name: mongo-env
data:
  MONGODB_DBNAME: myMongoDb
  #TODO: Use Secret
  MONGODB_PASSWORD: password
  MONGODB_ROLE: readWrite
  #TODO: Use Secret
  MONGODB_ROOT_PASSWORD: password
  MONGODB_ROOT_ROLE: root
  MONGODB_ROOT_USERNAME: dbadmin
  MONGODB_USERNAME: webrole

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# The reclaim policy (keep storage around forevs) applies to the persistent volumes not the storage class itself
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

---
# Note: While a local storage PV works, going with a more durable solution (NFS, cloud option, etc.) is recommended
# https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 10Gi
  # volumeMode block feature gate enabled by default with 1.13+
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  # StorageClass has a reclaim policy default so it'll be "inherited" by the PV
  # persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/data/db
  # the node this storage will be bound to
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - foobox002

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          command:
            - mongod
            - "--auth"
          resources: {}
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
          env:
            - name: MONGODB_DBNAME
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_DBNAME
                  name: mongo-env
            - name: MONGODB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_PASSWORD
                  name: mongo-env
            - name: MONGODB_ROLE
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_ROLE
                  name: mongo-env
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_ROOT_PASSWORD
                  name: mongo-env
            - name: MONGODB_ROOT_ROLE
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_ROOT_ROLE
                  name: mongo-env
            - name: MONGODB_ROOT_USERNAME
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_ROOT_USERNAME
                  name: mongo-env
            - name: MONGODB_USERNAME
              valueFrom:
                configMapKeyRef:
                  key: MONGODB_USERNAME
                  name: mongo-env
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-pvc

Managing configuration with ConfigMaps and Secrets

ConfigMaps store configuration information and surface it to containers.

  • configuration is surfaced through to Pods as they are scheduled throughout the cluster
  • can represent entire files (ex: JSON, XML, YAML) or specific key/value pairs
  • values can be provided with kubectl (CLI) ex:
    • --from-file: kubectl create configmap app-settings --from-file=settings.properties this will implicitly add the file name as a root key into the ConfigMap data
    • --from-env-file: kubectl create cm app-settings --from-env-file=settings.properties will NOT add file name as root key, will quote non-string values.
    • --from-literal: kubectl create configmap app-settings --from-literal=apiUrl=https://my-api --from-literal=otherKey=otherValue --from-literal=count=50
  • ConfigMaps are a first class object type and can be defined with a manifest (YAML) like other k8s objects i.e. kubectl apply -f settings.configmap.yml

Defining ConfigMaps

When adding a raw config file using kubectl and --from-file, note the file name becomes the key for the values:

kubectl create configmap game-config --from-file=game.settings

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
data:
  game.settings: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten

Hand crafting the manifest works nicely; here some key/values and files are defined:

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true    

Consuming ConfigMaps

To examine a ConfigMap as its manifest definition use kubectl get configmap game-config -o yaml

There are 4 ways to consume ConfigMaps from Pods:

  1. Inside a container command and args
  2. Environment variables for a container
  3. Add as a file in read-only volume, for application to read
  4. Write code to run within the Pod that uses the k8s API to read a ConfigMap

Pods can reference specific ConfigMap keys:

- name: UI_PROPERTIES_FILE_NAME
  valueFrom:
    configMapKeyRef:
      name: game-demo
      key: ui_properties_file_name

Or just expose every key defined in the ConfigMap as a corresponding container environment variable using the envFrom directive:

spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: game-demo

Complete example:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
          # from the key name in the ConfigMap.
          valueFrom:
            configMapKeyRef:
              name: game-demo # The ConfigMap this value comes from.
              key: player_initial_lives # The key to fetch.
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    # You set volumes at the Pod level, then mount them into containers inside that Pod
    - name: config
      configMap:
        # Provide the name of the ConfigMap you want to mount.
        name: game-demo
        # An array of keys from the ConfigMap to create as files
        items:
          - key: "game.properties"
            path: "game.properties"
          - key: "user-interface.properties"
            path: "user-interface.properties"

Secrets

Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.

  • Secrets can be mounted into Pods as files, or environment variables.
  • Secrets are only released to Nodes running Pods that request the Secret
  • Secrets are stored in tmpfs (in memory) on only Nodes that require them

Secret best practices

  • Enable encryption at rest for cluster data
  • Limit access to etcd to only admins
  • TLS for etcd peer-to-peer communication
  • Secret manifest definitions (YAML) are only base64 encoded, don't blindly store these in Git and so on.
  • By design Pods can get to Secrets, therefore the ability to create Pods must be locked down with RBAC.

Storing Secrets

Working with Secrets closely mirrors working with ConfigMaps.

  • can represent entire files (ex: JSON, XML, YAML) or specific key/value pairs
  • values can be provided with kubectl (CLI) ex:
    • --from-literal: kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
    • --from-file: kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt
  • There are a bunch of Secret types (ex: token, TLS), that are domain specific: kubectl create secret tls tls-secret --cert=path/to/tls.cer --key=path/to/tls.key
  • Secret manifests (YAML) are supported, with values always being base64 encoded:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
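
The encoded values are plain base64 (encoding, not encryption), and round trip like so:

echo -n 'admin' | base64                    # YWRtaW4=
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode   # 1f2d1e2e67df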

Using Secrets

  • list all kubectl get secrets
  • detailed YAML for specific secret kubectl get secrets db-password -o yaml
  • to decode a Secret is easy: kubectl get secret db-user-pass -o jsonpath='{.data}' then wash the secret text through base64 --decode, like so: echo 'MWYyZDFlMmU2N2Rm' | base64 --decode

Secrets as environment variables

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never

Secrets as files

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret

Troubleshooting

Logs

  • view all Pod level logs kubectl logs [pod-name]
  • view specific container logs for a Pod kubectl logs [pod-name] -c [container-name]
  • view logs for the previous instance of a (possibly crashed) container kubectl logs -p [pod-name]
  • tail a Pod’s logs kubectl logs -f [pod-name]

Configuration verification

  • kubectl describe pod [pod-name]
  • kubectl get pod [pod-name] -o yaml
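
Cluster events are often the fastest clue when a Pod won't schedule or keeps restarting:

kubectl get events --sort-by=.metadata.creationTimestamp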

Shell into Pod container

  • kubectl exec -it [pod-name] -- sh

The API

Kubernetes API reference

The backbone of the control plane, it exposes and manipulates objects such as pods, namespaces and MANY others, as kubectl api-resources will show.

Versioning is taken very seriously, with the goal of not breaking compatibility. New alpha and beta functionality is released under a version tag. See kubectl api-versions.
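
The API is plain HTTP(S); kubectl proxy stands up an authenticated local endpoint for poking at it directly:

kubectl proxy &
curl -s http://localhost:8001/api/v1/namespaces/default/pods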

General kubectl

  • kubectl version cluster version
  • kubectl cluster-info
  • kubectl get all pull back info on pods, deployments, services
  • kubectl run [container-name] --image=[image-name] simple deployment for a pod
  • kubectl port-forward [pod] [ports] expose port in cluster for external access
  • kubectl expose ... expose port for deployment or pod
  • kubectl create -f [manifest] create thing (pod, deployment, service, secret)
  • kubectl apply -f [manifest] create (or if it exists already modify) resources
  • kubectl get all -n kube-system peek into the kube-system namespace
  • kubectl get pods -o yaml list all pods, output as YAML

Waaay cool

  • Canary deployments
  • When a Service is given a .metadata.name, it's registered into the internal DNS within the cluster automatically!
  • Kustomize, added in 1.14, is a tool for customising k8s configurations, by generating resources from other sources such as Secrets and ConfigMaps, setting cross-cutting fields for resources, and composing and customising collections of resources through bases and overlays (sketch below)
  • Namespaces?
  • kubectl apply -f k8s/ supports taking a directory name, full of various YAML files, and create all objects it finds
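
A minimal Kustomize sketch (file names assumed):

# k8s/kustomization.yaml
resources:
  - deployment.yml
  - service.yml
commonLabels:
  app: node-app

Then kubectl apply -k k8s/ builds and applies the customised resources in one hit.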

Samples

node.js app

Source code

const http = require("http"),
  os = require("os");

console.log("v1 server starting");

var handler = function (request, response) {
  console.log("Request from: " + request.connection.remoteAddress);
  response.writeHead(200);
  response.end("v1 running in a pod: " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);

And the accompanying Dockerfile:

FROM node:alpine
LABEL author="Benjamin Simmonds"
COPY server.js /server.js
ENTRYPOINT ["node", "server.js"]

In the directory containing the Dockerfile and server.js build and tag a new image.

docker build -t node-app:1.0 .

Build a few versioned images, modifying the version output in server.js and the image tag to 2.0 and so on.

Then create deployments for each version.

Note: using my local private docker registry (localhost:32000) managed by microk8s.
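
Tag and push each image so the cluster can pull it from that registry:

docker tag node-app:1.0 localhost:32000/node-app:1.0
docker push localhost:32000/node-app:1.0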

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3
  minReadySeconds: 10
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - image: localhost:32000/node-app:1.0
          name: node-app
          resources:

And let the deployments roam free kubectl apply -f node-app-v1.deployment.yml. kubectl get all should show 3 v1 pod instances running.

To make life a bit easier, register a service:

apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: LoadBalancer
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 8080

Externally you should be able to access the service; on my microk8s cluster this is http://192.168.122.103:32484/ (the external service port is shown in kubectl get all for the service)

Create deployment YAML for v2 (just change the image from node-app:1.0 to node-app:2.0). After applying, you will witness a rolling update. It's freaking beautiful to watch <3!!
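
To watch the rollout play out from the CLI (the v2 manifest file name is assumed):

kubectl apply -f node-app-v2.deployment.yml
kubectl rollout status deployment node-app   # blocks until the rollout completes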

microk8s

Love this distribution, which just works.

Shell improvements

alias kubectl="microk8s kubectl"
alias mkctl="microk8s kubectl"
alias k="microk8s kubectl"
complete -F __start_kubectl k

PersistentVolume storage location

How to change microk8s kubernetes storage location

Microk8s storage configuration

By default uses /var/snap/microk8s/common/var/lib/containerd and /var/snap/microk8s/common/run/.

Edit /var/snap/microk8s/current/args/containerd and point the --root and --state to the volume you want. Here is an example that targets /mnt:

--config ${SNAP_DATA}/args/containerd.toml
--root /mnt/var/lib/containerd
--state /mnt/run/containerd
--address ${SNAP_COMMON}/run/containerd.sock

TODO: web UI dashboard, local image registry,

Resources