Kubernetes on Raspberry Pi

In this memo, I propose to show you how to deploy a Kubernetes (K8s) cluster on your own hardware, built from a few Raspberry Pis.

Preparing nodes

We need a Raspbian release that can run Docker, so we will use HypriotOS, available here.

curl -OJSLs https://github.com/hypriot/image-builder-rpi/releases/download/v1.12.0/hypriotos-rpi-v1.12.0.img.zip
unzip hypriotos-rpi-v1.12.0.img.zip

Next we need a utility to flash the SD cards while forcing some instance parameters (user, hostname, ...). The Hypriot flash tool is the most convenient, and it is available here.

curl -LO https://github.com/hypriot/flash/releases/download/2.5.0/flash
chmod +x flash && sudo mv flash /usr/local/bin/flash

Now we flash; I have one master node and four worker nodes:

flash --hostname master hypriotos-rpi-v1.12.0.img
flash --hostname north hypriotos-rpi-v1.12.0.img
flash --hostname south hypriotos-rpi-v1.12.0.img
flash --hostname east hypriotos-rpi-v1.12.0.img
flash --hostname west hypriotos-rpi-v1.12.0.img
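
The flash tool can also inject a cloud-init user-data file, which is handy for pre-creating users or credentials. A minimal sketch, based on the sample user-data from the Hypriot documentation (the --userdata flag and the fields below follow that sample; "pirate"/"hypriot" are the HypriotOS defaults, adjust them to your needs):

cat > user-data.yml <<'EOF'
#cloud-config
hostname: master
manage_etc_hosts: true
users:
  - name: pirate
    gecos: "Hypriot Pirate"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    groups: users,docker,video
    plain_text_passwd: hypriot
    lock_passwd: false
    ssh_pwauth: true
    chpasswd: { expire: false }
EOF
flash --hostname master --userdata user-data.yml hypriotos-rpi-v1.12.0.img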

Configuring

I - Inventory

To configure our cluster, we will use Ansible playbooks: they are more powerful and can be reused if we add new nodes later.

We start by downloading the Ansible playbooks, available here.

Then we adjust the host configuration: first the master and worker IPs, then the SSH user and password (I use the same credentials on all nodes for simplicity).

sed "s/{{masterip}}/[MASTERIP]/" hosts.dist > hosts 
sed -i "s/{{northip}}/[NORTHIP]/" hosts 
sed -i "s/{{southip}}/[SOUTHIP]/"  hosts 
sed -i "s/{{eastip}}/[eastip]/" hosts 
sed -i "s/{{westip}}/[westip]/"  hosts 

sed "s/{{user}}/[USER]/" group_vars/all.yml.dist > group_vars/all.yml
sed -i "s/{{password}}/[PASS]/" group_vars/all.yml

II - Preparing the OS

Executing "bootstrap", will configure all raspberry with enabling cgroup for memory , cpu and disabling all swap.

ansible-playbook bootstrap.yml -i hosts --verbose
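
For reference, here is roughly what that comes down to as manual commands on each Pi (a sketch; the exact tasks live in bootstrap.yml, and I assume swap is managed by dphys-swapfile as on stock Raspbian):

# Enable the cpuset and memory cgroups (appended to the kernel command line)
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
# Disable swap, which kubelet refuses to run with
sudo dphys-swapfile swapoff
sudo systemctl disable dphys-swapfile
sudo reboot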

Next, we install all the common Kubernetes dependencies on the master node, then initialize the cluster with kubeadm.

ansible-playbook master.yml -i hosts --verbose
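
Under the hood, the initialization boils down to a kubeadm init plus the usual kubeconfig setup; a sketch of the equivalent manual steps (the exact flags live in master.yml; 10.244.0.0/16 is Flannel's default pod CIDR):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Make kubectl usable for the regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config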

Finally, we install the dependencies on the worker nodes and join them to the master.

ansible-playbook node.yml -i hosts --verbose
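
If you ever need to join an extra node by hand, outside the playbook, you can generate a fresh join command on the master:

# Prints a ready-to-run "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
# Run the printed command (with sudo) on the new worker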

Setting Up the Cluster

I - CNI

Kubernetes needs a network plugin (CNI) to manage intra-cluster communication. For our project I chose Flannel, with a few adjustments (such as the ARM image architecture).

kubectl create -f kube/flannel.yml
kubectl create -f kube/kubedns.yml
# Must be done on all nodes
sudo sysctl net.bridge.bridge-nf-call-iptables=1
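
To make that sysctl survive reboots, persist it in a drop-in file (the file name below is my choice):

echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system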

II - Ingress

An Ingress manages external access to the services in a cluster, and may provide load balancing, SSL termination, and name-based virtual hosting. For our project we use the NGINX Ingress controller.

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
# Install the chart with ARM images for the controller and the default backend
helm install nginx-ingress stable/nginx-ingress --set defaultBackend.image.repository=docker.io/medinvention/ingress-default-backend,controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm,defaultBackend.image.tag=latest,controller.image.tag=0.27.1
# Run the controller as a DaemonSet on the host network
helm install ingress stable/nginx-ingress --set controller.hostNetwork=true,controller.kind=DaemonSet

For the cluster public IP:

# Check whether a public IP has been assigned
kubectl get svc ingress-nginx-ingress-controller -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
# You can also set it manually
kubectl patch svc ingress-nginx-ingress-controller -p '{"spec": {"type": "LoadBalancer", "externalIPs":["[YOUR-PUBLIC-IP]"]}}'
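
Once the controller answers on that IP, a service can be exposed with a name-based rule. A minimal sketch (the "whoami" service and host name are hypothetical; the v1beta1 API matches the v1.17 cluster used here):

cat <<EOF | kubectl apply -f -
# Example name-based virtual host; service and host are placeholders
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: whoami.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80
EOF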

Troubleshooting

If pods cannot communicate, or if CoreDNS never becomes ready, run this on every node:

sudo systemctl stop docker
sudo iptables -t nat -F
sudo iptables -P FORWARD ACCEPT
sudo ip link del docker0
sudo ip link del flannel.1
sudo systemctl start docker
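
Then check that the system pods come back up:

kubectl -n kube-system get pods -o wide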

III - Storage

For the storage solution, we can deploy a Ceph server or an NFS service and configure the cluster to use it. For our project, we will install an NFS server on the master node:

sudo apt-get install nfs-kernel-server nfs-common
sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server
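
The exported directory must exist before we declare it:

sudo mkdir -p /data/kubernetes-storage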

cat <<EOF | sudo tee -a /etc/exports
/data/kubernetes-storage/ north(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ south(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ east(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ west(rw,sync,no_subtree_check,no_root_squash)
EOF

sudo exportfs -a  

On the worker nodes:

sudo apt-get install nfs-common
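
You can verify that the exports are visible from a worker (assuming the master's hostname resolves there):

showmount -e master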

Next, we deploy the NFS storage service:

kubectl apply -f storage/nfs-deployment.yml

This will create a new storage class, mark it as the default, and deploy the storage pod.

Now, to test the configuration:

kubectl apply -f storage/nfs-testing.yml
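
The storage class should be flagged as default and the test claim should end up Bound (the resource names depend on the nfs-testing.yml manifest):

kubectl get storageclass
kubectl get pvc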

Tricks

I - Cluster backup

To back up our cluster, we just need to run:

./os/backup.sh # cluster data will be saved in ~/bkp
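
For reference, a backup of a kubeadm cluster typically covers the PKI material and an etcd snapshot; a sketch of what such a script might do (the paths assume kubeadm defaults and etcdctl being installed on the master):

mkdir -p ~/bkp
sudo cp -r /etc/kubernetes/pki ~/bkp/pki
sudo ETCDCTL_API=3 etcdctl snapshot save ~/bkp/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key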

II - Cluster tear-down

To reset the cluster, just run on the master node:

kubeadm reset
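
Each worker should be reset as well before it can rejoin a fresh cluster:

sudo kubeadm reset
# Clean up leftover firewall rules
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X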

It's Ready

After a little while, all nodes should be Ready:

kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
east     Ready    <none>   23h   v1.17.4   192.168.1.30   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     docker://19.3.5
master   Ready    master   63d   v1.17.1   192.168.1.17   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
north    Ready    <none>   63d   v1.17.1   192.168.1.54   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
south    Ready    <none>   63d   v1.17.1   192.168.1.11   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
west     Ready    <none>   23h   v1.17.4   192.168.1.85   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     docker://19.3.5

Enjoy ...