k8s cluster installation and pre-KubeEdge-deployment preparation (en)

Lab environment: Debian GNU/Linux 12 (bookworm) x86_64

Install containerd

Kubernetes needs a container runtime, so install containerd first. KubeEdge edge nodes only need containerd; the cloud side needs a full Kubernetes control plane.
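The post doesn't show the containerd install itself. A minimal sketch for Debian 12, assuming the distro package and the systemd cgroup driver that kubelet expects on a systemd distro (the runnable part below only demonstrates the SystemdCgroup flip on a sample line):

```shell
# Assumed route (not from the post): install containerd from Debian's repos
# and generate a default config:
#   sudo apt install -y containerd
#   sudo mkdir -p /etc/containerd
#   containerd config default | sudo tee /etc/containerd/config.toml
# kubelet wants the systemd cgroup driver, which means flipping SystemdCgroup
# in that config -- demonstrated here on a throwaway sample file:
conf=$(mktemp)
echo 'SystemdCgroup = false' > "$conf"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$conf"
grep 'SystemdCgroup' "$conf"
```

After editing the real /etc/containerd/config.toml, restart the service with sudo systemctl restart containerd.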

Prep before installing Kubernetes

Enable packet forwarding

Tune iptables and load br_netfilter so Kubernetes can bridge and forward traffic.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# a drop-in under /etc/sysctl.d/ beats editing /etc/sysctl.conf directly;
# note that sysctl conf files do not allow trailing comments on a value line
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

Disable Linux swap

For security (Secrets are held only in memory, a guarantee swap would break) and predictable node behavior, Kubernetes has treated swap as unsupported by default since 1.8; with swap enabled, kubelet refuses to start unless you explicitly override that.

sudo cp /etc/fstab /etc/fstab_bak
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
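In case the sed one-liner looks opaque: -r enables extended regexes, /\sswap\s/ selects lines whose type field is swap, and s/^#?/#/ prepends a # (idempotently, so re-running is safe). A throwaway demo with made-up fstab entries:

```shell
# Demonstrate the fstab edit on a disposable copy (hypothetical entries):
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
/dev/sda2      none swap sw                0 0
EOF
sed -ri '/\sswap\s/s/^#?/#/' "$fstab"
cat "$fstab"   # only the swap line gains a leading '#'
```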

Register the apt repository

I used the Tsinghua (TUNA) mirror; their setup guide: https://mirrors.tuna.tsinghua.edu.cn/help/kubernetes/

sudo apt install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Create /etc/apt/sources.list.d/kubernetes.list with:

deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main

Then:

sudo apt update

Install kubeadm, kubelet, kubectl

What each tool does is left as reading homework.

sudo apt install kubeadm kubelet kubectl
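If you do want pinning, the usual shape looks like the following; the -00 version suffix is an assumption based on the legacy apt repo's convention, so check apt-cache madison kubeadm for the exact string:

```shell
sudo apt install -y kubeadm=1.28.10-00 kubelet=1.28.10-00 kubectl=1.28.10-00
# stop routine `apt upgrade` from moving the cluster components
sudo apt-mark hold kubeadm kubelet kubectl
```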

I didn’t pin versions; if you need that, see the reference posts.

Check versions:

cloud@cloud:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.10", GitCommit:"21be1d76a90bc00e2b0f6676a664bdf097224155", GitTreeState:"clean", BuildDate:"2024-05-14T10:51:30Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/amd64"}
cloud@cloud:~$ kubectl version
Client Version: v1.28.10
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
cloud@cloud:~$ kubelet --version
Kubernetes v1.28.10

Images required for that release:

cloud@cloud:~$ sudo kubeadm config images list --kubernetes-version v1.28.10
registry.k8s.io/kube-apiserver:v1.28.10
registry.k8s.io/kube-controller-manager:v1.28.10
registry.k8s.io/kube-scheduler:v1.28.10
registry.k8s.io/kube-proxy:v1.28.10
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.10.1
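The kubeadm init below pulls these from an Aliyun mirror instead. One wrinkle worth knowing (my observation, not from the post): the mirror hosts everything flat under google_containers, so a nested path like coredns/coredns maps to plain coredns. A small sketch of the renaming:

```shell
# Map upstream image names onto the Aliyun mirror used by kubeadm init below.
# Assumption: nested repo paths lose their directory prefix on the mirror.
mirror=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.28.10 kube-proxy:v1.28.10 pause:3.9 \
           etcd:3.5.12-0 coredns/coredns:v1.10.1; do
  echo "$mirror/${img##*/}"   # ${img##*/} strips any repo path prefix
done
```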

Bootstrap the control plane

  1. Start kubelet and enable it on boot:
sudo systemctl start kubelet
sudo systemctl enable kubelet

  2. Run kubeadm init (I switched to root for this):
kubeadm init \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--pod-network-cidr=10.10.0.0/16 \
--apiserver-advertise-address=10.129.196.8 \
--kubernetes-version=v1.28.10 \
--v=5

Parameter cheat-sheet (lifted from the reference I followed):

  • --image-repository — pull core images from Aliyun instead of Google.
  • --pod-network-cidr — pod CIDR; must match what your CNI (here, Flannel) expects.
  • --kubernetes-version — pin the cluster version.
  • --v=5 — verbose logs; see kubectl verbosity.
  • --apiserver-advertise-address — API server advertise IP; pick the right NIC if you have several. Many objects store this address—changing it later is painful.
  3. Configure kubeconfig as the log instructs. I stayed on root so later KubeEdge steps find $HOME/.kube/config at /root/.kube/config.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Install a CNI (Flannel)

Kubernetes networking plugins deserve their own study—I still need to dig deeper. I followed the same reference and chose Flannel.

Grab kube-flannel.yml from GitHub and edit the embedded config:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Change Network to match the pod CIDR from kubeadm init:

net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
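If you'd rather script the change than edit by hand, a sed replace over the manifest does it. Sketched here on a stand-in trimmed to the net-conf.json block (the real kube-flannel.yml is much larger):

```shell
# Patch the pod CIDR in a trimmed stand-in for kube-flannel.yml:
yml=$(mktemp)
cat > "$yml" <<'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
EOF
# '#' as the sed delimiter avoids escaping the slashes in the CIDRs
sed -i 's#"Network": "10.244.0.0/16"#"Network": "10.10.0.0/16"#' "$yml"
grep '"Network"' "$yml"
```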

Apply:

kubectl apply -f kube-flannel.yml

Or straight from upstream:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Version choice is fairly forgiving, but you do need the pod CIDR to line up.

Flannel + edge gotcha

While testing EdgeMesh later, edge pods got stuck in ContainerCreating, and edgecore logs complained that /run/flannel/subnet.env was missing. Copying that file from the cloud node to the edge node fixed scheduling. Worth remembering; I'll repeat this in the EdgeMesh post.
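For orientation, the file looks roughly like this. The subnet and MTU below are assumptions for this lab's 10.10.0.0/16 pod CIDR (vxlan typically costs 50 bytes of MTU on a 1500-byte link), so copy the cloud node's actual file rather than retyping values:

```shell
# Hypothetical /run/flannel/subnet.env for this lab; take the real values
# from the cloud node's copy of the file.
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```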

Images behind restrictive networks

Before I configured a proxy for containerd, Flannel images wouldn’t pull. I mirrored them via docker.aityp.com and patched the object with kubectl edit pod/deployment/daemonset … -n …. If a tag isn’t mirrored yet, their guide explains how to add one.

Remove the control-plane taint

Taints block workloads on the control plane; deploying cloudcore tripped me up until I removed it:

kubectl taint nodes cloud node-role.kubernetes.io/control-plane:NoSchedule-

The reference also mentions widening the NodePort range—I haven’t needed it yet. Adding workers wasn’t necessary for my lab either; see the same write-ups if you do.

Install Docker

The joint-inference example builds helmet-detection images via build_image.sh, so install Docker CE as well.

Tsinghua mirror guide: https://mirrors.tuna.tsinghua.edu.cn/help/docker-ce/

Configure proxies / registry mirrors as needed (search for recipes).
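One concrete recipe, with a placeholder proxy address (not from the post): a systemd drop-in routes the Docker daemon's registry traffic through a proxy while leaving local and cluster ranges direct. After writing it, run sudo systemctl daemon-reload && sudo systemctl restart docker.

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (proxy address and NO_PROXY ranges are placeholders; adjust to your setup)
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.10.0.0/16"
```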

References

  • Kubernetes on Debian (Chinese walkthrough, includes taint removal): demonlee.tech