Official reference documentation:
https://kubernetes.io/docs/setup/production-environment/
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Node initialization

Configure the hostnames

```shell
hostnamectl set-hostname master    # on the master node
hostnamectl set-hostname worker1   # on worker1
hostnamectl set-hostname worker2   # on worker2
```
Configure hosts resolution

```shell
cat >> /etc/hosts << EOF
10.0.0.1 master
10.0.0.2 worker1
10.0.0.3 worker2
EOF
```
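The append-with-heredoc pattern above can be sketched on a scratch file first (the path `/tmp/hosts.demo` is hypothetical), which also shows how to confirm the entries landed:

```shell
# Demo of the same heredoc append pattern on a scratch copy (hypothetical path).
demo=/tmp/hosts.demo
echo "127.0.0.1 localhost" > "$demo"
cat >> "$demo" << EOF
10.0.0.1 master
10.0.0.2 worker1
10.0.0.3 worker2
EOF
# All three node entries should now be present:
grep -c 'master\|worker' "$demo"   # -> 3
```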
Update the system

```shell
sudo apt update
sudo apt -y full-upgrade
```
Disable swap

```shell
swapoff -a                              # turn swap off now
cp /etc/fstab{,.bak}                    # back up fstab first
sed -e '/swap/ s/^#*/#/' -i /etc/fstab  # comment out swap entries so the change persists
swapon --show                           # no output means swap is off
```
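The sed expression deserves a note: `/swap/` selects lines that mention swap, and `s/^#*/#/` replaces any run of leading `#` characters (including none) with a single `#`, so it comments lines out and is safe to run twice. A sketch on a scratch file (path and contents hypothetical):

```shell
# Two sample fstab lines; only the swap line should be commented out.
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > /tmp/fstab.demo
sed -e '/swap/ s/^#*/#/' -i /tmp/fstab.demo
cat /tmp/fstab.demo
# UUID=abcd / ext4 defaults 0 1
# #/swapfile none swap sw 0 0
```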
Confirm time synchronization

```shell
apt install -y chrony
systemctl enable --now chrony
chronyc sources

# Confirm the clock is synchronized
timedatectl
```
Load the IPVS kernel modules
Reference: https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs
On Linux kernel 4.19 and later, use nf_conntrack in place of nf_conntrack_ipv4.

```shell
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
# Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Confirm the kernel modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack

# Install ipset and ipvsadm
apt install -y ipset ipvsadm
```
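A quick way to spot a module that failed to load is to compare the required list against the first column of `lsmod`. The sketch below uses a `list_modules` stand-in function (an assumption, so the example is deterministic); on a real node, substitute `lsmod` itself:

```shell
# list_modules is a stand-in for `lsmod` with hypothetical sample output.
list_modules() {
  printf 'Module                  Size  Used by\n'
  printf 'ip_vs_rr               16384  0\n'
  printf 'ip_vs                 155648  1\n'
  printf 'nf_conntrack          139264  2\n'
}
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  if list_modules | awk '{print $1}' | grep -qx "$m"; then
    echo "loaded:  $m"
  else
    echo "missing: $m"
  fi
done
```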
Install containerd
Prerequisites for the containerd container runtime:

```shell
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
```
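After `sysctl --system` runs, each key can be read back with `sysctl -n <key>`. As a self-contained sketch, the check below validates the conf file contents instead, writing to a hypothetical `/tmp` path rather than `/etc/sysctl.d`:

```shell
# Write the same three keys to a scratch file (hypothetical path) and verify.
conf=/tmp/99-kubernetes-cri.demo.conf
cat > "$conf" <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
for key in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables; do
  grep -Eq "^$key\s*=\s*1" "$conf" && echo "$key = 1 (present)"
done
```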
Install the containerd runtime. The nerdctl "full" bundle ships containerd together with runc, the CNI plugins, and the nerdctl CLI.
Download: https://github.com/containerd/nerdctl/releases

```shell
wget https://github.com/containerd/nerdctl/releases/download/v0.18.0/nerdctl-full-0.18.0-linux-amd64.tar.gz
tar Cxzvvf /usr/local nerdctl-full-0.18.0-linux-amd64.tar.gz
```
Create the containerd configuration file

```shell
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```
Configure the systemd cgroup driver

```shell
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
```
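The effect of that substitution can be sketched on a minimal TOML fragment (a hypothetical scratch file), which makes it easy to confirm the flag flipped before restarting containerd:

```shell
# Minimal stand-in for the relevant section of config.toml (hypothetical path).
cat > /tmp/config.toml.demo <<EOF
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /tmp/config.toml.demo
grep -o 'SystemdCgroup = true' /tmp/config.toml.demo
# SystemdCgroup = true
```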
Change the sandbox (pause) image to a reachable mirror

```shell
sed -i 's#k8s.gcr.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.6#g' /etc/containerd/config.toml
```
Start the containerd service

```shell
systemctl enable --now containerd
```
Install kubeadm
Add the Kubernetes apt source, using the Aliyun mirror:

```shell
apt-get install -y apt-transport-https curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
```
Install kubeadm, kubelet, and kubectl

```shell
# List the installable versions
sudo apt-get update
apt-cache madison kubectl | more

# Install
sudo apt-get install -y kubelet=1.23.5-00 kubeadm=1.23.5-00 kubectl=1.23.5-00

# Pin the versions so routine upgrades don't move them
sudo apt-mark hold kubelet kubeadm kubectl
```
Enable the kubelet service (it will restart in a crash loop until kubeadm init or kubeadm join supplies its configuration; that is expected)

```shell
systemctl enable --now kubelet
```
Deploy the master node

Initialize the master node:

```shell
kubeadm init --kubernetes-version=v1.23.5 \
  --apiserver-advertise-address=10.0.0.1 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=172.16.0.0/16
```
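`kubeadm init` prints follow-up instructions; the usual next step (as in the upstream kubeadm documentation) is to copy the admin kubeconfig into place so kubectl works for your user. Sketched here as a parameterized function so the copy logic can be exercised on scratch files; on the master node, call it with the defaults as root:

```shell
# Copy the admin kubeconfig into ~/.kube/config.
# Arguments (optional): source kubeconfig, target .kube directory.
setup_kubectl() {
  admin_conf="${1:-/etc/kubernetes/admin.conf}"
  kube_dir="${2:-$HOME/.kube}"
  mkdir -p "$kube_dir"
  cp "$admin_conf" "$kube_dir/config"
  chown "$(id -u):$(id -g)" "$kube_dir/config"
}
# On the master node (requires root or sudo):
#   setup_kubectl
#   kubectl get nodes   # the master appears, NotReady until a CNI is installed
```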
Install the Calico network plugin
Reference: https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart

Download the YAML manifest and apply it; the pod CIDR does not need to be edited, Calico detects it automatically:

```shell
wget https://docs.projectcalico.org/archive/v3.22/manifests/calico.yaml
kubectl apply -f calico.yaml
```
Join worker nodes to the cluster
If the join command printed during master initialization was not saved, regenerate it with:

```shell
kubeadm token create --print-join-command --ttl 0
```

Join a worker to the cluster:

```shell
kubeadm join 10.0.0.1:6443 --token xxxxx --discovery-token-ca-cert-hash sha256:xxxxxxx
```
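The `--discovery-token-ca-cert-hash` value can also be recomputed by hand from the cluster CA certificate; the openssl pipeline below is the one shown in the kubeadm documentation. The self-signed certificate generated here is a hypothetical stand-in for `/etc/kubernetes/pki/ca.crt` so the example is self-contained:

```shell
# Generate a throwaway self-signed cert as a stand-in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
# SHA-256 of the DER-encoded public key; on a real master, point this
# at /etc/kubernetes/pki/ca.crt instead.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```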
Enable IPVS mode
Run the following on the master node.
Edit the kube-proxy ConfigMap to set mode: "ipvs", then force the kube-proxy pods to restart:

```shell
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/g' | kubectl replace -f -
kubectl -n kube-system patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
```
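The second command works because any change under `spec.template` triggers a rolling restart of the DaemonSet's pods; stamping the current epoch time into an annotation is a cheap way to make such a change. The patch string it builds looks like this:

```shell
# Build the same JSON patch with a current-timestamp annotation value.
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
echo "$patch"
# e.g. {"spec":{"template":{"metadata":{"annotations":{"date":"1650000000"}}}}}
```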
Verify the working mode:

```shell
curl 127.0.0.1:10249/proxyMode
# ipvs
```
View the proxy rules (for IPVS, list them with ipvsadm -Ln; ipvsadm was installed earlier).
Scheduling pods on the master node
By default, for security reasons, the cluster does not schedule pods on the master node.

```shell
# The master node carries a taint by default
kubectl describe nodes | grep Taints
# Taints:             node-role.kubernetes.io/master:NoSchedule

# Remove the taint
kubectl taint nodes master node-role.kubernetes.io/master-
# node/master untainted

# Check again; the Taints field is now <none>
kubectl describe nodes | grep Taints
# Taints:             <none>

# To restore the master-only state:
kubectl taint node master node-role.kubernetes.io/master=:NoSchedule
```
Clean up the cluster

```shell
kubeadm reset -f
systemctl restart containerd
rm -rf /etc/cni/net.d/*
rm -rf /var/lib/calico
ip link delete vxlan.calico
ip link delete kube-ipvs0
```
Other optional components
- Install the Kubernetes Dashboard
- Install Metrics Server (to inspect pod and node resource usage)
- Deploy Prometheus / Grafana monitoring
- Deploy an EFK or Grafana Loki logging stack
- Deploy persistent storage: NFS, Rook-Ceph, OpenEBS, Longhorn, etc.
- Install an Ingress controller: the official Ingress-NGINX, Traefik, Apache APISIX, etc.
- Install a load-balancer add-on: MetalLB, OpenELB, etc.