
Deploying a Kubernetes cluster (< 1.23)

Quick-reference notes for standing up k8s at work; applies to Kubernetes versions below 1.23.

System initialization

Configure the yum repositories

# Add the base yum repositories

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

# Update installed packages
yum -y update

# Add the Aliyun Kubernetes yum repository
# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum clean all && yum makecache && yum repolist

Configure /etc/hosts

# hostnamectl set-hostname master01

# cat >> /etc/hosts << EOF
10.0.x.x master01
10.0.x.x node01
EOF

⚠ If /etc/hosts is not configured, the cluster preflight checks will show warnings like:

[WARNING Hostname]: hostname "master01" could not be reached    
[WARNING Hostname]: hostname "master01": lookup master01 on 8.8.8.8:53: no such host
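Before re-running the preflight checks, the entries can be verified locally. A small sketch, using the placeholder hostnames from above:

```shell
# Check that each cluster hostname resolves via /etc/hosts;
# "master01" and "node01" are the placeholder names used above
for h in master01 node01; do
  if getent hosts "$h" >/dev/null; then
    echo "$h resolves"
  else
    echo "$h is missing from /etc/hosts"
  fi
done
```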

Disable SELinux, swap, and firewalld

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g'  /etc/selinux/config
# sed -i "s@/dev/mapper/centos-swap@#/dev/mapper/centos-swap@g" /etc/fstab
# swapoff -a
# systemctl disable --now firewalld
# yum remove -y firewalld*
# systemctl disable --now NetworkManager
# systemctl disable --now dnsmasq

Raise the open-file limit and related settings

cat > /etc/security/limits.conf <<EOF
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock unlimited
* hard memlock unlimited
EOF
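These limits apply only to new login sessions. A quick sanity check after logging in again (the expected values assume the limits.conf written above):

```shell
# Report the current session's limits; on a node configured as above,
# both should print 1000000 in a fresh login shell
ulimit -n   # open files (nofile)
ulimit -u   # max user processes (nproc)
```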

Configure time synchronization

  • Server (here the master node serves as the time server)
# yum install chrony -y
# vim /etc/chrony.conf
server 127.127.1.0 iburst # sync with the local clock; comment out or remove the other server lines
allow 192.168.2.0/24 # allow NTP connections from the given host, subnet, or network to this machine acting as the clock server
local stratum 10 # serve local time without syncing to any upstream source; 10 is the stratum advertised
# systemctl restart chronyd && systemctl enable chronyd
  • Client (the node machines)
# yum install chrony -y
# vim /etc/chrony.conf
server <server IP> iburst
# systemctl restart chronyd && systemctl enable chronyd
# chronyc sources -v  # check sync status

Passwordless SSH between servers (for high availability)

# ssh-keygen -t rsa

# for i in k8s-master01 k8s-master02 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
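To confirm the keys were distributed correctly, a non-interactive login attempt to each node works well; BatchMode makes a missing key fail fast instead of prompting for a password (hostnames as in the loop above):

```shell
# Try key-based login to every node; any host still asking for a
# password shows up as "failed" rather than hanging on a prompt
for i in k8s-master01 k8s-master02 k8s-node01 k8s-node02; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 "$i" true 2>/dev/null; then
    echo "$i: ok"
  else
    echo "$i: passwordless login failed"
  fi
done
```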

Kernel upgrade

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt -y
grub2-set-default 0
grub2-mkconfig -o /etc/grub2.cfg
reboot  # reboot the server

Configure IPVS

Optional. After installation, kube-proxy uses iptables rules by default; it can be switched to IPVS rules instead.

  • Kernel < 4.19
# yum install  -y ipvsadm ipset sysstat conntrack libseccomp  

:> /etc/modules-load.d/ipvs.conf
module=(
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
)
for kernel_module in ${module[@]};do
/sbin/modinfo -F filename $kernel_module |& grep -qv ERROR && echo $kernel_module >> /etc/modules-load.d/ipvs.conf || :
done
  • Kernel >= 4.19
:> /etc/modules-load.d/ipvs.conf
module=(
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
)
for kernel_module in ${module[@]};do
/sbin/modinfo -F filename $kernel_module |& grep -qv ERROR && echo $kernel_module >> /etc/modules-load.d/ipvs.conf || :
done
  • Load the ipvs modules
systemctl daemon-reload
systemctl enable --now systemd-modules-load.service
  • Check whether ipvs is loaded
#  lsmod | grep ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 6 xt_conntrack,nf_nat,xt_state,ipt_MASQUERADE,xt_CT,ip_vs
nf_defrag_ipv6 20480 4 nf_conntrack,xt_socket,xt_TPROXY,ip_vs
libcrc32c 16384 3 nf_conntrack,nf_nat,ip_vs

The dummy0 and kube-ipvs0 interfaces: if IPVS was enabled when the k8s cluster was installed, these two interfaces will be present (Service IPs are bound to kube-ipvs0).
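Loading the modules only makes IPVS available to the kernel. To actually switch kube-proxy from iptables to IPVS mode on a running cluster, the usual approach (a sketch, not part of the original steps) is to change the mode field in the kube-proxy ConfigMap and recreate its pods:

```shell
# Set mode: "ipvs" in the kube-proxy ConfigMap
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# ipvsadm -Ln should then list virtual servers for each Service
```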

Configure kernel parameters

# System tuning
cat > /etc/sysctl.d/k8s_better.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/k8s_better.conf

Notes:

net.bridge.bridge-nf-call-iptables: have bridged IPv4 traffic traverse iptables rules
net.ipv4.ip_forward: enable IP forwarding
net.bridge.bridge-nf-call-ip6tables: have bridged IPv6 traffic traverse ip6tables rules
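A quick spot-check that the key values are active, reading /proc directly so it works on any node:

```shell
# 1 means IPv4 forwarding is enabled; the bridge-nf entries only exist
# once the br_netfilter module is loaded
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded yet"
```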

Deploy Docker

# yum install -y docker-ce-19.03.8
# systemctl start docker && systemctl enable docker
# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://xcg41ct3.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl daemon-reload && systemctl restart docker.service

Install kubelet, kubectl, and kubeadm

# yum install  -y kubelet-1.20.6
# yum install -y kubectl-1.20.6
# yum install -y kubeadm-1.20.6
# systemctl start kubelet && systemctl enable kubelet
# systemctl status kubelet

Set up Tab completion

  • kubectl command completion
# kubectl completion bash > /etc/bash_completion.d/kubectl
  • kubeadm command completion
# kubeadm  completion  bash >  /etc/bash_completion.d/kubeadm

Install the master node

Check the environment

Verify that the host environment meets the cluster requirements, and troubleshoot any failures one by one based on the output.

# kubeadm init --dry-run

Initialize the master node

kubeadm config print init-defaults > init-default.yaml

Before initializing, make sure the kubelet service is started and enabled at boot.

# kubeadm init --kubernetes-version=1.20.6 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=10.0.x.x \
--image-repository registry.aliyuncs.com/google_containers
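Once init succeeds, kubeadm's output includes the standard step for making kubectl usable as a regular user (paths below are the kubeadm defaults):

```shell
# Copy the admin kubeconfig into the current user's home so kubectl
# can authenticate against the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```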

Node deployment

  • Run on the master node
# kubeadm token create --print-join-command
  • Join the node to the cluster
kubeadm join 192.168.2.11:6443 --token xxxx     --discovery-token-ca-cert-hash xxxx

Install add-ons

Install the Calico network plugin

  • Download the manifest
# wget  https://docs.projectcalico.org/manifests/calico.yaml  --no-check-certificate
  • Apply the manifest
# kubectl apply  -f  calico.yaml
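After applying the manifest, a reasonable final check (run on the master; exact pod names will differ per cluster) is to watch the nodes reach Ready and the Calico pods come up:

```shell
# Nodes report Ready once the CNI plugin is functional
kubectl get nodes
# calico-node and calico-kube-controllers pods should reach Running
kubectl get pods -n kube-system -o wide
```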