
Installing Kubernetes 1.12.x with kubeadm

The basic installation procedure is the same as for other versions; the only change is that the image names no longer carry the amd64 suffix.

Installation prerequisites

Verify that the MAC address and product_uuid are unique on every node

ip link    # or ifconfig -a, to check the MAC addresses of the network interfaces
sudo cat /sys/class/dmi/id/product_uuid

Check that the required ports are open

Note: on one customer-provided server, the cluster could not be operated after installing k8s; only after reading the log files did it turn out that the required ports had been blocked.

Master node:
TCP 6443        Kubernetes API server
TCP 2379-2380   etcd server client API
TCP 10250       Kubelet API
TCP 10251       kube-scheduler
TCP 10252       kube-controller-manager

Worker node:
TCP 10250       Kubelet API
TCP 30000-32767 NodePort Services
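
A quick way to confirm a port is reachable is to probe it from another machine; a minimal sketch using bash's built-in /dev/tcp (no extra packages needed; 172.16.0.17 is the master address used later in this guide):

# prints "6443 open" if the API server port is reachable from this host
timeout 3 bash -c '</dev/tcp/172.16.0.17/6443' && echo "6443 open" || echo "6443 blocked"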

Disable SELinux and firewalld

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
# allow containers to access the host filesystem, e.g. as required by pod networks
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
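
Two quick checks to confirm the changes took effect:

getenforce                      # should report Permissive now (Disabled after a reboot)
systemctl is-active firewalld   # should report inactive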

Disable swap

If you are using a cloud provider such as Alibaba Cloud, Tencent Cloud or Huawei Cloud, this step can be skipped; their instances disable swap by default.

swapoff -a
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/rhel-swap/\#\/dev\/mapper\/rhel-swap/g" /etc/fstab
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
mount -a
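
To verify swap is really off:

free -m           # the Swap line should show 0 total
cat /proc/swaps   # should list no swap devices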

Route bridged packets through iptables

This prevents traffic from being routed incorrectly because iptables is bypassed for bridged traffic.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
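
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter module is probably not loaded yet; a sketch of loading it and re-checking the values:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward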

Synchronize the system time

If the clocks on the nodes differ, certificate validation can fail with: cluster-info: x509: certificate has expired or is not yet valid

yum install -y ntpdate
ntpdate -u ntp1.aliyun.com
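
ntpdate only does a one-off synchronization. To keep the clocks in sync continuously you could enable chronyd instead, which is available on CentOS 7 (a sketch; the choice of chrony over ntpd is an assumption):

yum install -y chrony
systemctl enable chronyd && systemctl start chronyd
chronyc sources    # verify the configured time sources are reachable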

Enable IPVS

kube-proxy's IPVS mode is GA as of Kubernetes 1.11; to use it with 1.12 the IPVS kernel modules must be loaded first.

yum install -y ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack_ipv4"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
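
Once the cluster is running, you can check whether kube-proxy actually picked up IPVS mode (a sketch; run on the master after kubeadm init below, and the k8s-app=kube-proxy label is the one kubeadm normally applies):

ipvsadm -Ln                                                           # should list virtual servers for cluster services
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier   # "Using ipvs Proxier" vs "Using iptables Proxier"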

Install Docker CE

The official documentation currently lists Docker 18.06 as supported; Kubernetes 1.12.3 does not yet support the latest Docker 18.09.

# Remove any previously installed Docker packages
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
# Install Docker CE
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce

# Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://bv55mwyn.mirror.aliyuncs.com"]
}
EOF

# Enable and start Docker
systemctl enable docker && systemctl start docker

Install kubeadm

# Add the Alibaba Cloud Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.12.3 kubeadm-1.12.3 kubectl-1.12.3

# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
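
A quick sanity check that the matching 1.12.3 binaries were installed:

kubeadm version -o short           # expect v1.12.3
kubelet --version                  # expect Kubernetes v1.12.3
kubectl version --client --short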

Pull the required images

#!/bin/bash
# Pull the Google images from an Alibaba Cloud mirror repository and re-tag them as k8s.gcr.io
# If you install --kubernetes-version=v1.12.3, set KUBE_VERSION=v1.12.3 accordingly

set -e

KUBE_VERSION=v1.12.3
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.2.24
CORE_DNS_VERSION=1.2.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-shenzhen.aliyuncs.com/hyman0603

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in "${images[@]}"; do
    docker pull $ALIYUN_URL/$imageName
    docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
    docker rmi $ALIYUN_URL/$imageName
done
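
After the script finishes, confirm the re-tagged images are present locally:

docker images | grep k8s.gcr.io    # should show kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns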

Install Kubernetes 1.12.3

Since kubernetes-version=v1.12.3 was chosen, the image versions pulled above must match it.

kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.0.17
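
When kubeadm init succeeds it prints the follow-up steps; the usual ones are configuring kubectl for your user and noting down the join command for the worker nodes (repeated here for convenience):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# also save the "kubeadm join ..." command printed at the end; it is needed to add worker nodes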

Reset Kubernetes (if you need to start over)

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Install a Pod network (flannel)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

If a node has more than one network interface, see [flannel issues 39701](https://github.com/kubernetes/kubernetes/issues/39701): you currently need to pass the --iface parameter in kube-flannel.yml to name the host's internal interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments, for example:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
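
After applying the manifest, confirm flannel is running and the node becomes Ready (the app=flannel label is assumed from the stock kube-flannel.yml):

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes    # the node should move to Ready once the network is up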

Allow the master node to run workloads

kubectl describe node node1 | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted

Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$

nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
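
The kubectl run above created a deployment named curl (as the deprecation warning indicates); once the test is done it can be removed:

kubectl delete deployment curl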

Remove a node from the cluster

kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   26m   v1.12.0
node2   Ready    <none>   2m    v1.12.0

Run on the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

Run on node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

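
If the node needs to rejoin the cluster later, generate a fresh join command on the master and run it on the node; a sketch:

# on the master: print a join command with a new token
kubeadm token create --print-join-command
# on the node: run the printed "kubeadm join ..." command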