Kubernetes supports two provisioning modes:
- Static: the cluster administrator creates a number of PVs by hand, specifying the characteristics of the backend storage in each PV definition.
- Dynamic: the administrator does not create PVs by hand. Instead, a StorageClass describes the backend storage and marks it as a particular class. A PVC then declares which class it wants, and the system automatically creates a PV and binds it to the PVC. A PVC may declare its class as `""`, which means the PVC opts out of dynamic provisioning.
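As a sketch of the difference, the two claims below select the two modes via `storageClassName` (the class name `managed-nfs-storage` is the one created later in this post; sizes and access modes are illustrative):

```yaml
# Dynamic: ask the named StorageClass to provision a PV automatically.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
---
# Static only: an empty class name disables dynamic provisioning, so this
# claim can bind only to a matching pre-created PV.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-claim
spec:
  storageClassName: ""
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Mi
```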
Plain YAML deployment:
https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy
Helm deployment:
https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner
Setting up NFS:
/posts/3b526fbd.html
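The manifests below assume an NFS server at 10.168.4.6 exporting /data/k8s. On that server, the export might look like this (the subnet and mount options are placeholders; adjust them for your environment):

```
# /etc/exports on 10.168.4.6 (subnet and options are illustrative)
/data/k8s 10.168.4.0/24(rw,sync,no_root_squash)
```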
Dynamic NFS volumes
Use the nfs-client-provisioner application to turn an NFS server into a persistent-storage backend for Kubernetes that provisions PVs dynamically.
```shell
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/nfs-client/deploy/
```
rbac.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
```shell
kubectl apply -f rbac.yaml
```
deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.168.4.6
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.168.4.6
            path: /data/k8s
```
```shell
kubectl apply -f deployment.yaml
```
class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```
```shell
kubectl apply -f class.yaml
```
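It may help to know what the provisioner actually does on the NFS export. According to the project README, each dynamically provisioned PV becomes one subdirectory of the share, and with `archiveOnDelete: "true"` the directory is renamed with an `archived-` prefix instead of being deleted. The sketch below models that naming convention; it is an assumption based on the docs, not output read from a live cluster:

```python
# Sketch of the on-disk layout produced by nfs-client-provisioner:
# one subdirectory of the NFS export per provisioned PV.

def provisioned_dir(namespace: str, pvc_name: str, pv_name: str) -> str:
    """Subdirectory created on the NFS export for one provisioned PV."""
    return f"{namespace}-{pvc_name}-{pv_name}"

def archived_dir(dir_name: str) -> str:
    """Directory name after PVC deletion when archiveOnDelete is "true"."""
    return f"archived-{dir_name}"

d = provisioned_dir("default", "test-claim", "pvc-a1b2c3")
print(d)                # default-test-claim-pvc-a1b2c3
print(archived_dir(d))  # archived-default-test-claim-pvc-a1b2c3
```

With `archiveOnDelete: "false"` as in the class.yaml above, the directory is removed together with the PV.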
Notes:
- The StorageClass's `provisioner: fuseim.pri/ifs` must match the nfs-client deployment's `PROVISIONER_NAME` environment variable; the value itself can be customized.
- To make `managed-nfs-storage` the cluster's default StorageClass, update it with `kubectl patch`:
```shell
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
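The patch simply sets the well-known default-class annotation on the StorageClass object; after patching, the object looks roughly like this (sketch, fields abbreviated):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```

PVCs that omit `storageClassName` entirely will then be served by this class.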
pvc.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
pod.yaml
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
```
Static NFS volumes
This approach does not depend on nfs-client-provisioner.
pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    pv: nfs-pv
spec:
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/k8s/test
    server: 10.168.4.6
```
pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: nfs
  #selector:
  #  matchLabels:
  #    pv: nfs-pv
```
pod.yaml
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: nfs-pv
  volumes:
    - name: nfs-pv
      persistentVolumeClaim:
        claimName: nfs-pvc
```
Troubleshooting
When using NFS with a StorageClass for dynamic provisioning, the RBAC objects, the nfs-deployment, and the nfs-storageclass were all created and running normally, but newly created PVCs stayed stuck in the Pending state. `kubectl describe pvc test-claim` showed the following event:
```
waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
```
The nfs-client-provisioner logs showed:
```
E0426 02:06:12.590737       1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
```
The cause is `selfLink`: the field exists in Kubernetes clusters before v1.20 but is removed from v1.20 onward, which breaks this older provisioner. As a workaround, re-enable it by adding `--feature-gates=RemoveSelfLink=false` to the kube-apiserver arguments in /etc/kubernetes/manifests/kube-apiserver.yaml:
```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
```
On a kubeadm-deployed cluster the change takes effect automatically: kube-apiserver runs as a static Pod, and the kubelet watches its manifest file for changes. Once /etc/kubernetes/manifests/kube-apiserver.yaml is modified, the kubelet terminates the existing kube-apiserver-{nodename} Pod and starts a replacement using the new arguments. If the cluster has multiple master nodes, the file must be edited on every master, with identical arguments on each.
If the api-server fails to start, apply the manifest again:

```shell
kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
```
|
Reference: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25