Kubernetes NFS-Client Provisioner Deployment
The point of deploying the NFS-Client Provisioner is to have PVs created automatically to satisfy PVC requests. As a prerequisite, you must have your own NFS server, and the k8s cluster must be able to reach it.
Then apply the following YAML files on the k8s master:
1 RBAC.yaml
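The RBAC manifest itself is missing from this post. The sketch below follows the upstream nfs-client example: a ServiceAccount (its name must match the Deployment's `serviceAccountName`), a ClusterRole granting the permissions the provisioner needs on PVs, PVCs, StorageClasses, and events, and a binding between them. Names and the `default` namespace are taken from the Deployment below; adjust them if your setup differs.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```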
2 Deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: zbb.test/nfs
            - name: NFS_SERVER
              value: 10.0.0.32
            - name: NFS_PATH
              value: /netshare
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.32
            path: /netshare
```

Note: change the NFS server parameters under `env` and `volumes` to match your environment.
3 storageclass.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: zbb.test/nfs # or choose another name; must match the Deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```

Note: the `provisioner` in the StorageClass must match the one defined in the Deployment!
With that, the deployment is complete. A test example follows:
4 test-pvc.yaml
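The PVC manifest is also missing from this post. A sketch based on the upstream example: the claim name `test-claim` is what the test Pod below mounts, and the storage-class annotation points at the `managed-nfs-storage` class defined above. The 1Mi request size is an arbitrary choice for testing.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```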
5 test-pod.yaml
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
```

At this point you can check the result (a `SUCCESS` file should appear under the NFS export path), and the test is done.
From here on, PVs are dynamically provisioned from NFS; no more manual creation.
Test platform: Kubernetes 1.16.3
OS: CentOS Linux release 7.7.1908 (Core)
Reference: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client