Deploying Nacos on Kubernetes, Part 2: NFS

1. Download nacos-k8s on the Linux server

First install git:

yum install -y git

Then clone the deployment repository:

git clone https://github.com/nacos-group/nacos-k8s.git
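The manifests used in the rest of this article are adapted from this repository. Assuming the upstream repository layout at the time of writing (the directory name is taken from the repo and may change), the NFS provisioner manifests live under deploy/nfs:

```shell
cd nacos-k8s
# The rbac.yaml and deployment.yaml edited below live here
ls deploy/nfs
```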

2. Deploy NFS

 2.1 Create the RBAC roles (rbac.yaml)

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the manifest and verify the results:

kubectl create -f rbac.yaml

[root@master nfs]# kubectl get role
NAME                                    AGE
leader-locking-nfs-client-provisioner   8m40s

[root@master nfs]# kubectl get clusterrole | grep nfs
nfs-client-provisioner-runner           10m

 2.2 Create the ServiceAccount and deploy the NFS-Client Provisioner

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 172.17.79.3
        - name: NFS_PATH
          value: /data/nfs-share
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.17.79.3
          path: /data/nfs-share
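Before applying the Deployment, it is worth confirming that every worker node can actually reach the export, since both errors shown later are mount failures. A minimal check to run on each node, assuming a CentOS-style host and the server address and path from the manifest above:

```shell
# Install the NFS client tools (provides mount.nfs and showmount)
yum install -y nfs-utils

# The server's export list should include /data/nfs-share
showmount -e 172.17.79.3

# Try a manual mount, then clean up
mkdir -p /mnt/nfs-test
mount -t nfs 172.17.79.3:/data/nfs-share /mnt/nfs-test
umount /mnt/nfs-test
```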

Apply the manifest and check the result:

kubectl apply -f deployment.yaml

[root@master nfs]# kubectl get pods | grep nfs
nfs-client-provisioner-594f778474-whhb5   0/1   ContainerCreating   0   12m
[root@master nfs]# kubectl describe pods nfs-client-provisioner
Name: nfs-client-provisioner-594f778474-whhb5

First error:

mount.nfs: No route to host
Warning FailedMount 100s (x5 over 10m) kubelet, node2 Unable to mount volumes for pod "nfs-client-provisioner-594f778474-whhb5_default(56aef93a-9d31-11e9-a4c4-00163e069f44)": timeout expired waiting for volumes to attach or mount for pod "default"/"nfs-client-provisioner-594f778474-whhb5". list of unmounted volumes=[nfs-client-root]. list of unattached volumes=[nfs-client-root nfs-client-provisioner-token-8dcrx]
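"No route to host" at mount time is a network problem between the node and the NFS server rather than a Kubernetes problem: a wrong server address, or a firewall blocking the NFS ports. If firewalld is running on the NFS server, one possible fix (a sketch using firewalld's predefined service names) is to open the NFS-related services:

```shell
# On the NFS server: allow rpcbind (port 111), mountd, and nfs (port 2049)
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
```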


Fix attempt: change the NFS server IP address in deployment.yaml (both the NFS_SERVER environment variable and the volume's server field) to the internal IP of one of the cluster nodes.

Then re-apply the manifest: kubectl apply -f deployment.yaml

Second error:

mount.nfs: access denied by server while mounting 172.19.68.8:/data/nfs-share
Warning FailedMount 23s kubelet, node2 Unable to mount volumes for pod "nfs-client-provisioner-5d6996447d-kdp7j_default(cd2c7cc7-9d33-11e9-a4c4-00163e069f44)": timeout expired waiting for volumes to attach or mount for pod "default"/"nfs-client-provisioner-5d6996447d-kdp7j". list of unmounted volumes=[nfs-client-root]. list of unattached volumes=[nfs-client-root nfs-client-provisioner-token-w5txr]
Warning FailedMount 18s kubelet, node2 (combined from similar events): MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/cd2c7cc7-9d33-11e9-a4c4-00163e069f44/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 172.19.68.8:/data/nfs-share /var/lib/kubelet/pods/cd2c7cc7-9d33-11e9-a4c4-00163e069f44/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-12037.scope.
mount.nfs: access denied by server while mounting 172.19.68.8:/data/nfs-share
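"access denied by server" usually means the client host is not permitted by the server's /etc/exports. A hedged sketch of a permissive export entry for a test cluster (the wildcard client and no_root_squash are assumptions for a lab setup, not a production recommendation):

```shell
# On the NFS server: allow clients to mount the share read-write
echo '/data/nfs-share *(rw,sync,no_root_squash)' >> /etc/exports

# Re-read /etc/exports without restarting the service, then verify
exportfs -ra
exportfs -v
```

After adjusting the exports, re-applying the Deployment should let the volume mount and the provisioner pod reach Running.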

Reprinted from www.cnblogs.com/mutong1228/p/11124590.html