kubernetes-1: install locally stored PV and PVC

Volumes can be divided into the following three categories by usage:
(1) Volume: local and network data volumes
(2) Persistent Volume (PV)
(3) Dynamically provisioned Persistent Volumes

Note: a Volume in Kubernetes provides the ability to mount external storage into a container.

Note: a Pod must declare the volume source (spec.volumes) and the mount point (spec.containers[].volumeMounts) before it can use a volume.

1 k8s Volume: local storage and network storage

Local and network data volumes in Volume:
(1) Local data volume: emptyDir, hostPath.
(2) Network data volume: NFS.

1.1 emptyDir (empty directory)

Creates an empty volume and mounts it into the Pod's containers. When the Pod is deleted, the volume is deleted with it. Application scenario: data sharing between containers running in the same Pod.
#docker pull busybox
#docker pull centos
#docker pull library/bash:4.4.23
File empty.yaml creates two containers, one writing and one reading, to test whether the data is shared:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: write
    image: library/bash:4.4.23
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","while true; do echo 'hello' >> /data/hello.txt; sleep 2; done;"]
    volumeMounts:
      # Mount the volume named data at /data in the container
      - name: data
        mountPath: /data

  - name: read
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","tail -f /data/hello.txt"]
    volumeMounts:
      # Mount the volume named data at /data in the container
      - name: data
        mountPath: /data

  # Define the volume source
  volumes:
  # Volume name
  - name: data
    emptyDir: {}

#kubectl apply -f empty.yaml

A container started from this image has no resident process, so it would exit as soon as it started and be restarted continuously. That is why the write container runs an endless loop:
command: ["sh","-c","while true; do echo 'hello' >> /data/hello.txt; sleep 2; done;"]

#kubectl logs my-pod -c read
'hello' is written every two seconds, so the printed output can be seen growing continuously.
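Besides the default node-disk backing used above, an emptyDir can also be backed by RAM (tmpfs) by setting medium: Memory. A minimal sketch of the volumes section only (this variant is not used in the example above; the volume name cache is illustrative):

```yaml
  volumes:
  - name: cache
    # tmpfs-backed emptyDir: faster, but data counts against the
    # container's memory limit and is lost on Pod deletion or reboot.
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi
```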

1.2 hostPath (local mount)

Mounts a file or directory from the node's filesystem into the Pod's containers. Application scenario: containers in the Pod need to access host files.
File hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - sleep 36000
    # Mount point
    volumeMounts:
    # Mount the volume named data at /data in the container
    - name: data
      mountPath: /data
  volumes:
  - name: data
    # Volume source: the host's /tmp directory
    hostPath:
      path: /tmp
      type: Directory

#kubectl apply -f hostpath.yaml
Check which node the pod is scheduled to
#kubectl get pods -n default -o wide
The data under /tmp on the host is visible in the /data directory of the Pod.
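hostPath supports other type values besides Directory. For example, DirectoryOrCreate creates the host path if it does not yet exist instead of failing the mount, and File mounts a single host file. A sketch of the volume source only (the path /var/log/myapp is illustrative):

```yaml
  volumes:
  - name: logs
    hostPath:
      # Created on the host (mode 0755) if missing; with type: Directory
      # the Pod would fail to start instead.
      path: /var/log/myapp
      type: DirectoryOrCreate
```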

1.3 NFS (network file-sharing storage volume)

192.168.0.165 mymaster
192.168.0.163 myworker
(1) Install NFS (the NFS service is required and must be installed on every machine):

#yum install -y nfs-utils    # Install the NFS service
#yum install -y rpcbind      # Install the RPC service
# Note: start the rpc service first, then the nfs service, on every machine.
#systemctl start rpcbind     # Start the rpc service first
#systemctl enable rpcbind    # Enable it at boot
#systemctl start nfs-server  # Start the nfs service
#systemctl enable nfs-server # Enable it at boot

(2) Use mymaster as the NFS server, exporting the /some/path directory with read and write permissions:

#cat /etc/exports
/some/path 192.168.0.0/24(rw,no_root_squash)

#docker pull nginx
After editing /etc/exports, reload the export list with exportfs -rav (or restart nfs-server). On the client side there is no need to mount the share manually; k8s mounts it automatically.
(3) File nfs.yaml

apiVersion: apps/v1  # apps/v1beta1 was removed in Kubernetes 1.16
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        # Mount the volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      # Define a volume named wwwroot with type nfs
      volumes:
      - name: wwwroot
        nfs:
          server: mymaster
          path: /some/path

#kubectl apply -f nfs.yaml
#kubectl exec -it nginx-deployment-5f58c6b8f9-8x4tb -- sh
Check whether there is content in the exported directory:

#cat /some/path/aa.html
my name is lucy
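Instead of referencing nfs: directly in the Pod template, the same NFS export can be wrapped in a static PersistentVolume and consumed through a claim. A sketch (the names nfs-pv and nfs-pvc are illustrative; server and path match the export above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  # NFS supports mounting from multiple nodes simultaneously
  accessModes:
  - ReadWriteMany
  nfs:
    server: mymaster
    path: /some/path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  # Empty string prevents a default StorageClass from provisioning
  # a new volume; the claim binds to the static PV above instead.
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```

A Pod then references the claim via spec.volumes[].persistentVolumeClaim.claimName instead of an nfs: source.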


2 Install locally stored PV and PVC

Image rancher/local-path-provisioner:v0.0.11
#docker load -i local-path-provisioner.tar
File local-path-storage.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["endpoints", "persistentvolumes", "pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.11
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations: # Make this the default StorageClass
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
        {
                "nodePathMap":[
                {
                        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                        "paths":["/opt/local-path-provisioner"]
                }
                ]
        }

#kubectl apply -f local-path-storage.yaml
View the default StorageClass:
#kubectl get storageclass
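With the provisioner and StorageClass installed, a PVC can request storage dynamically; because volumeBindingMode is WaitForFirstConsumer, the PV is only created once a Pod using the claim is scheduled. A sketch (the names local-path-pvc and pvc-test are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
  - ReadWriteOnce
  # May be omitted, since local-path was annotated as the default
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-path-pvc
```

After kubectl apply, the provisioner creates the backing directory under /opt/local-path-provisioner on the node where pvc-test is scheduled, per the nodePathMap configuration above.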

Origin blog.csdn.net/qq_20466211/article/details/113121310