StatefulSet k8s study notes: why is the application always in Pending state


Why does the k8s PV not bind to the PVC?


 A stateful application was deployed with a StatefulSet, and its pods stayed in Pending state. Before explaining why, some background on StatefulSet: in k8s, Deployments generally manage stateless applications, while StatefulSets manage stateful applications such as Redis, MySQL, and ZooKeeper clusters. These applications start and stop in a strict order.


One, StatefulSet components

  • headless service: a Service with no clusterIP; it acts as a stable resource identifier and generates resolvable DNS records for each pod

  • StatefulSet: manages the pod resources

  • volumeClaimTemplates: provides dedicated storage for each pod


Two, StatefulSet deployment

  • Use NFS as network storage

  • Build NFS

  • Configure the shared storage directories

  • Create PVs

  • Write the yaml manifests



    Build NFS

     yum install nfs-utils -y

    mkdir -p /usr/local/k8s/redis/pv{7..12}   # create the mount directories

 cat /etc/exports
 
 /usr/local/k8s/redis/pv7 172.16.0.0/16(rw,sync,no_root_squash)
 /usr/local/k8s/redis/pv8 172.16.0.0/16(rw,sync,no_root_squash)
 /usr/local/k8s/redis/pv9 172.16.0.0/16(rw,sync,no_root_squash)
 /usr/local/k8s/redis/pv10 172.16.0.0/16(rw,sync,no_root_squash)
 /usr/local/k8s/redis/pv11 172.16.0.0/16(rw,sync,no_root_squash)
 /usr/local/k8s/redis/pv12 172.16.0.0/16(rw,sync,no_root_squash)

  exportfs -avr
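
  For the exports to actually be served, the NFS service must be running; a quick sanity check from any k8s node (172.16.0.59 is the NFS server IP used in the PV manifests below):

    systemctl enable --now nfs-server   # the unit may be named "nfs" on older CentOS

    showmount -e 172.16.0.59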

    Create PVs 

   cat nfs_pv2.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv7"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8

spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv8"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv9

spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv9"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv10

spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv10"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv11

spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv11"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv12

spec:
  capacity:
    storage: 500M
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.0.59
    path: "/usr/local/k8s/redis/pv12"

   kubectl apply -f nfs_pv2.yaml

   

 View that the PVs were created successfully

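 The same check from the command line (PVs are cluster-scoped, so no namespace is needed):

    kubectl get pv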

 

  Write the application yaml manifest (new-stateful.yaml)

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "slow"
      resources:
        requests:
          storage: 400Mi

 

  kubectl create -f new-stateful.yaml -n daemon

  

  View that the headless service was created successfully


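 Checked with:

    kubectl get svc -n daemon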


 View that the pods were created

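 Checked with:

    kubectl get pods -n daemon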


Check whether the PVCs were created successfully

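 Checked with:

    kubectl get pvc -n daemon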


The pods would not start: they depend on PVCs, and the PVC events show that no matching PV can be found, even though everything seemed clearly written.


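 The PVC events can be inspected with (myappdata-myapp-0 is the claim generated for the first pod):

    kubectl describe pvc myappdata-myapp-0 -n daemon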

Checking the PV details shows they do have the following attribute:

storageClassName: "slow"




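 Confirmed with, for example:

    kubectl describe pv nfs-pv7 | grep StorageClass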


Three, StatefulSet troubleshooting

  The PVCs could not be bound, so the pods would not start normally; the yaml file was re-examined several times.

Thinking it over: a PVC binds to a PV through the matching storageClassName. The PVs were created successfully and do carry the attribute storageClassName: slow, yet surprisingly no match was found.

....

Going back to compare the access modes declared on the PV and on the PVC:

The access modes set on the PVs:

    accessModes: ["ReadWriteMany"]


The access modes declared by the PVC, in volumeClaimTemplates:

    accessModes: ["ReadWriteOnce"]


The access modes on the two sides are inconsistent.

Fix:

Delete the PVC: kubectl delete pvc myappdata-myapp-0 -n daemon

Delete the resources created from the yaml file: kubectl delete -f new-stateful.yaml -n daemon


Modify the claim template to accessModes: ["ReadWriteMany"], as shown below, and re-create the resources.
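
 The corrected volumeClaimTemplates stanza and the re-create command:

  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteMany"]   # must match an access mode offered by the PVs
      storageClassName: "slow"
      resources:
        requests:
          storage: 400Mi

  kubectl apply -f new-stateful.yaml -n daemon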


Check again




Tip: make sure the access modes of the PV and the PVC match.


Four, StatefulSet DNS test


kubectl exec -it myapp-0 -n daemon -- sh


nslookup myapp-0.myapp.daemon.svc.cluster.local




The resolution rule is as follows:

          myapp-0    .    myapp    .    daemon

FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
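
For the three replicas here, that yields:

    myapp-0.myapp.daemon.svc.cluster.local
    myapp-1.myapp.daemon.svc.cluster.local
    myapp-2.myapp.daemon.svc.cluster.local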


If a container image does not ship with nslookup, the corresponding package has to be installed; busybox provides similar functionality out of the box.

 The yaml file provided:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: daemon
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
      - sleep
      - "7600"
    resources:
      requests:
        memory: "200Mi"
        cpu: "250m"
    imagePullPolicy: IfNotPresent
  restartPolicy: Never


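 A usage sketch (the manifest file name busybox.yaml is assumed here):

    kubectl apply -f busybox.yaml
    kubectl exec -it busybox -n daemon -- nslookup myapp-0.myapp.daemon.svc.cluster.local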


Five, StatefulSet scaling

 Scaling out:

Scaling a StatefulSet resource is similar to scaling a Deployment, i.e. it is done by modifying the replica count. The expansion process of a StatefulSet resembles its creation process: the index numbers in the pod names increase one by one.

Either of the following can be used:

          kubectl scale
          kubectl patch

 Practice: kubectl scale statefulset myapp --replicas=4 -n daemon



 Scaling in:

  Scaling in only requires lowering the pod replica count:

kubectl patch statefulset myapp -p '{"spec":{"replicas":3}}' -n daemon


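A StatefulSet scales down from the highest ordinal first; the ordered termination can be watched with:

    kubectl get pods -n daemon -w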

Tip: when scaling out, each new PVC needs a PV to bind to. Since static NFS persistent storage is used here, enough PVs have to be created in advance.


Six, StatefulSet rolling updates

  • Rolling updates

  • Canary releases


  Rolling updates

  A rolling update starts from the pod with the largest index number and terminates one pod completely before starting the next. RollingUpdate is the default update strategy of a StatefulSet.

 kubectl set image statefulset/myapp myapp=ikubernetes/myapp:v2 -n daemon
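
The progress of the rollout can be followed with:

    kubectl rollout status statefulset/myapp -n daemon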


The upgrade process 




View pod status

kubectl get pods -n daemon



Afterwards, check whether the image was actually upgraded:

kubectl describe pod myapp-0 -n daemon


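The canary release listed at the top of this section works through the partition field of the RollingUpdate strategy: only pods whose ordinal is greater than or equal to the partition are updated. A minimal sketch:

    # update only myapp-2 and above; pods below the partition keep the old image
    kubectl patch statefulset myapp -n daemon -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

    # once the canary looks healthy, set the partition back to 0 to finish the rollout
    kubectl patch statefulset myapp -n daemon -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'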





Origin: blog.51cto.com/sdsca/2437821