Kubernetes StatefulSet controller


 

StatefulSet: a replica set controller for stateful applications.

Features:

1. Stable, unique network identifiers
2. Stable, persistent storage
3. Ordered, graceful deployment and scaling
4. Ordered, graceful deletion and termination
5. Ordered rolling updates

 

Three components: a headless Service, the StatefulSet itself, and a volumeClaimTemplate (a template for requesting storage volumes).

First, prepare the PersistentVolumes (pv-demo.yaml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
    polity: fast
spec:
  nfs:
    path: /data/volumes/v1
    server: node2
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
    polity: fast
spec:
  nfs:
    path: /data/volumes/v2
    server: node2
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
    polity: fast
spec:
  nfs:
    path: /data/volumes/v3
    server: node2
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
    polity: fast
spec:
  nfs:
    path: /data/volumes/v4
    server: node2
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
    polity: fast
spec:
  nfs:
    path: /data/volumes/v5
    server: node2
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi

 

 

kubectl apply -f pv-demo.yaml

kubectl get pv
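As a quick sanity check (a minimal sketch; polity=fast is the label defined in the manifests above), you can filter the PVs by that label:

kubectl get pv -l polity=fast --show-labels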

 

 

Example: headless Service plus StatefulSet (stateful-demo.yaml)

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc              # service name
  labels:
    app: myapp
spec:
  ports:
  - port: 80                   # service port
    name: web                  # port name
  clusterIP: None              # a StatefulSet requires a headless Service
  selector:                    # labels of the pods this Service fronts
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp                  # controller name; also the prefix of the pod names it creates
spec:
  serviceName: myapp-svc       # the governing Service; it must be the headless Service above
  replicas: 2
  selector:                    # which pods to manage, matched by label
    matchLabels:
      app: myapp-pod
  template:                    # pod template
    metadata:
      labels:                  # pod labels
        app: myapp-pod
    spec:
      containers:
      - name: myapp            # container name
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata      # mount the myappdata storage volume
          mountPath: /usr/share/nginx/html   # mount path inside the container
  volumeClaimTemplates:        # PVC template; a PVC is created automatically for each pod
  - metadata:
      name: myappdata          # name of the PVCs to create
    spec:
      accessModes: ["ReadWriteOnce"]   # single-node read-write
      resources:
        requests:
          storage: 5Gi         # request 5Gi of storage
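The PVCs generated from volumeClaimTemplates follow the fixed naming scheme <template-name>-<pod-name>, so with two replicas the controller requests:

myappdata-myapp-0
myappdata-myapp-1

Each claim then binds to one of the PVs prepared earlier that satisfies ReadWriteOnce and at least 5Gi.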

 

 

Create:

kubectl explain sts

kubectl apply -f stateful-demo.yaml
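To watch the ordered deployment (a small verification sketch; the label matches the pod template above), observe that myapp-0 is Running and Ready before myapp-1 is even created:

kubectl get pods -w -l app=myapp-pod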

 

 

Verification:

kubectl get sts

kubectl get pvc

kubectl get svc

kubectl get pv

kubectl get pods

 

Deleting the StatefulSet

Pods are deleted in reverse ordinal order:

kubectl delete -f stateful-demo.yaml

After deletion the PVCs still exist; each one stays reserved for its fixed pod, so a recreated pod reattaches to its old data.
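If you also want to reclaim the storage (assuming the data is no longer needed), the PVCs must be deleted explicitly, for example:

kubectl delete pvc myappdata-myapp-0 myappdata-myapp-1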

 

StatefulSet supports rolling updates and scaling out/in.
Updates proceed in reverse ordinal order (highest ordinal first).

 

DNS resolution

kubectl exec -it myapp-0 -- /bin/sh
nslookup myapp-3.myapp-svc.default.svc.cluster.local

(myapp-3 only resolves once the set has been scaled to at least 4 replicas.)

The domain name is composed of the pod name, service name, namespace, and the cluster domain suffix svc.cluster.local:

pod_name.service_name.namespace_name.svc.cluster.local

The short form also works:

nslookup myapp-3.myapp-svc
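The short form resolves because kubelet injects cluster search domains into every pod's /etc/resolv.conf. Inside the container it typically looks something like this (the exact nameserver IP and search list depend on the cluster):

cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local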

 

 

Scaling out and in

kubectl scale sts myapp --replicas=3                   # scale out to 3 replicas
kubectl patch sts myapp -p '{"spec":{"replicas":2}}'   # scale back in to 2

 

 

Update strategy

kubectl explain sts.spec.updateStrategy
kubectl explain sts.spec.updateStrategy.rollingUpdate
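For reference, updateStrategy.type accepts two values: RollingUpdate (the default) and OnDelete. A minimal sketch:

spec:
  updateStrategy:
    type: RollingUpdate   # default; replace pods from the highest ordinal down
    # type: OnDelete      # alternative: recreate a pod only when it is deleted manually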

 

 

Partitioned updates

kubectl explain sts.spec.updateStrategy.rollingUpdate.partition

With five replicas the pods are:

myapp-0
myapp-1
myapp-2
myapp-3
myapp-4

partition: N means only pods with an ordinal >= N are updated. With N = 3, only myapp-3 and myapp-4 are updated while myapp-0 through myapp-2 stay on the old version; this is a canary release.

Verification

Method one:

kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'   # patch so that only pods with ordinal >= 4 are updated
kubectl describe sts myapp    # view the update strategy
kubectl set image sts/myapp myapp=ikubernetes/myapp:v2   # changing the image triggers the update
kubectl get sts -o wide
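To confirm the canary took effect, list each pod together with its image (custom-columns is standard kubectl output formatting); with partition: 4, only myapp-4 should report v2:

kubectl get pods -l app=myapp-pod -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image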

 

Method two:

vim stateful-demo.yaml

kind: StatefulSet
...
spec:
  updateStrategy:
    rollingUpdate:
      partition: 3

kubectl apply -f stateful-demo.yaml

 

If the new version proves fine, lower the partition to 0 to roll the update out to all pods:

vim stateful-demo.yaml

kind: StatefulSet
...
spec:
  updateStrategy:
    rollingUpdate:
      partition: 0

kubectl apply -f stateful-demo.yaml
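You can then follow the full rollout to completion (kubectl rollout status supports StatefulSets):

kubectl rollout status sts/myapp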


Source: www.cnblogs.com/leiwenbin627/p/11317274.html