Table of contents
1 What is a StatefulSet
Official address: StatefulSet | Kubernetes
StatefulSet is a workload API object used to manage stateful applications.
Stateless application: an application that does not store any data itself.
Stateful application: an application that needs to store its own data.
Example: a blog site with a Vue front end, a Java back end, MySQL, Redis, ES, and so on. The front and back ends are stateless; MySQL, Redis, and ES are stateful.
Example: a data-collection program that keeps the data it collects is a stateful application.
StatefulSet is used to manage the deployment and expansion of a Pod collection, and provide persistent storage and persistent identifiers for these Pods.
Like a Deployment, a StatefulSet manages a set of Pods based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec but are not interchangeable: each has a persistent identifier that it keeps across any rescheduling.
If you want to use storage volumes to provide persistent storage for your workloads, you can use StatefulSets as part of your solution. Although individual Pods in a StatefulSet can still fail, persistent Pod identifiers make it easier to match existing volumes with new Pods that replace failed Pods.
2 StatefulSet Features
StatefulSets are valuable for applications that need to meet one or more of the following requirements:
- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.
In the above, "stable" means that a Pod's identity persists across scheduling and rescheduling. If an application requires no stable identifiers and no ordered deployment, deletion, or scaling, it should be deployed using a workload object that manages a set of stateless replicas, such as a Deployment or ReplicaSet, which may better suit stateless deployment needs.
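To make "stable, unique network identifier" concrete: Pods in a StatefulSet are named with the pattern &lt;statefulset-name&gt;-&lt;ordinal&gt;, and the headless Service gives each one a stable DNS record. A small sketch, using the mysql StatefulSet, mysql Service, and ems namespace from the manifests later in this article (adjust the names for your own cluster):

```shell
# Construct the stable DNS names a 3-replica StatefulSet would get.
# Pattern: <pod-name>.<headless-service>.<namespace>.svc.cluster.local
sts=mysql; svc=mysql; ns=ems
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
# → mysql-0.mysql.ems.svc.cluster.local (and -1, -2)
```

These names stay the same no matter which node each Pod is rescheduled onto, which is what lets clients (and the Pods themselves) rely on them.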
3 Limitations
- Storage for a given Pod must either be provisioned by a PersistentVolume provisioner based on the requested storage class, or pre-provisioned by an administrator.
- Deleting or scaling down a StatefulSet does not delete its associated storage volumes. This is done to keep data safe, which is usually more valuable than automatically purging all of the StatefulSet's resources.
- StatefulSets currently require a headless Service to be responsible for the Pods' network identity. You are responsible for creating this Service.
- When a StatefulSet is deleted, it provides no guarantee that Pods are terminated in order. To achieve ordered, graceful termination, scale the StatefulSet down to 0 before deleting it.
- Rolling updates with the default Pod management policy (OrderedReady) can get into a broken state that requires manual intervention to fix.
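The scale-down-before-delete step above can be sketched as follows. The StatefulSet name mysql and namespace ems are illustrative and match the example manifest later in this article:

```shell
# Scale to 0 so Pods terminate one at a time in reverse ordinal order
# (highest ordinal first), then delete the now-empty StatefulSet.
kubectl -n ems scale statefulset mysql --replicas=0
kubectl -n ems wait --for=delete pod -l app=mysql --timeout=120s
kubectl -n ems delete statefulset mysql
```

Note that, per the limitation above, the PersistentVolumeClaims created for the Pods are left behind and must be deleted separately if the data is no longer needed.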
4 Using StatefulSets
1 Build the NFS service
# Install nfs-utils
$ yum install -y rpcbind nfs-utils
# Create the NFS directory
$ mkdir -p /root/nfs/data
# Edit /etc/exports and add the following line
# insecure: accept requests from ports above 1024  rw: read-write  sync: write to the share synchronously on each request  no_root_squash: the root user keeps full root privileges on the share
$ echo "/root/nfs/data *(insecure,rw,sync,no_root_squash)" >> /etc/exports
# Start the services and enable them at boot
$ systemctl start rpcbind
$ systemctl start nfs-server
$ systemctl enable rpcbind
$ systemctl enable nfs-server
# Re-export so /etc/exports takes effect
$ exportfs -r
# Check the shared directories
$ exportfs
2 Client testing
# 1. Install the client (on all nodes)
$ yum install -y nfs-utils
# 2. Create a local directory
$ mkdir -p /root/nfs
# 3. Mount the remote NFS export locally (the export path is /root/nfs/data)
$ mount -t nfs 10.15.0.9:/root/nfs/data /root/nfs
# 4. Write a test file
$ echo "hello nfs server" > /root/nfs/test.txt
# 5. Check it in the remote NFS directory
$ cat /root/nfs/test.txt
# Unmount
$ umount -f -l <nfs directory>
3 Using a StatefulSet
- class.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
- nfs-client-provider

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: chronolaw/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.15.0.10
            - name: NFS_PATH
              value: /root/nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.15.0.10
            path: /root/nfs/data
- rbac.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
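With the StorageClass, provisioner Deployment, and RBAC manifests in place, they can be applied and checked roughly as follows. The file name nfs-client-provider.yml is an assumption for the Deployment manifest above; adjust to whatever you saved it as:

```shell
# Apply RBAC first so the provisioner's ServiceAccount exists,
# then the provisioner Deployment, then the StorageClass.
kubectl apply -f rbac.yml
kubectl apply -f nfs-client-provider.yml   # assumed file name for the Deployment above
kubectl apply -f class.yml
# Verify the provisioner Pod is running and the class is registered.
kubectl -n kube-system get pods -l app=nfs-client-provisioner
kubectl get storageclass nfs-client
```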
- mysql.yml

apiVersion: v1
kind: Namespace
metadata:
  name: ems
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql-nfs-sc
  namespace: ems
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  onDelete: "remain"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
  namespace: ems
spec:
  serviceName: mysql # headless Service; guarantees a unique network identity per Pod; must exist
  replicas: 1
  template:
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:8.0
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
          volumeMounts:
            - mountPath: /var/lib/mysql # directory inside the container where data is written
              name: data # refers to the volume claim template named data below
          ports:
            - containerPort: 3306
      restartPolicy: Always
  volumeClaimTemplates: # templates for dynamically created volume claims
    - metadata:
        name: data # name of the claim, referenced by volumeMounts
        namespace: ems # namespace in which the claim is created
      spec:
        accessModes: # how the volume may be mounted
          - ReadWriteMany
        storageClassName: mysql-nfs-sc # which StorageClass provisions the storage
        resources:
          requests:
            storage: 2G
  selector:
    matchLabels:
      app: mysql
---
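The StatefulSet above sets serviceName: mysql, but no headless Service is defined in the manifest; as noted in the limitations section, you must create it yourself. A minimal sketch (clusterIP: None is what makes the Service headless):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql # must match spec.serviceName of the StatefulSet
  namespace: ems
spec:
  clusterIP: None # headless: each Pod gets its own stable DNS record instead of a virtual IP
  selector:
    app: mysql
  ports:
    - port: 3306
```

Once applied, the Pod is reachable at mysql-0.mysql.ems.svc.cluster.local, and the claim created from the template is named data-mysql-0, following the pattern &lt;template-name&gt;-&lt;statefulset-name&gt;-&lt;ordinal&gt;.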