Kubernetes 1.5: a look at StatefulSets

StatefulSets provide the following benefits for applications:

 Stable, unique network identifiers

 Stable, persistent storage

 Ordered, graceful deployment and scaling

 Ordered, graceful termination and deletion

In the list above, stable is synonymous with persistence across Pod (re)scheduling. If an application does not require stable identifiers or ordered deployment, deletion, or scaling, it should be deployed with a controller that provides stateless replicas; for stateless services, Deployment or ReplicaSet is usually the better fit.

 

 

Limitations:

   StatefulSet is a beta resource and is not available in any Kubernetes release prior to 1.5.

   As with all alpha/beta resources, StatefulSet can be disabled via the --runtime-config option passed to the apiserver.

   The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an administrator.

   Deleting or scaling down a StatefulSet will not delete the volumes associated with it. This is done to ensure data safety, which is generally more valuable than automatically purging all related StatefulSet resources.

   StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.

   Updating an existing StatefulSet is currently a manual process.

 

Components:

The following example shows the components that make up a StatefulSet:

 

   A Headless Service, named nginx, that is used to control the network domain.

   A StatefulSet, named web, whose spec sets replicas to 3, launching three nginx containers in three separate Pods.

   The volumeClaimTemplates, which provide stable storage backed by PersistentVolumes.

 

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi



 

Pod Name

Each Pod in a StatefulSet has a unique identity made up of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod regardless of which node it is (re)scheduled onto.

 

Ordinal Index

For a StatefulSet with N replicas, each Pod in the StatefulSet is assigned an integer ordinal in sequence, in the range [0, N), that is, between 0 and N-1, and each ordinal is unique across the set.
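As a minimal sketch (plain shell, not a Kubernetes API call), the ordinal assignment for the three-replica example above can be reproduced like this:

```shell
# Each Pod in a StatefulSet with N replicas gets a unique ordinal in [0, N),
# which is combined with the StatefulSet name to form the Pod name.
N=3
NAME=web
for i in $(seq 0 $((N - 1))); do
  echo "${NAME}-${i}"
done
# prints: web-0  web-1  web-2 (one per line)
```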

 

 

Stable Network ID

 

Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, and web-2. A StatefulSet uses its Headless Service to control the domain of its Pods. The domain managed by this Service takes the form $(service name).$(namespace).svc.cluster.local, where cluster.local is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form $(podname).$(governing service domain), where the governing service is defined by the serviceName field of the StatefulSet.
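For illustration only, the DNS names implied by the example (service nginx, namespace default, cluster domain cluster.local) can be assembled in shell; this just demonstrates the naming pattern, it does not query a cluster:

```shell
# Build the governing service domain and each Pod's DNS subdomain
# following $(service name).$(namespace).svc.$(cluster domain).
service=nginx
namespace=default
cluster_domain=cluster.local

service_domain="${service}.${namespace}.svc.${cluster_domain}"
echo "${service_domain}"            # governing service domain

for pod in web-0 web-1 web-2; do
  echo "${pod}.${service_domain}"   # each Pod's DNS subdomain
done
```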

 

See the example below:

 

Cluster Domain   Service (ns/name)   StatefulSet (ns/name)   StatefulSet Domain                Pod DNS                                        Pod Hostname
cluster.local    default/nginx       default/web             nginx.default.svc.cluster.local   web-{0..N-1}.nginx.default.svc.cluster.local   web-{0..N-1}
cluster.local    foo/nginx           foo/web                 nginx.foo.svc.cluster.local       web-{0..N-1}.nginx.foo.svc.cluster.local       web-{0..N-1}
Stable Storage

Kubernetes creates one PersistentVolume for each VolumeClaimTemplate. In the nginx example above, each Pod receives a single PersistentVolume with 1 GiB of provisioned storage. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods or the StatefulSet are deleted. This must be done manually.
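As a sketch, the claim that the volumeClaimTemplates entry yields for the first Pod would look roughly like the fragment below; the claim name follows the $(volumeClaimTemplate name)-$(pod name) convention, so here it is www-web-0:

```yaml
# Approximate PersistentVolumeClaim generated for Pod web-0 from the
# "www" volumeClaimTemplate in the example StatefulSet above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
```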

Deployment and Scaling Guarantees

  • For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
  • When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
  • Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
  • Before a Pod is terminated, all of its successors must be completely shut down.

The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to force deleting StatefulSet Pods.

When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready.

If a user were to scale the deployed example by patching the StatefulSet such that replicas=1, web-2 would be terminated first. web-1 would not be terminated until web-2 is fully shut down and deleted. If web-0 were to fail after web-2 has been terminated and is completely shut down, but prior to web-1’s termination, web-1 would not be terminated until web-0 is Running and Ready.
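The ordering guarantees above can be sketched as a simulation in shell (this only prints the order the controller follows; it is not kubectl output):

```shell
# Simulate StatefulSet ordering for N=3 replicas:
# Pods are created in order 0..N-1 and terminated in reverse, N-1..0.
N=3
echo "deploy order:"
for i in $(seq 0 $((N - 1))); do
  echo "  create web-${i}"
done
echo "scale-down to replicas=1:"
for i in $(seq $((N - 1)) -1 1); do
  echo "  terminate web-${i}"
done
```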

http://blog.csdn.net/wenwst/article/details/54091984
