k8s Pod resource objects (namespaces, image pull policy, restart policy, health checks)

One, k8s resource objects

Deployment, Service, and Pod are the three core resource objects in k8s.

Deployment: the most common controller for stateless applications; it supports scaling the application up and down, rolling upgrades, and other operations.
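As a sketch of what such a controller looks like, a minimal Deployment manifest might be (the name and image below are hypothetical, not from this tutorial's registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy          # hypothetical name
spec:
  replicas: 3               # number of Pod replicas to keep running
  selector:
    matchLabels:
      app: web              # must match the Pod template's labels
  template:                 # Pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17   # any stateless image works here
```

Scaling or rolling-upgrading then amounts to changing `replicas` or `image` and re-applying the file.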

Service: provides a fixed access endpoint for a set of Pod objects whose life cycles are elastic and changing, enabling service discovery and service access.

Pod: the smallest unit of scheduling and of running containers. Containers in the same Pod share the NET, UTS, and IPC namespaces, and can additionally share the USER, PID, and MOUNT namespaces.

ReplicationController: ensures that the number of Pod replicas meets the target at all times; in short, it keeps each container or group of containers always running and accessible. It is the older-generation controller for stateless Pod applications.

ReplicaSet: the new-generation controller for stateless Pod applications. It differs from RC in its label selector support: RC supports only equality-based selectors (key-value pairs), while RS additionally supports set-based selectors.
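To illustrate the selector difference, a ReplicaSet selector can combine equality-based and set-based expressions, while an RC selector is limited to plain key-value pairs (labels below are hypothetical):

```yaml
# ReplicaSet selector fragment; matchExpressions is not available in RC
selector:
  matchLabels:              # equality-based, same as RC's key-value pairs
    app: web
  matchExpressions:         # set-based, supported by RS (and Deployment)
  - key: tier
    operator: In
    values: [frontend, cache]
```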

StatefulSet: manages applications with persistent state, such as database services. It differs from Deployment in that it creates a unique, persistent identifier for each Pod and guarantees ordering between Pods.

DaemonSet: ensures that each node runs a copy of a given Pod. When a new node joins the cluster, such a Pod is added to it; when a node is removed, the Pod is garbage-collected.

Job: manages applications that terminate once their work completes, such as batch-processing tasks.

1. The Pod life cycle is defined by the following phases:

  • Pending: the Pod has been created, but one or more of its containers have not yet been created; this covers the scheduling phase and the container image download.
  • Running: the Pod has been scheduled to a Node and all containers have been created, with at least one container running or restarting.
  • Succeeded: all containers in the Pod have exited normally.
  • Failed: all containers in the Pod have exited, and at least one container exited due to failure.

2. Features

A Pod is the smallest unit of creation, scheduling, and management;
each Pod has its own IP address;
a Pod consists of one or more containers, which share storage, namespaces, and the like; all containers of a Pod run on the same Node;
container life-cycle management;
resource usage restrictions via requests and limits;
container probes, such as livenessProbe;
Pods within a cluster can access each other freely, which is generally achieved via a Layer 2 network.

3. Pod and containers

In Docker, the container is the smallest processing unit: create, delete, modify, and query operations all target containers. A container is a virtualization technology; containers are isolated from each other, and that isolation is implemented with Linux namespaces.
In k8s, a Pod contains one or more related containers. A Pod can be regarded as an extension of the container: the Pod is the isolation boundary, and the group of containers inside a Pod share namespaces (including PID, Network, IPC, and UTS). In addition, the containers in a Pod can access shared data volumes, giving them a shared file system.

4. Resource requests and limits

When creating a Pod, you can specify compute resources (currently CPU and memory are supported) for each container: a resource request (Request) and a resource limit (Limit). The request is the minimum amount of resources the container needs; the limit is the upper bound the container may not exceed. The relationship is: 0 <= request <= limit <= infinity.
A Pod's resource request is the sum of the requests of all its containers. When scheduling a Pod, k8s compares the total resources on a Node (obtained via the cAdvisor interface) with the resources that Node already has in use to decide whether the Node meets the Pod's needs.
The resource request ensures the Pod has sufficient resources to run, while the resource limit prevents a Pod from consuming resources without bound and crashing other Pods. This matters especially in public-cloud scenarios, where malicious software may grab memory to attack the platform.
For specific configuration, see http://blog.csdn.net/liyingke112/article/details/77452630
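A sketch of per-container requests and limits in a Pod spec (the container name, image, and values are illustrative only):

```yaml
spec:
  containers:
  - name: app                 # hypothetical container
    image: nginx:1.17
    resources:
      requests:               # minimum guaranteed; used by the scheduler
        cpu: "250m"           # 0.25 of a CPU core
        memory: "64Mi"
      limits:                 # hard cap; note 0 <= request <= limit
        cpu: "500m"
        memory: "128Mi"
```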

5. Multi-container Pods

A Pod establishes an application-oriented "logical host" model in a containerized environment; it may contain one or more closely coupled containers. When any container in a Pod fails, the Pod as a whole is affected.

A multi-container Pod combines several single-application containers into something resembling a virtual machine: all containers share the resources of one "vm", which tightens their coupling, makes them easy to replicate as a unit, and improves overall availability.

Advantages of a multi-container Pod:

Containers in the same Pod can share data and communicate more easily: they use the same network namespace, IP address, and port space, can discover each other, and can communicate via localhost.

Containers running in the same Pod share storage volumes (if configured). Data in a storage volume is not lost when a container restarts and can be read by the other containers in the same Pod.

Compared with the native container interface, a Pod provides a higher level of abstraction that simplifies application deployment and management, with different containers providing different services. The Pod is managed as a unit: co-location, resource sharing, replication, and coordinated dependency management are handled automatically.
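As a sketch of these advantages, here is a two-container Pod sharing a volume and the network namespace (names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical name
spec:
  volumes:
  - name: shared-logs         # emptyDir lives as long as the Pod,
    emptyDir: {}              # surviving individual container restarts
  containers:
  - name: web
    image: nginx:1.17
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-reader          # sidecar reads what the web container writes
    image: busybox:latest
    command: ["/bin/sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```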

6. How to use Pods

The core principle: distribute multiple applications across multiple Pods.
Reasons: rational use of resources, and scalability; different applications should have different scale-up and scale-down strategies.
If containers do not have to run together, put them in different Pods;
if the containers are independent components, put them in different Pods;
if the containers have different scaling strategies, put them in different Pods.
Conclusion: run a single container per Pod, unless there is a special reason not to.

Lab environment

Host     IP address     Service
master 192.168.1.21 k8s
node01 192.168.1.22 k8s
node02 192.168.1.23 k8s

This continues from the experiment at https://blog.51cto.com/14320361/2464655

Two, Namespace (name spaces)

The default namespace: default

Namespace is another important concept in the kubernetes system. By "assigning" objects to different Namespaces, the system's objects are logically grouped into different projects, groups, or user groups, so that different groups can share the resources of the whole cluster while still being managed separately.

After a Kubernetes cluster starts, it creates a Namespace called "default". Unless another Namespace is specified, Pods, RCs, and Services created by users are placed in the "default" Namespace by the system.

1. List the namespaces

[root@master ~]# kubectl get namespaces


2. View namespace details

[root@master ~]# kubectl describe ns default


3. Create a namespace

[root@master ~]# kubectl create ns bdqn

Take a look:

[root@master ~]# kubectl get namespaces


4. Create a namespace from a yaml file

(1) View the format

[root@master ~]# kubectl explain ns
//view the yaml format of the namespace resource

(2) Write the namespace yaml file

[root@master ~]# vim test-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test

(3) Apply the namespace yaml file

[root@master ~]# kubectl apply -f test-ns.yaml 

(4) Take a look

[root@master ~]# kubectl get ns


5. Delete a namespace

[root@master ~]# kubectl delete ns test 
[root@master ~]# kubectl delete -f test-ns.yaml 

Note: the namespace resource object is used to isolate resource objects; it cannot by itself cut off communication between Pods in different namespaces. That is the job of network policy resources.

6. View resources in a specified namespace

You can use either the -n or the --namespace option:

[root@master ~]# kubectl get pod -n kube-system 
[root@master ~]# kubectl get pod --namespace kube-system 

Three, Pod

1. Write a yaml file for a Pod

[root@master ~]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-app
    image: 192.168.1.21:5000/web:v1

A Pod yaml file does not support the replicas field.

(1) Apply it

[root@master ~]# kubectl apply -f pod.yaml 

(2) Take a look

[root@master ~]# kubectl get pod


PS: because this Pod was created directly rather than by a controller, k8s will not recreate it automatically after it is deleted; this is equivalent to creating a container directly in docker.

2. Specify the Pod's namespace

(1) Modify the Pod's yaml file

[root@master ~]# vim pod.yaml
kind: Pod        #resource type
apiVersion: v1   #api version
metadata:
  name: test-pod    #Pod name
  namespace: bdqn   #specify the namespace
spec:
  containers:      #containers
  - name: test-app  #container name
    image: 192.168.1.21:5000/web:v1  #image
Apply it:
[root@master ~]# kubectl apply -f pod.yaml 

(2) Take a look

[root@master ~]#  kubectl get pod -n bdqn 
//view by namespace name


3. Image pull policy for Pods

Always: when the image tag is "latest" or the image does not exist locally, always pull the image from the specified registry.

IfNotPresent: pull from the target registry only when the image does not exist locally.

Never: never pull the image from a registry; only a local image is used.

Note: for images tagged "latest" or with no tag, the default image pull policy is "Always"; for images with any other tag, the default policy is "IfNotPresent".

4. Observe the difference between Pod and Service, and associate them

(1) The Pod's yaml file (specify a port)

[root@master ~]# vim pod.yaml 
kind: Pod           #resource type
apiVersion: v1      #api version
metadata:
  name: test-pod    #Pod name
  namespace: bdqn   #specify the namespace
spec:
  containers:                         #containers
  - name: test-app                    #container name
    image: 192.168.1.21:5000/web:v1   #image
    imagePullPolicy: IfNotPresent     #image pull policy
    ports:
    - protocol: TCP
      containerPort: 80

<1> Delete the previous Pod

[root@master ~]# kubectl delete pod -n bdqn test-pod 

<2> Apply it

[root@master ~]# kubectl apply -f pod.yaml 

<3> Take a look

[root@master ~]# kubectl get pod -n bdqn 


(2) The Pod's yaml file (change the port)

[root@master ~]# vim pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: bdqn
spec:
  containers:
  - name: test-app
    image: 192.168.1.21:5000/web:v1
    imagePullPolicy: IfNotPresent
    ports:
    - protocol: TCP
      containerPort: 90   #change the port

<1> Delete the previous Pod

[root@master ~]# kubectl delete pod -n bdqn test-pod 

<2> Apply it

[root@master ~]# kubectl apply -f pod.yaml 

<3> Take a look

[root@master ~]# kubectl get pod -n bdqn -o wide


<4> Access it


You will find that the changed port 90 does not take effect; containerPort is only an informational field.

(3) The Pod's yaml file (add a label)

[root@master ~]# vim pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: bdqn
  labels:                 #labels
    app: test-web          #label name
spec:
  containers:
  - name: test-app
    image: 192.168.1.21:5000/web:v1
    imagePullPolicy: IfNotPresent
    ports:
    - protocol: TCP
      containerPort: 90   #changed port

--------------------------------------pod---------------------------------------------

(4) Write a yaml file for a Service

[root@master ~]# vim test-svc.yaml 
apiVersion: v1      #api version
kind: Service       #resource type
metadata:
  name: test-svc    #Service name
  namespace: bdqn   #specify the namespace
spec:
  selector:          #label selector
    app: test-web    #label name (must match the Pod's label)
  ports:
  - port: 80          #Service port
    targetPort: 80    #container port

You will find that the port 80 configured here does take effect, so do not change it carelessly.

<1> Apply it

[root@master ~]# kubectl apply -f test-svc.yaml

<2> Take a look

[root@master ~]# kubectl get svc -n bdqn 


[root@master ~]# kubectl describe svc -n bdqn test-svc 


<3> Access it

[root@master ~]# curl 10.98.57.97 


--------------------------------------service---------------------------------------------

Four, Container restart policy

The Pod restart policy (RestartPolicy) applies to all containers in the Pod, and restarts are performed and judged only by the kubelet on the Node hosting the Pod. When a container exits abnormally or a health check fails, the kubelet performs the corresponding operation according to the RestartPolicy setting.

Always: (the default) restart the container whenever it terminates;
OnFailure: restart the container only when it exits with an error;
Never: never restart.

Five, the Pod's default health check

Each container executes a process when it starts, specified by the CMD or ENTRYPOINT in the Dockerfile. If that process exits with a non-zero return code, the container is considered to have failed, and Kubernetes restarts the container according to its restartPolicy.

(1) Write a health-check yaml file

Here we simulate a container-failure scenario. The Pod configuration file is as follows:

[root@master ~]# vim healcheck.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healcheck
  name:  healcheck
spec:
  restartPolicy: OnFailure  #specify the restart policy
  containers:
  - name:  healcheck
    image: busybox:latest
    args:                   #command to run when the pod starts
    - /bin/sh
    - -c
    - sleep 20; exit 1 

<1> Apply it

[root@master ~]# kubectl apply -f  healcheck.yaml

<2> Take a look

[root@master ~]# kubectl get pod -o wide


[root@master ~]# kubectl get pod -w | grep healcheck


In the example above, the container's process returned a non-zero value, so Kubernetes considered the container failed and restarted it. In many cases, however, the application has failed while its process keeps running and never exits.
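For those cases, a livenessProbe (mentioned in the features list earlier) lets the kubelet detect the failure and apply the restart policy even though the process stays up. A sketch, with a hypothetical container and illustrative timings:

```yaml
spec:
  containers:
  - name: app                  # hypothetical container
    image: nginx:1.17
    livenessProbe:
      httpGet:                 # kubelet probes this endpoint periodically
        path: /
        port: 80
      initialDelaySeconds: 10  # wait before the first probe
      periodSeconds: 5         # probe interval
      failureThreshold: 3      # restart after 3 consecutive failures
```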

Six, a small experiment

1) Create a k8s namespace named after yourself; everything below is done in this namespace.

(1) Create the namespace

[root@master ~]# kubectl create ns xgp

(2) Take a look

[root@master ~]# kubectl get ns xgp 


2) Create a Pod resource object that uses a private image from the private registry, with image pull policy Never and Pod restart policy Never.

[root@master ~]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: xgp
  labels:
    app: test-web
spec:
  restartPolicy: Never
  containers:
  - name: www
    image: 192.168.1.21:5000/web:v1
    imagePullPolicy: Never
    args:                   
    - /bin/sh
    - -c
    - sleep 90; exit 1
    ports:
    - protocol: TCP
      containerPort: 80

3) After the container is created, make it exit abnormally, then view the final state of the Pod.

(1) Apply the Pod yaml file above

[root@master ~]# kubectl apply -f pod.yaml 

(2) Watch the test-pod in the namespace dynamically

[root@master ~]# kubectl get pod -n xgp  -w | grep test-pod


Delete the test-pod:

[root@master ~]# kubectl delete pod -n xgp test-pod 

4) Create a Service resource object, associate it with the Pod above, and verify the association.

(1) Modify the Pod's yaml file

[root@master ~]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: xgp
  labels:
    app: test-web
spec:
  restartPolicy: Never
  containers:
  - name: www
    image: 192.168.1.21:5000/web:v1
    imagePullPolicy: Never
    ports:
    - protocol: TCP
      containerPort: 80

(2) Write the Service's yaml file

[root@master ~]# vim svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: xgp
spec:
  selector:
    app: test-web
  ports:
  - port: 80
    targetPort: 80

(3) Apply it

[root@master ~]# kubectl apply -f svc.yaml 

(4) Take a look

[root@master ~]# kubectl get  pod -o wide -n xgp 


(5) Access it

[root@master ~]# curl 10.244.1.21



Origin blog.51cto.com/14320361/2465560