Pod resource management and probes

One, pod overview

■ port
port is the port used to reach the Service from inside the k8s cluster; a Service can be accessed at clusterIP:port

■ nodePort
nodePort is the port used to reach the Service from outside the k8s cluster; a Service can be accessed externally at nodeIP:nodePort

■ targetPort
targetPort is the port on the Pod. Traffic arriving at port and nodePort is forwarded by kube-proxy to the targetPort of the backend Pod, and from there into the container

■ containerPort
containerPort is the port of the container inside the Pod; targetPort maps to containerPort


Note:

■ Access from outside: a request to nodeIP:nodePort goes through kube-proxy, which forwards it to the targetPort; the targetPort maps to the container's containerPort, where the service is reached

■ Access from inside: a request to clusterIP:port is forwarded by kube-proxy to the targetPort, which maps to the container's containerPort, where the service is reached
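To make the relationship concrete, here is a minimal hedged sketch (all names, images, and port numbers are illustrative, not taken from this environment) showing where each of the four ports appears:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80          # the port the container actually listens on
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080                   # reachable inside the cluster at clusterIP:8080
    targetPort: 80               # kube-proxy forwards traffic to this Pod port
    nodePort: 30080              # reachable from outside at nodeIP:30080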

1.1, Characteristics of a Pod

  • The smallest deployable unit in Kubernetes
  • A collection of one or more containers
  • Containers in a Pod share the network namespace (see the sketch below)
  • A Pod is ephemeral (a Pod itself cannot be restarted, only deleted and recreated)
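As a hedged illustration of the shared network namespace (names and images are illustrative, not from this environment), the two containers below can reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx                # listens on port 80
  - name: sidecar
    image: busybox
    # reaches the other container over 127.0.0.1 because both share the Pod's network namespace
    command: ["/bin/sh", "-c", "sleep 10; wget -qO- http://127.0.0.1:80; sleep 3600"]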

1.2, Pod container classification

■ infrastructure container (the base "pause" container)
● Maintains the network space of the entire Pod
● Runs on the Node node
● To view it, work on the node:

[root@node01 ~]# cd /opt/kubernetes/
[root@node01 kubernetes]# ls
bin  cfg  ssl
[root@node01 kubernetes]# cd cfg
[root@node01 cfg]# ls
bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig
[root@node01 cfg]# vim kubelet

# The kubelet config's --pod-infra-container-image option sets the image used for this base (pause) container
# One is created every time a Pod is created, paired with that Pod, and is transparent to the user

[root@node01 ~]# docker ps


■ initContainers (init containers, which can prepare dependencies and an initialization environment for the business containers)
Init containers run and must finish before the business containers start. Containers in a Pod originally all started in parallel; init containers were added to provide ordered setup (a sketch follows below).

■ container (business containers)
Business containers start in parallel with one another
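A minimal hedged sketch of an init container (name, image, and command are illustrative): the init container must exit successfully before the business container is started.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-setup
    image: busybox
    # runs to completion first, e.g. waiting for a dependency or preparing files
    command: ["/bin/sh", "-c", "echo initializing...; sleep 5"]
  containers:
  - name: app
    image: nginx                # started only after all initContainers have succeeded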

1.3, Image pull policy (imagePullPolicy)

■ IfNotPresent: the default; the image is pulled only if it is not already present on the host (when the tag is :latest or omitted, the default becomes Always)

■ Always: the image is pulled again every time a Pod is created

■ Never: the Pod never pulls the image; it must already be present locally
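A minimal sketch of where the field sits in a Pod spec (name and tag are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    imagePullPolicy: Always      # IfNotPresent / Always / Never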

[root@master01 ~]# kubectl edit deployment/nginx-deployment


1.4, Test that a command is correct without actually creating anything (dry run)

[root@master01 ~]# kubectl run nginx-deployment --image=nginx --port=80 --replicas=3 --dry-run  # checks the syntax without creating the resource


1.5, view the generated yaml format

[root@master01 ~]# kubectl run nginx-deployment --image=nginx --port=80 --replicas=3 --dry-run -o yaml     # -o sets the output format


[root@master01 ~]# kubectl run nginx-deployment --image=nginx --port=80 --replicas=3 --dry-run -o yaml > /opt/nginx-test.yaml


1.6, view the generated json format

[root@master01 ~]# kubectl run nginx-deployment --image=nginx --port=80 --replicas=3 --dry-run -o json


1.7, Export a template from an existing resource (back up a running Pod)

[root@master01 ~]# kubectl get deploy/nginx-deployment --export -o yaml


1.7.1, save to file

[root@master01 ~]# kubectl get deploy/nginx-deployment --export -o yaml > my-deploy.yaml


1.7.2, view field help information

[root@master01 ~]# kubectl explain pods.spec.containers

1.7.3, forcibly delete pod

kubectl delete pod xxx --force --grace-period=0

Two, deploy Harbor to create a private project

[root@harbor ~]# cd /usr/local/bin
[root@harbor bin]# rz

[root@harbor bin]# ls
docker-compose
[root@harbor bin]# chmod +x docker-compose 
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# cd ..
[root@harbor local]# rz


[root@harbor local]# tar zxvf harbor-offline-installer-v1.2.2.tgz
[root@harbor local]# cd harbor/
[root@harbor harbor]# vim harbor.cfg 

# In harbor.cfg, hostname is set to the Harbor server address (192.168.140.70 in this environment)

[root@harbor harbor]# sh install.sh  # run the installer


2.1, login


2.2, Configure the nodes to connect to the private registry

[root@node01 ~]# vim /etc/docker/daemon.json 


[root@node02 ~]# vim /etc/docker/daemon.json 

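The daemon.json edits are only shown as screenshots; a hedged sketch of what each node's /etc/docker/daemon.json needs so Docker will talk to the plain-HTTP Harbor registry (keep any keys that are already in the file):

{
  "insecure-registries": ["192.168.140.70"]
}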

[root@node02 docker]# systemctl restart docker
[root@node01 docker]# systemctl restart docker
[root@node01 ~]# docker login 192.168.140.70
[root@node02 ~]# docker login 192.168.140.70


2.3, Download the Tomcat image and push it to Harbor

[root@node01 ~]# docker pull tomcat:8.0.52   # you need to log in (docker login) before you can download


[root@node01 ~]# docker tag tomcat:8.0.52 192.168.140.70/project/tomcat   # tag the image for the private registry


[root@node01 ~]# docker push 192.168.140.70/project/tomcat
# push the image to Harbor


[root@master01 ~]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 8080        # Tomcat listens on 8080, matching the Service targetPort
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat
[root@master01 ~]# kubectl create -f tomcat-deployment.yaml 


Note:
If a resource gets stuck in the Terminating state and cannot be deleted, force-delete it:
kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]

2.4. View login credentials and decode

[root@node02 ~]# cd .docker/
[root@node02 .docker]# ls
config.json
[root@node02 .docker]# cat config.json 
{
	"auths": {
		"192.168.140.20": {
			"auth": "emhhbmdzYW46QWJjMTIzNDU="
		},
		"192.168.140.70": {
			"auth": "YWRtaW46SGFyYm9yMTIzNDU="
		}
	}
}

Each auth value is simply base64("username:password"), so it can be decoded with base64 -d to reveal the credentials saved by docker login.

[root@master01 ~]# vim registry-pull-secret.yaml    # create the pull credential (secret)

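The contents of registry-pull-secret.yaml only appear as a screenshot; a hedged sketch of what such a file typically contains (the secret name registry-pull-secret is an assumption and must match what the Deployment references later; the .dockerconfigjson value is the base64 of the node's ~/.docker/config.json shown above):

apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  # base64 of the whole ~/.docker/config.json, on a single line
  .dockerconfigjson: <base64-encoded config.json>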

2.5, create a secret resource

[root@master01 ~]# kubectl create -f registry-pull-secret.yaml 
[root@master01 ~]# kubectl get secret   # list secret resources


2.6, Pull the image from the private registry

[root@master01 ~]# kubectl delete -f tomcat-deployment.yaml 
[root@master01 ~]# vim tomcat-deployment.yaml 

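The edited tomcat-deployment.yaml is only shown as a screenshot; a hedged sketch of the part of the Pod template that changes: the image now points at the Harbor project and the spec references the secret created above (the secret name registry-pull-secret is an assumption):

    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: my-tomcat
        image: 192.168.140.70/project/tomcat      # pulled from the private Harbor registry
        ports:
        - containerPort: 8080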

[root@master01 ~]# kubectl create -f tomcat-deployment.yaml 
[root@master01 ~]# kubectl get pods -w


[root@master01 ~]# kubectl describe pod my-tomcat


Three, Pod and container resource requests and limits

spec.containers[].resources.limits.cpu        # CPU limit (upper bound)
spec.containers[].resources.limits.memory     # memory limit (upper bound)
spec.containers[].resources.requests.cpu      # base CPU allocated when the container is created
spec.containers[].resources.requests.memory   # base memory allocated when the container is created

Note: the requested resources are what the container is allocated when it is created (and what the scheduler reserves for it); the limits are the maximum the container may use while running.

3.1, example

[root@master01 ~]# mkdir demo
[root@master01 ~]# cd demo/
[root@master01 demo]# vim pod2.yaml

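pod2.yaml only appears as a screenshot; a hedged reconstruction based on the fields listed above and the Pod name frontend seen in the following output (images and exact values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:               # reserved at scheduling time
        memory: "64Mi"
        cpu: "250m"
      limits:                 # hard ceiling at runtime
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"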

[root@master01 demo]# kubectl apply -f pod2.yaml 
pod/frontend created
[root@master01 demo]# kubectl describe pod frontend


[root@master01 demo]# kubectl get pods -o wide


[root@master01 demo]# kubectl describe nodes 192.168.140.20  # view the node's events and allocated resources


3.2, Restart policy (restartPolicy)

Restart policy: what happens to a container after it exits (a failed Pod is recreated rather than restarted).
1: Always: always restart the container when it terminates. This is the default policy
2: OnFailure: restart the container only when it exits abnormally (exit code is not 0)
3: Never: never restart the container when it terminates

(Note: k8s does not support restarting a Pod resource itself; a Pod can only be deleted and rebuilt. restartPolicy applies to the containers inside the Pod.)

3.2.1, example

[root@master01 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    # no "command" is set: busybox has no /bin/bash, and args alone are enough to override the image's CMD
    args:                    # the arguments form the command the container runs
    - /bin/sh
    - -c                     # run the following shell string
    - sleep 30; exit 3       # sleep 30 seconds, then exit abnormally with code 3
[root@master01 demo]# kubectl apply -f pod3.yaml 
[root@master01 demo]# kubectl get pods -w
[root@master01 demo]# kubectl get pods 


[root@master01 demo]# kubectl delete -f pod3.yaml 
[root@master01 demo]# vim pod3.yaml 

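The edited file again only appears as a screenshot; judging from the note below (restartPolicy sits at the same level as containers, and a completed container is not restarted), the change is most likely along these lines (hedged sketch, the exact args are an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  restartPolicy: Never        # same level as containers
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10                # exits normally; with Never the Pod stays Completed and is not restarted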

[root@master01 demo]# kubectl apply -f pod3.yaml 
[root@master01 demo]# kubectl get pods 


Note: restartPolicy sits at the same level as containers in the Pod spec; a container that ends in the Completed state is not restarted.

Four, health check: Probe

4.1, Probe types (action taken on failure)

Both probes can be defined at the same time:
livenessProbe: if the check fails, the container is killed and then handled according to the Pod's restartPolicy.
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service endpoints (kube-proxy's load balancing).

4.2, Probe supports three check methods

  • httpGet: send an HTTP request; a status code of at least 200 and below 400 means success
  • exec: run a shell command inside the container; an exit code of 0 means success
  • tcpSocket: try to open a TCP socket to the container; success if the connection is established
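The example in 4.2.1 below demonstrates exec; for the other two methods, here is a hedged sketch (image, path, and ports are illustrative) combining an httpGet livenessProbe and a tcpSocket readinessProbe on one container:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                 # a 200-399 response means the container is alive
        port: 80
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 5          # check every 5 seconds
    readinessProbe:
      tcpSocket:
        port: 80                # the Pod joins the Service endpoints only when this succeeds
      initialDelaySeconds: 5
      periodSeconds: 5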

4.2.1, example

[root@master01 demo]# vim pod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    # the file exists for the first 30 seconds (probe succeeds), then is removed (probe starts failing)
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 10
    livenessProbe:
      exec:
        command:               # exit code 0 (file exists) means the container is healthy
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5   # wait 5 seconds before the first check
      periodSeconds: 5         # check every 5 seconds
        
[root@master01 demo]# kubectl get pods -w



Origin blog.csdn.net/weixin_50344814/article/details/115279307