[Cloud native] kubectl imperative resource management in k8s

1. Description of k8s imperative resource management methods

1.1 Three basic methods of managing k8s core resources 

Imperative Resource Management Approach

Mainly relies on the command-line tool kubectl for management
Advantages
It can meet more than 90% of usage scenarios
Adding, deleting, and querying resources is straightforward

Disadvantages
Commands are lengthy, complex, and hard to remember.
In certain scenarios, the management requirements cannot be met.
Modifying resources is cumbersome: changes must be applied with patch using JSON strings.

Declarative Resource Management Approach

Mainly relies on unified resource configuration manifests (YAML files) for management
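A minimal sketch of such a resource configuration manifest (the names, namespace, and image below are illustrative, chosen to match the examples later in this article), applied with `kubectl apply -f`:

```yaml
# nginx-test.yaml -- illustrative Deployment manifest (names assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: kube-public
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
        ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f nginx-test.yaml`; the same file can be re-applied after edits, which is what makes the approach declarative.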


GUI-style resource management method

Mainly relies on a graphical operation interface (such as a dashboard) for management

1.2 Description of the kubectl command line tool

kubectl provides a completion subcommand for enabling shell auto-completion

For k8s installed from binaries, the kubectl tool has no auto-completion by default (unverified for other installation methods). You can enable command auto-completion as follows:

vim /etc/bashrc
source <(kubectl completion bash)    #add this line to the file

su -    #or re-login so the shell reloads the configuration

  2. View the basic information in the k8s cluster

2.1 Viewing basic management information in k8s

(1) View version information

kubectl version

(2) View resource object abbreviations

kubectl api-resources

(3) View cluster information

kubectl cluster-info

(4) View logs on a node

journalctl -u kubelet -f

2.2 Check the basic information of k8s

(1) Get resource information

 
kubectl get <resource> [-o wide|json|yaml] [-n namespace]

Gets information about resources. -n specifies the namespace, -o specifies the output format.
resource can be a specific resource name, such as pod nginx-xxx; a resource type, such as pod; or all (only several core resource types are shown, not everything).
--all-namespaces or -A: display resources in all namespaces
--show-labels: display all labels
-l app: display only resources that carry the label app
-l app=nginx: display only resources whose label app has the value nginx
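The two label-selector forms can be illustrated with a small shell sketch (pure bash, not kubectl; the pod names and labels below are made up for the example):

```shell
#!/usr/bin/env bash
# Illustrative label filtering, mimicking -l app (key exists)
# and -l app=nginx (key equals value). Data is made up.
pods=("web-1|app=nginx" "web-2|app=httpd" "db-1|tier=db")

select_pods() {            # $1 = selector, e.g. "app" or "app=nginx"
  local sel=$1 entry labels
  for entry in "${pods[@]}"; do
    labels=${entry#*|}     # label part after the |
    if [[ $sel == *=* ]]; then
      [[ $labels == "$sel" ]] && echo "${entry%%|*}"    # key=value match
    else
      [[ $labels == "$sel"=* ]] && echo "${entry%%|*}"  # key-exists match
    fi
  done
}

select_pods "app"          # matches web-1 and web-2 (label key present)
select_pods "app=nginx"    # matches only web-1 (key and value)
```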

(2) View the status of the master components

kubectl get componentstatuses
kubectl get cs

(3) View namespaces

kubectl get namespaces
kubectl get ns

(4) View all resources in the default namespace

kubectl get all

2.3 Basic management of k8s resources 

 (1) Create a namespace

kubectl create ns test
kubectl get ns

(2) Delete the namespace

kubectl delete ns test

(3) Create a Deployment (replica controller) in the namespace to start a Pod

kubectl create deployment nginx-test --image=nginx -n kube-public
kubectl get pod -n kube-public

(4) View the detailed information of the specified resource 

 
kubectl describe deployment nginx-test -n kube-public
kubectl describe pod nginx-test-795d659f45-hnpnm  -n kube-public

(5) View pod information in the namespace 

kubectl get pods -n  kube-system

(6) Log in to a container (works from any node)

kubectl exec -it nginx-test-795d659f45-qf84v -n kube-public -- bash

(7) Deletion of pod resources 

kubectl get pods -n kube-public 
 
kubectl delete pod nginx-test-795d659f45-hnpnm  -n kube-public 
 
kubectl get pods -n kube-public 

(8) Forcibly delete the pod 

kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0


#--grace-period specifies the graceful termination period (default 30s): before deleting the Pod, it is allowed this long to terminate its container processes and exit gracefully; 0 means terminate the Pod immediately
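The TERM-then-KILL pattern behind the grace period can be sketched in plain bash (this is an illustration of the pattern, not kubelet's actual code):

```shell
#!/usr/bin/env bash
# Sketch of graceful termination: send SIGTERM, wait up to the grace
# period, then SIGKILL. A grace period of 0 means kill immediately.
terminate_with_grace() {
  local pid=$1 grace=$2 i
  if (( grace == 0 )); then
    kill -KILL "$pid" 2>/dev/null    # --grace-period=0: no warning shot
    echo "force killed"
    return
  fi
  kill -TERM "$pid" 2>/dev/null      # ask the process to exit
  for ((i = 0; i < grace; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "exited gracefully"
      return
    fi
    sleep 1
  done
  kill -KILL "$pid" 2>/dev/null      # grace period expired
  echo "force killed"
}

sleep 300 &                          # stand-in for a container process
result=$(terminate_with_grace $! 5)  # 5s grace; sleep exits on SIGTERM
echo "$result"
```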

(9) Scale the number of Pods up or down

#Scale up
kubectl scale deployment nginx-test --replicas=3 -n kube-public
#Scale down
kubectl scale deployment nginx-test --replicas=1 -n kube-public

(10) Delete the Deployment (replica controller)

 
kubectl delete deployment nginx-test -n kube-public
or
kubectl delete deployment/nginx-test -n kube-public
 
#The / here is equivalent to a space: it separates the resource type from the resource name

 (11) Creating a standalone Pod

A Pod created directly with kubectl run is also called a self-service Pod. run creates only one Pod at a time; it cannot be scaled up or down, and it is not managed by a Pod controller, so once the Pod is deleted it will not be pulled up again.

kubectl run nginx --image=nginx

3. Project life cycle management

For a k8s project, its life cycle can be roughly divided into the following steps:

Create -> Publish -> Update -> Rollback -> Delete

3.1 Create a project 

Create and run one or more container images
Create a deployment or job to manage the container
kubectl run --help or kubectl run -h
 
//Start the nginx instance, expose the container port 80, and set the number of replicas to 3
kubectl create deployment nginx --image=nginx:1.14 --port=80 --replicas=3
 
kubectl get pods
kubectl get all

3.2 Publish project 

 
1. Expose resources as a new Service
kubectl expose --help
 
2. Create a Service for the nginx Deployment, forwarding Service port 8000 to container port 80. The Service is named nginx-service and its type is NodePort.

kubectl expose deployment nginx --port=8000 --target-port=80 --name=nginx-service --type=NodePort

====================
A Service accesses a set of Pods through a label selector.
For container applications, Kubernetes provides a VIP (virtual IP)-based bridge to access the Service, and the Service then redirects to the corresponding Pods.

Service types:
ClusterIP: provides a virtual IP inside the cluster for Pod access (the default Service type)
NodePort: opens the same port on every Node for external access; programs outside the Kubernetes cluster can reach the Service via NodeIP:NodePort
Note: each NodePort can serve only one Service, and the port range is limited to 30000-32767
LoadBalancer: access through an external load balancer; deploying a LoadBalancer on a cloud platform usually incurs extra fees
====================

3. View the network status of the Pods and the Service
kubectl get pods,svc -o wide

4. View the endpoints
kubectl get endpoints

5. View the Service description
kubectl describe svc nginx-app
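The Service exposed above can also be written declaratively; a minimal sketch under the assumptions of the commands in this section (the selector label app=nginx is assumed to match the Deployment's Pods):

```yaml
# nginx-service.yaml -- illustrative NodePort Service (selector assumed)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 8000        # Service port (ClusterIP:8000)
    targetPort: 80    # container port
    # nodePort is auto-assigned from 30000-32767 unless specified
```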

 


 

6. On node01 and node02 respectively, install ipvsadm and check the load-balancing entries
yum install ipvsadm -y
ipvsadm -Ln

7. Write webpage files into the three Pods (operate on the master01 node)

kubectl get pods

kubectl exec -it nginx-app-64ffbd575f-7hjxn -- bash
echo "<h1>this is test1</h1>" > /usr/share/nginx/html/index.html
kubectl exec -it nginx-app-64ffbd575f-bs29c -- bash
echo "<h1>this is test2</h1>" > /usr/share/nginx/html/index.html
kubectl exec -it nginx-app-64ffbd575f-r49dt -- bash
echo "<h1>this is test3</h1>" > /usr/share/nginx/html/index.html
 
8. Access the ClusterIP and the NodePort from a browser or with curl

curl 10.96.76.164:8000
curl 192.168.73.105:31539
curl 192.168.73.106:31539
curl 192.168.73.107:31539
 
 
 
9. Check the access logs (operate on master01)
kubectl logs nginx-app-64ffbd575f-7hjxn
kubectl logs nginx-app-64ffbd575f-bs29c 
kubectl logs nginx-app-64ffbd575f-r49dt

3.3 Project update 

 (1) View the use of resource templates

kubectl set --help
 
//Get the modified template
kubectl set image --help
 

(2) Update the version of the application in the project 

#View the version number of nginx in the current pod
curl -I 192.168.73.105:31539
curl -I 192.168.73.106:31539
curl -I 192.168.73.107:31539
 


#Version replacement
kubectl set image deployment/nginx nginx=nginx:1.16
 


 
#Before the version replacement, you can open a new terminal, track the pod dynamically, and observe the changes of the pod
kubectl get pods -w
 


curl -I 192.168.73.105:31539
curl -I 192.168.73.106:31539
curl -I 192.168.73.107:31539

3.4 Rollback of the project 

Every version update of a project is tested, but not every new version turns out better than the old one. To cope with the case where, after an update, the new version is found to perform worse than the old one, k8s provides a rollback function for projects.

(1) View historical rollback points 

kubectl rollout history deployment/nginx-app

(2) Rollback 

#Roll back to the previous rollback point
kubectl rollout undo deployment/nginx
 
#Make a specified rollback point rollback
kubectl rollout undo deployment/nginx --to-revision=3
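The rollback-point mechanics can be sketched as a revision list (purely illustrative; the image tags below are assumed, and real revision numbers come from kubectl rollout history):

```shell
#!/usr/bin/env bash
# Illustrative revision history: each update appends a revision;
# "undo" re-activates the previous one, "--to-revision" picks one by number.
revisions=("nginx:1.14" "nginx:1.15" "nginx:1.16")
current=2                                      # live revision (nginx:1.16)

rollout_undo() { current=$((current - 1)); }   # back one rollback point
rollout_undo_to() { current=$(( $1 - 1 )); }   # revision N (1-based)

rollout_undo
echo "now running ${revisions[$current]}"      # nginx:1.15
rollout_undo_to 1
echo "now running ${revisions[$current]}"      # nginx:1.14
```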

3.5 Delete the project

1. Delete the replica controller
kubectl delete deployment.apps/nginx-app
 
2. Delete the service resource
kubectl delete service/nginx-service
 
kubectl get all

 4. Three commonly used project release methods

The biggest challenge facing application upgrades is switching between old and new services, bringing the software from the final stage of testing to the production environment, and at the same time ensuring that the system provides uninterrupted services. The three most common release methods are: blue-green release, grayscale release, and rolling release.

The ultimate goal of the three release methods is to reduce or avoid the impact on customers when updating the application project.

4.1 Blue-green release 


First, divide all application service clusters into a blue group and a green group. Remove the green group from the load balancer; the blue group continues to serve users. The removed green group is then upgraded, and once the upgrade completes, the green group is reattached to the load balancer to serve users.

Then remove the blue group, upgrade it, and reattach it to the load-balanced cluster after its upgrade completes. At this point the entire project cluster has been upgraded; this is called a blue-green release.
 

Features


If there is a problem, the scope of influence is relatively large;

The release strategy is simple;

The user has no perception, smooth transition;

Upgrade/rollback is fast.

Disadvantages


It is necessary to prepare servers with more than twice the resources used by normal business to prevent a single group from being unable to carry business bursts during the upgrade period;

A certain resource cost is wasted in a short period of time;

The infrastructure remains unchanged, increasing the stability of the upgrade.

Blue-green releases were relatively expensive in the early days of physical servers. Due to the popularity of cloud computing, the cost has also been greatly reduced.
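The alternating-group idea above can be sketched in a few lines of shell (purely illustrative; this is not a real load-balancer API, and the group/version names are made up):

```shell
#!/usr/bin/env bash
# Illustrative blue-green switch: two groups, one attached to the LB
# at a time; the detached group is upgraded, then the roles swap.
declare -A version=([blue]=v1 [green]=v1)
active=blue

upgrade_idle_and_swap() {
  local idle
  if [[ $active == blue ]]; then idle=green; else idle=blue; fi
  version[$idle]=$1      # detach the idle group from the LB and upgrade it
  active=$idle           # reattach: the upgraded group now serves traffic
}

upgrade_idle_and_swap v2   # green upgraded, traffic moves to green
upgrade_idle_and_swap v2   # blue upgraded too; the whole cluster is on v2
echo "active=$active blue=${version[blue]} green=${version[green]}"
```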
 

4.2 Grayscale release


Grayscale release is also called canary release. "Grayscale" refers to a release method that transitions smoothly between black and white, i.e. between the old and new versions.

The process is similar to a game's experience server: some users are allowed to try the new version first, and if no problems appear it is gradually rolled out until it completely replaces the old version.
 

Features

  • Ensure the stability of the overall system. Problems can be found and adjusted at the initial grayscale, and the scope of influence is controllable;

  • The new function's performance, stability, and health can be evaluated gradually; if a problem appears, the scope of impact is small and the impact on user experience is limited;

  • The user is imperceptible and the transition is smooth.

Disadvantages

  • High automation requirements

4.3 Rolling release 

      Rolling release is the update method we just used in k8s: only one or a few instances are upgraded at a time; once upgraded they rejoin the production environment, and the process repeats until every old-version instance in the cluster has been upgraded to the new version.

Features

  • The user has no perception, smooth transition;

  • save resources.

Disadvantages

  • The deployment time is slow, depending on the update time of each stage;

  • The release strategy is more complicated;

  • It is hard to pin down a known-good environment during the release, and rollback is not easy.

Summary of the comparison of the three methods

  • Blue-green release: The two environments are upgraded alternately, and the old version is kept for a certain period of time for easy rollback.

  • Grayscale release: The old version is upgraded according to the proportion, for example, 80% of the user access is the old version, and 20% of the user access is the new version.

  • Rolling release: Stop the old version instances in batches and start the new version instances.

5. Application of canary release method 

 The Deployment controller supports custom control of the pace of a rolling update, such as pausing or resuming it. For example, pause the update immediately after the first batch of new Pods is created; at this point only part of the application runs the new version, while the majority still runs the old one. Then route a small fraction of user requests to the new-version Pods and observe whether they run stably and as expected. Once confirmed problem-free, resume and complete the rolling update of the remaining Pods; otherwise, roll back immediately. This is known as a canary release.
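The pause-then-resume flow described above can be sketched as follows (purely illustrative, not the Deployment controller itself; versions and replica count are assumed from the example below):

```shell
#!/usr/bin/env bash
# Sketch of a paused canary rollout over 3 replicas: one Pod is updated,
# the rollout pauses for verification, then resumes for the rest.
pods=(v1.14 v1.14 v1.14)
paused=false

start_canary() { pods[0]=$1; paused=true; }   # first new Pod, then pause
resume_rollout() {
  local i
  paused=false
  for ((i = 1; i < ${#pods[@]}; i++)); do
    pods[$i]=${pods[0]}                       # roll the rest to the new version
  done
}

start_canary v1.16      # one new Pod serves a small slice of traffic
# ...verify the canary Pod here before continuing...
resume_rollout          # complete the rolling update
echo "${pods[@]}"       # v1.16 v1.16 v1.16
```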
 

(1) How to perform canary update

 
kubectl create deployment nginx --image=nginx:1.14 --port=80 --replicas=3
   
kubectl expose deployment nginx --port=8000 --target-port=80 --name=nginx-service --type=NodePort   #Create the resources first, with 3 replicas

kubectl set image deployment nginx nginx=nginx:1.16 && kubectl rollout pause deployment nginx   #Update and pause

kubectl rollout status deployment nginx   #Observe the update status

(2) Monitor the update process

It can be seen that one new resource has been added, but no old resource has been deleted yet as would normally happen, because of the pause command.
kubectl get pods,svc -o wide
kubectl get pods -w
curl -I 192.168.73.106:31589

(3) Continue to update 

kubectl rollout resume deployment nginx

Summary

1. View version information
kubectl version
 
2. View resource object abbreviation
kubectl api-resources 
 
3. View cluster information
kubectl cluster-info
 
4. Configure kubectl automatic completion
source <(kubectl completion bash)
 
5. View logs on a node
journalctl -u kubelet -f
 
 
create
kubectl create controller
kubectl run self-service
kubectl scale --replicas=
 
publish, create service
kubectl expose
 
update
kubectl set image
 
rollback
kubectl rollout undo
 
delete
kubectl delete controller|pod ...
 
view resource status
kubectl get ... -o wide [-l label] [--show-labels] [-n ...] [-A] -w
 
view detailed information
kubectl describe
 
view log
kubectl logs
 
enter pod container
kubectl exec -it
 
view resource abbreviation
kubectl api-resources
 
gray release/canary release
kubectl set image ... && kubectl rollout pause (pause operation) ...
 
if there is no problem in the verification
kubectl rollout resume (resume operation)
rolling update
first update some Pod resources (by count or ratio, default 25%), then delete the old resources after the update
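The default batch sizes for that rolling update (maxSurge and maxUnavailable, both 25% by default, with surge rounded up and unavailable rounded down) can be computed like this (a sketch of the rounding rules, not controller code):

```shell
#!/usr/bin/env bash
# Default rolling-update batch math: maxSurge = 25% of replicas rounded up,
# maxUnavailable = 25% of replicas rounded down.
surge() { echo $(( ($1 * 25 + 99) / 100 )); }   # ceil(replicas * 25%)
unavailable() { echo $(( $1 * 25 / 100 )); }    # floor(replicas * 25%)

echo "replicas=3:  surge=$(surge 3)  unavailable=$(unavailable 3)"
echo "replicas=10: surge=$(surge 10) unavailable=$(unavailable 10)"
```

For 3 replicas this gives surge 1 and unavailable 0, which is why the canary example above first created one extra new Pod without taking any old one down.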

Origin blog.csdn.net/zhangchang3/article/details/131417736