Detailed kubectl application deployment commands
1. Preparation
kubectl is the command-line tool provided by Kubernetes for communicating with the Kubernetes API server from the cluster control plane (master node). By default, kubectl reads its configuration from $HOME/.kube/config; a different configuration file can be specified with the --kubeconfig parameter.
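For example, assuming a second cluster's config file lives at ~/.kube/config-dev (a hypothetical path for illustration), kubectl can be pointed at it explicitly:

```shell
# Use the default config file ($HOME/.kube/config)
kubectl get nodes

# Point kubectl at a specific config file for one command
kubectl --kubeconfig ~/.kube/config-dev get nodes

# Alternatively, set it for the whole shell session
export KUBECONFIG=~/.kube/config-dev
kubectl get nodes
```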
If you have already done the following operations, you can skip them.
1.1. Replication Controller
(1) Create myhello-rc.yaml and write the following content:
vim myhello-rc.yaml
content:
apiVersion: v1
kind: ReplicationController # replication controller (RC)
metadata:
  namespace: default
  name: myhello-rc # RC name, globally unique
  labels:
    name: myhello-rc
spec:
  replicas: 5 # desired number of pod replicas
  selector:
    name: myhello-rc-pod
  template: # pod definition template
    metadata:
      labels:
        name: myhello-rc-pod
    spec:
      containers: # pod content definition
      - name: myhello # container name
        image: nongtengfei/hello:1.0.0 # Docker image for the container
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env: # environment variables injected into the container
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
Pods are usually not created standalone; instead, they are deployed through some kind of replica-controller resource. The reason: when the cluster is upgraded, all pods on a node must be drained. A standalone pod has no replica controller managing it, so the cluster holds no desired state for it; when its node is drained, the pod will not be rescheduled and recreated elsewhere.
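This desired-state behavior can be observed directly once the RC below is running: deleting one of its pods causes a replacement to be created. (The pod name here is illustrative; use a name from your own `kubectl get pod` output.)

```shell
# Delete one replica managed by the RC
kubectl delete pod myhello-rc-2tjmr

# The RC notices the shortfall and schedules a new pod,
# so the count returns to the desired 5 replicas
kubectl get pod -l name=myhello-rc-pod
```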
(2) Create a service for RC.
vim myhello-svc.yaml
content:
apiVersion: v1
kind: Service
metadata:
  name: myhello-svc
  labels:
    name: myhello-svc
spec:
  type: NodePort # expose a port on each node
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30000
  selector:
    name: myhello-rc-pod
(3) Application configuration.
kubectl apply -f myhello-svc.yaml -f myhello-rc.yaml
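As a quick check, the created resources can be listed and the service exercised through its NodePort (replace <node-ip> with the address of one of your nodes):

```shell
# Confirm the RC and its pods exist
kubectl get rc myhello-rc
kubectl get pod -l name=myhello-rc-pod

# Confirm the service and test it through the NodePort
kubectl get svc myhello-svc
curl http://<node-ip>:30000/
```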
1.2. Deployment
(1) Create myapp-deployment.yaml and write the following content:
vim myapp-deployment.yaml
content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    name: myapp-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      name: myapp-deploy-pod
  template:
    metadata:
      labels:
        name: myapp-deploy-pod
    spec:
      # nodeSelector:
      #   nodetype: worker
      containers: # pod content definition
      - name: myhello # container name
        image: nongtengfei/hello:1.0.0 # Docker image for the container
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env: # environment variables injected into the container
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
        resources:
          requests:
            cpu: 100m
      - name: myredis # container name
        image: redis # Docker image for the container
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        env: # environment variables injected into the container
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
        resources:
          requests:
            cpu: 100m
(2) Create a service for the deployment.
vim myapp-svc.yaml
content:
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    name: myapp-svc
spec:
  type: NodePort # expose a port on each node
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30001
  selector:
    name: myapp-deploy-pod
(3) Application configuration.
kubectl apply -f myapp-svc.yaml -f myapp-deployment.yaml
1.3. DaemonSet
(1) Create myapp-ds.yaml and write the following content:
vim myapp-ds.yaml
content:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
  labels:
    app: myapp-ds
spec:
  selector:
    matchLabels:
      app: myapp-ds
  template:
    metadata:
      labels:
        app: myapp-ds
    spec:
      tolerations: # allow scheduling onto control-plane nodes
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers: # pod content definition
      - name: myhello # container name
        image: nongtengfei/hello:1.0.0 # Docker image for the container
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env: # environment variables injected into the container
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
(2) Create a service for DaemonSet.
vim myapp-ds-svc.yaml
content:
apiVersion: v1
kind: Service
metadata:
  name: myapp-ds-svc
  labels:
    name: myapp-ds-svc
spec:
  type: NodePort # expose a port on each node
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30002
  selector:
    app: myapp-ds
(3) Application configuration:
kubectl apply -f myapp-ds-svc.yaml -f myapp-ds.yaml
1.4. View the created svc and pod
$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP          45h
myapp-ds-svc   NodePort    10.96.41.180    <none>        8080:30002/TCP   4m3s
myapp-svc      NodePort    10.98.20.127    <none>        80:30001/TCP     6m32s
myhello-svc    NodePort    10.106.252.61   <none>        80:30000/TCP     14m
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5659dbddd8-l6m87   0/2     Pending   0          6m41s
myapp-deployment-5659dbddd8-lxxls   0/2     Pending   0          6m41s
myapp-deployment-5659dbddd8-pqqlx   0/2     Pending   0          6m41s
myapp-deployment-5659dbddd8-xb8xp   0/2     Pending   0          6m41s
myapp-deployment-5659dbddd8-zjgsx   0/2     Pending   0          6m41s
myapp-ds-2zqf9                      1/1     Running   0          2m43s
myhello-rc-2tjmr                    0/1     Pending   0          12m
myhello-rc-44ksd                    0/1     Pending   0          12m
myhello-rc-86g79                    0/1     Pending   0          12m
myhello-rc-df225                    0/1     Pending   0          12m
myhello-rc-lfbzb                    0/1     Pending   0          12m
This cluster has only one node, so the DaemonSet creates only one pod. (The Deployment and RC pods remain Pending, most likely because the single node is a control-plane node with a NoSchedule taint; only the DaemonSet's pod spec tolerates that taint.)
1.5. kubectl command auto-completion setup
# Install the bash completion package
sudo apt-get install -y bash-completion
# Add the kubectl completion script to .bashrc
echo "source <(kubectl completion bash)" >> ~/.bashrc
# Reload .bashrc
source ~/.bashrc
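Optionally, completion can also be attached to a short alias (these commands follow kubectl's own completion help):

```shell
# Create a short alias and enable completion for it as well
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```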
2. Application deployment command
2.1. diff
Show the differences between the live configuration and the version to be applied; only the fields defined in the yaml file are compared.
usage:
kubectl diff -f FILENAME
Example:
# Diff against a file
kubectl diff -f myapp-deployment.yaml
# Diff against standard input
cat myapp-deployment.yaml | kubectl diff -f -
# Diff the files with a .yaml suffix in the current directory
kubectl diff -f '*.yaml'
2.2. apply
Apply new configuration to resources based on a file or standard input.
usage:
kubectl apply -f FILENAME
Example:
# Apply the configuration to the resources
kubectl apply -f myapp-deployment.yaml
# Apply the configuration from standard input
cat myapp-deployment.yaml | kubectl apply -f -
# Apply the files with a .yaml suffix in the current directory
kubectl apply -f '*.yaml'
2.3. replace
Replace a resource's configuration with a new one, based on a file or standard input.
usage:
kubectl replace -f FILENAME
Example:
# Replace the resource's configuration
kubectl replace -f myapp-deployment.yaml
# Replace the configuration from standard input
cat myapp-deployment.yaml | kubectl replace -f -
2.4. rollout
Manage the rollout of resources; supports deployments, daemonsets, statefulsets, and other resource objects.
usage:
kubectl rollout SUBCOMMAND
The supported subcommands are listed below.
2.4.1. history
View historical revisions and configurations.
usage:
kubectl rollout history (TYPE NAME | TYPE/NAME) [flags]
Example:
# View the rollout history of the DaemonSet myapp-ds
kubectl rollout history ds/myapp-ds
# View the details of revision 3
kubectl rollout history daemonset/myapp-ds --revision=3
2.4.2. pause
Mark the provided resource as paused. The controller does not reconcile paused resources; use "kubectl rollout pause" together with "kubectl rollout resume" to resume them.
Currently only deployment objects are supported. Because of the deployment's rolling-update mechanism, pausing in the middle of a rollout leaves the deployment's pods at mixed versions. The intended workflow is therefore: pause the Deployment, trigger one or more updates, and finally resume it. This allows the Deployment to be modified several times between pause and resume without triggering unnecessary rolling updates; in short, once resume is executed, all of the accumulated modifications are rolled out to the pods together. Scaling the deployment up or down, however, is not constrained by pause.
usage:
kubectl rollout pause RESOURCE
Example:
# Pause the deployment
kubectl rollout pause deployment myapp-deployment
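The pause, update, resume workflow described above might look like this sketch (the 1.0.1 image tag and the cpu value are illustrative):

```shell
# Pause the deployment so edits do not trigger rollouts
kubectl rollout pause deployment myapp-deployment

# Make several changes while paused; nothing is rolled out yet
kubectl set image deployment/myapp-deployment myhello=nongtengfei/hello:1.0.1
kubectl set resources deployment/myapp-deployment -c myhello --requests=cpu=200m

# Resume: the accumulated changes are rolled out in a single update
kubectl rollout resume deployment myapp-deployment
```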
2.4.3. resume
Resume a paused resource.
The controller does not reconcile paused resources; resuming a resource lets the controller reconcile it again. Currently only deployments can be resumed.
usage:
kubectl rollout resume RESOURCE
Example:
kubectl rollout resume deployment myapp-deployment
2.4.4. restart
Restart the resource object.
usage:
kubectl rollout restart RESOURCE
Example:
# Restart a deployment
kubectl rollout restart deployment/myapp-deployment
# Restart a daemonset
kubectl rollout restart daemonset/myapp-ds
# Restart deployments matching a selector
kubectl rollout restart deployment --selector=name=myapp-deploy
2.4.5. status
Show the rollout status.
usage:
kubectl rollout status (TYPE NAME | TYPE/NAME) [flags]
Example:
# View the rollout status
kubectl rollout status deployment/myapp-deployment
2.4.6. undo
Roll back to a previous revision.
usage:
kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags]
Example:
# Roll back deployment/myapp-deployment to the previous revision
kubectl rollout undo deployment/myapp-deployment
# Roll back to a specific revision
kubectl rollout undo daemonset/myapp-ds --to-revision=2
# Dry-run the rollback to preview the result without actually performing it
kubectl rollout undo --dry-run=server deployment/myapp-deployment
Note: running undo repeatedly does not keep rolling back to ever older revisions; it switches back and forth between the two most recent revisions.
Example:
# Edit the image version three times, setting it to 1.0.0, 1.0.1, and 1.0.2 in turn
kubectl edit ds/myapp-ds
# Roll back to the previous revision; the image version is now 1.0.1
kubectl rollout undo ds/myapp-ds
# Roll back to the previous revision again; the image version is back to 1.0.2
kubectl rollout undo ds/myapp-ds
2.5. scale
Set the number of pod replicas for a deployment, replicaset, replication controller, or statefulset.
usage:
kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)
Example:
# Set the replica count to 3
kubectl scale --replicas 3 deployment myapp-deployment
# Set the replica count of the resources defined in the file to 30
kubectl scale --replicas=30 -f myapp-deployment.yaml
# If the current replica count is 30, change it to 10
kubectl scale --current-replicas=30 --replicas=10 deployment/myapp-deployment
# Set the replica count of the given rc and deployment to 6
kubectl scale --replicas=6 rc/myhello-rc deployment/myapp-deployment
2.6. autoscale
Create an autoscaler that automatically selects and sets the number of pods running in the Kubernetes cluster. Resource objects such as deployment, replicaset, statefulset, and replication controller are supported. When CPU or memory usage exceeds the configured target, the autoscaler scales up; once the metric recovers, scale-down begins roughly five minutes later. For autoscaling to work, minimum resource requests must be set for every container in the pod.
usage:
kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU]
Example:
# At least 2 pods, at most 10 pods, using the default scaling policy
kubectl autoscale deployment myapp-deployment --min=2 --max=10
# At most 15 pods, with a target pod CPU utilization of 40%
kubectl autoscale deployment myapp-deployment --min=2 --max=15 --cpu-percent=40
# View the autoscalers
kubectl get horizontalpodautoscalers
2.6.1. metrics server
Autoscaling requires the metrics server to be installed; the metrics server collects node and pod metrics. Installation prerequisites: the k8s cluster must have the aggregation layer enabled (configured by default), and the kubelet service on each node must have webhook authentication enabled (enabled by default).
Add the --kubelet-insecure-tls option to the metrics server startup arguments.
Documentation:
- metrics server
- k8s aggregation layer
- k8s extension service
- k8s webhook authentication
- scaling strategy
2.6.2. metrics server installation
components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Apply the manifest:
kubectl apply -f components.yaml
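Once the metrics-server pod is Ready, the installation can be checked roughly like this:

```shell
# The metrics-server pod should be Running and Ready
kubectl -n kube-system get pods -l k8s-app=metrics-server

# The aggregated API should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# Node and pod metrics should now be served
kubectl top nodes
kubectl top pods
```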