Verifying the feasibility of a scalable single-node Kubernetes cluster

In my spare time I used kubeadm to build a single-node, scalable Kubernetes cluster on my own PC. For day-to-day work such as writing yml manifests or building Helm charts, I run only the master node and schedule Pods onto it; when I need to verify distributed behavior, I start one or two extra node machines and join them to the cluster to work together.
The following session verifies the single-node working state:
[root@kubernetes-master ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-bccdc95cf-2g9rw 1/1 Running 0 15h
kube-system coredns-bccdc95cf-xzfrl 1/1 Running 0 15h
kube-system etcd-kubernetes-master 1/1 Running 2 15h
kube-system kube-apiserver-kubernetes-master 1/1 Running 2 15h
kube-system kube-controller-manager-kubernetes-master 1/1 Running 2 15h
kube-system kube-flannel-ds-amd64-t5prf 1/1 Running 0 3h12m
kube-system kube-proxy-npszc 1/1 Running 2 15h
kube-system kube-scheduler-kubernetes-master 1/1 Running 2 15h
[root@kubernetes-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 15h v1.15.0
[root@kubernetes-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-bccdc95cf-2g9rw 1/1 Running 0 15h 10.244.0.2 kubernetes-master <none> <none>
kube-system coredns-bccdc95cf-xzfrl 1/1 Running 0 15h 10.244.0.3 kubernetes-master <none> <none>
kube-system etcd-kubernetes-master 1/1 Running 2 15h 192.168.207.128 kubernetes-master <none> <none>
kube-system kube-apiserver-kubernetes-master 1/1 Running 2 15h 192.168.207.128 kubernetes-master <none> <none>
kube-system kube-controller-manager-kubernetes-master 1/1 Running 2 15h 192.168.207.128 kubernetes-master <none> <none>
kube-system kube-flannel-ds-amd64-t5prf 1/1 Running 0 3h14m 192.168.207.128 kubernetes-master <none> <none>
kube-system kube-proxy-npszc 1/1 Running 2 15h 192.168.207.128 kubernetes-master <none> <none>
kube-system kube-scheduler-kubernetes-master 1/1 Running 2 15h 192.168.207.128 kubernetes-master <none> <none>
[root@kubernetes-master ~]#

Deploy an application named httpd-app by running an image directly:
[root@kubernetes-master ~]# kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created
[root@kubernetes-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-app 0/2 2 0 6m56s
[root@kubernetes-master ~]#

The Deployment shows 0/2 available because kubeadm taints the master with node-role.kubernetes.io/master:NoSchedule by default, so no workload Pods can be scheduled onto it. Removing that taint lets the master also act as a worker:
[root@kubernetes-master ~]# kubectl taint node kubernetes-master node-role.kubernetes.io/master-
node/kubernetes-master untainted
[root@kubernetes-master ~]# kubectl describe nodes kubernetes-master | grep -E '(Roles|Taints)'
Roles: master
Taints: <none>
[root@kubernetes-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 36m
httpd-app-5bc589d9f7-qkrls 1/1 Running 0 36m
[root@kubernetes-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 39m 10.244.0.4 kubernetes-master <none> <none>
httpd-app-5bc589d9f7-qkrls 1/1 Running 0 39m 10.244.0.5 kubernetes-master <none> <none>
[root@kubernetes-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-app 2/2 2 2 43m
[root@kubernetes-master ~]#
[root@kubernetes-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-app 2/2 2 2 43m
[root@kubernetes-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 16h v1.15.0
[root@kubernetes-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 46m
httpd-app-5bc589d9f7-qkrls 1/1 Running 0 46m
[root@kubernetes-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 46m 10.244.0.4 kubernetes-master <none> <none>
httpd-app-5bc589d9f7-qkrls 1/1 Running 0 46m 10.244.0.5 kubernetes-master <none> <none>
[root@kubernetes-master ~]# kubectl delete pods httpd-app-5bc589d9f7-qkrls
pod "httpd-app-5bc589d9f7-qkrls" deleted
[root@kubernetes-master ~]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-app 2/2 2 2 47m
[root@kubernetes-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpd-app-5bc589d9f7-4f98l 1/1 Running 0 41s
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 47m
[root@kubernetes-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-app-5bc589d9f7-4f98l 1/1 Running 0 79s 10.244.0.6 kubernetes-master <none> <none>
httpd-app-5bc589d9f7-pvfrs 1/1 Running 0 48m 10.244.0.4 kubernetes-master <none> <none>
[root@kubernetes-master ~]# kubectl delete deployment httpd-app
deployment.extensions "httpd-app" deleted
[root@kubernetes-master ~]# kubectl get deployments
No resources found.
[root@kubernetes-master ~]#
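Once a distributed test is finished, the master can be restored to its default no-workload state by re-adding the taint. This is standard kubectl taint syntax; the node name is the one from this cluster:

```shell
# Re-apply the default master taint (no value, NoSchedule effect)
# so new Pods are no longer scheduled onto the master
kubectl taint node kubernetes-master node-role.kubernetes.io/master=:NoSchedule

# Verify the taint is back
kubectl describe nodes kubernetes-master | grep Taints
```

Already-running Pods are not evicted by a NoSchedule taint; only new scheduling decisions are affected.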

Kubernetes: creating resources directly with kubectl run
Deploy an application named nginx-deployment from an image:
[root@kubernetes-master ~]# kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created
[root@kubernetes-master ~]# kubectl describe deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Wed, 27 May 2020 03:34:46 -0400
Labels: run=nginx-deployment
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=nginx-deployment
Replicas: 2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=nginx-deployment
Containers:
nginx-deployment:
Image: nginx:1.7.9
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason

Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-748ff87d9d (2/2 replicas created)
Events:
Type Reason Age From Message


Normal ScalingReplicaSet 27s deployment-controller Scaled up replica set nginx-deployment-748ff87d9d to 2
[root@kubernetes-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/2 2 1 51s
[root@kubernetes-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-748ff87d9d-dcmxj 1/1 Running 0 67s
nginx-deployment-748ff87d9d-zblb9 1/1 Running 0 67s
[root@kubernetes-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-748ff87d9d-dcmxj 1/1 Running 0 76s 10.244.0.7 kubernetes-master <none> <none>
nginx-deployment-748ff87d9d-zblb9 1/1 Running 0 76s 10.244.0.8 kubernetes-master <none> <none>
[root@kubernetes-master ~]#
[root@kubernetes-master ~]# kubectl get replicaset
NAME DESIRED CURRENT READY AGE
nginx-deployment-748ff87d9d 2 2 2 4m7s
[root@kubernetes-master ~]#
[root@kubernetes-master ~]# kubectl describe replicaset
Name: nginx-deployment-748ff87d9d
Namespace: default
Selector: pod-template-hash=748ff87d9d,run=nginx-deployment
Labels: pod-template-hash=748ff87d9d
run=nginx-deployment
Annotations: deployment.kubernetes.io/desired-replicas: 2
deployment.kubernetes.io/max-replicas: 3
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/nginx-deployment
Replicas: 2 current / 2 desired
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: pod-template-hash=748ff87d9d
run=nginx-deployment
Containers:
nginx-deployment:
Image: nginx:1.7.9
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message


Normal SuccessfulCreate 3m34s replicaset-controller Created pod: nginx-deployment-748ff87d9d-dcmxj
Normal SuccessfulCreate 3m34s replicaset-controller Created pod: nginx-deployment-748ff87d9d-zblb9
[root@kubernetes-master ~]#
[root@kubernetes-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-748ff87d9d-dcmxj 1/1 Running 0 5m53s
nginx-deployment-748ff87d9d-zblb9 1/1 Running 0 5m53s
[root@kubernetes-master ~]# kubectl describe pod nginx-deployment-748ff87d9d-zblb9
Name: nginx-deployment-748ff87d9d-zblb9
Namespace: default
Priority: 0
Node: kubernetes-master/192.168.207.128
Start Time: Wed, 27 May 2020 03:34:46 -0400
Labels: pod-template-hash=748ff87d9d
run=nginx-deployment
Annotations: <none>
Status: Running
IP: 10.244.0.8
Controlled By: ReplicaSet/nginx-deployment-748ff87d9d
Containers:
nginx-deployment:
Container ID: docker://dbdbb1bc079ae093fe70c3090f7d0de122fb9ab5e11d18b1510ee49e8348b7be
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 27 May 2020 03:35:44 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fdxhh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-fdxhh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fdxhh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 6m19s default-scheduler Successfully assigned default/nginx-deployment-748ff87d9d-zblb9 to kubernetes-master
Normal Pulling 6m18s kubelet, kubernetes-master Pulling image "nginx:1.7.9"
Normal Pulled 5m21s kubelet, kubernetes-master Successfully pulled image "nginx:1.7.9"
Normal Created 5m21s kubelet, kubernetes-master Created container nginx-deployment
Normal Started 5m21s kubelet, kubernetes-master Started container nginx-deployment
[root@kubernetes-master ~]#
[root@kubernetes-master yml]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 15m
[root@kubernetes-master yml]# kubectl delete deployment nginx-deployment
deployment.extensions "nginx-deployment" deleted
[root@kubernetes-master yml]# kubectl get deployments
No resources found.
[root@kubernetes-master yml]#
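As the deprecation warning above notes, using kubectl run to create Deployments is being phased out. On this version (v1.15) the non-deprecated equivalent is kubectl create followed by kubectl scale — a sketch, reusing the names from this example:

```shell
# Create the Deployment without the deprecated run generator
kubectl create deployment nginx-deployment --image=nginx:1.7.9

# kubectl create deployment starts with 1 replica; scale up to 2
kubectl scale deployment nginx-deployment --replicas=2
```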

Kubernetes: creating resources from a config file with kubectl apply -f custom.yml
[root@kubernetes-master ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg kube-flannel.yml
[root@kubernetes-master ~]# mkdir yml
[root@kubernetes-master ~]# cd yml/
[root@kubernetes-master yml]# ls -F
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# gedit nginx.yml

[root@kubernetes-master yml]# ls -F
nginx.yml
[root@kubernetes-master yml]# cat -n nginx.yml
 1  apiVersion: extensions/v1beta1
 2  kind: Deployment
 3  metadata:
 4    name: nginx-deploy1c2
 5  spec:
 6    replicas: 2
 7    template:
 8      metadata:
 9        labels:
10          app: web_server
11      spec:
12        containers:
13        - name: nginx
14          image: nginx:1.7.9
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# pwd -P
/root/yml
[root@kubernetes-master yml]# kubectl apply -f nginx.yml
deployment.extensions/nginx-deploy1c2 created
[root@kubernetes-master yml]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy1c2 2/2 2 2 13s
[root@kubernetes-master yml]# kubectl get replicaset
NAME DESIRED CURRENT READY AGE
nginx-deploy1c2-5d76d6897d 2 2 2 29s
[root@kubernetes-master yml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy1c2-5d76d6897d-8nlhk 1/1 Running 0 42s 10.244.0.10 kubernetes-master <none> <none>
nginx-deploy1c2-5d76d6897d-wwqnb 1/1 Running 0 42s 10.244.0.9 kubernetes-master <none> <none>
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl delete deployment nginx-deploy1c2
deployment.extensions "nginx-deploy1c2" deleted
[root@kubernetes-master yml]# kubectl get deployment
No resources found.
[root@kubernetes-master yml]#
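Note that extensions/v1beta1 is already deprecated for Deployments in v1.15 and is removed in v1.16. The same Deployment written against apps/v1, which additionally requires an explicit selector matching the template labels, would look roughly like this (a sketch, not applied on this cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy1c2
spec:
  replicas: 2
  selector:              # required in apps/v1
    matchLabels:
      app: web_server
  template:
    metadata:
      labels:
        app: web_server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
```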

Scale out: increase the Pod replica count to 3, then delete the deployed resources via the yml config file
[root@kubernetes-master yml]# gedit nginx1.yml

[root@kubernetes-master yml]# ls -F
nginx1.yml nginx.yml
[root@kubernetes-master yml]# cat -n nginx1.yml
 1  apiVersion: extensions/v1beta1
 2  kind: Deployment
 3  metadata:
 4    name: nginx-deploy2c3
 5  spec:
 6    replicas: 3
 7    template:
 8      metadata:
 9        labels:
10          app: web_server
11      spec:
12        containers:
13        - name: nginx
14          image: nginx:1.7.9
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl apply -f nginx1.yml
deployment.extensions/nginx-deploy2c3 created
[root@kubernetes-master yml]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy2c3 3/3 3 3 21s
[root@kubernetes-master yml]# kubectl get replicaset
NAME DESIRED CURRENT READY AGE
nginx-deploy2c3-5d76d6897d 3 3 3 33s
[root@kubernetes-master yml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy2c3-5d76d6897d-j66n2 1/1 Running 0 43s 10.244.0.11 kubernetes-master <none> <none>
nginx-deploy2c3-5d76d6897d-ljrvt 1/1 Running 0 43s 10.244.0.13 kubernetes-master <none> <none>
nginx-deploy2c3-5d76d6897d-vxcrz 1/1 Running 0 43s 10.244.0.12 kubernetes-master <none> <none>
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl delete -f nginx1.yml
deployment.extensions "nginx-deploy2c3" deleted
[root@kubernetes-master yml]# kubectl get deployment
No resources found.
[root@kubernetes-master yml]#
【Summary: Pods created by a Deployment are spread across the Nodes, and a single Node may run several replicas.】
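Instead of editing replicas in the yml file and re-applying, the same scale-out can be done imperatively with kubectl scale (standard kubectl; the Deployment name is the one from this example):

```shell
# Scale the existing Deployment to 3 replicas
kubectl scale deployment nginx-deploy2c3 --replicas=3

# Confirm the new replica count
kubectl get deployment nginx-deploy2c3
```

Note that a later kubectl apply -f of the unchanged file would set the count back to what the file declares.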

Creating a DaemonSet resource
[root@kubernetes-master yml]# kubectl get daemonset --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 5h2m
kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 5h2m
kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 5h2m
kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 5h2m
kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 5h2m
kube-proxy 1 1 1 1 1 beta.kubernetes.io/os=linux 17h
[root@kubernetes-master yml]#
【Among the DaemonSet resources, kube-flannel-ds and kube-proxy run the flannel and kube-proxy components on every node. Since this is a single-node cluster, each of them has exactly 1 replica.】
[root@kubernetes-master yml]# ls -F
nginx1.yml nginx.yml
[root@kubernetes-master yml]# gedit prometheusnodeexporter.yml
[root@kubernetes-master yml]# ls -F
nginx1.yml nginx.yml prometheusnodeexporter.yml
[root@kubernetes-master yml]# cat -n prometheusnodeexporter.yml
 1  apiVersion: extensions/v1beta1
 2  kind: DaemonSet
 3  metadata:
 4    name: node-exporter-daemonset
 5  spec:
 6    template:
 7      metadata:
 8        labels:
 9          app: promtheus
10      spec:
11        hostNetwork: true
12        containers:
13        - image: prom/node-exporter
14          name: node-exporter
15          imagePullPolicy: IfNotPresent
16          command:
17          - /bin/node_exporter
18          - --path.procfs
19          - /host/proc
20          - --path.sysfs
21          - /host/sys
22          - --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
23          volumeMounts:
24          - name: proc
25            mountPath: /host/proc
26          - name: sys
27            mountPath: /host/sys
28          - name: root
29            mountPath: /host/rootfs
30        volumes:
31        - name: proc
32          hostPath:
33            path: /proc
34        - name: sys
35          hostPath:
36            path: /sys
37        - name: root
38          hostPath:
39            path: /
[root@kubernetes-master yml]# kubectl apply -f prometheusnodeexporter.yml
daemonset.extensions/node-exporter-daemonset created
[root@kubernetes-master yml]# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
node-exporter-daemonset 1 1 0 1 0 <none> 5m20s
[root@kubernetes-master yml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
node-exporter-daemonset-tn9rz 0/1 CrashLoopBackOff 3 99s
[root@kubernetes-master yml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-daemonset-tn9rz 0/1 CrashLoopBackOff 4 3m10s 192.168.207.128 kubernetes-master <none> <none>
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
node-exporter-daemonset-tn9rz 0/1 CrashLoopBackOff 6 8m46s
[root@kubernetes-master yml]# kubectl describe pod node-exporter-daemonset-tn9rz
Name: node-exporter-daemonset-tn9rz
Namespace: default
Priority: 0
Node: kubernetes-master/192.168.207.128
Start Time: Wed, 27 May 2020 04:36:17 -0400
Labels: app=promtheus
controller-revision-hash=7778d5cb88
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.207.128
Controlled By: DaemonSet/node-exporter-daemonset
Containers:
node-exporter:
Container ID: docker://fdddf6619393999117f95d8bc51502f4e538a69175d6d967884532e6184cfb91
Image: prom/node-exporter
Image ID: docker-pullable://prom/node-exporter@sha256:8a3a33cad0bd33650ba7287a7ec94327d8e47ddf7845c569c80b5c4b20d49d36
Port: <none>
Host Port: <none>
Command:
/bin/node_exporter
--path.procfs
/host/proc
--path.sysfs
/host/sys
--collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 27 May 2020 04:47:42 -0400
Finished: Wed, 27 May 2020 04:47:42 -0400
Ready: False
Restart Count: 7
Environment: <none>
Mounts:
/host/proc from proc (rw)
/host/rootfs from root (rw)
/host/sys from sys (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fdxhh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
sys:
Type: HostPath (bare host directory volume)
Path: /sys
HostPathType:
root:
Type: HostPath (bare host directory volume)
Path: /
HostPathType:
default-token-fdxhh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fdxhh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message

Normal Scheduled 12m default-scheduler Successfully assigned default/node-exporter-daemonset-tn9rz to kubernetes-master
Normal Pulling 12m kubelet, kubernetes-master Pulling image "prom/node-exporter"
Normal Pulled 12m kubelet, kubernetes-master Successfully pulled image "prom/node-exporter"
Normal Created 10m (x5 over 12m) kubelet, kubernetes-master Created container node-exporter
Normal Started 10m (x5 over 12m) kubelet, kubernetes-master Started container node-exporter
Normal Pulled 10m (x4 over 12m) kubelet, kubernetes-master Container image "prom/node-exporter" already present on machine
Warning BackOff 2m45s (x46 over 12m) kubelet, kubernetes-master Back-off restarting failed container
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl logs node-exporter-daemonset-tn9rz
node_exporter: error: unknown long flag '--collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"', try --help
[root@kubernetes-master yml]#
【The logs explain the CrashLoopBackOff: in the yml file, the last flag and its value were written as a single quoted string inside command, so node_exporter receives the whole thing as one unknown long flag. The flag and its value must be passed as separate list items (or joined with =). With the command fixed, deploying this DaemonSet on a 3-node cluster would show Pods like:
[root@kubernetes-master yml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
node-exporter-daemonset-b2w0x 1/1 Running 0 6s 192.168.207.129 kubernetes-node0
node-exporter-daemonset-kvmkr 1/1 Running 0 6s 192.168.207.130 kubernetes-node1】
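
A corrected command section might look like this (a sketch, using the same image and host paths as the manifest above, with each flag given in flag=value form):

```yaml
command:
- /bin/node_exporter
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($|/)
```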

【Prometheus is a popular system-monitoring solution; Node Exporter is its agent, running as a daemon on every monitored node.】
[root@kubernetes-master yml]# kubectl delete -f prometheusnodeexporter.yml
daemonset.extensions "node-exporter-daemonset" deleted
[root@kubernetes-master yml]#
【Summary: a DaemonSet runs at most one replica of its Pod per Node.】

[root@kubernetes-master yml]# ls -F
nginx1.yml nginx.yml prometheusnodeexporter.yml
[root@kubernetes-master yml]# gedit firstjob0.yml

[root@kubernetes-master yml]# ls -F
firstjob0.yml nginx1.yml nginx.yml prometheusnodeexporter.yml
[root@kubernetes-master yml]# cat -n firstjob0.yml
 1  apiVersion: batch/v1
 2  kind: Job
 3  metadata:
 4    name: firstjob0
 5  spec:
 6    template:
 7      metadata:
 8        name: firstjob0
 9      spec:
10        containers:
11        - image: busybox
12          name: node-exporter
13          command:
14          - ["echo", "this is my 2st JOB resources"]
15        restartPolicy: Never
[root@kubernetes-master yml]#
[root@kubernetes-master yml]# kubectl apply -f firstjob0.yml
error: error validating "firstjob0.yml": error validating data: ValidationError(Job.spec.template.spec.containers[0].command[0]): invalid type for io.k8s.api.core.v1.Container.command: got "array", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
[root@kubernetes-master yml]# kubectl apply -f firstjob0.yml --validate=false
Error from server (BadRequest): error when creating "firstjob0.yml": Job in version "v1" cannot be handled as a Job: v1.Job.Spec: v1.JobSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Command: []string: ReadString: expects " or n, but found [, error found in #10 byte of ...|ommand":[["echo","th|..., bigger context ...|":"firstjob0"},"spec":{"containers":[{"command":[["echo","this is my 2st JOB resources"]],"image":"b|...
[root@kubernetes-master yml]#
【Despite what the message may suggest, this is not because batch/v1 cannot handle Job resources. The validator rejects the manifest because the first element of command is an array rather than a string: writing - ["echo", ...] nests a flow-style list inside a block-style list. Either put the whole list on one line as command: ["echo", "..."] or give one string per - entry.】
【A Job resource is used for one-off, run-to-completion tasks.】
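
A corrected firstjob0.yml, with command as a plain list of strings, might look like this (a sketch based on the manifest above; the container name is adjusted from the copied-over node-exporter to match the Job):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: firstjob0
spec:
  template:
    metadata:
      name: firstjob0
    spec:
      containers:
      - image: busybox
        name: firstjob0
        # One string per argument; a flow-style list on one line also works
        command: ["echo", "this is my 2st JOB resources"]
      restartPolicy: Never
```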

Meng Bo, 2020-05-27

Contact: WeChat 1807479153, QQ 1807479153


Reposted from blog.51cto.com/6286393/2499042