Original source: https://www.93bok.com
Kubernetes uses kube-apiserver as the entry point for managing the entire cluster. The API server is the cluster's central management endpoint: users configure and organize the cluster through it, and interaction between the cluster's nodes and the etcd store also goes through it. The API server exposes a RESTful interface, so users can talk to it directly over the API. In addition, the official tool set ships a command-line client, kubectl, which lets you interact with the cluster from the command line.
1. get
The get command retrieves information about one or more resources in the cluster. Use --help for details; kubectl's help text and examples are thorough and easy to follow, and it is worth getting into the habit of consulting them. kubectl can list details for every kind of resource in the cluster, including nodes, running pods, ReplicationControllers, services, and so on.
1) Get pod information: "kubectl get po"
Use "kubectl get po" to list all currently running pods, or "kubectl get po -o wide" to also see which node each pod runs on. Note: a cluster can contain multiple namespaces; unless a namespace is explicitly specified, all operations target the default namespace.
[root@master ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-8f8f7665f-l8trc 1/1 Running 1 1d
nginx-8f8f7665f-t2ntn 1/1 Running 1 1d
[root@master ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-8f8f7665f-l8trc 1/1 Running 1 1d 192.168.80.3 192.168.1.90 <none>
nginx-8f8f7665f-t2ntn 1/1 Running 1 1d 192.168.80.2 192.168.1.90 <none>
[root@master ~]# kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-767dc7d4d-h8mbg 0/1 CrashLoopBackOff 4 2m
[root@master ~]# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-8f8f7665f-l8trc 1/1 Running 1 1d
default nginx-8f8f7665f-t2ntn 1/1 Running 1 1d
kube-system kubernetes-dashboard-767dc7d4d-h8mbg 0/1 Error 5 2m
[root@master ~]# kubectl get po kubernetes-dashboard-767dc7d4d-h8mbg -o yaml -n kube-system
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2018-09-27T02:01:04Z
generateName: kubernetes-dashboard-767dc7d4d-
labels:
k8s-app: kubernetes-dashboard
pod-template-hash: "323873808"
name: kubernetes-dashboard-767dc7d4d-h8mbg
namespace: kube-system
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: kubernetes-dashboard-767dc7d4d
uid: 30b9dd56-c1f9-11e8-8789-000c29c7492a
resourceVersion: "117275"
selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-767dc7d4d-h8mbg
uid: 30be1ee6-c1f9-11e8-8789-000c29c7492a
spec:
containers:
- args:
- --auto-generate-certificates
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 8443
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /certs
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kubernetes-dashboard-token-wbn47
readOnly: true
dnsPolicy: ClusterFirst
nodeName: 192.168.1.91
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kubernetes-dashboard
serviceAccountName: kubernetes-dashboard
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
volumes:
- name: kubernetes-dashboard-certs
secret:
defaultMode: 420
secretName: kubernetes-dashboard-certs
- emptyDir: {}
name: tmp-volume
- name: kubernetes-dashboard-token-wbn47
secret:
defaultMode: 420
secretName: kubernetes-dashboard-token-wbn47
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2018-09-27T02:01:04Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2018-09-27T02:01:04Z
message: 'containers with unready status: [kubernetes-dashboard]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
message: 'containers with unready status: [kubernetes-dashboard]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2018-09-27T02:01:04Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://42886a9530070f87567d6f8d2ba6ce0990b7b2aa5a01e42395f66bb11370fbb8
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
imageID: docker://sha256:0dab2435c100b32892e676b9709978617a5472390ac951f764c292950b902b1f
lastState:
terminated:
containerID: docker://42886a9530070f87567d6f8d2ba6ce0990b7b2aa5a01e42395f66bb11370fbb8
exitCode: 1
finishedAt: 2018-09-27T02:37:22Z
reason: Error
startedAt: 2018-09-27T02:37:22Z
name: kubernetes-dashboard
ready: false
restartCount: 12
state:
waiting:
message: Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-767dc7d4d-h8mbg_kube-system(30be1ee6-c1f9-11e8-8789-000c29c7492a)
reason: CrashLoopBackOff
hostIP: 192.168.1.91
phase: Running
podIP: 192.168.64.2
qosClass: BestEffort
startTime: 2018-09-27T02:01:04Z
2) Get namespace information: "kubectl get namespace"
[root@master ~]# kubectl get namespace
NAME STATUS AGE
default Active 8d
kube-public Active 8d
kube-system Active 8d
3) Similarly, "kubectl get rc", "kubectl get svc", "kubectl get nodes", and so on retrieve information about other resource types.
2. describe
describe is similar to get in that both retrieve information about a resource. The difference is that get returns the detailed specification of the individual resource itself, while describe returns cluster-related state for that resource, most importantly its recent events.
For example, let's describe the kubernetes-dashboard-767dc7d4d-h8mbg pod from above to see why its status is Error; the key part is everything under Events.
[root@master ~]# kubectl describe po kubernetes-dashboard-767dc7d4d-h8mbg -n kube-system
Name: kubernetes-dashboard-767dc7d4d-h8mbg
Namespace: kube-system
Node: 192.168.1.91/192.168.1.91
Start Time: Thu, 27 Sep 2018 10:01:04 +0800
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=323873808
Annotations: <none>
Status: Running
IP: 192.168.64.2
Controlled By: ReplicaSet/kubernetes-dashboard-767dc7d4d
Containers:
kubernetes-dashboard:
Container ID: docker://66a929286c8c4fe5a6e11874ad594df29fa360cab63b3eaca3644a66f14899c2
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Image ID: docker://sha256:0dab2435c100b32892e676b9709978617a5472390ac951f764c292950b902b1f
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 27 Sep 2018 10:11:47 +0800
Finished: Thu, 27 Sep 2018 10:11:47 +0800
Ready: False
Restart Count: 7
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-wbn47 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-wbn47:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-wbn47
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned kube-system/kubernetes-dashboard-767dc7d4d-h8mbg to 192.168.1.91
Normal Pulled 14m (x4 over 15m) kubelet, 192.168.1.91 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0" already present on machine
Normal Created 14m (x4 over 15m) kubelet, 192.168.1.91 Created container
Normal Started 14m (x4 over 15m) kubelet, 192.168.1.91 Started container
Warning MissingClusterDNS 5m (x57 over 15m) kubelet, 192.168.1.91 pod: "kubernetes-dashboard-767dc7d4d-h8mbg_kube-system(30be1ee6-c1f9-11e8-8789-000c29c7492a)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning BackOff 15s (x72 over 15m) kubelet, 192.168.1.91 Back-off restarting failed container
3. create
The create command creates cluster resources from a file or from stdin.
kubectl create -f xxxxx.yaml ## create a resource
kubectl create -f xxxxx.yaml -f xxxxx.yaml ## create multiple resources
kubectl create -f /dir ## create resources from all manifests in a directory
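As an illustration, a minimal manifest for the xxxxx.yaml above might look like the following; the pod name and image tag are hypothetical, not taken from the article:

```yaml
# nginx-pod.yaml -- a minimal, hypothetical pod manifest for "kubectl create -f"
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example      # hypothetical name
  labels:
    app: nginx-2           # the label the patch section changes later
spec:
  containers:
  - name: nginx
    image: nginx:1.15      # hypothetical image tag
    ports:
    - containerPort: 80
```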
4. replace
The replace command updates or replaces an existing resource. For example, after creating an nginx with create, you may want to change some of its attributes, such as the replica count, labels, the image version, or ports. You can edit the original yaml file directly and then run replace to update the resource.
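A typical replace workflow can be sketched as follows; nginx.yaml is a hypothetical file name:

```shell
# Edit the manifest first (replica count, labels, image version, ports...), then:
kubectl replace -f nginx.yaml
# Some fields cannot be changed in place; --force deletes and recreates the resource:
kubectl replace --force -f nginx.yaml
```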
5. patch
If a container is already running and you need to change some of its attributes without deleting it, and updating via replace is inconvenient, Kubernetes also provides a way to modify a resource directly while it is running: the patch command.
For example, if the pod created earlier carries the label app=nginx-2 and you want to change it to app=nginx-3 at runtime:
kubectl patch pod xxxxxx -p '{"metadata":{"labels":{"app":"nginx-3"}}}'
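The patch payload can also live in a file instead of being inlined on the command line; patch.yaml is a hypothetical file name (kubectl patch accepts both JSON and YAML payloads):

```shell
# patch.yaml contains:
#   metadata:
#     labels:
#       app: nginx-3
kubectl patch pod xxxxxx -p "$(cat patch.yaml)"
```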
6. edit
edit provides yet another way to update a resource: it opens the live resource in an editor, where you can modify it directly and flexibly.
For example, to update the pod created earlier with edit:
[root@master ~]# kubectl edit po nginx-8f8f7665f-l8trc
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2018-09-25T07:46:34Z
generateName: nginx-8f8f7665f-
labels:
pod-template-hash: "494932219"
run: load-balancer-example
name: nginx-8f8f7665f-l8trc
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-8f8f7665f
uid: 662db288-c08d-11e8-bd85-000c29c7492a
......
7. delete
Deletes resources.
kubectl delete -f kubernetes-dashboard.yaml ## delete the resources defined in the .yaml file
kubectl delete po nginx-8f8f7665f-l8trc ## delete the pod named nginx-8f8f7665f-l8trc
kubectl delete po -lapp=nginx-3 ## delete pods with the label app=nginx-3
8. apply
The apply command provides a stricter way to update resources than patch or edit. With apply, you can keep resource configurations under source control in a version repository: whenever there is a change, push the configuration file to the server and run kubectl apply to apply it. Before applying an update, Kubernetes compares the current configuration file against the configuration already applied, updates only the fields that changed, and never touches anything the user did not specify.
apply is invoked the same way as replace, but unlike replace it does not delete the resource and create a new one; it updates the existing resource in place. kubectl apply also adds an annotation to the resource marking the current apply, much like a commit in git.
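Putting this together, an apply-based workflow might be sketched like this; nginx.yaml is a hypothetical file name:

```shell
# First rollout: apply records the full configuration in an annotation
kubectl apply -f nginx.yaml
# Later: edit nginx.yaml (e.g. bump the image version) and re-apply;
# only the changed fields are pushed, unspecified fields are left alone
kubectl apply -f nginx.yaml
# The recorded configuration lives in the
# kubectl.kubernetes.io/last-applied-configuration annotation:
kubectl get -f nginx.yaml -o yaml | grep last-applied-configuration
```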
9. logs
The logs command shows what the programs inside a running pod's containers write to standard output, similar to docker's logs command. To follow the output, tail -f style, use the -f flag.
[root@master ~]# kubectl logs -f kubernetes-dashboard-767dc7d4d-h8mbg -n kube-system
2018/09/27 03:02:57 Starting overwatch
2018/09/27 03:02:57 Using in-cluster config to connect to apiserver
2018/09/27 03:02:57 Using service account token for csrf signing
2018/09/27 03:02:57 No request provided. Skipping authorization
2018/09/27 03:02:57 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.0.0.1:443/version: x509: certificate is valid for 127.0.0.1, 192.168.1.89, not 10.0.0.1
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
10. rolling-update
rolling-update is a very important command: for a service that is already deployed and running, it provides a way to update without interrupting the service. rolling-update starts one new pod at a time, waits for the new pod to come fully up, deletes one old pod, then starts the next new pod, and so on until every pod has been replaced.
rolling-update operates on ReplicationControllers and requires the new version to have a different name, version, and labels, otherwise it reports an error.
kubectl rolling-update xxxxxx -f kubernetes-dashboard.yaml
If a problem is found mid-upgrade, the update can be stopped and rolled back to the previous version:
kubectl rolling-update xxxxxx --rollback
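For reference, rolling-update has two common invocation forms; the rc names and image tag below are placeholders:

```shell
# Replace an rc with a new one defined in a file (new name and labels required)
kubectl rolling-update old-rc -f new-rc.yaml
# Or keep the existing definition and only swap the image
kubectl rolling-update old-rc --image=nginx:1.15
```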
11. scale
scale expands or shrinks the number of replicas as load grows or drops. For example, the nginx created earlier has two replicas, and the scale command makes it easy to scale that replica count up or down.
1) Scale out to 4 replicas:
kubectl scale rc rc-nginx-3 --replicas=4
2) Scale back down to 2 replicas:
kubectl scale rc rc-nginx-3 --replicas=2
12. autoscale
scale makes it easy to grow or shrink the replica count, but it still needs a human in the loop and cannot adjust the count automatically in real time based on system load. The autoscale command provides automatic scaling of a pod's replicas according to their load.
autoscale assigns an rc a range of replica counts; at runtime it automatically scales the pods up or down within that range based on the load of the programs running in them.
For example, to keep the nginx created earlier between 1 and 4 replicas:
kubectl autoscale rc rc-nginx-3 --min=1 --max=4
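autoscale can also be given a target CPU utilization so the controller knows when to scale; the flag is standard kubectl, the numbers here are illustrative:

```shell
# Keep 1-4 replicas, scaling up when average CPU exceeds 80%
kubectl autoscale rc rc-nginx-3 --min=1 --max=4 --cpu-percent=80
```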
13. exec
Enter a container:
[root@master ~]# kubectl exec -it nginx-8f8f7665f-l8trc bash
root@nginx-8f8f7665f-l8trc:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@nginx-8f8f7665f-l8trc:/#
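Besides an interactive shell, exec can run a single command inside the container and print its output; the command after -- is illustrative:

```shell
# Run one command without opening an interactive shell
kubectl exec nginx-8f8f7665f-l8trc -- nginx -v
```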