Master:
kube-apiserver: listens on 6443 and 8080
kube-scheduler: also a client of the apiserver; schedules Pods that have not yet been bound to a node
kube-controller-manager: contains multiple controllers (Pod controller, Service controller, Node controller, etc.), all running as a single process
etcd: the key-value store in which Kubernetes persists its resource objects
Node:
kubelet: a client of the apiserver; creates containers according to the Pod's configuration
docker
kubernetes API
API groups
To allow independent version evolution, Kubernetes partitions its API into logical collections called "API groups"; each group's REST path is "/apis/$GROUP_NAME/$VERSION", e.g. /apis/apps/v1;
the core group uses the shortened REST path /api/v1
Each group may expose several versions at different maturity levels, mainly alpha, beta and stable, with version labels such as v1alpha1, v1beta2 and v1;
Command: kubectl api-versions
[root@master01 ~ ]#kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
format: GROUP_NAME/VERSION
API资源类型
The Kubernetes system abstracts almost everything it manages into resources, each representing a different type of entity, such as Node, Service, Pod, Controller, and so on;
each type can be instantiated by assigning values to its attributes, producing "objects";
Kubernetes objects are persistent entities in the Kubernetes system.
Kubernetes uses these entities to represent the state of your cluster.
Objects mainly describe the applications running in the cluster (Pods), together with their related controllers (Controllers), configuration (ConfigMap and Secret), service exposure (Service and Ingress), storage (Volume), and so on;
users use these objects to plan, deploy, configure, maintain and monitor applications and record their runtime logs;
Every resource type supports a corresponding set of operations (management actions), expressed as standard HTTP verbs such as PUT, DELETE, POST and GET;
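Because every operation maps to an HTTP verb on a REST path, the API can be exercised directly; a minimal sketch, assuming kubectl proxy is usable against this cluster:

```shell
# Open an authenticated local proxy to the API server (runs in the background)
kubectl proxy --port=8001 &

# GET: list Pods in the default namespace via the core group's /api/v1 path
curl http://localhost:8001/api/v1/namespaces/default/pods

# DELETE: remove one Pod object (pod-demo is the example Pod used later in these notes)
curl -X DELETE http://localhost:8001/api/v1/namespaces/default/pods/pod-demo
```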
Reconciliation Loop
A client submits a POST request to the API server to create an object
The body is submitted in JSON format
YAML input must first be converted to JSON
The object's configuration is saved in etcd; the state it defines is also called the "desired state" (spec). A controller is responsible for creating it as a concrete (live) object in the Kubernetes cluster and for keeping the object's current state (status) in agreement with the user-defined desired state;
status is maintained by the controller itself, while spec is submitted by the user
At some point a live object's status may no longer match its spec, for example because of a node failure
Through the reconciliation loop, the controller continuously monitors the current state of its objects; whenever an object's current state changes, it takes appropriate action to drive the current state toward the desired state
Figure 9-30-2
Configuration format of resource objects
All JSON objects accepted and returned by the API server follow the same schema: each carries "kind" and "apiVersion" fields identifying the object's resource type, API group and version
Most objects or resource types additionally carry three nested fields: metadata, spec and status
The metadata field provides the resource's metadata, such as its name, namespace and labels;
spec defines the user's desired state; its meaning differs by resource type — for example, the core function of a Pod resource is to run containers;
status records the live object's current state; it is maintained by the Kubernetes system itself and is read-only to users;
The "kubectl api-resources" command lists all resource types the cluster supports
[root@master01 ~ ]#kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2 --expose --port=80
service/myapp created
deployment.apps/myapp created
[root@master01 ~ ]#kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6865459dff-4w4hb 0/1 ContainerCreating 0 20s
myapp-6865459dff-x2c82 0/1 ContainerCreating 0 20s
[root@master01 ~ ]#kubectl get pods myapp-6865459dff-4w4hb -o yaml
[root@master01 ~ ]#mkdir manifests
[root@master01 ~ ]#cd manifests/
[root@master01 ~ ]#vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.10-alpine
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent   # image pull policy
    command: ['/bin/sh','-c','sleep 600']
[root@master01 ~ ]#kubectl create -f pod-demo.yaml
pod/pod-demo created
[root@master01 manifests ]#kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6865459dff-4w4hb 1/1 Running 0 11m
myapp-6865459dff-x2c82 1/1 Running 0 11m
nginx-5c89469986-5tqj2 0/1 ImagePullBackOff 0 1h
nginx-5c89469986-z4mfd 0/1 Unknown 0 3d
pod-demo 0/2 ContainerCreating 0 30s
tomcat-667d6c9-2rzhc 1/1 Unknown 1 3d
tomcat-667d6c9-4qwfn 1/1 Unknown 1 3d
tomcat-667d6c9-glk7r 1/1 Running 0 1h
tomcat-667d6c9-xx6hj 1/1 Running 0 1h
[root@master01 ~ ]#kubectl get pods pod-demo -o yaml
====
What is a Pod?
A group of one or more containers that are always co-located and co-scheduled, and that share context
Containers in a pod share the same IP address, ports, hostname and storage
Modeled like a virtual machine
Each container represents one process
Tightly coupled with other containers in the same pod
Pods are scheduled on Nodes
Fundamental unit of deployment in Kubernetes
===
What is a Pod?
Containers within the same pod communicate with each other using IPC;
Containers can find each other via localhost;
Each container inherits the name of the pod
Each pod has an IP address in a flat shared networking space
Volumes are shared by containers in a pod
Figure 9-30-3
kubectl
API server:
Imperative commands: kubectl run/expose/delete/get/...
Imperative object configuration:
create -f /PATH/TO/SOMEFILE
delete -f
replace -f
Declarative object configuration (patching):
apply -f /PATH/TO/SOMEFILE
patch -p ""
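A sketch of the declarative patch form, assuming a Deployment named myapp already exists in the current namespace (the file name myapp-deploy.yaml is hypothetical):

```shell
# Merge-patch the live object: only the fields given here are changed
kubectl patch deployment myapp -p '{"spec":{"replicas":3}}'

# apply computes the diff against the last-applied configuration and patches accordingly
kubectl apply -f myapp-deploy.yaml
```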
[root@master01 ~ ]#kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6865459dff-4w4hb 1/1 Running 0 47m
myapp-6865459dff-x2c82 1/1 Running 0 47m
nginx-5c89469986-5tqj2 0/1 ImagePullBackOff 0 1h
nginx-5c89469986-z4mfd 0/1 Unknown 0 3d
pod-demo 2/2 Running 3 37m
[root@master01 ~ ]#cd manifests/
[root@master01 manifests ]#ls
pod-demo.yaml
[root@master01 manifests ]#kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
Google "kubernetes api reference"; the first link:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/
# The default namespace is "default"
[root@master01 manifests ]#kubectl get namespaces
NAME STATUS AGE
default Active 3d
kube-public Active 3d
kube-system Active 3d
# Create a namespace with a command
[root@master01 manifests ]#kubectl create namespace dev
namespace/dev created
[root@master01 manifests ]#kubectl get namespaces
NAME STATUS AGE
default Active 3d
dev Active 2s
kube-public Active 3d
kube-system Active 3d
[root@master01 manifests ]#kubectl get ns dev -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2018-09-30T09:33:20Z
  name: dev
  resourceVersion: "135006"
  selfLink: /api/v1/namespaces/dev
  uid: de6349d4-c493-11e8-8632-000c293bcf92
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
# Create a namespace from a configuration file
[root@master01 manifests ]#vim ns-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: product
[root@master01 manifests ]#kubectl create -f ns-demo.yaml
namespace/product created
[root@master01 manifests ]#kubectl get namespaces
NAME STATUS AGE
default Active 3d
dev Active 1m
kube-public Active 3d
kube-system Active 3d
product Active 4s
[root@master01 manifests ]#vim myapp-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: dev
spec:
  containers:
  - {"name":"myapp-container","image":"ikubernetes/myapp:v1"}
The inline (JSON) map above is equivalent to the following two lines:
  - name: myapp-container
    image: ikubernetes/myapp:v1
[root@master01 manifests ]#kubectl create -f myapp-pod.yaml
pod/myapp-pod created
[root@master01 manifests ]#kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6865459dff-4w4hb 1/1 Running 0 1h
myapp-6865459dff-x2c82 1/1 Running 0 1h
nginx-5c89469986-5tqj2 0/1 ImagePullBackOff 0 2h
nginx-5c89469986-z4mfd 0/1 Unknown 0 3d
tomcat-667d6c9-2rzhc 1/1 Unknown 1 3d
tomcat-667d6c9-4qwfn 1/1 Unknown 1 3d
tomcat-667d6c9-glk7r 1/1 Running 0 2h
tomcat-667d6c9-xx6hj 1/1 Running 0 2h
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 12s
[root@master01 manifests ]#kubectl exec -it myapp-pod -n dev -- sh
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
[root@master01 manifests ]#kubectl logs myapp-pod -n dev
Managing a Pod object's containers
Image pull policy: imagePullPolicy
Declaring exposed ports: ports
Customizing the container's command: command and args
Environment variables: env
Sharing the node's network namespace: hostNetwork
Security context: securityContext
imagePullPolicy: IfNotPresent   # pull the image only if it is not present locally
imagePullPolicy: Always         # always pull from the registry
imagePullPolicy: Never          # never pull; the image must be loaded manually
Declaring ports (defines DNAT rules)
[root@master01 ~ ]#kubectl explain pod
[root@master01 ~ ]#kubectl explain pod.spec
[root@master01 ~ ]#kubectl explain pod.spec.containers
[root@master01 ~ ]#vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.10-alpine
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent   # image pull policy
    command: ['/bin/sh','-c','sleep 600']
    ports:
    - name: http
      containerPort: 80
[root@master01 manifests ]#kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@master01 manifests ]#kubectl apply -f pod-demo.yaml
[root@master01 ~ ]#vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.10-alpine
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent   # image pull policy
    command: ['/bin/sh']
    args: ["-c","while true; do echo $(HOST_ID) >> /tmp/date.txt; sleep 1; done"]
    env:
    - name: HOST_ID
      value: "My_Pod"
[root@master01 manifests ]#kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@master01 manifests ]#kubectl apply -f pod-demo.yaml
Label Selectors
A label selector expresses a query condition or selection criterion over labels; the Kubernetes API currently supports two kinds of selectors
Equality-based
The operators are =, == and !=; the first two are synonyms meaning "equal", the last means "not equal"
Set-based
KEY in (VALUE1,VALUE2,...)
KEY notin (VALUE1,VALUE2,...)
KEY: all resources that have a label with this key
!KEY: all resources that do not have a label with this key.
Label selectors also follow this logic:
Multiple selectors specified together are combined with logical AND;
A label selector with a null value means every resource object will be selected
An empty label selector will select no resources at all.
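Controllers express these selectors in their spec; a sketch combining both kinds (the label keys and values here are illustrative):

```yaml
selector:
  # equality-based: exact key/value matches, ANDed together
  matchLabels:
    app: myapp
  # set-based: operators are In, NotIn, Exists, DoesNotExist
  matchExpressions:
  - {key: env, operator: In, values: [dev, test]}
  - {key: dhy, operator: Exists}
```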
[root@master01 ~ ]#kubectl get pods -n dev --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myapp-pod 1/1 Running 0 1h <none>
[root@master01 ~ ]#kubectl label pods myapp-pod -n dev dhy=haiyang
pod/myapp-pod labeled
[root@master01 ~ ]#kubectl get pods -n dev --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myapp-pod 1/1 Running 0 2h dhy=haiyang
[root@master01 ~ ]#kubectl get pods -n dev -l dhy=haiyang
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 2h
[root@master01 ~ ]#kubectl get pods -n dev -l \!dhy
No resources found.
[root@master01 ~ ]#vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: dhyapp
    release: beta
  annotations:
    dhyannotations/environment: Development
[root@master01 ~ ]#kubectl describe pods -n dev myapp-pod
[root@master01 ~ ]#kubectl annotate pods myapp-pod dhyannotations.com/environ=haiyang --overwrite
Pod lifecycle
A Pod object is always in one of the following phases of its lifecycle
Pending: the API Server has created the Pod resource object and stored it in etcd, but the Pod has not finished scheduling, or its images are still being pulled from the registry;
Running: the Pod has been scheduled to a node and all of its containers have been created by the kubelet;
Succeeded: all containers in the Pod have terminated successfully and will not be restarted;
Failed: all containers have terminated, and at least one terminated in failure, i.e. returned a non-zero exit status or was killed by the system;
Unknown: the API Server cannot obtain the Pod object's state, usually because it cannot communicate with the kubelet on the Pod's node
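The phase can be read directly from the object's status field; for example, for the myapp-pod used in these notes:

```shell
# Print only the phase of a Pod (myapp-pod in namespace dev)
kubectl get pod myapp-pod -n dev -o jsonpath='{.status.phase}'
```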
Important stages in the Pod lifecycle
Init containers
Lifecycle hooks
postStart
preStop
Container probes
Probe types
Liveness probe: livenessProbe
Readiness probe: readinessProbe
Probe handlers
ExecAction
TCPSocketAction
HTTPGetAction
# postStart hook
[root@master01 ~ ]#kubectl explain pods.spec.containers.lifecycle
[root@master01 manifests ]#vim podlift.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podlifecycle
  namespace: dev
spec:
  containers:
  - name: bbox-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","mkdir -p /data/html"]
[root@master01 manifests ]#kubectl apply -f podlift.yaml
pod/podlifecycle created
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 3h
podlifecycle 1/1 Running 0 40s
[root@master01 manifests ]#kubectl exec podlifecycle -n dev -- ls /
bin
data
dev
etc
home
proc
root
sys
tmp
usr
var
===
Liveness probe: livenessProbe
[root@master01 ~ ]#kubectl explain pods.spec.containers.livenessProbe
[root@master01 manifests ]#vim podliveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podliveness.dhy
  namespace: dev
spec:
  containers:
  - name: bbox-container-liveness
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        port: 80
      failureThreshold: 2
[root@master01 manifests ]#kubectl apply -f podliveness.yaml
pod/podliveness.dhy created
[root@master01 manifests ]#kubectl get pods -n dev
NAME              READY     STATUS    RESTARTS   AGE
myapp-pod         1/1       Running   0          3h
podlifecycle      1/1       Running   0          14m
podliveness.dhy   1/1       Running   0          1m
[root@master01 manifests ]#kubectl exec podliveness.dhy -n dev -it -- sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    index.html
/usr/share/nginx/html # rm -f index.html
/usr/share/nginx/html # exit
# The probe failed and the container has been restarted once
[root@master01 manifests ]#kubectl get pods -n dev
NAME              READY     STATUS    RESTARTS   AGE
myapp-pod         1/1       Running   0          3h
podlifecycle      1/1       Running   0          18m
podliveness.dhy   1/1       Running   1          5m
# Show detailed information
[root@master01 manifests ]#kubectl describe pods podliveness.dhy -n dev
====
Readiness probe: readinessProbe
[root@master01 manifests ]#vim podreadinessprobe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podreadiness.dhy
  namespace: dev
spec:
  containers:
  - name: bbox-container-liveness
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        port: 80
      failureThreshold: 2
    readinessProbe:
      exec:
        command: ["/bin/sh","-c","ls /data/html"]
      periodSeconds: 2
[root@master01 manifests ]#kubectl apply -f podreadinessprobe.yaml
pod/podreadiness.dhy created
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 3h
podlifecycle 1/1 Running 0 24m
podliveness.dhy 1/1 Running 1 11m
podreadiness.dhy 0/1 Running 0 11s
[root@master01 manifests ]#kubectl exec podreadiness.dhy -n dev -- mkdir -p /data/html
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 3h
podlifecycle 1/1 Running 0 26m
podliveness.dhy 1/1 Running 1 12m
podreadiness.dhy 1/1 Running 0 1m
Container restart policy
A Pod object's containers can be terminated for reasons such as a crashed program or exceeding resource limits; whether they should then be recreated is governed by the Pod's restartPolicy attribute
Always: restart the container whenever it terminates; this is the default
OnFailure: restart the container only when it terminates in error;
Never: never restart;
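restartPolicy is set at the Pod level and applies to all of its containers; a minimal sketch (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo           # illustrative name
spec:
  restartPolicy: OnFailure     # Always (default) | OnFailure | Never
  containers:
  - name: task
    image: busybox
    command: ["/bin/sh","-c","exit 1"]   # non-zero exit -> restarted under OnFailure
```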
Resource requests and limits
Compute resource quotas for containers
CPU is a compressible resource: its allocation can be shrunk on demand. Memory is (currently) an incompressible resource: shrinking it can cause problems of one kind or another
How CPU is measured
One core equals 1000 millicores, i.e. 1 = 1000m, 0.5 = 500m
How memory is measured
The default unit is bytes; the suffixes E, P, T, G, M and K, or the power-of-two suffixes Ei, Pi, Ti, Gi, Mi and Ki, may also be used
Figure 9-30-4
Pod Quality of Service classes
Based on a Pod object's requests and limits attributes, Kubernetes assigns it to one of three Quality of Service (QoS) classes: BestEffort, Burstable and Guaranteed
Guaranteed: Pods in which every container sets equal requests and limits for CPU, and every container sets equal requests and limits for memory, are automatically placed in this class; such Pods have the highest priority
Burstable: Pods in which at least one container sets a CPU or memory requests attribute, but which do not meet the Guaranteed criteria, are automatically placed in this class; they have medium priority
BestEffort: Pods in which no container sets any requests or limits attribute are automatically placed in this class; they have the lowest priority
[root@master01 manifests ]#vim podresource.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podresource.dhy
  namespace: dev
spec:
  containers:
  - name: bbox-container-liveness
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        port: 80
      failureThreshold: 2
    resources:
      requests:
        cpu: 200m
        memory: 32Mi
      limits:
        cpu: 2
        memory: 1Gi
[root@master01 manifests ]#kubectl apply -f podresource.yaml
pod/podresource.dhy created
[root@master01 manifests ]#kubectl describe pods podresource.dhy -n dev
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Figure 9-30-4
ReplicaSet
A ReplicaSet ensures that a specified number of pod replicas are running at any given time.
ReplicaSet Spec
Pod Template
Pod Selector
Replicas
Working with ReplicaSets
Deleting a ReplicaSet and its Pods
Deleting just a ReplicaSet
Isolating pods from a ReplicaSet
Scaling a ReplicaSet
[root@master01 manifests ]#vim apiController.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-demo
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      env: dev
  template:
    metadata:
      labels:
        app: myapp
        env: dev
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
[root@master01 manifests ]#kubectl apply -f apiController.yaml
replicaset.apps/rs-demo created
[root@master01 manifests ]#kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
rs-demo 2 2 2 11s
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 1 1d
podlifecycle 1/1 Running 11 22h
podliveness.dhy 1/1 Running 2 22h
podreadiness.dhy 0/1 Running 1 22h
podresource.dhy 1/1 Running 1 22h
rs-demo-4z9qw 1/1 Running 0 2m
rs-demo-qm6b7 1/1 Running 0 2m
There are now 2 rs-demo pods; to run 3, either modify the configuration file and re-apply it, or edit the live object:
[root@master01 manifests ]#kubectl edit rs rs-demo -n dev
spec:
  replicas: 3
NAME READY STATUS RESTARTS AGE
rs-demo-4z9qw 1/1 Running 0 8m
rs-demo-6j5s6 1/1 Running 0 18s
rs-demo-qm6b7 1/1 Running 0 8m
To change the configuration to version v2, you must delete a pod so that the ReplicaSet recreates the missing replica from the updated v2 template, e.g.:
kubectl delete pods rs-demo-qm6b7 -n dev
This is inconvenient; a Deployment can manage such updates automatically
===
Deployment
A Deployment controller provides declarative updates for Pods and ReplicaSets
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.
The following are typical use cases for Deployments:
Create a Deployment to rollout a ReplicaSet
Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment.
Rollback to an earlier Deployment revision if the current state of the Deployment is not stable
Scale up the Deployment to facilitate more load
Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout. # canary releases
Use the status of the Deployment as an indicator that a rollout has stuck.
Clean up older ReplicaSets that you don't need anymore
[root@master01 manifests ]#vim myapp1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rs-demo
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
      env: dev
  template:
    metadata:
      labels:
        app: myapp-pod
        env: dev
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
[root@master01 manifests ]#kubectl apply -f myapp1.yaml
deployment.apps/rs-demo created
[root@master01 manifests ]#kubectl get deploy -o wide -n dev
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
rs-demo 2 2 2 2 17s myapp ikubernetes/myapp:v1 app=myapp-pod,env=dev
[root@master01 manifests ]#kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
rs-demo 3 3 3 27m
rs-demo-589c774986 2 2 2 1m
[root@master01 manifests ]#kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 1 1d
podlifecycle 1/1 Running 11 23h
podliveness.dhy 1/1 Running 2 23h
podreadiness.dhy 0/1 Running 1 22h
podresource.dhy 1/1 Running 1 22h
rs-demo-4z9qw 1/1 Running 0 28m
rs-demo-589c774986-gqkgn 1/1 Running 0 2m
rs-demo-589c774986-wnwtd 1/1 Running 0 2m
rs-demo-6j5s6 1/1 Running 0 20m
rs-demo-hj8kb 1/1 Running 0 15m
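With a Deployment, the v1 → v2 update that was awkward with a bare ReplicaSet becomes a single declarative change; a sketch, assuming the rs-demo Deployment in namespace dev from the manifest above:

```shell
# Update the container image; the Deployment rolls pods over at a controlled rate
kubectl set image deployment/rs-demo myapp=ikubernetes/myapp:v2 -n dev

# Watch the rollout, then roll back if the new revision misbehaves
kubectl rollout status deployment/rs-demo -n dev
kubectl rollout undo deployment/rs-demo -n dev
```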