Common YAML Syntax in Kubernetes (k8s)

In k8s, all configuration is ultimately JSON. For ease of reading and writing, these configurations are usually written in YAML instead; at runtime a YAML engine converts them back to JSON, since the apiserver only accepts JSON.

YAML has two main structures: dictionaries and arrays:

1. Dictionary type: there are plain dictionaries and multi-level nested dictionaries. A dictionary key and its value are separated by a colon (:).

Plain dictionary:
apiVersion: v1  (here apiVersion is the key and v1 is the value)
Multi-level nested dictionary:
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
Formatted as JSON, this is: {"metadata":{"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard-certs","namespace":"kube-system"}}
# metadata is a key whose value is a dictionary containing labels, name, and namespace; labels is itself a dictionary whose key is k8s-app and whose value is kubernetes-dashboard
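The dictionary-to-JSON mapping above can be checked with a quick sketch using Python's standard json module, mirroring the YAML structure as a dict:

```python
import json

# Mirror the nested YAML dictionary as a Python dict.
metadata = {
    "metadata": {
        "labels": {"k8s-app": "kubernetes-dashboard"},
        "name": "kubernetes-dashboard-certs",
        "namespace": "kube-system",
    }
}

# Serialize without extra whitespace, which reproduces the compact JSON form.
compact = json.dumps(metadata, separators=(",", ":"))
print(compact)
```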
2. Array type: like dictionaries, arrays can be nested to multiple levels. An array element is marked with a leading "-", and in many cases arrays and dictionaries are mixed together.


1) Plain array, e.g.:
volumes:
- name: kubernetes-dashboard-certs

Parsed as JSON, this becomes:

{"volumes":[{"name":"kubernetes-dashboard-certs"}]}

2) Multi-level nested array, e.g.:

containers:
- name: kubernetes-dashboard
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
  ports:
  - containerPort: 8443
    protocol: TCP
Parsed as JSON, this becomes:
{"containers":[{"name":"kubernetes-dashboard","image":"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0","ports":[{"containerPort":8443,"protocol":"TCP"}]}]}
# a dictionary containing a nested list, whose elements in turn contain nested dictionaries and lists
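The same check works for this mixed dictionary/array structure; a minimal Python sketch:

```python
import json

# A dictionary whose value is a list; each list element is itself a
# dictionary that contains another nested list (ports).
containers = {
    "containers": [
        {
            "name": "kubernetes-dashboard",
            "image": "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0",
            "ports": [{"containerPort": 8443, "protocol": "TCP"}],
        }
    ]
}

compact = json.dumps(containers, separators=(",", ":"))
print(compact)
```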

YAML syntax rules:

- Case sensitive
- Indentation expresses hierarchy
- Tabs are not allowed for indentation; only spaces may be used
- The exact number of spaces does not matter, as long as elements at the same level are left-aligned
- "#" marks a comment; everything from that character to the end of the line is ignored by the parser
- "---" is an optional separator, required when defining multiple top-level structures in one file. For example, when a single yaml file defines both a Secret and a ServiceAccount, these two sibling resource types must be separated with ---.


For example:

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
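Mechanically, a multi-document file like the one above is just text split on the "---" separator lines; a minimal Python sketch (a real parser such as PyYAML exposes this via yaml.safe_load_all):

```python
# Split a multi-document YAML string on its "---" separator.
multi_doc = """apiVersion: v1
kind: Secret
---
apiVersion: v1
kind: ServiceAccount
"""

docs = [d.strip() for d in multi_doc.split("\n---\n") if d.strip()]
print(len(docs))
```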

Explanations of several commonly used configuration keys in k8s:

kind: the type of the resource object, e.g. Pod, Deployment, StatefulSet, Job, CronJob.

apiVersion: the API group and version the resource object belongs to.
For Pod, ServiceAccount, and Service, the apiVersion is usually "apiVersion: v1"; variants such as v1beta1 also exist. This means the v1 version of the API is used.
For Deployment, the apiVersion is usually "apiVersion: apps/v1"; older clusters also used "apiVersion: extensions/v1beta1".
Prefer non-beta apiVersions, because a beta apiVersion may be changed or deprecated in the next release.
Once the yaml specifies an apiVersion, the keys available in the yaml (and what they do) are determined; different apiVersions support different keys.


metadata: the commonly used keys are name and namespace, i.e. the object's displayed name and the namespace it belongs to.
  name: memory-demo
  namespace: mem-example

spec: a section of nested dictionaries and lists, and the main configuration block. It supports a large number of sub-keys, which differ depending on the resource object type.
For example, the spec of a pod:

spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

The spec of a service:

spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myapp


Resource objects in k8s:

There are three main categories:
workload types
service discovery and load balancing
configuration and storage management


workload:
Pod, Deployment, StatefulSet, Job, CronJob

Service discovery and load balancing:
Service, Ingress

Configuration and storage management:
ConfigMap
Secret
LimitRange


Pod, the smallest schedulable unit in the cluster:

Common settings in a pod's yaml file:


apiVersion: v1 # the api version to use
kind: Pod # the resource object type
metadata:
  name: memory-demo # the object's name in the metadata
  namespace: mem-example # the namespace it belongs to
spec:
  containers: # a list configuring the container properties
  - name: memory-demo-ctr # the container's name as displayed in k8s
    image: polinux/stress # the name of a real, concrete image in the container registry
    resources: # resource configuration
      limits: # limit memory usage to at most 200Mi
        memory: "200Mi"
      requests: # request 100Mi of memory
        memory: "100Mi"
    command: ["stress"] # the command to run once the container is up
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] # arguments to the command


The official documentation's note on memory limits:

The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running.

The Container is running in a namespace that has a default memory limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the memory limit.

Defining a pod with a health check (liveness probe):

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: k8s.gcr.io/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15 # how many seconds after pod startup before health checks begin; essential for slow-starting applications, e.g. Java apps on Tomcat, or large jar files
      timeoutSeconds: 1 # timeout for the httpGet request
    name: liveness

LimitRange: configures resource quotas for a namespace.

Memory configuration:


Resource limits can be specified on an individual pod, or at the namespace level. If they are specified in neither the namespace nor the pod, the pod's maximum usable memory is the memory of the node it runs on. If a memory limit is specified in the pod, the usable memory is the amount given in the configuration file.

A LimitRange defines the default memory settings, while the resources section in a pod is specific to that pod and overrides the default LimitRange configuration.

Precedence: pod > namespace
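The precedence rule can be sketched as a tiny illustrative function (hypothetical, not a real Kubernetes API):

```python
# Illustrative only: how the effective memory limit of a container
# is resolved from the pod spec and the namespace LimitRange default.
def effective_memory_limit(pod_limit, namespace_default):
    # A limit set in the pod spec overrides the LimitRange default.
    if pod_limit is not None:
        return pod_limit
    # Otherwise fall back to the namespace default; if that is also None,
    # the pod is only bounded by the node's memory.
    return namespace_default
```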

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container


# resources has two settings; if only one of them is defined, e.g. only the memory request and not the limit, then the request uses the custom value while the limit falls back to the one defined in the LimitRange.

The official documentation on LimitRange: https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/


CPU configuration:

Specified in a LimitRange:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container

Specified in a pod file:

apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-2
spec:
  containers:
  - name: default-cpu-demo-2-ctr
    image: nginx
    resources:
      limits:
        cpu: "100m" # CPU resources are measured in units of cpus; fractional values are allowed. The suffix m means milli: 100m cpu equals 100 millicpu, i.e. 0.1 cpu.
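The millicpu arithmetic can be sketched with a small hypothetical helper (not part of any Kubernetes library):

```python
# Illustrative only: convert a Kubernetes CPU quantity string
# to a float number of CPUs.
def parse_cpu(quantity: str) -> float:
    if quantity.endswith("m"):           # "100m" -> 100 millicpu -> 0.1 cpu
        return int(quantity[:-1]) / 1000
    return float(quantity)               # "2" or "0.5" -> whole or fractional cpus
```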

Combined: specifying both cpu and memory quotas when creating a pod.

apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "2"
      requests:
        memory: "600Mi"
        cpu: "1"

ResourceQuota: configures resource quotas for a namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

ReplicationController:

A ReplicationController ensures that a specified number of pod replicas are running at any one time.
In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

Its only job is to keep pods at the replica count the user expects, nothing more. The newer ReplicaSet and Deployment not only control replica counts but also support the new set-based label selectors and dynamic changes to the number of pods, and Deployment additionally supports rolling upgrades and rollbacks.
As one of k8s's early replica-control resource objects, ReplicationController is rarely used nowadays.


ReplicaSet:

ReplicaSet is the next-generation Replication Controller. The only difference between a ReplicaSet and a Replication Controller right now is the selector support.
ReplicaSet supports the new set-based selector requirements as described in the labels user guide whereas a Replication Controller only supports equality-based selector requirements.

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However,
a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. Therefore,
we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.

Roughly: ReplicaSet is the upgraded version of Replication Controller, and Deployment is a higher-level encapsulation built on top of ReplicaSet. Creating ReplicaSets directly is strongly discouraged; operating on a Deployment to control the ReplicaSet is all that is needed.

Example ReplicaSet yaml file:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80


Common Deployment configuration:

The most commonly operated type of k8s resource object.

A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.
You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment # determines the Deployment's name
  labels:
    app: nginx
spec:
  replicas: 3 # number of pod replicas
  selector:
    matchLabels:
      app: nginx # this sub-key of selector lets the deployment control all pods whose labels include app: nginx
  template:
    # template, under spec, has many sub-keys, e.g. the Pod's labels and the concrete container image, ports, etc.
    metadata:
      labels:
        app: nginx # matches the deployment's selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


A deployment, once created, can also be modified from the command line, e.g.:

kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

Or:

kubectl edit deployment/nginx-deployment, which opens the object in an editor (vim by default); modify and save.


If a modification or upgrade turns out to be broken, you can roll back:

kubectl rollout undo deployment/nginx-deployment (without a revision specified, rolls back to the previous revision)
kubectl rollout undo deployment/nginx-deployment --to-revision=1 (rolls back to revision 1)

You can also watch the rollout status:

$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

View the rollout history:

$ kubectl rollout history deployment/nginx-deployment

deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record
2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91



View the changes made in a specific revision:

$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
  Labels:       app=nginx
                pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:      <none>
  No volumes.



Dynamically changing the number of pods in a deployment:

kubectl scale --replicas=5 deployment myapp (scale up)
kubectl scale --replicas=1 deployment myapp (scale down)

StatefulSets:
Used when configuring stateful clusters, e.g. Redis Sentinel, Redis Cluster, or MySQL master/slave replication.


DaemonSet:
A special pod type: a given DaemonSet runs exactly one copy on each node.
Generally used for shared infrastructure services, such as storage containers or log-collection containers.

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.
As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

running a cluster storage daemon, such as glusterd, ceph, on each node.
running a logs collection daemon on every node, such as fluentd or logstash.
running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Dynatrace OneAgent, Datadog agent, New Relic agent, Ganglia gmond or Instana agent.

Example DaemonSet yaml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Common commands:

kubectl create -f fileName.yaml # create the resource defined in the yaml file
kubectl get pod podName -n nameSpace -o wide # check a pod's status and which node it is running on
kubectl get pod podName --output=yaml -n nameSpace # export a running pod as a yaml file
kubectl top pod podName -n nameSpace # show hardware and load information
kubectl describe pod podName -n nameSpace # show detailed pod runtime information; frequently used for troubleshooting


Reposted from www.cnblogs.com/kvipwang/p/10723083.html