Kubernetes Interview Questions Summary

Note: The following questions and answers are the author's own summary; if anything is wrong, please point it out.

1. What is K8s? Please describe your understanding of it.

A: Kubernetes is an open-source system for automatically deploying, scaling, and managing containerized applications. Its main job is to orchestrate containers in production environments.
K8s was launched by Google and grew out of Borg, the cluster management system Google had used internally for 15 years; it distills the essence of Borg.

2. What is the composition of K8s architecture?

A: Like most distributed systems, a K8s cluster needs at least one master node (Master) and multiple compute nodes (Node).

  • The master node mainly exposes the API, schedules deployments, and manages the nodes;
  • Compute nodes run a container runtime (typically Docker, though runtimes such as rkt are similar) together with a K8s agent (kubelet) that communicates with the master. Compute nodes also run additional components for logging, node monitoring, service discovery, and so on. The compute nodes are where the real work of the K8s cluster happens.

K8s architecture in detail:
1. Master node (by default it does not run workloads):

  • Kubectl: the client command-line tool and the entry point for operating the whole K8s cluster;
  • API Server: acts as the "bridge" in the K8s architecture and is the only entry point for resource operations. It provides authentication, authorization, access control, API registration, and discovery. All communication between clients and the K8s cluster, and between K8s internal components, goes through the API Server;
  • Controller-manager: responsible for maintaining the desired state of the cluster, e.g. fault detection, auto scaling, rolling updates, etc.;
  • Scheduler: responsible for resource scheduling; it places Pods onto appropriate Node nodes according to the configured scheduling policy;
  • Etcd: acts as the data store and saves the entire state of the cluster;
    2. Node node:
  • Kubelet: responsible for the life cycle of containers and also for managing networking and Volumes. It runs on every node and acts as the Node's agent. When the Scheduler decides to run a Pod on a node, it sends the Pod's specification (image, Volumes, etc.) to that node's kubelet, which creates and runs the containers accordingly and reports their status back to the master. (Self-healing: if a container on a node dies, kubelet tries to restart it; if restarting does not help, the Pod is killed and then recreated);
  • Kube-proxy: a Service logically represents a group of backend Pods. The Service provides in-cluster service discovery and load balancing; when Pods are accessed through their Service, the requests received by the Service are forwarded to the Pods by kube-proxy;
  • Container runtime: the software responsible for running containers, such as Docker;
  • Pod: the smallest unit in a K8s cluster. Each Pod can run one or more containers. If a Pod contains two containers, their USR (user), MNT (mount), and PID (process ID) namespaces are isolated from each other, while the UTS (hostname and domain name), IPC (message queues), and NET (network stack) namespaces are shared. I like to think of the Pod as the pea pod and the containers as the peas inside it;

3. What is the difference between deploying an application in a container and deploying it directly on the host?

A: The core ideas behind containers are second-level startup and "build once, run anywhere", which deploying applications directly on the host cannot achieve; note, however, that data persistence for containers needs special attention.
In addition, containers make it possible to deploy individual services in isolation so they do not interfere with each other, which is another core idea of containers.

4. Please describe the health-check mechanism Kubernetes provides for Pod resource objects.

A: K8s provides three types of probes to perform health checks on Pod resource objects:

1) livenessProbe
Determines whether the Pod is healthy according to user-defined rules. If the livenessProbe detects that the container is unhealthy, kubelet decides whether to restart it according to the Pod's restart policy. If a container does not define a livenessProbe, kubelet treats the probe result as always successful.
2) readinessProbe
Likewise determines whether the Pod is healthy according to user-defined rules. If the probe fails, the controller removes the Pod from the endpoint list of the corresponding Service, and no requests are routed to this Pod until a later probe succeeds.
3) startupProbe
A startup check intended for applications that are slow to start, so that they are not killed by the two probes above before they finish starting. (Another way to work around this problem is to keep only the two probes above but give them a longer initial delay.)
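A minimal startupProbe sketch (image, path, and thresholds are illustrative, not from the original); until the startupProbe succeeds, the other two probes are not executed:

spec:
  containers:
  - name: slow-app
    image: myrepo/slow-app:latest        # hypothetical slow-starting application
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30               # allow up to 30 x 10s = 300s to start
      periodSeconds: 10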

Each probe type supports the same set of parameters to control how the check is performed:

  • initialDelaySeconds: delay before the first probe, giving the application time to start so the health check does not fail before the application is up;
  • periodSeconds: probe interval, i.e. how often the check is performed; defaults to 10s;
  • timeoutSeconds: probe timeout; after this period the probe attempt is considered failed;
  • successThreshold: the number of consecutive successful probes required for the container to be considered healthy; defaults to 1.

These probes support the following three check methods:
1) exec: checks whether the service is healthy by running a command, for example using the cat command to check whether a particular configuration file exists inside the Pod; if it exists, the Pod is healthy, otherwise it is not.
The exec method is written in a yaml file as follows:

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:         # use the livenessProbe mechanism
      exec:                # run the following command as the check
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5          # start probing 5 seconds after the container starts
      periodSeconds: 5                # probe every 5 seconds

In the configuration above, probing starts 5 seconds after the container is running and repeats every 5 seconds. If the cat command returns 0 the container is considered healthy; a non-zero return value indicates a problem.

2) httpGet: checks whether the service is healthy by sending an HTTP/HTTPS request; a returned status code in the 200-399 range means the container is healthy (the httpGet check is similar to curl -I).
The httpGet method is written in a yaml file as follows:

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    livenessProbe:              # use the livenessProbe mechanism
      httpGet:                  # probe with an HTTP GET request
        scheme: HTTP            # protocol; https is also supported
        path: /healthz          # check whether the healthz page under the web root is reachable
        port: 8080              # port to probe
      initialDelaySeconds: 3    # start probing 3 seconds after the container starts
      periodSeconds: 3          # probe every 3 seconds

The configuration above sends an HTTP GET probe to the container, requesting the healthz file on port 8080; any status code greater than or equal to 200 and less than 400 indicates success, and any other code indicates a problem.

3) tcpSocket: checks the container through its IP address and a port; if a TCP connection can be established, the container is considered healthy. This mechanism is somewhat similar to httpGet; tcpSocket health checks are used for TCP-based services.
The tcpSocket method is written in a yaml file as follows:

spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    # both probe mechanisms are used below; both try to establish a TCP connection to the container's port 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

In this yaml file both probe types are used. Five seconds after the container starts, kubelet sends the first readinessProbe, which tries to connect to the container's port 8080; if the probe succeeds, the Pod is considered ready, and kubelet probes again every ten seconds.

Besides the readinessProbe, fifteen seconds after the container starts kubelet sends the first livenessProbe, again trying to connect to port 8080; if the connection fails, the container is restarted.

The result of a probe is always one of the following three:

  • Success: the container passed the check;
  • Failure: the container failed the check;
  • Unknown: the check was not performed, so no action is taken (if no probe is defined, the result is treated as Success by default).

If the above is not detailed enough, you can also consult the official documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

5. How do you control a rolling update?

A: You can view the parameters that control a rolling update with the following command:

[root@master yaml]# kubectl explain deploy.spec.strategy.rollingUpdate

maxSurge: during a rolling update, the number of Pods that may exist above the desired replica count. It can be a percentage or an absolute value; the default is 1.
(In other words, if this value is 3, then regardless of anything else, three extra Pods are run to replace the old Pods, and so on.)
maxUnavailable: during a rolling update, the number of Pods that may be unavailable. (This value is independent of the one above. For example: I have ten Pods, and during the update I allow at most three of them to be unavailable, so I set this parameter to 3; as long as the number of unavailable Pods stays at three or below, the update keeps going.)
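A minimal Deployment strategy sketch using these two fields (the values are illustrative):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3            # at most 3 Pods above the desired count during the update
      maxUnavailable: 3      # at most 3 Pods may be unavailable during the update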

6. What image pull policies does K8s have?

A: You can view the explanation of the imagePullPolicy line with the command "kubectl explain pod.spec.containers".

K8s has three image pull policies: Always, Never, and IfNotPresent;

  • Always: always pull the image from the specified registry;
  • Never: never pull the image from a registry; only a local image can be used;
  • IfNotPresent: pull from the target registry only when there is no matching local image.
  • Default pull policy: when the image tag is latest, the default is Always; when the image has a custom tag (i.e. the tag is not latest), the default is IfNotPresent.
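For example, a container spec that overrides the default policy (the image name is a placeholder):

spec:
  containers:
  - name: web
    image: myrepo/web:v1.2        # custom tag, so the default would be IfNotPresent
    imagePullPolicy: Always       # override: always pull from the registry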

7. What states can a Pod be in?

  • Running: the Pod's containers have been successfully scheduled to a node and are running successfully;
  • Pending: the API Server has created the Pod resource object and stored it in etcd, but the Pod has not finished scheduling yet, or it is still downloading images from the registry;
  • Unknown: the API Server cannot obtain the state of the Pod object, usually because it cannot communicate with the kubelet on the node where the Pod runs.

8. What Pod restart policies are there?

A: You can view the Pod restart policy with the command "kubectl explain pod.spec" (the restartPolicy field).

  • Always: restart the Pod's containers whenever they terminate; this is the default policy.
  • OnFailure: restart only when a container in the Pod exits with an error.
  • Never: never restart the containers.
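A minimal sketch showing where restartPolicy is set (the command is only there to force a non-zero exit):

spec:
  restartPolicy: OnFailure            # Always (default) / OnFailure / Never
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits non-zero, so OnFailure triggers a restart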

9. What is the role of the Service resource object?

A: It provides a fixed, unified access endpoint for a group of Pod objects of the same kind; it is commonly used for service discovery and for accessing services.

10. What are the commands related to version rollback?

[root@master httpd-web]# kubectl apply -f httpd2-deploy1.yaml  --record  
# apply the yaml file and record the version information
[root@master httpd-web]# kubectl rollout history deployment httpd-devploy1  
# view the revision history of this deployment
[root@master httpd-web]# kubectl rollout undo deployment httpd-devploy1 --to-revision=1    
# roll back, specifying revision 1 as the target

# In the spec field of the yaml file you can add the following option (it limits how many historical revisions are kept):
spec:
  revisionHistoryLimit: 5            
# this field is found by running "kubectl explain deploy.spec" and locating the line "revisionHistoryLimit   <integer>"

11. What are labels and label selectors used for?

Labels: when there are more and more resource objects of the same type, labels let you divide them into groups so that resources can be managed more efficiently.
Label selectors: query and filter conditions based on labels. The API currently supports two kinds of label selectors:

  • "! =": Based on equivalence relations, such as "=", "", "==", (Note: "==" is equal to the mean, matchLabels field yaml file);
  • Collections, such as those based: in, notin, exists (matchExpressions field yaml file);
    Note: in: in this collection; notin: not in this collection; exists: either all in (exists) in this set, or both not (NotExists);

Logic rules when using label selectors:

  • When several set-based selectors are specified at the same time, the relationship between them is a logical AND (e.g. with - {key: name, operator: In, values: [zhangsan, lisi]}, any resource carrying either of the two values is selected);
  • A label selector with a null value selects every resource object (e.g. if the selector key is "A" and two resource objects both have the key A but with different values, a null-value selector selects both objects);
  • An empty label selector (note: empty, with no key defined at all, which is not the same as null) selects no resources;
  • For set-based selectors using the In or NotIn operator, values may be empty, but if it is empty the selector no longer has any meaning.
The two kinds of label selector (equality-based and set-based) are written as follows:
selector:
  matchLabels:           # equality-based
    app: nginx
  matchExpressions:         # set-based
    - {key: name,operator: In,values: [zhangsan,lisi]}     # the key, operator and values fields are fixed
    - {key: age,operator: Exists,values:}   # if the operator is Exists, values must be left empty

12. What label classifications are commonly used?

Label classification can be defined freely, but to make the intent clear to others, the following classifications are generally used:

  • Release labels (release): stable (stable release), canary (canary release, sometimes called a beta), beta (beta release);
  • Environment labels (environment): dev (development), qa (testing), production (production), op (operations);
  • Application labels (app): ui, as, pc, sc;
  • Architecture tier labels (tier): frontend (front end), backend (back end), cache (cache);
  • Partition labels (partition): customerA (customer A), customerB (customer B);
  • Quality track labels (track): daily (every day), weekly (every week).
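For example, a resource might carry labels from several of these classifications at once (the values are illustrative):

metadata:
  labels:
    release: stable
    environment: production
    tier: frontend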

13. How many ways are there to view labels?

A: There are three common ways to view them:

[root@master ~]# kubectl get pod --show-labels    # list pods and show their labels
[root@master ~]# kubectl get pod -L env,tier      # show the values of the given label keys as extra columns
[root@master ~]# kubectl get pod -l env,tier      # only list pods that carry these label keys, whereas "-L" lists all pods

14. What are the commands to add, modify, and delete labels?

# operations on pod labels
[root@master ~]# kubectl label pod label-pod abc=123     # add a label to the pod named label-pod
[root@master ~]# kubectl label pod label-pod abc=456 --overwrite       # modify the label on the pod named label-pod
[root@master ~]# kubectl label pod label-pod abc-             # delete the abc label from the pod named label-pod
[root@master ~]# kubectl get pod --show-labels

# label operations on node objects
[root@master ~]# kubectl label nodes node01 disk=ssd      # add the disk label to node node01
[root@master ~]# kubectl label nodes node01 disk=sss --overwrite    # modify the label on node node01
[root@master ~]# kubectl label nodes node01 disk-         # delete the disk label from node node01

15. What are the characteristics of the DaemonSet resource object?

A DaemonSet runs on every node in the K8s cluster, and each node runs exactly one of its Pods; this is the biggest, and only, difference between it and the Deployment resource object. For that reason its yaml file does not support defining replicas; apart from that it is written the same way as Deployment, RS, and other resource objects.

Its typical use cases are as follows:

  • Running log collection on every node;
  • Monitoring the running state of every node;
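A minimal DaemonSet sketch for a node-level log agent (the names and image are placeholders, not from the original):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
      - name: agent
        image: myrepo/log-agent:latest    # hypothetical log-collection image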

16. Talk about your understanding of the Job resource object.

A: Unlike service-type containers, a Job is a work-type container, generally used to run one-off tasks. It is not used very often, so this question can be treated as minor.

# A way to improve Job throughput: run its Pods in parallel
spec:
  parallelism: 2           # run 2 Pods at a time
  completions: 8           # the Job is complete after 8 successful Pods
  template:
    metadata:

17. Which states describe a Pod's life cycle?

  • Pending: the Pod has been accepted for creation and is waiting for kube-scheduler to select a suitable node; usually the image is still being prepared;
  • Running: all of the Pod's containers have been created, and at least one container is running, starting, or restarting;
  • Succeeded: all containers have terminated successfully and will not be restarted;
  • Failed: the Pod's containers have terminated and at least one exited with a non-zero (abnormal) status;
  • Unknown: the Pod's state cannot be read, usually because kube-controller-manager cannot communicate with the Pod.

18. What is the process of creating a Pod?

Answer:

1) The client submits the Pod's configuration (which can be defined in a yaml file) to kube-apiserver;
2) After the API Server receives the request, it notifies controller-manager to create the resource object;
3) Controller-manager stores the Pod's configuration in the etcd data store through the API Server;
4) Kube-scheduler detects the Pod information and starts scheduling: it first filters out the nodes that do not meet the Pod's resource requirements, then scores the remaining nodes to pick the one best suited to run the Pod, and sends the Pod's placement to the kubelet on that node.
5) The kubelet runs the Pod according to the placement sent by the scheduler; once the Pod is running successfully, its runtime information is reported back and stored in the etcd data store.

19. What happens when a Pod is deleted?

A: Kube-apiserver receives the user's delete instruction. By default there is a 30-second grace period for a graceful exit; after 30 seconds the Pod is marked as dead. The Pod's status becomes Terminating, and when kubelet sees the Pod marked as Terminating it starts shutting the Pod down;

The shutdown process is as follows:
1. The Pod is removed from the Service's endpoint list;
2. If the Pod defines a preStop hook, it is invoked inside the Pod; a preStop hook usually defines how to end the processes gracefully;
3. The processes are sent the TERM signal (kill -15);
4. Once the grace period expires, any processes still running in the Pod are sent the SIGKILL signal (kill -9).

20. What is a K8s Service?

A: Every time a Pod is restarted or redeployed, its IP address changes, which makes communication between Pods, and between Pods and the outside world, difficult. A Service therefore provides a fixed entry point to a set of Pods.

Through its endpoint list, a Service is typically bound to a group of Pods with the same configuration, and it load-balances external requests across those Pods.
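A minimal Service sketch that selects Pods by label (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web              # Pods carrying this label become the Service's endpoints
  ports:
  - port: 80              # port exposed by the Service inside the cluster
    targetPort: 8080      # port the backend Pods listen on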

21. How does K8s perform service registration?

A: When a Pod starts, it loads the information of all Services in the current environment, so that different Pods can communicate with each other using Service names.

22. How is a Pod accessed from outside the K8s cluster?

A: You can access the Service through a NodePort: the same port (for example 30000) is opened on every node, and traffic arriving at a node on that port is redirected to the corresponding Service.
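A minimal NodePort sketch (selector and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80              # Service port inside the cluster
    targetPort: 8080      # container port
    nodePort: 30000       # opened on every node; reachable as <NodeIP>:30000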

23. What are the options for data persistence in K8s?

A: 1) emptyDir (empty directory): no directory on the host is specified for the mount; a portion of the host's storage is mapped directly into the Pod. Similar to Docker's manager volume.

Main use cases:
1) data that only needs to be kept on disk temporarily, e.g. for a merge/sort algorithm;
2) shared storage for two containers, e.g. a content-manager container stores its generated data and pages in the volume while a webserver container in the same Pod serves those pages to the outside.
emptyDir characteristics:
The containers inside the same Pod share the same persistence directory. When the Pod is deleted from a node, the volume's data is deleted with it; if only a container is destroyed and the Pod remains, the volume's data is not affected.
In summary: the life cycle of emptyDir data persistence is the same as that of the Pod using it; it is generally used as temporary storage.
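A minimal emptyDir sketch in which two containers in one Pod share a directory (names and images are illustrative):

spec:
  containers:
  - name: producer
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/index.html; sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: webserver
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cache
    emptyDir: {}          # the data lives and dies with the Pod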

2) hostPath: mounts an existing file or directory on the host into the container. Similar to a bind mount in Docker.
This kind of data persistence is used in few scenarios, because it increases the coupling between the Pod and the node.
It is generally used by the K8s cluster itself and by Docker itself for their own data persistence; you can look at the apiserver yaml file under the /etc/kubernetes/main... directory for reference.
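A minimal hostPath sketch (the host path is a placeholder):

spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: host-logs
      mountPath: /var/log/app
  volumes:
  - name: host-logs
    hostPath:
      path: /data/app-logs          # existing (or to-be-created) directory on the node
      type: DirectoryOrCreate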

3) PersistentVolume (PV for short):
A PV can be backed by an NFS service or by GFS. Its role is to provide a unified directory for data persistence, making it easy to manage.

In a PV's yaml file you can configure the PV's capacity and specify the PV's access modes:

  • ReadWriteOnce: can be mounted read-write by a single node only;
  • ReadOnlyMany: can be mounted read-only by many nodes;
  • ReadWriteMany: can be mounted read-write by many nodes.
    You can also specify the PV's reclaim policy:
  • Recycle: the data on the PV is cleared and the PV is then automatically reclaimed;
  • Retain: the PV must be reclaimed manually;
  • Delete: the underlying cloud storage resource is deleted (cloud-storage specific);
    (PS: the reclaim policy refers to what happens to the files stored on the PV after the PV is deleted.)

An important concept when using PVs is the PVC. A PVC is a request for a certain amount of PV capacity. A K8s cluster may contain many PVs; for a PVC to bind to a PV, their access modes must match and the storageClassName they define must also be the same. If the cluster contains several PVs that match (same storageClassName and same access mode), the PVC binds to the PV whose capacity is closest to what it requested, or otherwise picks one at random.
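A minimal sketch of an NFS-backed PV and a matching PVC (server address, storageClassName and sizes are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: 192.168.1.100      # hypothetical NFS server
    path: /nfs/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany              # must match the PV
  storageClassName: nfs        # must match the PV
  resources:
    requests:
      storage: 5Gi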
