k8s cluster architecture and basic operation

First, know that a k8s cluster involves two command-line tools:
kubeadm: a tool that automates the deployment of a k8s cluster.
kubectl: the k8s command-line tool, which receives the commands the user issues.
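
As a rough sketch of how the two tools fit together (the master IP, token and hash below are placeholders, and exact flags vary by k8s version; the 10.244.0.0/16 CIDR matches the pod IPs seen later in this article):

# On the master: initialize the control plane
[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker: join the cluster using the command printed by kubeadm init
[root@node01 ~]# kubeadm join 172.16.1.30:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Back on the master: verify the nodes with kubectl
[root@master ~]# kubectl get nodes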

What is Kubernetes made of?

At the hardware level, a Kubernetes cluster consists of many nodes, which fall into the following two types:

  • The master node: it hosts the Kubernetes control plane, which controls and manages the entire cluster.
  • Worker nodes: they actually run the applications the user deploys.

Control plane (master)

The control plane controls the cluster and makes it work. It comprises multiple components, which can all run on a single master node or be deployed across multiple master nodes to ensure high availability.

The master components are:

Note: by default the master node does not run workloads. If needed, we can allow it to, but this is generally not recommended: the master node is responsible for controlling and managing the cluster, so it is important, and in general it is best to keep the default and let it stay out of the workload.
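
For reference, if you really do want the master to run ordinary pods, the usual way is to remove its scheduling taint (the node name "master" below is just this article's example hostname):

# Allow the master to schedule ordinary pods (not recommended, see the note above)
[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master-
# To restore the default behaviour, put the taint back
[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule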

  • APIServer: the apiserver is the front-end interface of the k8s cluster; various client tools and the other k8s components manage the resources in the cluster through it.

  • Scheduler: responsible for deciding which node each pod runs on. During scheduling it considers the state of the cluster's nodes, the current load on each node, and the pod's requirements for availability and performance.

  • Controller manager: responsible for managing the various resources of the k8s cluster, making sure each resource stays in the user's desired state.

  • etcd: a distributed key-value store, responsible for storing the k8s cluster's configuration and the state of its resources; when data changes, etcd notifies the other k8s components.

  • Pod: the smallest unit in a k8s cluster. Each pod runs one or more containers (typically just one).

The worker node components are:

  • kubelet: the agent on a worker node. When the Scheduler decides to run a pod on a node, the pod's concrete configuration (image, volumes, etc.) is sent to that node's kubelet; kubelet uses this information to create and run the containers and reports their status back to the master.
    Self-healing: if a container on the node dies, kubelet kills it off and then re-creates the container automatically.

  • kube-proxy (load balancing): a Service logically represents a set of backend pods, and the outside world reaches the pods through the Service.
    How does a request received by the Service get forwarded to a pod? That is kube-proxy's job: it load-balances using iptables rules (see the sketch after this list).
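
A quick way to see this in practice, assuming kube-proxy is running in its default iptables mode ("myweb" is the example Service created later in this article; kube-proxy tags its rules with the service name, so grepping for it works):

# On any node, list the NAT rules kube-proxy programs for the service
[root@node01 ~]# iptables-save -t nat | grep myweb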

How do the various components interact?
First, the user issues a deployment command via kubectl, which is passed to the cluster's APIServer. The APIServer notifies the Controller Manager to create a Deployment resource object; once that is confirmed, the instruction goes back through the APIServer, which talks to etcd, and etcd keeps the resource information for the cluster. Next, the Scheduler performs scheduling and decides which nodes in the cluster the pods will be assigned to. Finally, the kubelet on each chosen node creates and runs the pods as requested.
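
The same flow can also be triggered declaratively: instead of the imperative kubectl run used below, you describe the desired Deployment in YAML and hand it to the APIServer with kubectl apply. A minimal sketch (the names, labels, and image are only examples, not taken from this article's environment):

[root@master ~]# cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF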

k8s basic operations

Where the YAML files of the various k8s components are stored:
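In a kubeadm-deployed cluster the control-plane components run as static pods, and their manifests can typically be found on the master like this:

[root@master ~]# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml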

Kubernetes has the following four default namespaces:
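They can be listed directly; the four built-in namespaces are default, kube-node-lease, kube-public, and kube-system (kube-node-lease only exists on newer k8s releases):

# List the built-in namespaces
[root@master ~]# kubectl get namespaces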

1) Create a controller: a Deployment resource object
[root@master ~]# kubectl run nginx-deploy --image=nginx --port=80 --replicas=2


Parameter description:
kubectl run: run a resource object; the name that follows is user-defined
--image: specify the image, i.e. the service you want to deploy
--port: specify the service port
--replicas: the number of replicas to create, here 2

// View the Deployment resource object
[root@master ~]# kubectl get deployments -o wide


Parameter explanation:
-o wide: with this option the output is a little wider, showing extra columns.
READY: progress toward the desired count; 2/2 means 2 of 2 replicas are available.
AVAILABLE: the number of replicas currently available.

It will automatically pull the image (the nginx image); you can also push the image to the servers in advance so it is pulled locally.
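
If the nodes cannot reach a registry quickly, one option is to pull the image on each worker beforehand so kubelet finds it locally. A small sketch, assuming Docker is the container runtime as in this article:

# On each worker node, pre-pull the image so pod creation does not wait on the download
[root@node01 ~]# docker pull nginx
[root@node02 ~]# docker pull nginx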

// Check which node each pod is running on (this also shows the pod's IP address):
[root@master ~]# kubectl get pod -o wide

The IP addresses assigned to the pods above come from the pod network CIDR that we specified, using the officially provided pod network, when we initialized the cluster.

A pod actually contains two kinds of containers; among the containers in a pod:
the USR, MNT and PID namespaces are isolated from each other,
the UTS, IPC and NET namespaces are shared.
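
You can see the "second kind" of container by listing the Docker containers on a node: besides the nginx container itself, kubelet creates an infrastructure (pause) container that holds the shared namespaces. A rough sketch, assuming Docker is the runtime (the k8s_... names follow kubelet's container naming convention):

# On node01, both the application container and the pause container of the pod show up
[root@node01 ~]# docker ps | grep nginx-deploy
# k8s_nginx-deploy_nginx-deploy-...   <- the application container
# k8s_POD_nginx-deploy-...            <- the pause container holding the shared NET/UTS/IPC namespaces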

2) Service - exposing a resource (exposing a port to the outside network):
# If the outside network needs to access a service running in k8s, you must create a Service resource object.

[root@master ~]# kubectl expose deployment  nginx-deploy --name=myweb --port=80 --type=NodePort
service/myweb exposed

Parameter explanation:
expose: expose a port
nginx-deploy: expose the resource object named nginx-deploy
--name: custom name, here myweb
--port: specify port 80
--type: specify the type, NodePort

# In effect, the command above simply creates a Service.
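
The declarative equivalent would be a Service manifest roughly like the sketch below (the run=nginx-deploy selector assumes the label that this version of kubectl run attaches, as shown later in the SELECTOR column):

[root@master ~]# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  selector:
    run: nginx-deploy
  ports:
  - port: 80
    targetPort: 80
EOF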

// View the Service resource object that was mapped out:
[root@master ~]# kubectl get service

Explanation:
CLUSTER-IP: a unified cluster interface, the address used for communication inside the cluster.
80:32326/TCP: 80 is the service port; the port after it is the one exposed to the outside network (randomly generated, in the range 30000-32767).

// Test from an external network: access the service in a browser through the exposed port:
url: http://172.16.1.30:30400/
Note that the service can be reached through any host in the cluster, not only the master.
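
A quick command-line check of the same thing (replace <node-ip> with any node's IP and <nodeport> with the port reported by kubectl get service, e.g. 32326 above):

# The NodePort answers on every node in the cluster, master included
[root@master ~]# curl -I http://<node-ip>:<nodeport>/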

3) Manually delete a container on a node, then view the Deployment resource object again: is the number of Pods kept at the user's desired count? Has the IP address changed?

[root@master ~]# kubectl get pod -o wide  # check which node each pod is assigned to
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-59f5f764bb-h7pv2   1/1     Running   0          44m   10.244.1.2   node01   <none>           <none>
nginx-deploy-59f5f764bb-x2cwj   1/1     Running   0          44m   10.244.2.2   node02   <none>           <none>
Delete the container on node01:
[root@node01 ~]# docker ps 


[root@node01 ~]# docker rm -f 47e17e93d911

// View the Deployment resource object again:
[root@master ~]# kubectl get deployments -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS     IMAGES   SELECTOR
nginx-deploy   2/2     2            2           48m   nginx-deploy   nginx    run=nginx-deploy
// View the pods again:
[root@master ~]# kubectl  get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-59f5f764bb-h7pv2   1/1     Running   0          50m   10.244.1.2   node01   <none>           <none>
nginx-deploy-59f5f764bb-x2cwj   1/1     Running   0          50m   10.244.2.2   node02   <none>           <none>

You can see that our pods still maintain the desired number, and the pods' IP addresses have not changed. You will also find that when you delete a container on a node, a new container is immediately and automatically created in its place. Why is this?
In fact, the cluster relies on the controller manager component to keep every resource in the user's desired state: if you define two replicas, it makes sure two pods are always running; if there are fewer, more are created.
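
The same reconciliation loop is what makes scaling a one-line operation: change the desired replica count and the controller manager converges the cluster to it. A small example (the deployment name matches the one created above):

# Raise the desired replica count from 2 to 4; new pods are scheduled automatically
[root@master ~]# kubectl scale deployment nginx-deploy --replicas=4
# Watch the actual state converge to the desired state
[root@master ~]# kubectl get pod -o wide -w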


Origin blog.51cto.com/13972012/2455113