Building a k8s cluster: architecture

The origin of k8s


 

k8s official website: https://kubernetes.io/

k8s Chinese community: https://www.kubernetes.org.cn/

  Kubernetes, abbreviated K8s (the 8 stands in for the eight letters "ubernete"), is an open-source platform for managing containerized applications across multiple hosts — what we often call a container orchestration tool. K8s aims to make deploying containerized applications simple and efficient.

It was founded by several Google engineers and first announced in June 2014. K8s' development was deeply influenced by an internal Google system called Borg, a large-scale container orchestration tool that has run stably inside Google for over a decade. Google followed the ideas of the Borg system, rebuilt this container scheduling tool in the Go language, and named it K8s.

If the concept of a container orchestration tool is unfamiliar, you can understand it by analogy with VMware vCenter: a container orchestration tool, simply put, schedules and manages containers and container clusters. Its scope includes placing containers on nodes, creating deployments, resource limits, networking and routing, storage management, and auto-scaling, as well as load balancing and so on.

Kubernetes and the container ecosystem

Key features of k8s

  • Self-healing: restarts failed containers; when a node becomes unavailable, replaces and reschedules the containers that were on it; kills containers that fail user-defined health checks; and does not advertise a container to clients until it is ready to serve.

  • Elastic scaling: monitors the containers' CPU load; if the average rises above 80%, the number of containers is increased; if the average falls below 10%, the number of containers is reduced.

  • Service discovery and load balancing: no need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives a set of containers their own DNS name and a single IP address, and can load-balance across them.

  • Rolling upgrades and one-click rollback: Kubernetes rolls out configuration changes to the application gradually while monitoring application health, ensuring it never terminates all instances at the same time. If a problem occurs, Kubernetes rolls the change back for you, taking advantage of the growing ecosystem of deployment solutions.

  • Secret and configuration management: for example, the database account and password needed inside a web container (the test database's password) without baking them into the image.
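The elastic-scaling behavior described above can be expressed declaratively with a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `web` already exists (the name and replica bounds are hypothetical; the 80% threshold mirrors the example above, and scale-down is decided by the controller rather than by a second explicit threshold):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:                      # which controller to scale
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: web                          # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # grow when average CPU exceeds 80%
```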

k8s architecture


 

In addition to the core components, there are some recommended Add-ons:

Component                Explanation
kube-dns                 Provides DNS services for the entire cluster
Ingress Controller       Provides an external network entrance for Services
Heapster                 Provides resource monitoring
Dashboard                Provides a GUI web interface for k8s
Federation               Provides clusters across availability zones
Fluentd-elasticsearch    Provides cluster log collection, storage and query

Architecture Component Description

master node - Big Brother

 

  1. API Server (kube-apiserver): provides HTTP REST interfaces for CRUD and watch operations on all k8s resource objects (Pod, RC, Service, etc.); it is the data bus and data hub of the entire system;

  2. Controller Manager (kube-controller-manager): the automated control center for all resource objects in Kubernetes; you can think of it as the steward of resource objects;

  3. Scheduler (kube-scheduler): the process responsible for resource scheduling (Pod scheduling), analogous to the "dispatch office" of a bus company;

  4. etcd: persistently stores all resource objects in the cluster, such as Node, Service, Pod, RC, Namespace, etc.; the API Server provides wrapper APIs for operating etcd, and these APIs are essentially the interfaces for creating, deleting, updating, and querying the cluster's resource objects and watching for changes to them.

node - Little Brother

  1. kubelet: responsible for creating, starting, and stopping the containers belonging to Pods on its node, while working closely with the master node to implement the basic functions of cluster management;

  2. kube-proxy: a key component that implements the communication and load-balancing mechanism for K8s Services;

  3. Docker Engine: the Docker engine is responsible for creating and managing containers on the local host.

Component processing flow

  1. A request to create an RC is submitted via kubectl; the request is written into etcd through the API Server;

  2. The Controller Manager, watching for resource changes through the API Server, detects this RC event. After analyzing it, it finds that the cluster does not yet contain the corresponding Pod instance, so it generates a Pod object from the Pod template in the RC and writes it into etcd via the API Server;

  3. Next, the Scheduler discovers this event and immediately runs a complex scheduling process to pick a Node for the new Pod to settle on, then writes this result into etcd via the API Server;

  4. The kubelet process running on the target Node then detects this "newborn" Pod through the API Server and, following its definition, starts the Pod and dutifully takes responsibility for the rest of its life, until the Pod's life ends.

  5. Next, a request to create a new Service mapped to that Pod is submitted via kubectl;

  6. The Controller Manager queries the associated Pod instances by label, generates the Service's Endpoints information, and writes it into etcd via the API Server;

  7. Finally, the kube-proxy processes running on all Nodes query and watch the Service object and its corresponding Endpoints information through the API Server, and build a software load balancer that forwards traffic destined for the Service to the backend Pods.

k8s logical concepts


 

pod

A Pod is the smallest deployable unit that can be created in K8s; a Pod represents a running process in the cluster. A Pod represents one unit of deployment: a single instance of an application in K8s, which may be composed of a single container or of multiple containers that share resources. A Pod provides two kinds of shared resources: network and storage.

 

  1. Each Pod is assigned its own IP address; every container in the Pod shares the network namespace, including the IP address and network ports;

  2. A Pod can specify a set of shared storage volumes.
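Both kinds of sharing can be sketched in one minimal manifest (the Pod name, images, and the emptyDir volume are purely illustrative): two containers in the same Pod share one IP address and one volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo            # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data         # sees the same files; also shares the Pod IP/ports
```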

Controllers

  • ReplicationController (replica controller): ensures that the number of Pods always stays at the configured count; also supports rolling updates of Pods.

  • ReplicaSet (replica set): not used directly; it is managed by a declaratively-updating controller called Deployment, but a Deployment can only manage stateless applications.

  • StatefulSet (stateful replica set): manages stateful applications.

  • DaemonSet: if you need to run exactly one replica on every Node, rather than placing replicas arbitrarily, you need a DaemonSet.

  • Job: runs one-off jobs, for operations with no fixed schedule. For example: an application has generated a large pile of data sets, and you need to launch a temporary Pod to clean them up; once the cleanup finishes, the Pod can end. For applications like this that do not need to stay running, use a Job-type controller. If the Pod terminates unexpectedly while running, the Job restarts it; once the Pod's task completes, it does not need to be started again.

  • CronJob: periodic jobs.
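The cleanup scenario above can be sketched as a Job manifest (the name, image, and cleanup command are hypothetical): the Pod runs to completion once, and is restarted only if it fails.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup                  # hypothetical name
spec:
  template:
    spec:
      restartPolicy: OnFailure   # restart the Pod if it dies unexpectedly
      containers:
      - name: cleaner
        image: busybox
        command: ["sh", "-c", "rm -rf /data/tmp/*"]   # illustrative cleanup task
```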

ReplicationController

An abstraction over Pod replication, used to solve the problem of scaling Pods up and down.

With a ReplicationController, we can specify how many replicas an application needs; K8s creates a Pod for each replica and guarantees that the number of actually running Pods always equals that replica count.

The selector in an RC is set to a label that is matched against Pods' labels; when the selector's label matches a Pod's label, that Pod is one instance of the RC.

Replicas in an RC sets the replica count; the system maintains that many Pod replicas according to this value.

A ReplicaSet, while inheriting all of the above behavior, can define the replica count from a pre-created template and control it automatically, scaling Pods out and in by changing the Pod replica count.

Drawback: the template cannot be modified, so a new image cannot be released.
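The selector/replicas relationship described above looks like this in a minimal RC manifest (the names, label, and image are illustrative); the key point is that the template's label must match the selector:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-rc           # hypothetical name
spec:
  replicas: 3              # Replicas: the count the system maintains
  selector:
    app: myweb             # selector label, matched against Pod labels
  template:
    metadata:
      labels:
        app: myweb         # must match the selector above
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
```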

 

service

A Service is an abstraction that defines an access policy over a set of Pods — sometimes called a macro-service. The Pods covered by a Service are (usually) determined by a label selector. K8s provides a simple Endpoints API: when the Pods in a Service change, the Endpoints object changes with them.

Service types

ClusterIP: the default mode; accessible only from inside the cluster.

NodePort: listens on the same port (30000-32767) on every node; the ClusterIP and routing rules are created automatically. From outside the cluster you can reach the in-cluster service via <NodeIP>:<NodePort>, optionally behind an external load balancer.

LoadBalancer: used together with a supported public-cloud load balancer such as GCE or AWS. Underneath it is really a NodePort, except that <NodeIP>:<NodePort> is automatically registered with the public cloud's load balancer.

ExternalName: creates a DNS alias pointing at a service name, mainly to guard against the service name changing; requires the DNS add-on.
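A minimal NodePort Service sketch (names, label, and the chosen node port are illustrative): in-cluster clients use the ClusterIP on `port`, while external clients use any node's IP on `nodePort`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc          # hypothetical name
spec:
  type: NodePort
  selector:
    app: myweb             # picks the backend Pods by label
  ports:
  - port: 80               # ClusterIP port (in-cluster access)
    targetPort: 80         # container port on the backend Pods
    nodePort: 30080        # must fall within 30000-32767
```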

 

Endpoint

An endpoint is an accessible service endpoint, i.e. a Pod in the Running state. It is where Service traffic lands; only Pods associated with a Service can become endpoints.

 

Deployment

A Deployment provides declarative updates for Pods and ReplicaSets (the next generation of the ReplicationController). You only describe the desired target state in the Deployment, and the Deployment controller changes the actual state of the Pods and ReplicaSet to match your target state.
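A minimal sketch of such a declaration (names and image are illustrative; `extensions/v1beta1` matches the era of cluster installed below — newer clusters use `apps/v1`, which also requires an explicit `selector`):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb              # hypothetical name
spec:
  replicas: 3              # desired state: three Pods
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx:1.13  # changing this image triggers a rolling update
```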

Networking

Node network: the network that the hosts themselves (Master, Node, etcd, etc.) belong to. Its addresses are configured on the hosts' network interfaces and used for communication between hosts; hence the name node network.

Pod network: a network dedicated to Pod resource objects. It is a virtual network used to assign IP addresses and other network parameters to Pods; its addresses are configured on the network interfaces of the containers inside each Pod. The Pod network is implemented with the kubenet plugin or a CNI plugin.

Service network: a network dedicated to Service resource objects. It is also a virtual network, used to assign IP addresses to the Services in a K8s cluster, but these addresses are not configured on any host or container interface. Instead, kube-proxy on each Node turns them into iptables or ipvs rules, so that all traffic sent to such an address is scheduled onto the backend Pods.

 

Ingress Controller

A Service is a load balancer working at layer 4, while Ingress implements HTTP(S) load balancing at the application layer. However, an Ingress resource cannot carry traffic by itself; it is merely a collection of routing rules, and those rules only take effect through an Ingress controller.
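A minimal sketch of such a rule set (host, names, and backend are illustrative; `extensions/v1beta1` was the Ingress API of this era): HTTP requests for one host are routed to a layer-4 Service behind it.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myweb-ingress            # hypothetical name
spec:
  rules:
  - host: www.example.com        # host-based HTTP routing rule
    http:
      paths:
      - path: /
        backend:
          serviceName: myweb-svc # the layer-4 Service behind the rule
          servicePort: 80
```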

 

 

Installing the k8s cluster


 

Hostname      IP address   Role
k8s-master    10.0.0.11    master
k8s-node-1    10.0.0.12    node-1
k8s-node-2    10.0.0.13    node-2

Configure IP addresses, hostnames, and hosts-file resolution

# add these resolution entries on all hosts (/etc/hosts)
10.0.0.11  k8s-master
10.0.0.12  k8s-node-1
10.0.0.13  k8s-node-2

Install etcd on the master node

yum install etcd -y

vim /etc/etcd/etcd.conf
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"

systemctl start etcd.service
systemctl enable etcd.service

etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0

etcdctl -C http://10.0.0.11:2379 cluster-health

etcd natively supports clustering.

Install kubernetes on the master node

yum install kubernetes-master.x86_64 -y

vim /etc/kubernetes/apiserver 
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 14: KUBELET_PORT="--kubelet-port=10250"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services are running normally

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

Install kubernetes on the nodes

yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config 
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

vim /etc/kubernetes/kubelet
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"   # on k8s-node-2 use 10.0.0.13
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"

systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service

Check on the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     6m
10.0.0.13   Ready     3s

Configure the flannel network on all nodes


 

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld

## master node:
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16" }'

yum install docker -y
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl  restart  docker
systemctl  enable  docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

## nodes:
systemctl enable flanneld.service 
systemctl restart flanneld.service 
systemctl  restart  docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

vim /usr/lib/systemd/system/docker.service
# add one line under the [Service] section so that hosts can reach each other
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload 
systemctl restart docker

Configure the master as an image registry


 

# all nodes

vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.11:5000"]
}

systemctl restart docker

# master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry

 

 

Origin www.cnblogs.com/Mercury-linux/p/12203382.html