Kubernetes study notes (1)

Introduction

In April 2015, Google published the long-rumored Borg paper alongside the high-profile launch of Kubernetes.
Kubernetes is a complete distributed-system support platform with comprehensive cluster management capabilities: multi-level security protection and admission mechanisms, multi-tenant application support, a transparent service registration and service discovery mechanism, a built-in intelligent load balancer, powerful fault discovery and self-healing capabilities, rolling service upgrades and online scaling, a scalable automatic resource-scheduling mechanism, and multi-granularity resource quota management. Kubernetes also provides comprehensive management tools covering development, deployment, testing, and operations monitoring. It is therefore a new distributed-architecture solution based on container technology, and a one-stop platform for developing and supporting distributed systems.

1. Core Concepts


1、Node

As the worker nodes of the cluster, Nodes run the real applications. The smallest unit of execution that Kubernetes manages on a Node is the Pod. The Kubelet and kube-proxy service processes run on each Node; they are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for load balancing in software mode.

Information contained in Node:

  • Node Address: The IP address of the host, or Node ID.
  • The running status of Node: Pending, Running, Terminated.
  • Node Condition:…
  • Node system capacity: Describes the system resources available to the Node, including CPU, memory, and the maximum number of schedulable Pods.
  • Others: Kernel version number, Kubernetes version, etc.

View Node information:

kubectl describe node

2、Pod

A Pod is the most basic unit of operation in Kubernetes, containing one or more closely related containers. A Pod can be regarded as the "logical host" of the application layer in a containerized environment; the container applications in a Pod are usually tightly coupled. Pods are created, started, and destroyed on Nodes. Every Pod runs a special container called Pause; the other containers are business containers. The business containers share the Pause container's network stack and mounted Volumes, so communication and data exchange between them is more efficient. We can take full advantage of this in design by putting a group of closely related service processes into the same Pod.

Containers in the same Pod can communicate with each other simply via localhost.

Application containers in a Pod share the same set of resources:

  • PID namespace: Different applications in a Pod can see the process IDs of other applications;
  • Network namespace: Multiple containers in a Pod can access the same IP and port range;
  • IPC namespace: Multiple containers in a Pod can communicate using SystemV IPC or POSIX message queues;
  • UTS namespace: Multiple containers in a Pod share a hostname;
  • Volumes (shared storage volumes): Each container in a Pod can access Volumes defined at the Pod level;
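Because the containers share one network namespace, they reach each other on 127.0.0.1 just as two processes on one host do. A minimal Python sketch of that pattern (two threads stand in for two containers; this is an illustration, not Kubernetes code):

```python
import socket
import threading

# One thread plays the "server container", the other the "client container".
# Sharing a network namespace means 127.0.0.1 refers to the same stack.

def serve(sock):
    conn, _ = sock.accept()
    conn.sendall(b"pong")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve, args=(server,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
reply = client.recv(4)
client.close()
t.join()
server.close()

print(reply.decode())  # pong
```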

The life cycle of a Pod is managed by a Replication Controller: the Pod is defined in a template and then assigned to run on a Node. When the containers in the Pod finish running, the Pod ends.

Kubernetes has designed a unique set of network configurations for Pods, including: assigning each Pod an IP address, using the Pod name as the host name for communication between containers, etc.

3、Service

In the world of Kubernetes, although each Pod is assigned a separate IP address, that IP address disappears when the Pod is destroyed. This raises a question: if a group of Pods forms a cluster to provide a service, how do clients access it? Through a Service.

A Service can be regarded as the external access interface of a group of Pods providing the same service; which Pods a Service targets is defined by its Label Selector. A Service:

  • has a specified name (e.g. my-mysql-server);
  • has a virtual IP (Cluster IP, Service IP, or VIP) and port number that do not change until the Service is destroyed, and that can only be accessed from within the cluster;
  • provides some kind of remote service capability;
  • is mapped to the set of container applications that provide this service capability.

If a Service is to be reachable from outside the cluster, it must be assigned a public IP and a NodePort, or be placed behind an external load balancer.

With NodePort, the system opens a real port on the host of every Node in the Kubernetes cluster, so that any client that can reach a Node can access the internal Service through that port.
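As a rough mental model, the forwarding can be sketched as a userspace relay in Python. Note that this is only an illustration: the real kube-proxy programs iptables or IPVS rules rather than relaying bytes itself, and all ports and names below are made up:

```python
import socket
import threading

# A toy stand-in for NodePort forwarding: listen on the "node port"
# and relay a connection to a backend "Pod".

def run_backend(sock):
    conn, _ = sock.accept()
    conn.sendall(b"hello from pod")
    conn.close()

def run_forwarder(listen_sock, backend_addr):
    conn, _ = listen_sock.accept()
    upstream = socket.socket()
    upstream.connect(backend_addr)
    data = upstream.recv(64)   # pull the backend's reply...
    conn.sendall(data)         # ...and relay it to the client
    upstream.close()
    conn.close()

backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen(1)

node_port = socket.socket()
node_port.bind(("127.0.0.1", 0))  # a real NodePort is fixed, e.g. 30001
node_port.listen(1)

threading.Thread(target=run_backend, args=(backend,)).start()
threading.Thread(target=run_forwarder,
                 args=(node_port, backend.getsockname())).start()

client = socket.socket()
client.connect(node_port.getsockname())
reply = client.recv(64)
client.close()

print(reply.decode())  # hello from pod
```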

4、Volume

A Volume is a shared directory in a Pod that can be accessed by multiple containers.

5、Label

Labels are key/value pairs attached to various objects (Pod, Service, RC, Node, etc.) to identify them and to manage relationships, such as the relationship between a Service and its Pods.
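Selection by labels is plain key/value matching. A small Python sketch (the helper `matches` and the sample objects are hypothetical, for illustration only):

```python
def matches(selector, labels):
    """Return True if every key/value pair in the selector
    is present in the object's labels (equality-based selection)."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "redis-master-1", "labels": {"name": "redis-master"}},
    {"name": "frontend-1",     "labels": {"name": "frontend"}},
]

# A Service whose selector is name=redis-master picks out only the first Pod.
service_selector = {"name": "redis-master"}
selected = [p["name"] for p in pods if matches(service_selector, p["labels"])]
print(selected)  # ['redis-master-1']
```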

6、RC(Replication Controller)

An RC definition includes:

  • the definition of the target Pod;
  • the number of replicas the target Pod should run;
  • the label (Label) of the target Pods to monitor.

Kubernetes filters out the corresponding Pod instances using the Label defined in the RC, and monitors their status and number in real time. If the number of instances is less than the defined replica count (Replicas), a new Pod is created from the Pod template defined in the RC and scheduled onto a suitable Node to start running, until the number of Pod instances reaches the target.
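The reconciliation described above can be sketched as a simple control loop that compares the observed Pods with the desired replica count. The function and names below are hypothetical; the real controller works through the API Server and etcd:

```python
def reconcile(desired_replicas, running_pods, pod_template):
    """Return the actions an RC-like controller would take to converge
    the observed state toward the desired replica count."""
    actions = []
    if len(running_pods) < desired_replicas:
        # Too few Pods: create new ones from the template.
        for _ in range(desired_replicas - len(running_pods)):
            actions.append(("create", dict(pod_template)))
    elif len(running_pods) > desired_replicas:
        # Too many Pods: delete the surplus.
        for pod in running_pods[desired_replicas:]:
            actions.append(("delete", pod))
    return actions

# Two replicas desired, one running: the controller creates one more.
actions = reconcile(2, ["redis-slave-a"],
                    {"image": "kubeguide/guestbook-redis-slave"})
print(actions)
```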

2. The overall architecture of Kubernetes

Master and Node

Kubernetes divides the machines in a cluster into one Master node and a group of worker nodes (Nodes). The Master runs a set of cluster-management processes: etcd, API Server, Controller Manager, and Scheduler; the latter three form the control center of Kubernetes. These processes implement resource management, Pod scheduling, elastic scaling, security control, system monitoring, error correction, and other management functions for the entire cluster, all fully automatically. Each Node runs three components, Kubelet, Proxy, and the Docker daemon, which manage the life cycle of the Pods on that node and implement the service-proxy function.

Workflow

A request to create an RC is submitted through kubectl and written into etcd via the API Server. The Controller Manager, watching for resource changes through the API Server's watch interface, receives this RC event; after analyzing it, it finds that no corresponding Pod instance exists yet in the cluster, so it generates a Pod object from the Pod template defined in the RC and writes it into etcd through the API Server. The Scheduler then discovers this event, immediately runs a complex scheduling process to select a Node for the new Pod to live on, and writes the result into etcd through the API Server. Finally, the Kubelet process running on the target Node detects the "newborn" Pod through the API Server and, according to its definition, starts the Pod and diligently manages the rest of its life until the Pod's life ends.

Next, we submit through kubectl a request to create a Service mapped to that Pod. The Controller Manager queries the associated Pod instances through the Label, generates the Service's Endpoints information, and writes it into etcd through the API Server. Then the Proxy process running on every Node queries and watches the Service object and its corresponding Endpoints information through the API Server, and builds a software load balancer to forward traffic addressed to the Service to the backend Pods.

  • etcd
    Persistently stores all resource objects in the cluster, such as Node, Service, Pod, RC, and Namespace. The API Server provides encapsulated APIs for operating on etcd; these are essentially interfaces for creating, reading, updating, and deleting the cluster's resource objects, plus interfaces for watching resource changes.

  • API Server
    Provides the sole entry point for operating on resource objects; all other components must manipulate resource data through the APIs it offers. By combining a "full query" of the relevant resource data with "change watching", these components can carry out their business functions in near real time.

  • Controller Manager
    The management and control center inside the cluster. Its main purpose is to automate failure detection and recovery for the Kubernetes cluster: for example, replicating or removing Pods according to an RC's definition so that the number of Pod instances matches the RC's replica count, and creating and updating a Service's Endpoints objects according to the management relationship between Services and Pods. Other work, such as discovering, managing, and monitoring Nodes and cleaning up the disk space used by dead containers and locally cached image files, is also done by the Controller Manager.

  • Scheduler
    The cluster scheduler, responsible for scheduling and assigning Pods to cluster nodes.

  • Kubelet
    Responsible for the full life-cycle management (creation, modification, monitoring, deletion, etc.) of the Pods on its Node; the Kubelet also periodically reports its Node's status to the API Server.

  • Proxy
    Implements the Service proxy and a software-mode load balancer.
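The "full query" plus "change watching" pattern that these components rely on can be sketched with a tiny in-memory store. This is a single-process, hypothetical stand-in for the API Server and etcd, not real Kubernetes code:

```python
class Store:
    """Toy stand-in for the API Server's 'list + watch' interface."""

    def __init__(self):
        self._objects = {}
        self._watchers = []

    def list(self):
        """Full query: return a snapshot of all objects."""
        return dict(self._objects)

    def watch(self):
        """Change watching: return a list that receives future events."""
        events = []
        self._watchers.append(events)
        return events

    def put(self, name, obj):
        """Create or update an object and notify all watchers."""
        kind = "MODIFIED" if name in self._objects else "ADDED"
        self._objects[name] = obj
        for w in self._watchers:
            w.append((kind, name, obj))

store = Store()
store.put("pod-a", {"phase": "Pending"})
snapshot = store.list()   # a controller starts with a full list...
events = store.watch()    # ...then subscribes to incremental changes
store.put("pod-a", {"phase": "Running"})
print(snapshot["pod-a"], events)
```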

Clients access the Kubernetes system through the kubectl command-line tool or through Kubectl Proxy. Clients inside the Kubernetes cluster can manage it directly with kubectl commands; Kubectl Proxy is a reverse proxy for the API Server, through which clients outside the cluster can reach the API Server.

The API Server has a complete internal security mechanism, including modules for authentication, authorization, and admission control.

3. Hello World


Start Kubernetes

systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy

kubectl delete service redis-master  # delete the old Service; the commands for Pods and RCs are similar
Define an RC to create the Pod
redis-master-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1  # ensure there is exactly one Pod instance in the cluster
  selector:  # Pod selector: monitor and manage Pods carrying these labels
    name: redis-master
  template:  # Pods are generated from this template
    metadata:
      labels:
        name: redis-master  # the Pod's label; must match the Service's selector
    spec:
      containers:
      - name: master
        image: kubeguide/redis-master
        ports:
        - containerPort: 6379

Run the following command to publish it to the Kubernetes cluster:

kubectl create -f redis-master-controller.yaml

Check the result:

kubectl get rc
kubectl get pods

The Pod providing the Redis service has been created and is running normally; next, create a Service associated with it.
redis-master-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  - port: 6379  # port the Service exposes on the cluster IP (the virtual port)
    targetPort: 6379  # port on the Pod
  selector:
    name: redis-master  # Pods with these labels belong to this Service

Run the commands:

kubectl create -f redis-master-service.yaml
kubectl get services

After the Service is created successfully, it is assigned an IP address and port. However, since the IP address is assigned automatically by Kubernetes after the Service is created, other Pods cannot know a Service's virtual IP address in advance, so a mechanism is needed to find the service. For this, Kubernetes cleverly uses Linux environment variables: a set of Service-related environment variables is added to the containers of every Pod, recording the mapping from service names to virtual IP addresses. Taking the redis-master service as an example, the following two entries are added to the containers' environment variables:

REDIS_MASTER_SERVICE_HOST=10.254.144.74
REDIS_MASTER_SERVICE_PORT=6379

Therefore, other applications can obtain the virtual IP address and port of the redis-master service through these environment variables.
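Reading those variables from application code is ordinary environment-variable access. A Python sketch (the values are set inside the script only so the example is self-contained; in a real Pod, Kubernetes injects them):

```python
import os

# In a real Pod these variables are injected by Kubernetes; we set them
# here only for illustration.
os.environ["REDIS_MASTER_SERVICE_HOST"] = "10.254.144.74"
os.environ["REDIS_MASTER_SERVICE_PORT"] = "6379"

host = os.environ["REDIS_MASTER_SERVICE_HOST"]
port = int(os.environ["REDIS_MASTER_SERVICE_PORT"])
print(f"redis-master reachable at {host}:{port}")
# redis-master reachable at 10.254.144.74:6379
```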

redis-slave-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379

Run the commands:

kubectl create -f redis-slave-controller.yaml
kubectl get rc
kubectl get pods

To keep the Redis cluster's master-slave data synchronized, the redis-slave needs to know the address of the redis-master, so in the startup script /run.sh of the redis-slave image, the final startup command is:

redis-server --slaveof ${REDIS_MASTER_SERVICE_HOST} 6379

redis-slave-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave

Run the commands:

kubectl create -f redis-slave-service.yaml
kubectl get services

Create the web Pod and Service
frontend-controller.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: kubeguide/guestbook-php-frontend
        env: 
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80

Run the commands:

kubectl create -f frontend-controller.yaml
kubectl get rc
kubectl get pods

frontend-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001  # port on the physical host
  selector:
    name: frontend

Run the commands:

kubectl create -f frontend-service.yaml
kubectl get services


Source: http://blog.csdn.net/test103/article/details/55663562
