Chapter 9: Service

2019-09-23

Today marks exactly one hundred days until 2020. May these hundred days live up to expectations.

  Never forget why you started, and you can carry it through to the end;

  the original aspiration is easy to find, but hard to hold on to.

1. Service Concept

Kubernetes Service defines an abstraction: a logical set of Pods together with a policy for accessing them, usually called a micro-service. The set of Pods a Service targets is normally determined by a Label Selector.

A Service can provide load balancing, but with the following limitation: it offers only layer-4 load balancing, not layer-7 features. Sometimes we need richer matching rules to forward requests, and layer-4 load balancing does not support that.
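As a quick added illustration (not part of the original notes; it assumes pods carrying the labels used in the examples later in this chapter), the pods a Service selector would match can be listed directly:

# Pods that a Service with selector "app=myapp, release=stabel" would pick up
kubectl get pods -l app=myapp,release=stabel --show-labels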

2. Service Types

There are four Service types in K8s:

① ClusterIP: the default type; automatically assigns a virtual IP that is reachable only inside the cluster

② NodePort: on top of ClusterIP, binds a port for the Service on every machine, so the service can be reached via <NodeIP>:NodePort

③ LoadBalancer: on top of NodePort, asks the cloud provider to create an external load balancer and forward requests to <NodeIP>:NodePort

④ ExternalName: brings a service outside the cluster into the cluster, so it can be used directly from inside. No proxy of any kind is created; this is only supported by kube-dns on kubernetes 1.7 or later

Basic introduction to svc

 

Summary

The client reaches the node through iptables rules;

those iptables rules are written by kube-proxy;

kube-proxy watches the apiserver for changes to services and endpoints;

kube-proxy uses the pod labels (Labels) to decide whether a pod's information should be written into the Endpoints object.
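A suggested check, not in the original notes: on a node you can look at the NAT rules kube-proxy maintains and at the Endpoints objects those rules are built from.

# KUBE-SERVICES is the entry chain kube-proxy programs in iptables mode
iptables -t nat -nvL KUBE-SERVICES | head
# The Endpoints objects that back each Service
kubectl get endpoints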

3. VIP and Service Proxies

Every node in a Kubernetes cluster runs a kube-proxy process. kube-proxy is responsible for implementing a VIP (virtual IP) form of access for Services of every type other than ExternalName. In Kubernetes v1.0 the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, but it was not the default mode. From Kubernetes v1.2 on, iptables is the default proxy. Kubernetes v1.8.0-beta.0 added the ipvs proxy.

 

Starting with Kubernetes 1.14, the ipvs proxy is used by default.
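To confirm which proxy mode a given node's kube-proxy is actually running, one option is the sketch below; it assumes the default metrics port 10249 and, for the second command, a kubeadm-style cluster where the configuration lives in the kube-proxy ConfigMap.

# Ask kube-proxy directly on the node; prints "iptables" or "ipvs"
curl -s http://localhost:10249/proxyMode
# Or inspect the configured mode in a kubeadm cluster
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"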

 

In Kubernetes v1.0, a Service is a "layer 4" (TCP/UDP over IP) concept. Kubernetes v1.1 added the Ingress API (beta) to represent "layer 7" (HTTP) services.

Why not use round-robin DNS?

DNS results are cached by many clients. After resolving the domain name, many services never clear the cached result and query DNS again, so once the client has obtained an address it keeps using that same address no matter how many times it accesses the service, which makes the load balancing ineffective.

4. Proxy Modes

1) userspace proxy mode

 

2) iptables proxy mode

 

3) ipvs proxy mode

In this mode, kube-proxy watches Kubernetes Service objects and Endpoints objects, calls the netlink interface to create the corresponding ipvs rules, and periodically syncs the ipvs rules against the Service and Endpoints objects so that the ipvs state matches what is expected. When the service is accessed, traffic is redirected to one of the backend Pods.

Like iptables, ipvs hooks into netfilter, but it uses a hash table as its underlying data structure and works in kernel space. This means ipvs can redirect traffic faster and has much better performance when synchronizing proxy rules. In addition, ipvs offers more load-balancing algorithms, for example (a quick ipvsadm check follows this list):

① rr: round-robin scheduling

② lc: least connections

③ dh: destination hashing

④ sh: source hashing

⑤ sed: shortest expected delay

⑥ nq: never queue
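An added sketch, not in the original notes: on a node running in ipvs mode, the programmed rules and the scheduler in use can be inspected with the ipvsadm tool, if it is installed. The scheduler is chosen through kube-proxy's ipvs configuration (the scheduler field); rr is used when nothing is set.

# One virtual server per Service clusterIP:port; the scheduler (rr, lc, ...) is
# shown on the same line, with the backend pod IPs listed as real servers
ipvsadm -Ln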

 

5. ClusterIP

clusterIP works mainly by using iptables on each node: data sent to the clusterIP and its port is forwarded to kube-proxy. kube-proxy then applies its own internal load-balancing method, looks up the addresses and ports of the pods behind the service, and forwards the data to the matching pod address and port.

 

To implement the functionality shown in the figure above, the following components mainly need to work together:

  1. apiserver: the user sends a create-service command to the apiserver via kubectl; after receiving the request, the apiserver stores the data in etcd
  2. kube-proxy: every kubernetes node runs a process called kube-proxy, which is responsible for detecting changes to Service and Pod information and writing those changes into local iptables rules
  3. iptables: uses NAT and related techniques to forward traffic for the virtualIP to the endpoints

Create the myapp-deploy.yaml file

 

[root@master manifests]# vim myapp-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
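A suggested verification step (not in the original notes) after saving the file:

kubectl apply -f myapp-deploy.yaml
# The three replicas should carry the labels the Service will select on
kubectl get pods -l app=myapp -o wide --show-labels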

Create the Service definition

 

[root@master manifests]# vim myapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
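Applying the Service and checking that it picked up the pods is a useful added check; the CLUSTER-IP value shown will differ from cluster to cluster.

kubectl apply -f myapp-service.yaml
kubectl get svc myapp
kubectl get endpoints myapp      # should list the three pod IPs on port 80
# From a node or a pod, substitute the CLUSTER-IP printed above
curl http://<cluster-ip>:80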

6. Headless Service

Sometimes load balancing and a separate Service IP are not needed or not wanted. In that case, a Headless Service can be created by setting the ClusterIP (spec.clusterIP) to "None". Such a Service is not allocated a Cluster IP, kube-proxy does not handle it, and the platform does not perform load balancing or routing for it.

[root@k8s-master mainfests]# vim myapp-svc-headless.yaml

 

apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80
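Before running the dig query below, the Service can be applied and the cluster DNS address confirmed; 10.96.0.10 is the usual kubeadm default, but your cluster may use a different address.

kubectl apply -f myapp-svc-headless.yaml
# The kube-dns Service address is what dig should be pointed at
kubectl get svc -n kube-system kube-dns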

 

[root@k8s-master mainfests]# dig -t A myapp-headless.default.svc.cluster.local. @10.96.0.10

 

7. NodePort

The idea behind nodePort is that a port is opened on each node; traffic sent to that port is directed to kube-proxy, which then forwards it on to the corresponding pod.

Common kubectl commands, by category:

Basic commands:
  create: create a resource from a file or from stdin
  expose: expose a resource as a new Service
  run: run a particular image in the cluster
  set: set specific features on objects
  get: display one or more resources
  explain: documentation of resources
  edit: edit a resource using the default editor
  delete: delete resources by file name, stdin, resource name, or label selector

Deployment commands:
  rollout: manage the rollout of a resource
  rolling-update: perform a rolling update of the given replication controller
  scale: scale the number of Pods for a Deployment, ReplicaSet, RC, or Job
  autoscale: create an autoscaler that automatically chooses and sets the number of Pods

Cluster management commands:
  certificate: modify certificate resources
  cluster-info: display cluster information
  top: display resource (CPU/Memory/Storage) usage; requires Heapster to be running
  cordon: mark a node as unschedulable
  uncordon: mark a node as schedulable
  drain: drain a node in preparation for maintenance
  taint: update the taints on one or more nodes

[root@master manifests]# vim myapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80

 

View the forwarding flow:

iptables -t nat -nvL KUBE-NODEPORTS
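A quick way to find the allocated port and test it (an added sketch; the actual nodePort is taken from the default 30000-32767 range and will differ):

kubectl get svc myapp            # PORT(S) shows something like 80:3xxxx/TCP
# Substitute any node address and the allocated port from the output above
curl http://<node-ip>:<node-port>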

8. LoadBalancer (overview)

loadBalancer and nodePort are really the same mechanism. The difference is that loadBalancer goes one step further than nodePort: it can call the cloud provider to create an LB (load balancer) that directs traffic to the nodes (the cloud provider charges for the LB).
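For reference, a minimal sketch assuming a cluster with a cloud provider configured; the Service name myapp-lb is made up for this example.

# Expose the Deployment above through a cloud load balancer
kubectl expose deployment myapp-deploy --name=myapp-lb --type=LoadBalancer --port=80 --target-port=80
kubectl get svc myapp-lb         # EXTERNAL-IP is filled in once the cloud LB is provisioned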

9. ExternalName

This type of Service maps the service to the contents of the externalName field (for example hub.atguigu.com) by returning a CNAME record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or Endpoints. Instead, for a service running outside the cluster, it provides access by returning an alias for that external service.

kind: Service
apiVersion: v1
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.atguigu.com

When the host my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is looked up, the cluster's DNS service returns a CNAME record whose value is the externalName, hub.atguigu.com in the example above. Accessing this service works the same way as accessing any other Service; the only difference is that the redirection happens at the DNS layer, and no proxying or forwarding takes place.
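The CNAME can be checked the same way as in the headless example, again assuming 10.96.0.10 is the cluster DNS address:

dig -t CNAME my-service-1.default.svc.cluster.local. @10.96.0.10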

 

Link: https://www.bilibili.com/video/av66617940/?p=31

 
