Detailed explanation of Kubernetes Service and Ingress

One, the concept of Service

A Kubernetes Service defines an abstraction: a logical grouping of Pods together with a policy for accessing them, often called a microservice. The set of Pods targeted by a Service is usually determined by a Label Selector.

In plain terms: the SVC tracks the status of its Pods and keeps a stable IP address even as Pods come and go (because it matches on labels, not Pod IPs), so the Nginx load balancing is not disrupted by Pod churn.

Service can provide load balancing capabilities, but has the following limitations in use:

  • By default it provides only Layer 4 load balancing (IP + port), not Layer 7 features (hostname and domain name). Sometimes we need richer matching rules to forward requests, and Layer 4 load balancing cannot express them
  • Layer 7 capability can be added later through the Ingress solution
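As a preview, an Ingress rule of the kind used later in this article looks roughly like the following sketch; the names demo-ingress, demo.example.com and demo-svc are hypothetical:

```yaml
apiVersion: extensions/v1beta1   # the API version used throughout this article
kind: Ingress
metadata:
  name: demo-ingress             # hypothetical name
spec:
  rules:
    - host: demo.example.com     # Layer 7: route by hostname
      http:
        paths:
        - path: /
          backend:
            serviceName: demo-svc   # an existing Service to forward to
            servicePort: 80
```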

Two, the types of Service

Service has the following four types in K8s

  • ClusterIP: the default type; automatically assigns a virtual IP that can only be accessed within the cluster
  • NodePort: on top of ClusterIP, binds a port on each machine for the Service, so that the service can be accessed at <NodeIP>:NodePort
  • LoadBalancer: on top of NodePort, uses the cloud provider to create an external load balancer that forwards requests to <NodeIP>:NodePort
  • ExternalName: introduces services outside the cluster so they can be used directly inside the cluster. No proxy of any type is created; this is only supported by kube-dns in kubernetes 1.7 or higher

①ClusterIP: the default type; automatically assigns a virtual IP that can only be accessed inside the cluster
②NodePort: on top of ClusterIP, binds a port on each machine for the Service, so you can access the service through <NodeIP>:NodePort
Accessing port 30001 on node01 is equivalent to accessing the SVC defined on port 80, which balances across three different backend Pods of the same service (RR)
client → nginx (traffic receiver, reverse proxy) → node1, node2
③LoadBalancer: on top of NodePort, creates an external load balancer with the help of the cloud provider and forwards requests to <NodeIP>:NodePort
④ExternalName: introduces services outside the cluster to the inside of the cluster for direct use within the cluster. No proxy of any type is created; this is only supported by kube-dns in kubernetes 1.7 or higher

SVC basic introduction
Summary:
the client reaches the node's Pods through iptables rules;
the iptables rules are written by kube-proxy;
kube-proxy watches the apiserver for changes to services and endpoints;
and the Pod labels determine which endpoint information is written into the Endpoints object.

Three, VIP and Service proxy

In a Kubernetes cluster, every Node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services of every type other than ExternalName. In Kubernetes v1.0 the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, though it was not the default operating mode. Starting with Kubernetes v1.2, iptables became the default proxy. Kubernetes v1.8.0-beta.0 added the ipvs proxy.

Proxy evolution: userspace → iptables → ipvs
Since Kubernetes 1.14, the ipvs proxy is used by default

In Kubernetes v1.0, Service was a "Layer 4" (TCP/UDP over IP) concept. Kubernetes v1.1 added the Ingress API (beta) to represent "Layer 7" (HTTP) services.

Why not use round-robin DNS?
DNS results are cached by many clients. Many programs resolve a domain name once, obtain the address, and never refresh the cached result. So once address information is obtained, the original address is reused no matter how many requests follow, which defeats the load balancing.

Four, ipvs proxy mode

ipvs proxy mode (standard)
In this mode, kube-proxy watches Kubernetes Service objects and Endpoints, calls the netlink interface to create ipvs rules accordingly, and periodically synchronizes the ipvs rules with the Service and Endpoints objects to ensure that the ipvs state matches the desired state. When the service is accessed, traffic is redirected to one of the backend Pods.

Similar to iptables, ipvs is built on netfilter hook functions, but it uses a hash table as its underlying data structure and works in kernel space. This means ipvs can redirect traffic faster and performs better when synchronizing proxy rules. In addition, ipvs offers more load balancing algorithm options, such as:

①rr: round-robin scheduling
②lc: minimum number of connections
③dh: target hash
④sh: source hash
⑤sed: shortest expected delay
⑥nq: never queue scheduling

Note: ipvs mode assumes the IPVS kernel modules are installed on the node before kube-proxy runs. When kube-proxy starts in ipvs proxy mode, it verifies whether the IPVS modules are installed on the node; if they are not, kube-proxy falls back to iptables proxy mode.
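As a sketch of how the mode is selected: in kubeadm-based clusters, kube-proxy reads a KubeProxyConfiguration (typically stored in the kube-proxy ConfigMap in kube-system); the scheduler value here is an illustrative choice:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # falls back to iptables if the IPVS modules are missing
ipvs:
  scheduler: "rr"   # one of the algorithms listed above: rr, lc, dh, sh, sed, nq
```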

ipvsadm -Ln     # list the ipvs virtual servers and their backends
kubectl get svc # list the Services

Five, Service experiments explained

5.1 ClusterIP

clusterIP mainly uses iptables on each node to forward data sent to the clusterIP's port to kube-proxy. kube-proxy then performs its own internal load balancing: it looks up the addresses and ports of the Pods behind this service and forwards the data to one of them.
To achieve what the figure shows, the following components work together:

  • apiserver: the user sends a service-creation command to the apiserver via kubectl; after receiving the request, the apiserver stores the data in etcd
  • kube-proxy: every kubernetes node runs a process called kube-proxy, which is responsible for sensing changes to services and pods and writing the changes into the local iptables rules
  • iptables: uses NAT and related techniques to forward the virtual-IP traffic to the endpoints

(The apiserver writes the information into etcd; kube-proxy watches for the changes and, after obtaining them, writes them into the ipvs rules.)
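For illustration, the Endpoints object that this flow maintains looks roughly like the sketch below; the Pod IPs are made up:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: myapp          # same name as the Service it backs
subsets:
  - addresses:         # Pod IPs collected via the label selector (illustrative values)
      - ip: 10.244.1.10
      - ip: 10.244.2.44
    ports:
      - port: 80
```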

The first step is to create the svc-deployment.yaml file

[root@k8s-master01 ~]# vim svc-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy # name of the Deployment
  namespace: default
spec:
  replicas: 3  # 3 replicas
  selector:
    matchLabels: # match Pods by these labels
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

kubectl apply -f svc-deployment.yaml
kubectl get pod -o wide
curl 10.244.2.44

Accessing the Pod address directly is unreliable: if the Pod dies, a new Pod appears with a different address, inconsistent with the previous one. For reliable access we therefore need the second step: creating the SVC.

The second step is to bind an svc to the Deployment, i.e. create the Service.

[root@k8s-master01 ~]# vim svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP # ClusterIP is the default if type is not specified
  selector:
    app: myapp # must match the labels in svc-deployment.yaml
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80 # port of the target backend Pods

kubectl apply -f svc.yaml
kubectl get svc
ipvsadm -Ln
Only two backends are listed here because one container is still being created; that doesn't matter.
kubectl delete -f svc.yaml: after deletion, you can see the corresponding service is gone as well.

Accessing the IP address of the svc directly goes through the ipvs module, which load-balances and proxies to the backend nodes.
Visiting the svc's IP address repeatedly shows the round-robin (RR) effect.

5.2 Headless Service

It is a special kind of ClusterIP Service.
Sometimes load balancing and a single Service IP are neither needed nor wanted. In that case, you can create a headless Service by setting the ClusterIP value (spec.clusterIP) to "None". Such a Service is not allocated a Cluster IP, kube-proxy does not handle it, and the platform does no load balancing or proxying for it.

[root@k8s-master01 ~]# vim svc-none.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80
[root@k8s-master01 ~]# kubectl apply -f svc-none.yaml
[root@k8s-master01 ~]# kubectl get svc

Although the Service has no ClusterIP, you can still reach the svc by its domain name.
On successful creation, the hostname (svc name.namespace name.cluster domain) is written into coredns.
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide   # get the address of the current dns Pods
[root@k8s-master01 ~]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.0.12

5.3 NodePort

A port can be exposed on the physical machine to make internal services reachable from outside.
Clients can then access the inside of the cluster via the physical machine's IP + port.

The principle of nodePort: open a port on each node, direct the traffic arriving at that port to kube-proxy, and let kube-proxy forward it on to the corresponding Pods.

[root@k8s-master01 ~]# vim nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort # ClusterIP would be the default if type were not specified
  selector:
    app: myapp # must match the labels in svc-deployment.yaml
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80 # port of the target backend Pods

[root@k8s-master01 ~]# kubectl apply -f nodeport.yaml
[root@k8s-master01 ~]# kubectl get pod
[root@k8s-master01 ~]# kubectl get svc

At the same time, you can see that one group of Pods can back different svcs: as long as the Pod labels match the svc's selector, they are associated, a many-to-many (n:m) relationship.
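To illustrate the n:m relationship, a second Service could select the very same Pods; the name myapp-extra and port 8080 below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-extra    # hypothetical second Service over the same Pods
spec:
  type: ClusterIP
  selector:
    app: myapp         # same labels as before, so the same backend Pods
    release: stabel
  ports:
  - name: http
    port: 8080         # a different Service port is fine
    targetPort: 80
```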
Browser access: master VM IP + port, e.g. 10.0.100.10:32642.
The worker nodes open the same port as well: 10.0.100.11:32642 and 10.0.100.12:32642


To inspect the forwarding rules:
ipvsadm -Ln           # ipvs rules
iptables -t nat -nvL  # iptables NAT rules

5.4 LoadBalancer

loadBalancer and nodePort actually work the same way. The difference is that loadBalancer goes one step further than nodePort: it can call the cloud provider to create an LB (which costs money) that directs traffic to the nodes.
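For reference, a LoadBalancer Service manifest differs from the NodePort one only in its type; this is a sketch with a hypothetical name, and it only provisions an external LB on a cloud provider that supports it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb       # hypothetical name
spec:
  type: LoadBalancer   # the cloud provider creates the external load balancer
  selector:
    app: myapp
    release: stabel
  ports:
  - port: 80
    targetPort: 80
```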

5.5 ExternalName

An aliasing mechanism: it introduces external services into the cluster.
This type of Service maps the service to the contents of the externalName field (for example: hub.atguigu.com) by returning a CNAME record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or endpoints. Instead, for a service running outside the cluster, it provides access by returning an alias for the external service.

kind: Service
apiVersion: v1
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.atguigu.com

When the host my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is queried, the cluster's DNS service returns a CNAME record with the value hub.atguigu.com. Accessing this service works the same way as the others; the only difference is that the redirection happens at the DNS layer, with no proxying or forwarding involved.

vim ex.yaml
kubectl create -f ex.yaml
kubectl get svc

dig -t A my-service-1.default.svc.cluster.local @10.244.0.13
This IP is the coredns address, obtained via kubectl get pod -n kube-system -o wide

 

Six, Ingress

A traditional SVC supports only Layer 4

6.1 Information

Ingress-Nginx github address: https://github.com/kubernetes/ingress-nginx
Ingress-Nginx official website: https://kubernetes.github.io/ingress-nginx/

In practice, the Nginx exposure scheme here is NodePort: the controller's internal Service is exposed to the outside.

6.2 Deploy Ingress

kubectl apply -f mandatory.yaml
kubectl apply -f service-nodeport.yaml

Download from the official repository:
cd /usr/local/install-k8s/plugin/
mkdir ingress
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
cat mandatory.yaml | grep image   # find the image name (xxx)
docker pull xxx

Step 1: on all three nodes (one master and two workers), extract and import the image

tar -zxvf ingree.contro.tar.gz    # extract
docker load -i ingree.contro.tar  # import


Step 2: Create pod and svc

kubectl apply -f mandatory.yaml
kubectl get pod -n ingress-nginx
kubectl apply -f service-nodeport.yaml
kubectl get svc -n ingress-nginx


6.3 Ingress HTTP proxy access

deployment, Service, Ingress Yaml files

Now we want to expose it through Nginx's Ingress solution, achieving a structure where access happens by domain name.

[root@k8s-master01 ~]# vim ingress.http.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          imagePullPolicy: IfNotPresent # do not pull if the image already exists locally
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector: # match Pods with label name=nginx
    name: nginx

[root@k8s-master01 ~]# kubectl apply -f ingress.http.yaml
deployment.extensions/nginx-dm created
service/nginx-svc created
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP          PORT(S)        AGE
nginx-svc        ClusterIP      10.102.101.216   <none>               80/TCP         5s
[root@k8s-master01 ~]# curl 10.102.101.216
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>


[root@k8s-master01 ~]# vim ingress1.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  rules:
    - host: www1.atguigu.com
      http:
        paths:
        - path: /
          backend:
            serviceName: nginx-svc # references the name of the svc above
            servicePort: 80

[root@k8s-master01 ~]# kubectl apply -f ingress1.yaml 
ingress.extensions/nginx-test created

Test on Windows 10: modify local host resolution in C:\Windows\System32\drivers\etc\hosts
10.0.100.10 www1.atguigu.com
Note that the port to access is not 80, but the ingress NodePort 32510
kubectl get svc -n ingress-nginx

6.5 Implementing a virtual hosting solution based on Ingress

The first deployment and the first svc

[root@k8s-master01 ~]# mkdir ingress-vh
[root@k8s-master01 ~]# cd ingress-vh/
[root@k8s-master01 ingress-vh]# vim deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          imagePullPolicy: IfNotPresent # do not pull if the image already exists locally
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector: # match Pods with label name=nginx
    name: nginx
[root@k8s-master01 ingress-vh]# kubectl apply -f deployment.yaml 

The second deployment and the second svc

[root@k8s-master01 ingress-vh]# cp -a deployment.yaml deployment2.yaml
[root@k8s-master01 ingress-vh]# vim deployment2.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx2
    spec:
      containers:
        - name: nginx2
          image: wangyanglinux/myapp:v2
          imagePullPolicy: IfNotPresent # do not pull if the image already exists locally
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector: # match Pods with label name=nginx2
    name: nginx2

[root@k8s-master01 ingress-vh]# kubectl apply -f deployment2.yaml 
[root@k8s-master01 ingress-vh]# kubectl get svc

Write the Ingress rules 1 and 2

[root@k8s-master01 ~]# vim ingressrule.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: www1.atguigu.com
      http:
        paths:
        - path: /
          backend:
            serviceName: svc-1 # references the name of the first svc above
            servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
    - host: www2.atguigu.com
      http:
        paths:
        - path: /
          backend:
            serviceName: svc-2 # references the name of the second svc above
            servicePort: 80
[root@k8s-master01 ~]# kubectl apply -f ingressrule.yaml


[root@k8s-master01 ingress-vh]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7995bd9c47-kzqh2   1/1     Running   0          83m
[root@k8s-master01 ingress-vh]# kubectl exec nginx-ingress-controller-7995bd9c47-kzqh2 -n ingress-nginx -it -- /bin/bash

Inspecting it, we find that the Ingress rules we wrote are automatically converted and injected into the nginx configuration file.

View the ports exposed by Ingress: kubectl get svc -n ingress-nginx
View the rules: kubectl get ingress

Browser access test

6.6 Ingress HTTPS proxy access


Create the certificate, and store the cert as a secret

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt

deployment, Service, Ingress Yaml files

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
    - hosts:
      - foo.bar.com
      secretName: tls-secret
  rules:
    - host: foo.bar.com
      http:
        paths:
        - path: /
          backend:
            serviceName: nginx-svc
            servicePort: 80

Operation process
The first step: create a certificate, and cert storage method

[root@k8s-master01 ~]# mkdir https
[root@k8s-master01 ~]# cd https
[root@k8s-master01 https]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
....................................................................+++
...............+++
writing new private key to 'tls.key'
-----
[root@k8s-master01 https]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt

Step 2: Create deployment and Service files

[root@k8s-master01 https]# cp /root/ingress-vh/deployment.yaml .
[root@k8s-master01 https]# vim deployment.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment3
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx3
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v3
          imagePullPolicy: IfNotPresent # do not pull if the image already exists locally
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-3
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector: # match Pods with label name=nginx3
    name: nginx3

[root@k8s-master01 https]# kubectl apply -f deployment.yaml 
deployment.extensions/deployment3 created
service/svc-3 created
[root@k8s-master01 https]# kubectl get svc

Step 3: Create the Ingress YAML file, which adds a tls section

[root@k8s-master01 https]# vim https.ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https
spec:
  tls:
    - hosts:
      - www3.atguigu.com
      secretName: tls-secret
  rules:
    - host: www3.atguigu.com
      http:
        paths:
        - path: /
          backend:
            serviceName: svc-3
            servicePort: 80
[root@k8s-master01 https]# kubectl apply -f https.ingress.yaml
ingress.extensions/https created
[root@k8s-master01 https]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.110.174.77   <none>        80:32510/TCP,443:31401/TCP   118m

Browser visit to see the effect
https://www3.atguigu.com:31401

6.7 BasicAuth with Nginx

Use the Apache authentication module's htpasswd tool to generate credentials for nginx

mkdir basic-auth
cd basic-auth
yum -y install httpd
htpasswd -c auth foo # username foo, output file named auth
kubectl create secret generic basic-auth --from-file=auth
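If installing httpd just for htpasswd is undesirable, an equivalent auth file can be generated with openssl alone; this is a sketch, with foo and the password as placeholders:

```shell
# Generate an htpasswd-style entry using openssl's apr1 (Apache MD5) scheme,
# the same scheme htpasswd uses by default.
USER=foo
PASS=secret                # placeholder password
HASH=$(openssl passwd -apr1 "$PASS")
echo "$USER:$HASH" > auth  # same file name the secret is created from
cat auth
```

The resulting auth file can then be loaded with the same kubectl create secret generic basic-auth --from-file=auth command.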


vim ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: auth.atguigu.com  # authentication is required when accessing this domain
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-1
          servicePort: 80

[root@k8s-master01 basic-auth]# kubectl apply -f ingress.yaml
ingress.extensions/ingress-with-auth created

Access goes through NodePort 32510, which maps to port 80.
Browser access now prompts for the username and password.


6.8 Nginx for rewriting

Experimental operation:
visiting re.atguigu.com redirects, over HTTPS, to www3.
vim re.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # redirect to the target url; note the https here
    nginx.ingress.kubernetes.io/rewrite-target: https://www3.atguigu.com:31401/hostname.html
spec:
  rules:
  - host: re.atguigu.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-1 # this backend is effectively unused, since the redirect happens first
          servicePort: 80

PS: if pasting into vim mangles the indentation, run :set paste first.

Browser access: http://re.atguigu.com:32510/
Jump to https://www3.atguigu.com:31401/hostname.html

Origin blog.csdn.net/qq_39578545/article/details/108893076