[Cloud Native--Kubernetes] Ingress Detailed Explanation

1. Introduction to Ingress

1.1 Service exposure methods

A Service plays two roles. Inside the cluster, it tracks pod changes, updates the corresponding pod addresses in its Endpoints object, and thereby provides service discovery for pods whose IPs change; toward the outside, it behaves like a load balancer. Pods can thus be reached from both inside and outside the cluster.
In Kubernetes, pod IPs and Service ClusterIPs are usable only on the cluster network and are invisible to applications outside it. To let external applications reach services in the cluster, Kubernetes currently offers the following options:

  • NodePort: expose the Service on the node network. Behind NodePort sits kube-proxy, which bridges the Service network, the pod network, and the node network.
    This is fine for test environments, but once a cluster runs dozens or hundreds of services, managing NodePort ports becomes a nightmare: each port can carry only one service, and the port range is limited to 30000-32767.

  • LoadBalancer: map the Service to a load balancer address provided by a cloud vendor. This only applies to Services running on a public cloud platform; it is tied to that platform, and deploying a cloud load balancer usually costs extra.
    After the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud and configures the proxied pod IPs as its backends.

  • externalIPs: a Service can be assigned external IPs. If those IPs route to one or more nodes in the cluster, the Service is exposed on them, and traffic entering the cluster through an external IP is routed to the Service's Endpoints.

  • Ingress: expose many HTTP services to the outside world with only one (or a few) public IPs and load balancers, acting as a layer-7 reverse proxy.
    An Ingress can be understood simply as a set of rules that forward user requests to one or more Services based on domain name and URL path.
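As a sketch of the externalIPs option above (the service name, label, and IP address are illustrative placeholders, not from a real cluster):

```yaml
# Hypothetical Service exposed via externalIPs.
# 192.168.48.100 stands in for an address that routes to a cluster node.
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 192.168.48.100
```

Traffic arriving at 192.168.48.100:80 on a node holding that address is forwarded to the Service's endpoints.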

1.2 Ingress components

1.2.1 ingress

The Ingress is an API object, configured through a YAML file. Its job is to define the rules for how requests are forwarded to Services; think of it as a configuration template.
Ingress exposes in-cluster services over HTTP or HTTPS and provides them with external URLs, load balancing, SSL/TLS termination, and domain-based reverse proxying. Ingress depends on an ingress-controller to implement these functions.

1.2.2 ingress-controller

The ingress-controller is the program that actually performs reverse proxying and load balancing: it parses the rules defined by Ingress objects and forwards requests according to them.
The ingress-controller is not a built-in component of Kubernetes. In fact, "ingress-controller" is a generic term, and users can choose among different implementations. The only controllers maintained by the Kubernetes project itself are Google Cloud's GCE controller and ingress-nginx; many others are maintained by third parties (see the official documentation for details). Whatever the implementation, the mechanism is similar; only the concrete configuration differs.
Generally an ingress-controller runs as a pod containing a daemon process and a reverse-proxy process. The daemon continuously watches for changes in the cluster, generates configuration from the Ingress objects, and applies the new configuration to the reverse proxy. ingress-nginx, for example, dynamically generates the nginx configuration, dynamically updates upstreams, and reloads the process when needed to apply new configuration. For convenience, the examples below all use the ingress-nginx officially maintained by the Kubernetes project.

Ingress-Nginx github address: https://github.com/kubernetes/ingress-nginx
Ingress-Nginx official website: https://kubernetes.github.io/ingress-nginx/

Summary: the ingress-controller is the component that does the actual forwarding and is exposed at the cluster entrance in various ways. External traffic to the cluster first reaches the ingress-controller, and the Ingress objects tell it how to forward requests: which domain names and which URLs go to which Services, and so on.

1.3 Ingress working principle

  1. The ingress-controller interacts with the Kubernetes API server to dynamically detect changes to the Ingress rules in the cluster.
  2. It reads those rules, which specify which domain name maps to which Service, and generates an nginx configuration from them.
  3. It writes that configuration into the nginx-ingress-controller pod: an nginx process runs inside the ingress-controller pod, and the controller writes the generated configuration into /etc/nginx/nginx.conf.
  4. Finally it reloads nginx so the configuration takes effect. This achieves per-domain routing and dynamic configuration updates.
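Conceptually, the configuration the controller renders from an Ingress rule resembles an ordinary nginx virtual host. The fragment below is a simplified sketch only (the real generated file is far larger, the pod IPs are placeholders, and recent ingress-nginx versions route through a Lua balancer instead of static upstream entries):

```nginx
# Simplified sketch of what one Ingress rule becomes inside nginx.conf
upstream default-nginx-app-svc-80 {
    server 10.244.1.12:80;   # pod endpoints behind the Service
    server 10.244.2.15:80;
}

server {
    listen 80;
    server_name www.xiayan.com;            # from the Ingress host field

    location / {                           # from the Ingress path field
        proxy_pass http://default-nginx-app-svc-80;
    }
}
```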

2. How ingress exposes services

2.1 Deployment+LoadBalancer

If you want to deploy the ingress on a public cloud, this method is the most appropriate. Deploy the ingress-controller with a Deployment and create a Service of type LoadBalancer associated with that group of pods. Most public clouds automatically create a load balancer for a LoadBalancer Service, usually bound to a public address; once DNS points at that address, the cluster's services are exposed externally.

Disadvantage: not well suited to high-concurrency scenarios or large clusters.

2.2 DaemonSet+HostNetwork+nodeSelector

Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, and use hostNetwork so the pod shares the host node's network, serving directly on the host's ports 80/443. The nodes running the ingress-controller then closely resemble the edge nodes of a traditional architecture, such as the nginx servers at the entrance of a machine room. This method gives the shortest request path and performs better than NodePort mode. The disadvantage is that, because it uses the host's network and ports directly, each node can run only one ingress-controller pod. It is well suited to high-concurrency production environments.

2.3 Deployment+NodePort

Again deploy the ingress-controller with a Deployment and create a corresponding Service, but of type NodePort. The ingress is then exposed on a specific port of each cluster node's IP. Since the NodePort port is random by default, a load balancer is usually placed in front to forward requests. This method is generally used when the host IPs are relatively fixed.
Exposing the ingress through NodePort is simple and convenient, but NodePort adds an extra layer of NAT, which can affect performance under heavy load.

Disadvantages: harder to maintain over time, increasing pressure on traffic forwarding, and not very friendly to high concurrency.
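For reference, the service-nodeport.yaml manifest used later in this article exposes the controller roughly as follows (a simplified sketch: the labels follow the ingress-nginx 0.30.0 manifest conventions, and the nodePort values are illustrative — omit them to let Kubernetes pick random ports from 30000-32767):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # illustrative; random if omitted
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # illustrative
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```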

3. ingress-nginx

3.1 Nginx Ingress Controller workflow

The Nginx Ingress Controller container mainly runs two programs: the Ingress Controller (IC below) and nginx. The official documentation describes the workflow in great detail in "How NGINX Ingress Controller Works"; the steps below follow one of its diagrams.

1. The IC will create an Informer for each resource type it is interested in (such as Ingress, VirtualServer, VirtualServerRoute and its associated resources), and each Informer includes a store that stores this type of resource. The Informer will monitor the changes of its corresponding resource types through the Kubernetes API to keep the content in the store up-to-date.

2. The IC registers a handler with each Informer. When a user creates or updates a resource (for example, an Ingress), the Informer updates its store and invokes the registered handlers.

3. The Handler will create an entry for the changed resource in the work queue Workqueue. The elements of the work queue include the type of resource and its namespace and name, such as (Ingress, default, cafe).

4. The Workqueue continuously drains its elements: it takes the element at the head of the queue and hands it to the Controller by calling a callback function.

5. Controller is the main component in IC and represents the control loop. After the Controller receives the information sent by the Workqueue (that is, the changed resource), it will obtain the latest version of the relevant resource from the store.

6. The Controller generates the corresponding nginx configuration file from the resources it fetched, writes it into the container's file system, reloads nginx, and then reports the reload result back to the resource's status and events through the Kubernetes API.

3.2 Deploy nginx-ingress-controller

  • Deploy ingress-controller pod and related resources
mkdir /opt/ingress
cd /opt/ingress

Official download:

wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml

The mandatory.yaml file creates many resources — Namespace, ConfigMap, Role, ServiceAccount, and so on: everything needed to deploy the ingress-controller.

Modify mandatory.yaml configuration

The manifest declares its RBAC resources with apiVersion: rbac.authorization.k8s.io/v1beta1. The RBAC API graduated to rbac.authorization.k8s.io/v1 long ago; v1beta1 has been deprecated since Kubernetes 1.17 and was removed in 1.22, so on newer clusters it may report an error. Change every occurrence of v1beta1 to v1.
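For example, the ClusterRole header changes as follows; the same one-line edit applies to every RBAC resource in the file (the resource name and labels here are as found in the 0.30.0 manifest):

```yaml
# before: apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1   # after
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```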

3.3 DaemonSet+HostNetwork+nodeselector

  • Label k8s-node2 so that nginx-ingress-controller runs only on that node
kubectl label node k8s-node2 ingress=true
 
kubectl get nodes --show-labels


  • Change the Deployment to a DaemonSet, pin it to the labeled node, and enable hostNetwork

vim mandatory.yaml

apiVersion: apps/v1
kind: DaemonSet     # change kind from Deployment to DaemonSet
#replicas: 1        # delete the replicas field (a DaemonSet has none)
hostNetwork: true   # use the host network
nodeSelector:
  ingress: "true"   # run only on nodes carrying this label


  • Start nginx-ingress-controller
kubectl apply -f mandatory.yaml
 
kubectl get pod -n ingress-nginx -o wide

Check on the node2 node:

netstat -natp | grep nginx


Because hostNetwork is configured, nginx on the node host is already listening on ports 80/443/8181. Port 8181 is the default backend that nginx-controller configures out of the box: when no Ingress rule matches a request, traffic is directed to this default backend.
This way, as long as the node host has a public IP, a domain name can be mapped to it directly to expose the service externally. For high availability, deploy the controller on several nodes and put an LVS+keepalived pair in front for load balancing.
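A minimal keepalived sketch for such an edge pair might look like the following (the interface name, router ID, and VIP are placeholders; the peer node would use state BACKUP and a lower priority):

```
# /etc/keepalived/keepalived.conf on the primary edge node (illustrative values)
vrrp_instance VI_1 {
    state MASTER
    interface ens33              # NIC that will carry the VIP
    virtual_router_id 51
    priority 100                 # the BACKUP peer uses a lower value, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.48.100           # VIP that the domain name resolves to
    }
}
```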

  • Create an ingress rule
    vim service-nginx.yaml
    First create a Deployment and a Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
kubectl apply -f service-nginx.yaml
kubectl get svc,pod


  • Create ingress
    vim ingress-app.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
  - host: www.xiayan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80


kubectl apply -f ingress-app.yaml


Add local domain name mapping for testing

# note: the IP address used here is the node2 node's IP
echo "192.168.48.11 www.xiayan.com" >> /etc/hosts


  • Inspect the generated nginx configuration inside the nginx-ingress-controller pod
kubectl get pod -n ingress-nginx -o wide
 
kubectl exec -it nginx-ingress-controller-pplxc -n ingress-nginx bash
more /etc/nginx/nginx.conf


3.4 Deployment+NodePort

Before this deployment, first delete the DaemonSet+HostNetwork+nodeSelector resources deployed above.

  • Download the nginx-ingress-controller manifest and the NodePort Service manifest
On the master node:
mkdir /opt/ingress-nodeport
cd /opt/ingress-nodeport
 
Official download URLs:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
 
Gitee mirror URLs (for users in mainland China):
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml


  • Start nginx-ingress-controller
kubectl apply -f mandatory.yaml
kubectl apply -f service-nodeport.yaml
kubectl get pod,svc -n ingress-nginx


  • Create the YAML resources
    vim ingress-nginx.yaml — a Deployment, a Service, and an Ingress in one file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  rules:
  - host: www.xiayan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: nginx-svc
            port:
              number: 80

Apply the resources:

kubectl apply -f ingress-nginx.yaml
kubectl get svc,pods -o wide

Test:

# add a test page to the nginx in the two pods (pod names are environment-specific)
kubectl exec -it pod/nginx-app-57dd86f5cc-7vmf4 bash
echo 'this is web1' >> /usr/share/nginx/html/index.html 

Access test:
Add the domain name mapping:

echo "192.168.48.14 www.xiayan.com" >> /etc/hosts

Open www.xiayan.com:31018 in the virtual machine's web browser, and verify with curl as well (31018 is the NodePort assigned in this environment).

4. ingress-nginx reverse proxy

4.1 ingress http proxy access virtual host

mkdir /opt/vhost
cd /opt/vhost

Create the resources for virtual host 1
vim deployment1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx1
  template:
    metadata:
      labels:
        name: nginx1
    spec:
      containers:
        - name: nginx1
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx1

Create the resources for virtual host 2
vim deployment2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx2
  template:
    metadata:
      labels:
        name: nginx2
    spec:
      containers:
        - name: nginx2
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx2

Apply the resources:

kubectl apply -f deployment1.yaml
kubectl apply -f deployment2.yaml


Create ingress resource
vim ingress-nginx.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: www.xy.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service: 
              name: svc-1
              port:
                number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
    - host: www.xiayan.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service: 
              name: svc-2
              port:
                number: 80

Apply:

kubectl apply -f ingress-nginx.yaml
kubectl get ingress

Note: delete the Ingress objects created in the earlier experiments, leaving only the two defined here; otherwise the later tests may behave unexpectedly.
Write a test page into each nginx pod for testing (exec into each pod in turn and run the echo inside it, adjusting the text per deployment):

kubectl exec -it deployment1-bc8f85f7-c99cl bash
kubectl exec -it deployment1-bc8f85f7-v4tbm bash
kubectl exec -it deployment2-68954b7689-8mrsd bash 
kubectl exec -it deployment2-68954b7689-hdnn6 bash 
echo 'this is www.xiayan.com web2' > /usr/share/nginx/html/index.html


Add the domain name mappings:

echo "192.168.48.14 www.xiayan.com" >> /etc/hosts
echo "192.168.48.14 www.xy.com" >> /etc/hosts

Access test:

4.2 ingress HTTPS proxy access

mkdir /opt/https
cd /opt/https

  • Create a self-signed SSL certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"


  • Create a secret resource to store the certificate; the name must match the secretName referenced by the Ingress below
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
kubectl get secret tls-secret

  • Create the Deployment, Service, and Ingress YAML resources
    vim ingress-https.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-https
spec:
  tls:
    - hosts:
      - www.xy.com
      secretName: tls-secret
  rules:
    - host: www.xy.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service: 
              name: nginx-svc
              port:
                number: 80

Access test:


4.3 Nginx rewrite jump

  1. nginx.ingress.kubernetes.io/rewrite-target:<string> #The target URI that must redirect traffic
  2. nginx.ingress.kubernetes.io/ssl-redirect: <boolean> Indicates whether the location part is only accessible for sSL (defaults to true when the Ingress contains a certificate)
  3. nginx.ingress.kubernetes.io/force-ssl-redirect:<boolean> # Force redirection to HTTPS even if the Ingress does not have rLS enabled
  4. nginx.ingress.kubernetes.io/app-root:<string> #defines the application root that the controller must redirect to if it is in the '/' context
  5. nginx.ingress.kubernetes.io/use-regex:<Boolean> # Indicates whether the path defined on the Ingress. uses regular expressions
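The most common use of rewrite-target is stripping a path prefix with a regex capture group, as described in the ingress-nginx documentation (the host, path, and service names below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 refers to the second capture group of the path regex below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: www.xy.com
    http:
      paths:
      - path: /app(/|$)(.*)            # /app/foo is rewritten to /foo upstream
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
```

With this rule, a request for http://www.xy.com/app/index.html reaches the backend service as /index.html.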

Write yaml file
vim ingress-rewrite.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: https://www.xy.com:31222
spec:
  rules:
  - host: rewrite.xy.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          # rewrite.xy.com only triggers a redirect and has no real site behind it,
          # so the backend service name can be defined arbitrarily
          service: 
            name: nginx-svc
            port:
              number: 80

echo "192.168.48.14 rewrite.xy.com" >> /etc/hosts

Test access:

5. Summary

Ingress is the request entrance of a Kubernetes cluster and can be understood as a further abstraction over multiple Services.
In practice, "ingress" covers two things: the Ingress resource object and the ingress-controller.
There are many ingress-controller implementations (the original community one is ingress-nginx), and there are likewise many ways to expose the ingress itself; choose the appropriate one according to your infrastructure and business requirements.


Origin blog.csdn.net/weixin_44175418/article/details/126258416