1. Introduction to Ingress
1.1. Why Ingress is needed
When using a Service, there are two main ways to expose it outside the cluster: NodePort and LoadBalancer. Both have notable drawbacks:
(1) With NodePort, every exposed Service occupies a port on the cluster's hosts. As the number of Services grows, more and more host ports are consumed, and this drawback becomes increasingly obvious.
(2) With LoadBalancer, each Service requires its own load balancer, which is wasteful and depends on infrastructure outside of Kubernetes (typically a cloud provider).
Given this situation, Kubernetes provides the Ingress resource. With Ingress, a single NodePort or a single load balancer is enough to expose many Services. The working mechanism is roughly shown in the figure below:
1.2. The role of Ingress
Ingress is equivalent to a layer-7 load balancer; it is Kubernetes' abstraction of a reverse proxy, and its working principle is similar to Nginx. It can be understood as follows: many mapping rules are defined in Ingress objects, an Ingress controller watches these rules and converts them into an Nginx reverse-proxy configuration, and that configuration then serves traffic to the outside.
There are two core concepts here:
(1) Ingress: a Kubernetes object that defines the rules for how requests are forwarded to Services.
(2) Ingress controller: the program that actually implements reverse proxying and load balancing. It parses the rules defined by Ingress objects and forwards requests according to them. There are many implementations, such as Nginx, Contour, and HAProxy.
1.3. The working principle of Ingress (taking Nginx as an example):
(1) The user writes Ingress rules declaring which domain name corresponds to which Service in the Kubernetes cluster.
(2) The Ingress controller watches for changes to the Ingress rules and generates the corresponding Nginx reverse-proxy configuration.
(3) The Ingress controller writes the generated configuration into a running Nginx service and reloads it dynamically.
(4) From that point on, what is actually doing the work is an Nginx instance configured with the user-defined forwarding rules.
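As an illustrative sketch of step (2), a single Ingress host rule roughly corresponds to an Nginx server block like the one below. This is a simplified hand-written example, not real controller output; the upstream name and Pod IPs are hypothetical:

```nginx
# Hypothetical upstream: the Pod endpoints currently behind nginx-service
upstream dev-nginx-service-80 {
    server 10.244.1.10:80;
    server 10.244.2.11:80;
}

server {
    listen 80;
    server_name nginx.itheima.com;       # the host from the Ingress rule
    location / {                         # the path from the Ingress rule
        proxy_pass http://dev-nginx-service-80;
    }
}
```

A real Ingress controller generates far more configuration (health checks, timeouts, TLS, etc.), but the host-to-upstream mapping is the core of it.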
2. Experimental application
Create two example workloads, an nginx Service and a tomcat Service, each backed by its own Pods.
Create nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Create tomcat-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080
Create the nginx-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
Create the tomcat-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  selector:
    app: tomcat-pod
  clusterIP: None
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
Create these Services and Pods, and view the resources that were created:
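The creation and inspection steps can be sketched with standard kubectl commands (output will vary per cluster):

```shell
# Create the namespace first if it does not exist yet
kubectl create namespace dev

# Apply the four manifests written above
kubectl apply -f nginx-deployment.yaml
kubectl apply -f tomcat-deployment.yaml
kubectl apply -f nginx-service.yaml
kubectl apply -f tomcat-service.yaml

# View the Services and the Pods behind them in the dev namespace
kubectl get svc,pod -n dev -o wide
```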
Write an HTTP proxy file, Ingress.yaml, to proxy the two Services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080
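Note that the extensions/v1beta1 Ingress API used above was removed in Kubernetes 1.22. On current clusters the same rules are written against networking.k8s.io/v1; the ingressClassName value depends on which controller is installed (an nginx controller is assumed here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  ingressClassName: nginx     # assumes an nginx ingress controller is installed
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
```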
Create the Ingress and view its basic information:
View the Ingress details:
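The commands below sketch how to create and inspect the Ingress; the node IP and the controller's NodePort are placeholders that depend on your cluster:

```shell
kubectl apply -f Ingress.yaml

# List the Ingress and the hosts it serves
kubectl get ingress -n dev

# Show the rules and the backends each host/path maps to
kubectl describe ingress ingress-http -n dev

# To test, resolve the hostnames to a cluster node, e.g. by adding to /etc/hosts:
#   <node-ip> nginx.itheima.com tomcat.itheima.com
# then send requests through the ingress controller's NodePort:
curl http://nginx.itheima.com:<controller-nodeport>/
```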