One, Ingress principle
1. Data flow
Ingress provides layer-7 (HTTP) load balancing and can be understood as nginx running inside the cluster. Following the data-flow diagram on the official site: a client request enters the ingress, the ingress matches a rule by domain name, looks up the associated Service to obtain the pod endpoints, and proxies the request directly to the pods.
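The flow above can be sketched as a minimal Ingress resource. The host, Service name, and port here are hypothetical placeholders; the v1beta1 API matches the 0.30.0-era controller used below (on Kubernetes 1.19+ use networking.k8s.io/v1 with the newer backend syntax):

```yaml
# Hypothetical example: route foo.example.com to an existing Service "web-svc"
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: foo.example.com      # ingress matches the request by this host name
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc # the associated Service; its endpoints are the pods
          servicePort: 80
```

The controller watches such rules and proxies matching requests straight to the pod IPs behind web-svc.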
2. Ingress exposure modes: hostNetwork and NodePort
hostNetwork mode:
An ingress-controller pod runs on each node with its network mode set to hostNetwork, so requests reach the nginx pod directly on the node's ports 80/443. nginx then forwards the traffic to the corresponding web application pod according to the ingress rules.
NodePort mode:
Traffic first enters a node through the NodePort, is forwarded by iptables (the Service) to the ingress-controller pod, and is then routed to the backend application pods according to the ingress rules.
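As a sketch, the NodePort Service sitting in front of the controller might look like the following; the selector labels and the nodePort values are assumptions for illustration, not taken from any particular manifest:

```yaml
# Sketch: expose the ingress-controller pods via NodePort (ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx  # must match the controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # iptables forwards node:30080 -> controller pod :80
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
```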
Two, deployment
1. Hostnetwork mode
1) Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
2) Modify the image
cp mandatory.yaml mandatory.yaml_back
Change
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
to
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.30.0
Pull the image first to confirm it is available:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.30.0
3) Modify to hostNetwork mode
vim mandatory.yaml
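The edit inside mandatory.yaml amounts to a small fragment of the controller Deployment's pod spec. A sketch, with the image line matching the earlier step; the dnsPolicy line is a commonly needed companion setting, not part of the stock file:

```yaml
# Fragment of the controller Deployment in mandatory.yaml after editing (sketch)
spec:
  template:
    spec:
      hostNetwork: true                   # bind nginx directly to the node's 80/443
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working with hostNetwork
      containers:
      - name: nginx-ingress-controller
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.30.0
```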
4) Deployment
kubectl apply -f mandatory.yaml
5) Check where the ingress-nginx pod is scheduled
The ingress-nginx pod is scheduled to an arbitrary node. After adding the new node nod02, redeploy:
kubectl apply -f mandatory.yaml
6) Change to DaemonSet mode
so that an ingress-controller pod is deployed on every node.
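A sketch of the change, assuming the standard mandatory.yaml layout: only the kind changes (and the replicas field is dropped), while the pod template stays as it was:

```yaml
# Sketch: convert the controller from Deployment to DaemonSet
apiVersion: apps/v1
kind: DaemonSet   # was "kind: Deployment"; a DaemonSet runs one pod per eligible node
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  # keep the pod template from mandatory.yaml unchanged below this point;
  # also delete the "replicas" field, which DaemonSet does not accept
```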
7) Deploy ingress-controller on a labeled node group
Dedicate a group of nodes as the ingress traffic entry; here 10.0.0.104 serves as the entry point. The affinity configuration requires Kubernetes v1.10+.
1> Label nod02 as the dedicated ingress node
kubectl get nodes --show-labels
kubectl label nodes nod02 httpin=ingressfor
kubectl label nodes nod02 httpin=podforingress --overwrite
2> Configure the DaemonSet affinity
Implement the affinity via a nodeSelector, and set resource limits on the containers for the node.
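A minimal sketch of what step 2> might look like in the DaemonSet's pod spec; the label key/value matches the --overwrite command above, and the limit values are assumptions:

```yaml
# Sketch of the DaemonSet pod spec: schedule only onto labeled nodes,
# and cap resource usage (limit values are assumptions)
spec:
  template:
    spec:
      nodeSelector:
        httpin: podforingress   # matches the label applied to nod02 above
      containers:
      - name: nginx-ingress-controller
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
```

If the key/value here does not match any node label, scheduling silently produces no pods.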
3>apply
kubectl apply -f mandatory.yaml
The pod is scheduled to node nod02. (Note: if the label key/value in the nodeSelector is misconfigured, no error is reported; the configuration simply does not take effect and no pod appears in the namespace.)
2. NodePort mode deployment
NodePort can be deployed with a plain Deployment manifest or with Helm; Helm is used here.
1) Add the Helm repository
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo list
helm search repo nginx-ingress
2) Pull the chart locally and change the Service type to NodePort
helm pull nginx-stable/nginx-ingress
tar xf nginx-ingress-0.8.1.tgz
vim ./nginx-ingress/values.yaml
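Inside values.yaml the change amounts to switching the controller Service to NodePort. A sketch assuming the key layout of the nginx-stable/nginx-ingress chart (exact keys can differ between chart versions), with assumed port numbers:

```yaml
# Fragment of nginx-ingress/values.yaml after editing (key names are assumptions
# based on the nginx-stable/nginx-ingress chart; verify against your chart version)
controller:
  service:
    type: NodePort      # default is usually LoadBalancer
    httpPort:
      nodePort: 30080   # assumed port value
    httpsPort:
      nodePort: 30443   # assumed port value
```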
3) Installation
kubectl create namespace ingress-nginx
helm install ingress-nginx ./nginx-ingress -n ingress-nginx
If a release with the same name already exists, the install fails and the old release must be uninstalled first:
[root@k8s01 ingress]# helm install ingress-nginx ./nginx-ingress -n ingress-nginx
Error: cannot re-use a name that is still in use
[root@k8s01 ingress]# helm uninstall ingress-nginx -n ingress-nginx
release "ingress-nginx" uninstalled
kubectl get pods -o wide -n ingress-nginx
kubectl get svc -n ingress-nginx
Three, architecture
The labeled ingress node group serves as the traffic entry, and an external load balancer must sit in front of it. Most environments today are cloud environments, where services are mostly exposed through layer-4 or layer-7 SLB: a Deployment associates with the SLB by carrying an annotation with the loadbalancer ID, or ingress itself provides the layer-7 load balancing directly. Regardless of the environment, there are only two real considerations: high concurrency and heavy traffic inevitably bring requirements for high availability and horizontal scalability.
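For the cloud case described above, the association is typically a Service annotation. A sketch assuming Alibaba Cloud ACK annotation syntax; the load balancer ID is a placeholder, and other clouds use different annotation keys:

```yaml
# Sketch (ACK syntax assumed): bind the ingress-controller Service
# to an existing L4 SLB instance by its ID
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"  # placeholder ID
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - port: 80
    targetPort: 80
```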