This is the fourth article in the container cloud platform series; more will follow.
First, Kubernetes services can be exposed in the following ways:
- NodePort
- LoadBalancer
- ClusterIP
- Ingress
Following the architecture diagram from the first article, this article only introduces Ingress; the rest will be described in detail later.
What is Ingress?
Ingress is an API object that manages external access to services in the cluster. The typical access method is HTTP. Of course, TCP can also be managed.
Ingress can provide load balancing, SSL termination, and domain-based virtual hosting.
In plain terms: it exposes the services deployed in the Kubernetes cluster so that users or services outside the cluster can access them.
There are many controllers that can provide ingress for Kubernetes. This article uses an HAProxy-based controller: HAProxy Ingress.
HAProxy is a fast and reliable TCP and HTTP reverse proxy and load balancer.
Of course, a kubernetes cluster can also deploy multiple ingress controllers at the same time.
HAProxy Ingress
HAProxy Ingress watches the Kubernetes API for the pod status of service backends and dynamically updates the HAProxy configuration file to achieve load balancing.
It can handle thousands of requests per second per proxy, regardless of the size of the cluster, with very low latency.
That's the brief introduction; now on to the hands-on part.
Deploy HAProxy Ingress to kubernetes cluster
- Download the yaml deployment file:
wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
- Modify haproxy-ingress.yaml. Since we are running Kubernetes 1.19, some API versions in the file are deprecated, so change rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1. If you skip this change you will only see warnings, which do not affect the deployment for now.
- Deploy:
kubectl apply -f haproxy-ingress.yaml
[root@k8s-master001 opt]# kubectl apply -f haproxy-ingress.yaml
namespace/ingress-controller created
serviceaccount/ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/ingress-controller created
configmap/haproxy-ingress created
daemonset.apps/haproxy-ingress created
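For reference, after the edit each RBAC object in haproxy-ingress.yaml starts with the v1 API group. A sketch of what one such header looks like (the rules body is unchanged and omitted here):

```yaml
# before: apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller
# ... rules unchanged ...
```

The same one-line change applies to the Role, ClusterRoleBinding, and RoleBinding objects in the file.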
- View deployment status:
[root@k8s-master001 opt]# kubectl get all -n ingress-controller
NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
daemonset.apps/haproxy-ingress   0         0         0       0            0           role=ingress-controller   6m2s
What the hell, why is there no haproxy pod?
- Looking at the haproxy-ingress.yaml file, we find that it defines a nodeSelector label selector, but no node in the cluster currently has that label, so no pod will be created until we give a node the right label. Now manually add the label role=ingress-controller to master003:
[root@k8s-master001 opt]# kubectl label node k8s-master003 role=ingress-controller node/k8s-master003 labeled
- Check again: haproxy-ingress-6mfqr now exists with status Running, which means it has been deployed. Easy, right~~~
[root@k8s-master001 opt]# kubectl get all -n ingress-controller
NAME                        READY   STATUS    RESTARTS   AGE
pod/haproxy-ingress-6mfqr   1/1     Running   1          2m40s

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
daemonset.apps/haproxy-ingress   1         1         1       1            1           role=ingress-controller   18m
Use Ingress to expose services
Here we use the nginx deployed in the previous article as an example. With it deployed as before, check the nginx status:
[root@k8s-master001 ~]# kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-0 1/1 Running 0 32h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h
service/nginx NodePort 10.106.27.213 <none> 80:30774/TCP 32h
You can see that there is a service named nginx. Now define an Ingress for it:
[root@k8s-master001 ~]# cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: nginx.test.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
When you deploy it, there will be a Warning, because the Kubernetes API has been updated and extensions/v1beta1 will no longer be supported in the future. This does not affect anything for now; ignore it.
[root@k8s-master001 ~]# kubectl apply -f ingress.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/nginx configured
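On 1.19 the same Ingress can already be written against the stable networking.k8s.io/v1 API, which avoids the warning. A sketch of the equivalent manifest (note that the backend moves under service and a pathType field is required):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: nginx.test.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```

Applying this version instead produces the same routing rule without the deprecation warning.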
View deployment results
[root@k8s-master001 ~]# kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> nginx.test.cn 80 21m
Now you need to resolve nginx.test.cn to the IP of the node where haproxy-ingress is running. In this article haproxy-ingress is deployed on master003 (10.26.25.22). For testing, you can directly modify the /etc/hosts file, after which you can access nginx.test.cn.
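A minimal sketch of the /etc/hosts entry, assuming 10.26.25.22 is the master003 address mentioned above:

```
# /etc/hosts on the test machine
10.26.25.22  nginx.test.cn
```

With this in place, curl sends the Host header nginx.test.cn to the ingress node, which is what HAProxy uses to pick the backend.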
[root@k8s-master001 ~]# curl -I nginx.test.cn
HTTP/1.1 200 OK
server: nginx/1.19.2
date: Sat, 12 Sep 2020 12:40:01 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 11 Aug 2020 14:50:35 GMT
etag: "5f32b03b-264"
accept-ranges: bytes
strict-transport-security: max-age=15768000
As you can see from the result, status code 200 is returned, indicating that the deployed nginx service is accessible.
Note: The pictures in the article are from the Internet. If there is any infringement, please contact me and I will delete them promptly.
Tips: For more good articles, please follow the WeChat public account "Rookie Operation and Maintenance Talk"!!!