Steps to deploy ingress-nginx on k8s

Table of contents

1. Introduction to ingress

2. Deploy ingress controller and ingress-service

3. Create external service deployment and service

4. Create HTTP proxy yaml

5. Test

6. Public domain name test

7. Reference blogs


1. Introduction to ingress

The role of a service is reflected in two aspects. Inside the cluster, it keeps track of pod changes, uses pod readiness probes to update the corresponding endpoint objects, and provides a service discovery mechanism for pods whose IP addresses keep changing. Outside the cluster, it acts like a load balancer, so pods can be accessed from both inside and outside the cluster.

There are two main ways for k8s to expose services to the outside world: NodePort and LoadBalancer (externalIPs can also be used to expose services). However, when the cluster runs many services, the biggest disadvantage of the NodePort approach is that it occupies many ports on the cluster machines; with tens or hundreds of services running in the cluster, managing NodePorts becomes a disaster. The biggest disadvantage of the LoadBalancer approach is that one LB per service is wasteful, troublesome, and expensive, and it requires support from a cloud platform outside of k8s. Ingress, by contrast, needs only one NodePort or one LB to expose all services.
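For illustration only, here is a minimal sketch of a NodePort Service; the names and port numbers are made up. Every service exposed this way consumes one node port, which is why the approach does not scale well:

apiVersion: v1
kind: Service
metadata:
  name: demo-service        # hypothetical service, for illustration only
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080       # one node port is consumed per exposed service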

Ingress provides a way to expose services at the cluster level. Ingress can be simply understood as a service for services: an independent ingress object defines the rules for forwarding domain-name requests to one or more services. In this way, services and request rules are decoupled, and exposing applications can be considered uniformly from the business dimension instead of handling each service separately.

Ingress is equivalent to a layer-7 load balancer and is k8s's abstraction of a reverse proxy. Its working principle is indeed similar to Nginx: mapping rules are defined in Ingress objects, and the ingress controller watches those rules, converts them into Nginx configuration, and then serves requests. There are two core concepts here:

Ingress: an object in kubernetes that defines the rules for how requests are forwarded to services.

ingress controller: at its core a deployment; there are many implementations, such as nginx, Contour, and HAProxy. The yaml to be written includes a Deployment, Service, ConfigMap, and ServiceAccount (for authorization), where the Service type can be NodePort or LoadBalancer.

The working principle of Ingress (taking nginx as an example) is as follows:
1. The user writes an Ingress rule declaring which domain name corresponds to which Service in the kubernetes cluster.
2. The Ingress controller dynamically perceives changes to the Ingress rules and generates the corresponding Nginx reverse-proxy configuration.
3. The Ingress controller writes the generated configuration into a running Nginx service and updates it dynamically.
4. At this point, the Nginx actually doing the work is configured internally with the user-defined request forwarding rules.

2. Deploy ingress controller and ingress-service

Deploy the ingress controller and the ingress-nginx service:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

mandatory.yaml is a collection of multiple yaml manifests containing all the resources of the ingress controller. Since the file is long, only the important Deployment part is shown here: it creates a Deployment named nginx-ingress-controller in the ingress-nginx namespace from the image quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0.

The deployment section of mandatory.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---

service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
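
After applying the two files, you can check that the controller pod and its NodePort service are up (the namespace and resource names come from the manifests above):

# the controller Deployment runs in the ingress-nginx namespace
kubectl get pods -n ingress-nginx
# the ingress-nginx Service should show type NodePort with two mapped ports
kubectl get svc -n ingress-nginx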

3. Create external service deployment and service

Create the namespace:

[root@k8s-master ingress]# kubectl create ns dev
namespace/dev created

Create tomcat-nginx.yaml, which defines the deployment and service resources for tomcat and nginx and binds them via the labels app: nginx-pod and app: tomcat-pod.

Note that these are a special kind of Service: Headless Services. As long as clusterIP: None is set in a Service definition, a headless Service is created. The key difference from an ordinary Service is that it has no ClusterIP address: resolving the headless Service's DNS name returns the endpoint list of all the Pods behind the Service, so the client establishes TCP/IP connections directly with the backend Pods instead of being forwarded through a virtual ClusterIP address. Communication performance is therefore the highest, roughly equivalent to "native network communication".

tomcat-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
spec:
  ports:
    - port: 80
      name: nginx
  clusterIP: None
  selector:
    app: nginx-pod
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
spec:
  ports:
    - port: 8080
      name: tomcat
  clusterIP: None
  selector:
    app: tomcat-pod

kubectl apply -f tomcat-nginx.yaml
kubectl get svc -n dev

Compared with ordinary services, the services here do not have a clusterIP.
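
To see the headless-service behavior described earlier, you can resolve the service DNS name from inside the cluster. This is only a sketch and assumes a busybox image can be pulled; the answer should list the individual pod IPs rather than a single ClusterIP:

# run a throwaway pod and resolve the headless service name
kubectl run dns-test -n dev --rm -it --image=busybox:1.28 --restart=Never \
  -- nslookup nginx-service.dev.svc.cluster.local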

4. Create HTTP proxy yaml

Create the HTTP proxy ingress-http.yaml. This file is the important one: each host field represents a domain name, and servicePort is the service port of the backend. The domain names can be changed as you like, as long as they stay consistent with the hosts file used later.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
  - host: nginx.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.itheima.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080
# Create the HTTP proxy
kubectl create -f ingress-http.yaml
# Query the HOST (domain names)
kubectl get ing ingress-http -n dev
# Query detailed information
kubectl describe ing ingress-http -n dev

Port 30022 was allocated automatically here; NodePorts are exposed randomly from the range 30000 to 32767. The port selection range can be configured, but which specific port is picked cannot (setting a fixed nodePort is covered in section 6).

# Check the svc ports
kubectl get svc -n ingress-nginx
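
If you only want the HTTP node port (30022 in this example), a jsonpath query against the ingress-nginx service works too; this is just a convenience, the plain output above already shows it:

# print only the node port of the port named "http"
kubectl get svc ingress-nginx -n ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'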

5. Test

Add two lines to the /etc/hosts file:

172.18.60.77 nginx.itheima.com
172.18.60.77 tomcat.itheima.com

The test succeeds.

If you access the ClusterIP of ingress-nginx directly, you get 404 Not Found. Because multiple services are served behind a single IP and port, the ingress relies on the Host header of the request to decide where to route; a bare IP request matches no rule, so 404 is returned.
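
You can reproduce this from the command line with curl; the node IP and port below are the ones used in this example and will differ in your cluster. Supplying the Host header is what lets the ingress pick a backend:

# no Host header matches any ingress rule -> default backend -> 404
curl http://172.18.60.77:30022
# Host header matches the nginx.itheima.com rule -> nginx welcome page
curl -H "Host: nginx.itheima.com" http://172.18.60.77:30022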

6. Public domain name test

If you do not have a public domain name, you can skip this step; otherwise, proceed with the public domain name test.

# Edit the HTTP proxy yaml
kubectl edit ingress ingress-http -n dev
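
In the editor, replace the host fields with your own public domain. A sketch of the edited rules, using nginx.example.com and tomcat.example.com as placeholders for a real domain:

spec:
  rules:
  - host: nginx.example.com      # placeholder; replace with your public domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
  - host: tomcat.example.com     # placeholder; replace with your public domain
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080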

There is no need to set up hosts entries; the test can be run directly from any networked device.

If you access the public network IP:30022 directly, 404 appears again. It can be seen that using public-IP:30022 from the external network is equivalent to using ingress-nginx-IP:80 from inside the cluster.

If you want to change port 30022, edit the nodePort field of the ingress-nginx Service (defined in service-nodeport.yaml); by default it must be in the range 30000-32767. The chosen port must also fall within the --service-node-port-range setting in the kube-apiserver configuration. Because k8s wants to avoid clashing with other host ports, the default range is 30000-32767; if you need more freedom, you can change the kube-apiserver configuration, for example to 1-65535.

kubectl edit svc ingress-nginx -n ingress-nginx

Change the nodePort to 30023.
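
A sketch of what the edited ports section of the ingress-nginx Service might then look like; only the nodePort line changes, and the value must lie inside the apiserver's --service-node-port-range:

ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30023    # was 30022; must be within --service-node-port-range
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP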

7. Reference blogs

k8s ingress principle and ingress-nginx deployment test (GavinYCF's Blog, CSDN)

How to deploy ingress-nginx in k8s (Programmer Sought)
