Use Tencent Cloud Container Service to Play Nginx Ingress

The author is Lin Yuxin, a container product engineer at Tencent Cloud, currently responsible mainly for the research and development of the Tencent Cloud TKE console.

Overview

In the open source community there are many implementations of the Kubernetes Ingress Controller, and Nginx Ingress is one of them — in fact, the most widely used Ingress Controller in the community. It is not only feature-rich but also extremely high-performance. This article introduces several ways to deploy Nginx Ingress with Tencent Cloud Container Service, and briefly covers the working principle, advantages and disadvantages, and applicable scenarios of each approach.

What is Nginx Ingress

Nginx Ingress is an implementation of the Kubernetes Ingress Controller: nginx-ingress-controller translates user-declared Ingress objects into Nginx forwarding rules. The core problem it solves is north-south traffic forwarding and load balancing.
Its working principle is that nginx-ingress-controller watches the api-server (via Kubernetes Informers) for changes to objects such as Ingress, Service, Endpoint, Secret, and ConfigMap, and updates the configuration of the Nginx instances accordingly to forward traffic.
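As a concrete illustration, a minimal Ingress object of the kind the controller watches and translates into Nginx forwarding rules might look like this (the hostname and Service name are illustrative, not from any real deployment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # route this Ingress to nginx-ingress-controller
spec:
  rules:
    - host: www.example.com                # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service          # hypothetical backend Service
                port:
                  number: 80
```

When this object is created, the controller regenerates the Nginx configuration with a matching server/location block and reloads Nginx, without any manual edits to nginx.conf.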


Currently, there are two main implementations of Nginx Ingress in the community: the Kubernetes community-maintained kubernetes/ingress-nginx, and nginxinc/kubernetes-ingress maintained by Nginx Inc.

Why you need Nginx Ingress

In the open source community there are many Ingress Controller implementations, each with its own applicable scenarios, advantages, and disadvantages. So why do we recommend nginx-ingress-controller? Let's look at the problems your business may run into if you do not use it.

Take the default CLB-type Ingress Controller recommended by the Tencent Cloud Container Service console (hereinafter TKE) as an example; it has the following limitations:

  1. The capabilities of CLB-type Ingress cannot meet the needs of some existing services — for example, it cannot share the same public network entry point or support a default forwarding backend.
  2. The existing business already uses nginx-ingress, and the operations team is accustomed to configuring nginx.conf and does not want to make too many changes.

Using nginx-ingress-controller solves the above problems.

What prerequisites are needed

Deploy nginx-ingress-operator

Component deployment and installation

Log in to the Tencent Cloud Container Service console, select the cluster where Nginx Ingress is to be deployed, go to the cluster's Component Management page, and deploy and install the Nginx Ingress component, as follows:

Confirm that the component is installed and running normally.

Deployment plan

TKE provides a variety of nginx-ingress-controller deployment solutions and LB access methods to adapt to different business scenarios. The following introduces each solution.

nginx-ingress-controller deployment plan

Option 1: DaemonSet + node pool


As the critical traffic entry gateway, Nginx is a vital component, and it is not recommended to deploy Nginx on the same nodes as other services. This can be enforced by setting taints on a node pool. For details about node pools, see the Tencent Cloud Container Service Node Pool Overview.


When using this deployment scheme, you should pay attention to the following items:

  • Prepare the node pool for deploying nginx-ingress-controller in advance, and set a Taint and Label on the node pool to prevent other Pods from being scheduled to it.

  • Make sure the nginx-ingress-operator component has been installed successfully (see the deployment guide above).

  • Enter the component details page and create an nginx-ingress-controller instance (multiple instances can coexist in a single cluster):

    • For the deployment mode, select 指定节点池DaemonSet部署 (DaemonSet deployment to a specified node pool)
    • Set tolerations matching the node pool's taints
    • Set Request/Limit. The Request must be smaller than the node pool's machine specification (nodes reserve some resources for themselves; this avoids instances becoming unavailable due to insufficient resources). The Limit may be left unset.
    • Other parameters can be set according to business needs

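As a sketch of the taint/toleration pairing described above (the taint key, value, and Label name are examples, not fixed TKE names), the node-pool taint and the matching fields in the controller's Pod spec could look like:

```yaml
# Taint applied to the node pool's nodes, e.g.:
#   kubectl taint nodes <node> dedicated=ingress:NoSchedule
# Matching scheduling config in the nginx-ingress-controller Pod spec:
tolerations:
  - key: "dedicated"          # example taint key
    operator: "Equal"
    value: "ingress"
    effect: "NoSchedule"
nodeSelector:
  node-pool: nginx-ingress    # example Label set on the node pool
```

The taint keeps other workloads off the ingress nodes, while the toleration plus nodeSelector pins the controller Pods onto exactly those nodes.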
Option 2: Deployment + HPA


With the Deployment + HPA solution, you can configure taints and tolerations according to business needs and spread the Nginx and business Pods across nodes. With HPA, you can set metrics such as CPU and memory for elastic scaling.


When using this deployment scheme, you should pay attention to the following items:

  • Set a Label on the cluster nodes where nginx-ingress-controller is to be deployed.

  • Make sure the nginx-ingress-operator component has been installed successfully (see the deployment guide above).

  • Enter the component details page and create an nginx-ingress-controller instance (multiple instances can coexist in a single cluster):

    • For the deployment mode, select 自定义Deployment+HPA部署 (custom Deployment + HPA deployment)
    • Set the HPA trigger policy
    • Set Request/Limit
    • Set the node scheduling policy. It is recommended to give nginx-ingress-controller exclusive nodes, to avoid unavailability caused by other workloads occupying its resources.
    • Other parameters can be set according to business needs
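A CPU-based HPA for the controller Deployment could be sketched as follows (the Deployment name and thresholds are illustrative; on clusters older than Kubernetes 1.23, use `autoscaling/v2beta2` instead of `autoscaling/v2`):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller   # hypothetical Deployment name
  minReplicas: 2                     # keep at least two replicas for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80     # scale out when average CPU exceeds 80% of Request
```

Note that utilization-based HPA requires the Pods to declare a CPU Request, which is another reason to set Request/Limit as described above.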

Deployment methods for the LB in front of Nginx

The previous sections described how to deploy nginx-ingress-operator and nginx-ingress-controller in a TKE cluster. After completing those steps, the Nginx-related components are deployed inside the cluster, but to receive external traffic you still need to configure the LB in front of Nginx. TKE currently provides full product support for Nginx Ingress; you can choose one of the following deployment modes according to business needs.

Solution 1: VPC-CNI mode cluster uses CLB to communicate with Nginx Service (recommended)

Preconditions (satisfy one of them):

  1. The cluster's network plug-in is VPC-CNI
  2. The cluster's network plug-in is Global Router, with VPC-CNI support enabled (the two modes are mixed)

Here we take the workload deployed via the node pool as an example.

This solution has excellent performance: all Pods use elastic network interfaces (ENIs), and ENI Pods support binding the CLB directly to the Pod. It bypasses NodePort, requires no manual CLB maintenance, and supports automatic scaling, making it the most ideal solution.
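In this mode, the Service in front of the controller is a LoadBalancer Service that TKE binds directly to the Pod ENIs. A minimal sketch (the direct-access annotation shown here is an assumption — verify the exact annotation name against current TKE documentation for your cluster version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    # Ask TKE to bind the CLB directly to the Pod ENIs instead of NodePorts
    # (annotation name per TKE docs; confirm for your TKE version)
    service.cloud.tencent.com/direct-access: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller   # must match the controller Pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

Because the CLB's backend RS are the Pods themselves, health checks and load-balancing state track the actual controller replicas rather than every node in the cluster.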

Solution 2: Global Router mode cluster uses ordinary LoadBalancer mode Service

TKE's current default implementation of a LoadBalancer-type Service is based on NodePort: the CLB binds each node's NodePort as its backend RS and forwards traffic to the nodes' NodePorts, and each node then routes the request via iptables or IPVS to the Service's backend Pods (here, the Pods of the Nginx Ingress Controller).

If your cluster does not support the VPC-CNI network mode, you can receive traffic through an ordinary LoadBalancer Service.
This is the easiest way to deploy Nginx Ingress on TKE: traffic passes through an extra NodePort layer and one more forwarding hop, but the following problems may exist:

  1. The forwarding path is long: after reaching the NodePort, traffic goes through Kubernetes' internal load balancing and is forwarded to Nginx via iptables or IPVS, which adds network latency.
  2. SNAT inevitably occurs after the NodePort. If traffic is highly concentrated, source ports can easily be exhausted, or conntrack insertion conflicts can cause packet loss and traffic anomalies.
  3. Each node's NodePort also acts as a load balancer. If the CLB binds the NodePorts of many nodes, load-balancing state is scattered across the nodes, easily leading to uneven global load.
  4. The CLB performs health checks on the NodePorts, and the probe packets are ultimately forwarded to the Nginx Ingress Pods. If the CLB binds far more nodes than there are Nginx Ingress Pods, the probe traffic puts considerable pressure on Nginx Ingress.

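In this mode the controller is exposed through a plain LoadBalancer Service, with no direct-Pod binding. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer   # TKE provisions a CLB and binds node NodePorts as backend RS
  selector:
    app: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

Kubernetes allocates a NodePort for each port automatically; the CLB forwards to those NodePorts, which is the source of the extra hop and SNAT behavior listed above.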
Solution 3: Use HostNetwork + LB

Although Solution 2 is the simplest deployment method, traffic passes through a NodePort layer and may suffer from the problems described above. Instead, we can have Nginx Ingress use HostNetwork and let the CLB bind node IP + port (80, 443) directly. Because HostNetwork is used, nginx-ingress Pods must not be scheduled onto the same node, to avoid port listening conflicts.
Since TKE does not yet have product support for this approach, you can plan ahead: choose some nodes dedicated to deploying nginx-ingress-controller, mark them with a Label, and then deploy to those nodes as a DaemonSet (i.e., nginx-ingress-controller deployment Option 1).
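A sketch of the HostNetwork variant (the Label, names, and image are illustrative). A DaemonSet runs at most one Pod per node, which is what avoids the port conflicts mentioned above:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      hostNetwork: true                  # listen on the node's IP directly (ports 80/443)
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS working under hostNetwork
      nodeSelector:
        nginx-ingress: "true"            # example Label on the dedicated nodes
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example image tag
          ports:
            - containerPort: 80
            - containerPort: 443
```

The CLB is then configured manually to bind each labeled node's IP on ports 80 and 443.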

How to integrate monitoring

TKE integrates the Tencent Cloud container team's high-performance cloud-native monitoring service (portal: https://console.cloud.tencent.com/tke2/prometheus ). You can also read the previously published article "How to Monitor a 100,000-Container Kubernetes Cluster with Prometheus" to learn about Prometheus, Kvass, and Kvass-based Prometheus clustering.

Bind monitoring instance


View monitoring data


How to collect and consume logs

By integrating Tencent Cloud Log Service (CLS), TKE provides complete product capabilities for collecting and consuming nginx-ingress-controller logs. A few things to note:

  1. Prerequisite: make sure the log collection feature is enabled for the current cluster
  2. In the nginx-ingress-controller instance, configure the log-collection options


Summary

This article reviewed how to work with Nginx Ingress through the Tencent Cloud Container Service console. It mainly introduced two recommended ways to deploy nginx-ingress-controller from the console, as well as three ways to connect the front-end LB. Beyond one-click deployment of Nginx Ingress, TKE also provides product support for logging and monitoring of the nginx-ingress-controller deployed in the cluster. For those who want to use Nginx Ingress on TKE, this article serves as a useful reference and guide.

Origin blog.51cto.com/14120339/2575983