How to use Nginx Ingress to implement canary releases

Overview

This article introduces how to use Nginx Ingress to implement canary releases, covering usage scenarios, the relevant annotations in detail, and hands-on practice.

Prerequisites

The cluster needs Nginx Ingress deployed as its Ingress controller, exposing a unified entry point for external traffic; see "Deploy Nginx Ingress on TKE" for details.

What publishing scenarios can Nginx Ingress be used in?

In which scenarios can Nginx Ingress implement a canary release? This mainly depends on the traffic-splitting strategy used. Nginx Ingress currently supports three strategies: splitting by Header, by Cookie, and by service weight. Based on these, the following two release scenarios can be implemented.

Scenario 1: Gray-release the new version to some users

Suppose Service A is running online, serving layer-7 traffic, and a new version of Service A has since been developed. You want to put it online without directly replacing the original Service A: first expose it to a small subset of users, then, after it has run stably for a while, gradually roll it out to all users and finally take the old version offline smoothly. For this you can use Nginx Ingress with Header- or Cookie-based traffic splitting. The business marks different user groups with a Header or Cookie, and the Ingress is configured so that requests carrying the specified Header or Cookie are forwarded to the new version while all other requests still go to the old version, gray-releasing the new version to a subset of users:


Scenario 2: Cut a percentage of traffic over to the new version

Suppose Service B is running online, serving layer-7 traffic. Some problems have been fixed and a new version, Service B', needs to be gray-released, but you don't want to directly replace the original Service B. Instead, first cut 10% of the traffic over to the new version, observe for a while until it is stable, then gradually increase the new version's share of traffic until it completely replaces the old version, and finally take the old version offline smoothly. This cuts a chosen percentage of traffic over to the new version:


Annotations

Nginx Ingress supports canary releases through a set of annotations on Ingress resources. You need to create two Ingresses for the service: a normal Ingress, and a second one carrying the fixed annotation nginx.ingress.kubernetes.io/canary: "true", which we call the Canary Ingress and which generally represents the new version of the service. Combined with one of the traffic-splitting annotations below, it can implement canary releases in various scenarios. These annotations are introduced in detail here:

  • nginx.ingress.kubernetes.io/canary-by-header: If the request carries the header named here with the value always, the request is forwarded to the backend service defined by this Ingress; if the value is never, it is not forwarded, which can be used to roll back to the old version; any other value causes the annotation to be ignored.
  • nginx.ingress.kubernetes.io/canary-by-header-value: A supplement to canary-by-header that lets you specify a custom value for the request header instead of only always or never. When the header's value matches the value specified here, the request is forwarded to the backend service defined by this Ingress; any other value causes the annotation to be ignored.
  • nginx.ingress.kubernetes.io/canary-by-header-pattern: Similar to canary-by-header-value, except that it matches the request header against a regular expression rather than a fixed value. Note that if canary-by-header-value is also present, this annotation is ignored.
  • nginx.ingress.kubernetes.io/canary-by-cookie: Similar to canary-by-header, but based on a cookie; it likewise only supports the values always and never.
  • nginx.ingress.kubernetes.io/canary-weight: The percentage of traffic allocated to the Canary Ingress, in the range [0, 100]. For example, a value of 10 means 10% of the traffic is allocated to the backend service behind the Canary Ingress.

The above rules are evaluated in order of priority: canary-by-header -> canary-by-cookie -> canary-weight

Note: When an Ingress is marked as Canary, all non-canary annotations except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by are ignored.
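Because of this evaluation order, a single Canary Ingress can combine rules: a matching header sends a request straight to the canary, while all remaining traffic still falls through to the weight split. The following is only a sketch; the header name X-Canary, the host, and the service name are illustrative, not from the examples in this article:

```yaml
# Sketch: one Canary Ingress combining a header rule with a weight fallback.
# Requests with "X-Canary: always" always go to the canary backend;
# everything else is split by the 10% weight.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"  # evaluated first
    nginx.ingress.kubernetes.io/canary-weight: "10"           # fallback split
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - backend:
          serviceName: demo-v2
          servicePort: 80
        path: /
```

This pattern lets internal testers force themselves onto the new version via the header while real user traffic ramps up gradually by weight.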

Hands-on practice

Below are some examples to help you quickly get started with Nginx Ingress canary releases; the environment is a TKE cluster.

Create resources using YAML

The examples in this article deploy workloads and create services with YAML. There are two ways to do this.

Method 1: Click "Create via YAML" in the upper right corner of the TKE or EKS console, then paste the YAML examples from this article into it:


Method 2: Save a YAML example to a file and create it with kubectl, for example: kubectl apply -f xx.yaml.

Deploy two versions of the service

Here we use a simple nginx service as an example. First, deploy the v1 version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - name: nginx
        image: "openresty/openresty:centos"
        ports:
        - name: http
          protocol: TCP
          containerPort: 80
        volumeMounts:
        - mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          name: config
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-v1
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx
    version: v1
  name: nginx-v1
data:
  nginx.conf: |-
    worker_processes  1;
    events {
        accept_mutex on;
        multi_accept on;
        use epoll;
        worker_connections  1024;
    }
    http {
        ignore_invalid_headers off;
        server {
            listen 80;
            location / {
                access_by_lua '
                    local header_str = ngx.say("nginx-v1")
                ';
            }
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v1
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
    version: v1

Then deploy the v2 version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2
    spec:
      containers:
      - name: nginx
        image: "openresty/openresty:centos"
        ports:
        - name: http
          protocol: TCP
          containerPort: 80
        volumeMounts:
        - mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          name: config
          subPath: nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-v2
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx
    version: v2
  name: nginx-v2
data:
  nginx.conf: |-
    worker_processes  1;
    events {
        accept_mutex on;
        multi_accept on;
        use epoll;
        worker_connections  1024;
    }
    http {
        ignore_invalid_headers off;
        server {
            listen 80;
            location / {
                access_by_lua '
                    local header_str = ngx.say("nginx-v2")
                ';
            }
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v2
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
    version: v2

You can see the deployment status in the console:


Create an Ingress to expose the service externally, pointing to the v1 version of the service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v1
          servicePort: 80
        path: /

Verify access:

$ curl -H "Host: canary.example.com" http://EXTERNAL-IP # replace EXTERNAL-IP with the IP exposed externally by Nginx Ingress itself
nginx-v1

Header-based traffic segmentation

Create a Canary Ingress pointing to the v2 backend service, with annotations that forward only requests whose Region request header has the value cd or sz to this Canary Ingress. This simulates gray-releasing the new version to users in the Chengdu and Shenzhen regions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "Region"
    nginx.ingress.kubernetes.io/canary-by-header-pattern: "cd|sz"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /

Test access:

$ curl -H "Host: canary.example.com" -H "Region: cd" http://EXTERNAL-IP # replace EXTERNAL-IP with the IP exposed externally by Nginx Ingress itself
nginx-v2
$ curl -H "Host: canary.example.com" -H "Region: bj" http://EXTERNAL-IP
nginx-v1
$ curl -H "Host: canary.example.com" -H "Region: cd" http://EXTERNAL-IP
nginx-v2
$ curl -H "Host: canary.example.com" http://EXTERNAL-IP
nginx-v1

As you can see, only requests whose Region header is cd or sz are answered by the v2 service.

Cookie-based traffic segmentation

This is similar to the Header-based approach, except that a custom Cookie value cannot be specified (only always and never are supported). Here we again simulate gray-releasing to users in the Chengdu region: only requests carrying a cookie named user_from_cd are forwarded to the Canary Ingress. First delete the Header-based Canary Ingress created earlier, then create the following new Canary Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_cd"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /

Test access:

$ curl -s -H "Host: canary.example.com" --cookie "user_from_cd=always" http://EXTERNAL-IP # replace EXTERNAL-IP with the IP exposed externally by Nginx Ingress itself
nginx-v2
$ curl -s -H "Host: canary.example.com" --cookie "user_from_bj=always" http://EXTERNAL-IP
nginx-v1
$ curl -s -H "Host: canary.example.com" http://EXTERNAL-IP
nginx-v1

As you can see, only requests whose user_from_cd cookie is set to always are answered by the v2 service.

Traffic segmentation based on service weight

A Canary Ingress based on service weight is simple: directly define the proportion of traffic to route to it. Here we route 10% of traffic to v2 as an example (delete the previous Canary Ingress first, if it still exists):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: nginx-canary
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /

Test access:

$ for i in {1..10}; do curl -H "Host: canary.example.com" http://EXTERNAL-IP; done;
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v2
nginx-v1
nginx-v1
nginx-v1

As you can see, the v2 service responds with roughly a one-in-ten probability, matching the 10% service weight setting.
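The weight split is probabilistic, so a run of only 10 requests will not always show exactly one v2 response; a larger sample gives a better estimate. The sketch below simulates 100 recorded responses so the counting logic can be shown offline; against a real cluster you would replace the if/else with the curl command from the test above:

```shell
# Simulate 100 responses at a 10% canary weight. With a live cluster,
# replace the if/else body with:
#   curl -s -H "Host: canary.example.com" http://EXTERNAL-IP
responses=$(for i in $(seq 1 100); do
  if [ $((i % 10)) -eq 0 ]; then echo nginx-v2; else echo nginx-v1; fi
done)

# Count how many responses came from the canary backend.
v2=$(echo "$responses" | grep -c nginx-v2)
echo "v2 responses: $v2/100"
```

Comparing the observed ratio against the configured canary-weight over a few hundred requests is a simple sanity check before increasing the weight further.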

Existing defects

Although we have implemented canary releases with Nginx Ingress in several different ways, there are still some defects:

  1. Only one Canary Ingress can be defined for the same service, so the backend service supports at most two versions.
  2. A domain name must be configured in the Ingress, otherwise it will not take effect.
  3. Even if all traffic is cut over to the Canary Ingress, the old service must still exist, otherwise an error is reported.
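Because the old service must remain until traffic is fully moved, a common way to finish the rollout is to first point the normal Ingress at the new version, then delete the Canary Ingress, and only then remove the old Deployment and Service. This is a sketch of one reasonable order, not the only one; using the names from the examples above, the final state of the normal Ingress would be:

```yaml
# Promote v2: the normal Ingress now targets nginx-v2. After applying this,
# the nginx-canary Ingress and the nginx-v1 resources can be deleted safely.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: canary.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-v2
          servicePort: 80
        path: /
```

Applying this first means there is never a moment where the canary annotations route traffic to a backend that no longer exists.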

Summary

This article has covered the usage of Nginx Ingress for canary releases comprehensively. Although Nginx Ingress's canary capabilities are limited and have some defects, they basically cover the common scenarios. If your cluster already uses Nginx Ingress and your release requirements are not complex, you can consider this solution.



Origin blog.51cto.com/14120339/2543143