Using Ingress-Nginx for Blue-Green Releases, Canary Releases, and A/B Testing

Reprinted from: https://juejin.cn/post/6844903927318577159

Background

In some scenarios we run our business applications on Kubernetes and want blue-green deployments to iterate application versions. Istio is too heavy and complex for this purpose, being positioned for traffic control and service-mesh governance. Ingress-Nginx introduced the Canary feature in version 0.21: it lets you configure multiple versions of an application behind one gateway entrance and use annotations to control how traffic is distributed across the backend services.

Introduction to the Ingress-Nginx canary annotations

To enable the Canary feature, first set nginx.ingress.kubernetes.io/canary: "true"; the following annotations can then be used to configure the canary behavior:

  • nginx.ingress.kubernetes.io/canary-weight The percentage of requests routed to the service specified in the canary Ingress. The value is an integer from 0 to 100; that share of traffic is allocated to the backend service specified in the canary Ingress.
  • nginx.ingress.kubernetes.io/canary-by-header Splits traffic based on a request header, suitable for grayscale release or A/B testing. When the header value is always, request traffic is always routed to the canary entrance; when the header value is never, request traffic is never routed to the canary entrance. Any other header value is ignored, and the request is matched against the remaining canary rules in priority order.
  • nginx.ingress.kubernetes.io/canary-by-header-value Must be used together with nginx.ingress.kubernetes.io/canary-by-header. When the header named by canary-by-header carries exactly this value, request traffic is routed to the canary Ingress entrance. Any other header value is ignored, and the request is matched against the remaining canary rules in priority order.
  • nginx.ingress.kubernetes.io/canary-by-cookie Splits traffic based on a cookie, also suitable for grayscale release or A/B testing. When the named cookie's value is always, request traffic is routed to the canary entrance; when it is never, request traffic is never routed to the canary entrance. Any other cookie value is ignored, and the request is matched against the remaining canary rules in priority order.

The canary rules are evaluated in priority order: canary-by-header -> canary-by-cookie -> canary-weight
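Putting these together, the metadata of a canary Ingress usually combines several of the annotations. A minimal sketch (the header and cookie names here are placeholders, not taken from the examples below):

metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # Required switch: mark this Ingress as a canary entrance
    nginx.ingress.kubernetes.io/canary: "true"
    # Checked first: route to the canary when the "canary" header is "always"
    nginx.ingress.kubernetes.io/canary-by-header: "canary"
    # Checked second: route to the canary when this cookie is "always"
    nginx.ingress.kubernetes.io/canary-by-cookie: "canary_user"
    # Fallback: otherwise send about 10% of traffic to the canary
    nginx.ingress.kubernetes.io/canary-weight: "10"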

1. Small-scale version testing based on weights

  • v1 version manifest
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: echoserverv1
  name: echoserverv1
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv1
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv1
  namespace: echoserver
spec:
  selector:
    name: echoserverv1
  type: ClusterIP
  ports:
  - name: echoserverv1
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserverv1
  namespace: echoserver
  labels:
    name: echoserverv1
spec:
  selector:
    matchLabels:
      name: echoserverv1
  template:
    metadata:
      labels:
        name: echoserverv1
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv1
        ports:
        - containerPort: 8080
          name: echoserverv1
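Assuming the manifest above is saved as appv1.yml (a filename chosen for illustration, matching the appv2.yml used later), create the resources with:

$ [K8sSj] kubectl apply -f appv1.yml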
  • View the resources created by the v1 version
$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h

NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72   <none>        8080/TCP   24h

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
  • Access the v1 service; all 10 requests land on the same pod, i.e. the v1 version of the service
$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
  • Create a v2 version of the service

We enable the canary feature and set the weight of the v2 version to 50%. This percentage does not split requests between the two versions exactly; the actual traffic fluctuates around 50%.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
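Apply the v2 manifest (saved as appv2.yml, the filename used in the later update steps):

$ [K8sSj] kubectl apply -f appv2.yml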
  • View the created resources again
$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h
pod/echoserverv2-856bb5758-f9tqn    1/1     Running   0          4s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72      <none>        8080/TCP   24h
service/echoserverv2   ClusterIP   10.111.103.170   <none>        8080/TCP   4s

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
ingress.extensions/echoserverv2   echo.chulinx.com             80      4s
  • Access test

Four of the 10 requests landed on the v2 version and six on the v1 version. In theory, the more requests you send, the closer the share reaching v2 gets to the configured weight of 50%.

$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
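To ramp the canary up gradually, there is no need to re-apply the full manifest each time; a minimal sketch (assuming the resources created above) that raises the weight in place with kubectl annotate:

$ [K8sSj] kubectl annotate ingress echoserverv2 -n echoserver nginx.ingress.kubernetes.io/canary-weight="80" --overwrite

The --overwrite flag replaces the existing annotation value; re-running the access test should now send roughly 80% of requests to v2.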

2. A/B test based on header

  • Modify the v2 manifest

Add the annotation nginx.ingress.kubernetes.io/canary-by-header: "v2"

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
  • Apply the update and test access

Test three values for the v2 header: always, never, and true. When the header is v2: always, all traffic flows to v2; when it is v2: never, all traffic flows to v1; any other value, such as v2: true, is ignored, and traffic is split between the versions according to the configured weight.

$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
  • Custom header-value
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
  • Apply the update and test again

Now only requests whose v2 header is exactly true are routed to the v2 version; any other header value, including always and never, is ignored, and traffic is split between the versions according to the configured weight.

$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs

3. Cookie-based flow control

Cookie-based routing works on much the same principle as the header-based approach: the Ingress inspects the named cookie in the client's request, and when its value is always the traffic flows to the matching canary backend service.

  • Update the v2 manifest
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_shanghai"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
  • Access test

The behavior matches the header-based test, except that the cookie value cannot be customized: there is no cookie counterpart to canary-by-header-value, so only the fixed values always and never are recognized. Note in the transcript below that the cookie must be sent as user_from_shanghai=always; sending the bare cookie name, or using a colon instead of =, leaves the value unmatched and traffic falls back to the weight rule.

$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn

$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai=always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
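Once the v2 version has been validated, one way to complete the release (a sketch, assuming the resources created above) is to shift all traffic to v2, repoint the stable Ingress, and retire the canary:

$ [K8sSj] kubectl annotate ingress echoserverv2 -n echoserver nginx.ingress.kubernetes.io/canary-weight="100" --overwrite
# After confirming v2 handles all traffic, edit the echoserverv1 Ingress so its
# backend serviceName points to echoserverv2, then remove the canary Ingress:
$ [K8sSj] kubectl delete ingress echoserverv2 -n echoserver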

Summary

A grayscale release helps preserve the stability of the overall system: the new version can be tested, and problems found and fixed, while it receives only part of the traffic, keeping its impact contained. The examples above walk through Ingress-Nginx's canary annotations in practice; with them, blue-green and canary releases are easy to implement.

Other

About blue-green releases, canary releases, and A/B testing

  • Blue-green release

In a blue-green deployment there are two systems: one currently serving traffic, labeled "green", and one being prepared for release, labeled "blue". Both systems are fully functional; they differ only in version and in which of them serves external traffic.

Initially there is only one system and no blue-green distinction: the first version is developed and put online directly. Later, when a new version is developed to replace the one online, a brand-new system running the new code is built alongside it. At that point two systems are running: the old one still serving users is the green system, and the newly deployed one is the blue system.

The blue system does not serve external traffic. What is it for? Pre-release testing. Any problem found while testing can be fixed directly on the blue system without disturbing the system users are on. (Note that only when the two systems are fully decoupled can you be 100% sure they will not interfere with each other.)

After repeated testing, fixing, and verification on the blue system, once it is confirmed to meet the release criteria, users are switched over to it directly. For a period after the switch the blue and green systems still coexist, but users are already on the blue system. During this window you observe the blue (new) system; if a problem appears, you switch straight back to the green system. When the blue system serving external traffic is deemed to be working properly and the idle green system is no longer needed, the blue system officially becomes the serving system, i.e. the new green system. The original green system can then be destroyed, freeing resources for the next blue deployment.

Blue-green deployment is only one release strategy, not a panacea for every situation. It can be implemented simply and quickly on the assumption that the target system is highly cohesive. If the target system is quite complex, then how to switch, whether the data of the two systems must be synchronized and how, and so on, all need careful consideration.
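In Kubernetes terms, the switch itself can be as small as repointing a Service selector. A minimal sketch, not from the original article (myapp, green, and blue are placeholder names): both Deployments run behind one Service, and flipping the selector moves all user traffic at once.

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    color: green    # the "green" Deployment currently serves users
  ports:
  - port: 80
    targetPort: 8080

# Switch users to the blue system in one step (and back, if a problem appears):
$ kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","color":"blue"}}}'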

  • Canary release

A canary release (Canary) is another release strategy; it is the same type of strategy as what is commonly called grayscale release in China. Blue-green deployment prepares two systems and switches between them; the canary strategy keeps a single system and replaces it gradually.

For example, suppose the target system is a group of stateless web servers, but a very large one, say 10,000 machines. Blue-green deployment cannot be used here, because you cannot requisition 10,000 extra servers just to stand up the blue system (by definition, the blue system must be able to take all the traffic). One workable approach: prepare only a few servers, deploy the new version on them, and test and verify. Even after the tests pass, you would not dare update every server immediately. First update 10 of the 10,000 production servers to the new system, then observe and verify. After confirming there is nothing abnormal, update all the remaining servers. This method is the canary release.

In practice you can exert even finer control: for example, give the 10 newly updated servers a lower weight to limit the number of requests they receive, then gradually raise the weight and the request volume. This control is called "traffic splitting"; it is used for canary releases and also for the A/B testing discussed below.

Blue-green deployment and canary release are two release strategies, and neither is omnipotent. Sometimes either can be used; sometimes only one of them is applicable.
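In Kubernetes, one lightweight way to approximate this without any Ingress support is a replica-ratio canary (a sketch, not from the original article; web-v1 and web-v2 are placeholder Deployment names whose pods share the label selected by one Service):

# 9 replicas of v1 plus 1 replica of v2 puts roughly 10% of traffic on the canary
$ kubectl scale deployment web-v1 --replicas=9
$ kubectl scale deployment web-v2 --replicas=1
# Ramp up by shifting the ratio, for example 5/5, then 0/10
$ kubectl scale deployment web-v1 --replicas=5
$ kubectl scale deployment web-v2 --replicas=5

The granularity is limited by the replica count, which is why the canary-weight annotation shown earlier gives finer control.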

  • A/B testing

First, be clear that A/B testing is a completely different thing from blue-green deployment and canary release. Blue-green deployment and canary release are release strategies: their goal is the stability of the newly launched system, and their focus is on the new system's bugs and hidden risks. A/B testing is an effectiveness test: multiple versions of a service run side by side for external users, all of them tested well enough to meet release standards. The versions differ, but none is "old" or "new" (each of them may well have been put online with a blue-green deployment).

A/B testing focuses on the actual effect of the different versions, such as conversion rate or order volume. While an A/B test runs, multiple versions serve production traffic at the same time, usually with deliberate differences in experience: page layout, colors, operation flow, and so on. The team analyzes the actual performance of each version and selects the one with the best results.

A/B testing also requires control over traffic distribution: for example, sending 10% of traffic to version A, 10% to version B, and 80% to version C.
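With Ingress-Nginx, a simple two-version A/B split can reuse the canary annotations shown earlier (a sketch; the cookie name ab_test_group is illustrative, not from the original article):

metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    # Users whose ab_test_group cookie is set to "always" are routed to
    # version B; all other users stay on version A (the main Ingress backend)
    nginx.ingress.kubernetes.io/canary-by-cookie: "ab_test_group"

The application (or an edge layer) assigns the cookie to the chosen fraction of users, and the conversion metrics are then compared per group.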

Origin: blog.csdn.net/lswzw/article/details/113766881