Implementing a Grayscale Release System with Nginx

Software is generally not shipped as a final version in one go; it iterates version by version.

A new version is tested before it goes live, but even so, there is no guarantee that nothing will go wrong once it is online.

Therefore, companies generally roll out new code through a grayscale (canary) release system.

The grayscale system splits traffic into several parts: some goes to the new version of the code, the rest to the old version.

(Diagram: traffic is split between the old and new versions)

The grayscale system also lets you set the proportion of traffic. For example, you can send 5% of the traffic to the new version, and if nothing goes wrong, raise it to 10%, then 50%, and finally 100%.

This minimizes the impact of problems.

Otherwise, if you roll out to 100% at once and something goes wrong in production, it becomes a major incident.

A grayscale system is not limited to releases, either. For example, when the product team is unsure whether a change is effective, they run an A/B experiment: traffic is split into two parts, one served by the A version of the code and the other by the B version.

How is such a grayscale system implemented?

In fact, many of them are built on nginx.

nginx is a reverse proxy: user requests are sent to it first and then forwarded to the appropriate application server.

(Diagram: user requests go through nginx and are forwarded to the application servers)

This layer is also called the gateway layer.

Since it is responsible for forwarding requests to the application servers, it is the natural place to control how traffic is distributed: which requests go to version A and which go to version B.

Let's implement it below:

First, we prepare two versions of the code.

Create a nest project here:

npx nest new gray_test -p npm
(Screenshot: output of creating the Nest project)

Run the nest service:

npm run start
(Screenshot: the Nest service starting up)

Open it in the browser:

(Screenshot: the browser showing "Hello World!")

Seeing "Hello World!" means the Nest service is running.

Then change AppService:

(Screenshot: the modified AppService)

Modify the port:

(Screenshot: the modified port)
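The screenshots show the two edits. Roughly, they might look like this (the new return string below is just an illustrative placeholder; the real change is whatever the screenshot shows):

// src/app.service.ts: return a different string so the two versions are distinguishable
import { Injectable } from '@nestjs/common';

@Injectable()
export class AppService {
  getHello(): string {
    return 'Hello World 2!'; // illustrative value
  }
}

// src/main.ts: listen on 3001 so both versions can run side by side
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3001);
}
bootstrap();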

Then npm run start:

(Screenshot: the second service starting up)

Open it in the browser:

(Screenshot: the browser showing the second version's response)

We now have two versions of the nest code.

The next question is how to use nginx to implement grayscale routing, so that some requests go to one version of the code and the rest go to the other.

Let's run an nginx service first.

In Docker Desktop, search for the nginx image (this step may require a network proxy) and click run:

(Screenshot: the nginx image in Docker Desktop)

Set the container name to gray1 and map host port 82 to port 80 in the container.

(Screenshot: the container settings)

Now visit http://localhost:82 to see the nginx page:

(Screenshot: the default nginx welcome page)

We need to modify the configuration file, so first copy it out of the container:

docker cp gray1:/etc/nginx/conf.d ~/nginx-config
(Screenshot: the config directory copied out of the container)

Then edit this default.conf.

Add this configuration block:

location ^~ /api {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://192.168.1.6:3001;
}

This adds a route that forwards requests whose path starts with /api to the service at http://<host IP>:3001.

The rewrite directive rewrites the URL so that, for example, /api/xxx becomes /xxx.

Then we re-run the nginx container:

(Screenshot: creating the second nginx container)

Name the container gray2 and map host port 83 to port 80 inside the container.

Specify the data volume and mount the local ~/nginx-config directory to the /etc/nginx/conf.d directory in the container.

Click run.
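If you prefer the command line to Docker Desktop, an equivalent command would look roughly like this (assuming the official nginx image and the paths above):

docker run -d --name gray2 \
    -p 83:80 \
    -v ~/nginx-config:/etc/nginx/conf.d \
    nginx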

Then look at the files section:

You can see that the /etc/nginx/conf.d directory in the container is marked as mounted.

(Screenshot: the mounted /etc/nginx/conf.d directory in the Files section)

Click to see:

(Screenshot: the contents of the mounted directory)

This is the local file.

Let's try modifying the file locally:

(Screenshot: editing the file locally)

The file inside the container changes as well.

(Screenshot: the change visible inside the container)

Modifying this file in the container will also modify it locally.

In other words, once the data volume is mounted, the directory in the container and the local directory are the same copy of the data.
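You can verify this quickly from the command line (assuming the container name and paths above):

# append a comment to the local copy
echo "# test" >> ~/nginx-config/default.conf

# the change is immediately visible inside the container
docker exec gray2 cat /etc/nginx/conf.d/default.conf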

Now visit http://localhost:83/api/:

(Screenshot: the Nest service's response through nginx)

The Nest service is reachable through the proxy.

We are no longer accessing the Nest service directly; requests now pass through an nginx reverse proxy, the gateway layer.

(Diagram: requests pass through the nginx gateway before reaching the Nest services)

Naturally, this layer is where we can implement traffic control.

When we talked about load balancing earlier, it was configured like this:

(Screenshot: the load-balancing configuration from that article)

By default, nginx round-robins requests across the servers listed in the upstream block.
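The screenshot shows that configuration; a minimal version of it looks something like this (the upstream name and addresses are illustrative):

# one upstream group, requests are round-robined across its servers
upstream nest_server {
    server 192.168.1.6:3000;
    server 192.168.1.6:3001;
}

location ^~ /api {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://nest_server;
}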

Now we need multiple upstream groups:

upstream version1.0_server {
    server 192.168.1.6:3000;
}
 
upstream version2.0_server {
    server 192.168.1.6:3001;
}

upstream default {
    server 192.168.1.6:3000;
}

This gives us server lists for version 1.0, version 2.0, and a default.

Then we need some condition to decide which group a request is forwarded to.

Here we decide based on a cookie:

set $group "default";
if ($http_cookie ~* "version=1.0"){
    set $group version1.0_server;
}

if ($http_cookie ~* "version=2.0"){
    set $group version2.0_server;
}

location ^~ /api {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://$group;
}

If the request carries a version=1.0 cookie, it goes to version1.0_server; if it carries version=2.0, it goes to version2.0_server; otherwise it goes to the default group.

(Screenshot: the updated nginx configuration)

This splits the traffic, which is the core of a grayscale release.
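Putting the pieces together, the default.conf would look roughly like this (the listen port and the static location come from the image's default config; the IPs and ports follow the examples above):

upstream version1.0_server {
    server 192.168.1.6:3000;
}

upstream version2.0_server {
    server 192.168.1.6:3001;
}

upstream default {
    server 192.168.1.6:3000;
}

server {
    listen       80;
    server_name  localhost;

    # choose the upstream group based on the version cookie
    set $group "default";
    if ($http_cookie ~* "version=1.0") {
        set $group version1.0_server;
    }
    if ($http_cookie ~* "version=2.0") {
        set $group version2.0_server;
    }

    location ^~ /api {
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://$group;
    }

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}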

Then we run the container again:

(Screenshot: re-running the container)

Now, visiting http://localhost:83/api/ gives you the default version:

(Screenshot: the default version's response)

With a version=2.0 cookie, the request goes to the other version of the code:

(Screenshot: the response from the version 2.0 service)
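You can also verify it from the command line, for example:

# no cookie: routed to the default group
curl http://localhost:83/api/

# with the cookie: routed to version2.0_server
curl --cookie "version=2.0" http://localhost:83/api/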

With this, we have the basic grayscale routing.

But one question remains:

When is this cookie set?

For example, suppose we want 80% of the traffic on version 1.0 and 20% on version 2.0.

In practice, companies usually have a grayscale configuration system where the ratio between versions can be configured. When traffic passes through it, it returns a Set-Cookie header that assigns different cookies according to that ratio.

For example, if the random number is between 0 and 0.2, set a cookie with version=2.0; otherwise, set a cookie with version=1.0.

This is also called traffic coloring.
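The article does not show that system's code; as a rough sketch, the coloring step could be an Express-style middleware like the one below, using the version cookie and the 20% ratio from the example above (the cookie lifetime is an arbitrary choice):

import { Request, Response, NextFunction } from 'express';

// Color traffic on the first visit: set a version cookie according to the configured ratio.
export function colorTraffic(req: Request, res: Response, next: NextFunction) {
  const alreadyColored = req.headers.cookie?.includes('version=');
  if (!alreadyColored) {
    // 20% of new visitors get version 2.0, the rest get version 1.0
    const version = Math.random() < 0.2 ? '2.0' : '1.0';
    res.setHeader('Set-Cookie', `version=${version}; Path=/; Max-Age=86400`);
  }
  next();
}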

The complete grayscale process is as follows:

(Diagram: the complete grayscale flow: color the traffic on the first request, then route by cookie)

On the first request, the traffic is randomly colored according to the configured ratio, that is, different cookies are set.

On subsequent visits, the cookie decides which version of the code serves the request.

This gives us the grayscale capability, which can drive a gradual rollout such as 5%, 10%, 50%, 100%.

It can also be used for product A/B experiments.

This is essentially the kind of grayscale system used in companies.

Summary

A grayscale system is mainly used for rolling out new code: traffic to the new version is increased gradually to make sure the rollout does not cause major problems. It can also be used for product A/B experiments.

We can implement such a system with nginx.

Nginx works as a reverse proxy that forwards requests to the application servers; this layer is also called the gateway layer.

At this layer, we can decide which service to forward the request to based on the version field in the cookie.

Before that, the traffic has to be colored according to the configured ratio, that is, different cookies have to be returned.

No matter how complex a grayscale system is, underneath it comes down to two parts: coloring the traffic and forwarding it according to the mark. We can implement one ourselves.

- END -

About Qi Wu Troupe

Qi Wu Troupe is the largest front-end team in 360 Group and represents the group in the work of W3C and TC39 (ECMA). Qi Wu Troupe attaches great importance to talent development, offering employees development tracks such as engineer, lecturer, translator, business liaison, and team leader, supplemented by technical, professional, general, and leadership training courses. Qi Wu Troupe welcomes outstanding talents of all kinds to follow and join us with an open, talent-seeking attitude.

