After two weeks, I finally figured out WSGI

With people from all walks of life moving into IT, many newcomers choose Python as a glue language and as their first stepping stone into the Internet industry. Among them, a large proportion (myself included) have chosen the direction of web development. And in web development there is one knowledge point you cannot get around: WSGI.

Whether or not you are one of these people, this is a topic worth learning well.

Since I do not do professional Python web development, I drew on many excellent blog posts while writing this article and spent a lot of effort reading a great deal of OpenStack code.

Writing it took roughly two weeks of scattered time. It could have been split into a series of shorter articles, but after some deliberation I decided to finish it in one piece, which is why it is so long.

Also, a single article cannot cover a topic completely, and this one touches on a lot of background knowledge. If my explanation falls short anywhere, please refer to other people's blogs for further study.

Before you read on, let me pose a few questions. Reading with them in mind should make your learning more purposeful and more effective.

Question 1: What is the process for an HTTP request to reach the corresponding application processing function?

Question 2: How to write a simple web service without using popular web frameworks?

The journey of an HTTP request can be divided into two stages: the first is from the client to the WSGI Server, and the second is from the WSGI Server to the WSGI Application.

Today is mainly about the second stage. The main contents are as follows:

  1. What is WSGI?
  2. Why does WSGI exist?
  3. How does an HTTP request reach the application?
  4. Implement a simple WSGI Server
  5. Implement a "high-concurrency" WSGI Server
  6. The first layer of routing: PasteDeploy
  7. How to use PasteDeploy
  8. The webob.dec.wsgify decorator
  9. The second layer of routing: the routes middleware

1. What is WSGI?

WSGI is the abbreviation of Web Server Gateway Interface.

It is an interface between Python applications or frameworks (such as Django) and web servers, and it has been widely adopted.

It is a protocol, a specification, proposed in PEP 333 and supplemented by PEP 3333 (mainly to add Python 3.x support). The specification aims to solve the compatibility problem between the many web frameworks and web server programs: with WSGI, you no longer have to choose a specific web server because of the web framework you use.

Common web application frameworks are: Django, Flask, etc.

Commonly used web server software: uWSGI, Gunicorn, etc.

So what exactly does this WSGI protocol say? Someone on Zhihu translated PEP 3333 into Chinese and did it very well; I will reproduce the gist of the specification here.

The WSGI interface has two sides: the server side and the application side. The server side can also be called the gateway side, and the application side can also be called the framework side. The server side calls a callable object provided by the application side; how that object is provided is up to the server. For example, some servers or gateways require the deployer to write a script that creates an instance of the server or gateway and hands it an application instance; others use a configuration file or some other mechanism to specify where the application instance should be imported from or otherwise obtained.

WSGI places the following three requirements on the application object:

  1. It must be a callable object.
  2. It must accept two required parameters: environ and start_response.
  3. Its return value must be an iterable object, representing the HTTP body.
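
To make these three requirements concrete, here is a minimal application object written as a plain function. This is a sketch of my own for illustration, not code from any particular framework:

def application(environ, start_response):
    # 1. a callable object; 2. takes environ and start_response;
    # 3. returns an iterable of bytes that forms the HTTP body
    path = environ.get('PATH_INFO', '/')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Hello from {}'.format(path)).encode('utf-8')]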

2. Why does WSGI exist?

This was a question a reader asked in the comment area of my Zhihu column. I think it is a very good question, so I answered it there and am including the answer here.

Why not just use HTTP directly where the WSGI protocol is used? Why translate it once more?

The following is my answer; it is personal understanding, offered for discussion only.

In production, the web framework (i.e. the app) generally does not receive HTTP requests directly.

You might say that Django can receive HTTP requests directly, so there is no need for a so-called server such as uWSGI.

Actually no. Django merely implements a simple web server internally for development and debugging, which is why beginners often mistakenly believe that the web framework itself receives HTTP requests.

The web server and the web framework have different responsibilities: the web server focuses on receiving and parsing requests and handing the parsed request to the web framework via a call, and neither side can be left out. They are two components that cooperate to serve the web. And since there are two components, some agreed-upon communication convention is needed between them. That convention is WSGI, which is why WSGI must exist.

Then another question arises: if the two were not separated but integrated into a single component, would there be no need for WSGI?

The answer is yes.

But you can also see that there are quite a few web frameworks of all sizes on the market. If each framework implemented its own web server, wouldn't that be reinventing the wheel over and over?

The better arrangement is for a dedicated team to develop a professional web server that is framework-agnostic: Django can use it, Flask can use it, and developers are free to choose which web server and which web framework to combine.

3. How does an HTTP request reach the application?

When a client sends an HTTP request, how does it reach our application for processing, and how is the response returned?

I cannot go into every detail of this process here, only the general idea.

Depending on the architecture, I divide the implementations of this process into two types:

1. Two-tier structure. In this structure uWSGI acts as the server: it speaks HTTP to the client and WSGI to the application, while the Flask application implements the WSGI protocol. When a client sends a request, uWSGI accepts it, calls the Flask app to get the response, and then sends the response back to the client. One point worth noting: web frameworks such as Flask ship with a built-in WSGI server (which is why a Flask application can be started directly), but it is only meant for the development phase and is not enough for production, which is why a high-performance WSGI server such as uWSGI is used.

2. Three-tier structure. In this structure uWSGI acts as a middle layer: it speaks the uwsgi protocol to nginx and the WSGI protocol to the Flask app. When a client sends a request, nginx handles it first (serving static resources is nginx's strength) and forwards what it does not handle to uWSGI; the final response also goes back to the client through nginx. What are the benefits of adding this layer of reverse proxy?

It improves web server performance (uWSGI is not as good as nginx at serving static resources, and nginx only forwards a request to uWSGI after it has received the complete HTTP request).

Nginx can do load balancing (provided there are multiple backend servers) and shields the actual web server (the client talks to nginx instead of uWSGI).

4. Implement a simple WSGI Server

In the architecture diagram above, you may have noticed a library called wsgiref. It is the WSGI server module that ships with Python.

As its name suggests, it is a reference implementation of a WSGI server written in pure Python. "Reference implementation" means it fully complies with the WSGI standard but pays no attention to efficiency; it is only for development and testing.

With the wsgiref module, you can start a WSGI server very quickly.

from wsgiref.simple_server import make_server

# appclass is not explained here; it is discussed below
app = appclass()
server = make_server('', 64570, app)
server.serve_forever()
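
The appclass above is left undefined on purpose; it stands for whatever WSGI application you want to serve. As a placeholder, a minimal class-based version could look like this (my own sketch, reusing the name from the snippet above):

class appclass(object):
    # a minimal class-based WSGI application: its instances are callable
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello wsgiref']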

When you run this code, it starts a WSGI server listening on 0.0.0.0:64570 and receiving requests.

You can use the lsof command to confirm that the port is indeed open.

The above uses wsgiref to write a demo so you get a first feel for WSGI. Because it is only suitable for learning and testing, you should look elsewhere for production use.

5. Realize "High Concurrency" WSGI Server

We said above that wsgiref cannot be used in production, so what should be used in production? There are many choices, such as the excellent uWSGI, Gunicore, etc. But today I am not going to talk about these, one is because I am not very familiar with it, and the other is because I am engaged in the secondary development of OpenStack and am familiar with it.

So below, I spent a few days reading the implementation of the Nova component code in OpenStack. I can just take it over to learn and record. If there is a deviation in understanding, I hope you can criticize it.

There are many services in the nova component, such as nova-api, nova-compute, nova-conductor, nova-scheduler, etc.

Among them, only nova-api has an open http interface.

To understand how this http interface is implemented, look at the code from the service startup entrance, and you will definitely find some clues.

From the Service file, we can see that the entry of nova-api is nova.cmd.api:main()

Open nova.cmd.api:main(), let's take a look at the code of OpenStack Nova.

In the yellow box below, you can see that service.WSGIService is used here to start a server, which is what we call wsgi server

What is the realization of the WSGI Server here? Let's continue to dive into the source code.

Wsgi.py can see that the eventlet network concurrency framework is used here. It first opens a green thread pool. From the configuration, you can see that the number of concurrent requests that this server can receive is 1000.

But we have not seen the presence of WSGI Server. Eventlet is used to open the thread pool. So each thread in the thread pool should be a server, right? How does it receive requests?

Continuing further, you can find that each thread is a WSGI Server started by eventlet.wsgi.server or eventlet.

Since there are more source codes, I extracted the main code and simplified it as follows

# create the green thread pool
self._pool = eventlet.GreenPool(self.pool_size)

# create the socket: the IP and port to listen on
bind_addr = (host, port)
self._socket = eventlet.listen(bind_addr, family, backlog=backlog)
dup_socket = self._socket.dup()

# collect the arguments needed to spawn the server green thread
wsgi_kwargs = {
    'func': eventlet.wsgi.server,
    'sock': dup_socket,
    'site': self.app, # this is the WSGI application
    'protocol': self._protocol,
    'custom_pool': self._pool,
    'log': self._logger,
    'log_format': CONF.wsgi.wsgi_log_format,
    'debug': False,
    'keepalive': CONF.wsgi.keep_alive,
    'socket_timeout': self.client_socket_timeout
}

# spawn the green thread
self._server = utils.spawn(**wsgi_kwargs)

In this way, Nova starts a WSGI Server that can handle up to 1000 concurrent requests, each served by a green thread.
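
Outside of Nova, the same pattern can be reproduced in just a few lines. The following is a minimal sketch of my own using eventlet directly, with a trivial app standing in for Nova's self.app:

import eventlet
import eventlet.wsgi

def app(environ, start_response):
    # trivial WSGI application standing in for Nova's self.app
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from eventlet\n']

pool = eventlet.GreenPool(1000)            # up to 1000 concurrent green threads
sock = eventlet.listen(('0.0.0.0', 8000))  # the listening socket
eventlet.wsgi.server(sock, app, custom_pool=pool)

Each incoming connection is handled in a green thread taken from the pool, which is exactly what gives the server its "high concurrency".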

6. The first layer of routing: PasteDeploy

We mentioned above that creating the WSGI Server requires passing in an Application to handle received requests. But what about a project with multiple apps?

For example, suppose you have a personal website that provides the following modules:

/blog  # blog app
/wiki  # wiki app

How do you forward a request to the corresponding application according to the requested URL?

The answer is to use the PasteDeploy library (it is widely used across OpenStack components).

What exactly does PasteDeploy do?

The official documentation explains it roughly as follows:

PasteDeploy is a system for finding and configuring WSGI applications and servers. For developers it provides a simple function, loadapp, through which a WSGI application can be loaded from a configuration file or a Python egg.

One of the important benefits of PasteDeploy is that system administrators can install and manage WSGI applications without needing any knowledge of Python or WSGI.

PasteDeploy originally belonged to Paste; it is now independent, but it is still installed under the paste directory (site-packages/paste/deploy).

I will first explain how Nova uses PasteDeploy to implement URL routing and forwarding.

Recall that when the WSGI Server was created above, a self.app parameter was passed in. This app is not a hard-coded object; it is loaded from the paste.ini configuration file using the loadapp function provided by PasteDeploy.

Specifically, let's look at Nova's implementation.

From the printed DEBUG output we learn the values of the app name and config_url:

app: osapi_compute
config_url: /etc/nova/api-paste.ini

Looking at /etc/nova/api-paste.ini, we find osapi_compute in a composite section (note that the "app" named in the config and a WSGI app are two different concepts). We can see that the Nova API has two versions, v2 and v2.1, and that v2.1 is the one in use today. From the configuration file we learn that the application lives in nova.api.openstack.compute: it is the factory method of the APIRouterV21 class in that module, a factory function that returns an APIRouterV21 instance.

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21

[app:osapi_compute_app_v21]
paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory

This is the first layer of routing that OpenStack implements with PasteDeploy. If you are not interested in the Nova specifics, you can skip to the next section, which introduces how to use PasteDeploy and walks you through a simple web server demo. I recommend it.

7. How to use PasteDeploy

Up to the previous step we have obtained useful clues about the application. Considering that this may be the first time many readers come into contact with PasteDeploy, here is a summary based on blogs I have read; it should help you get started.

To master PasteDeploy, just follow these three steps:

1. Write the ini configuration file used by PasteDeploy;

2. Define the WSGI application;

3. Load the WSGI application through the loadapp function.

Step 1: write the paste.ini file

Before writing, we have to know the format of the ini file.

First, a segment like the following is called a section:

[type:name]
key = value
...

The main section types are as follows:

  1. composite (combination): dispatches requests to one of several apps according to the URL;

    [composite:main]
    use = egg:Paste#urlmap
    / = home
    /blog = blog
    /wiki = wiki
  2. app (application): specifies the path of a WSGI application;

    [app:home]
    paste.app_factory = example:Home.factory
  3. pipeline: binds multiple filters to an app, chaining the filters in series in front of the final WSGI application.

    [pipeline:main]
    pipeline = filter1 filter2 filter3 myapp
    
    [filter:filter1]
    ...
    
    [filter:filter2]
    ...
    
    [app:myapp]
    ...
  4. filter: a callable that takes an app as its only argument and returns a "filtered" app. In a filter-app section, the key next specifies who the request is handed to next; the target can be an ordinary WSGI application or another filter. Although it is called a filter, its role is not limited to filtering: it can do other things, such as logging important request data.

    [filter-app:filter_name]
    use = egg:...
    next = next_app
    
    [app:next_app]
    ...

Once you have a basic understanding of the ini format, you can understand the following ini configuration file:

[composite:main]
use = egg:Paste#urlmap
/blog = blog
/wiki = wiki

[app:blog]
paste.app_factory = example:Blog.factory

[app:wiki]
paste.app_factory = example:Wiki.factory

Step 2: define an application object that conforms to the WSGI specification.

A WSGI-compliant application object can take various forms: a function, a method, a class, or an instance object. Here we only demonstrate the instance-object form, in which the class must implement the __call__ method.

import os
from paste import deploy
from wsgiref.simple_server import make_server

class Blog(object):
    def __init__(self):
        print("Init Blog.")

    def __call__(self, environ, start_response):
        status_code = "200 OK"
        response_headers = [("Content-Type", "text/plain")]
        response_body = "This is Blog's response body.".encode('utf-8')

        start_response(status_code, response_headers)
        return [response_body]

    @classmethod
    def factory(cls, global_conf, **kwargs):
        print("Blog factory.")
        return Blog()

Finally, the third step is to load the WSGI application using the loadapp function.

loadapp is a function provided by PasteDeploy that conveniently loads the app from the ini configuration file written in step 1.

The loadapp function takes two arguments:

  • URI: "config:<full path of configuration file>"
  • name: the name of the WSGI application
conf_path = os.path.abspath('paste.ini')

# load the app
applications = deploy.loadapp("config:{}".format(conf_path), "main")

# start the server, listening on localhost:22800
server = make_server("localhost", 22800, applications)
server.serve_forever()

The applications object returned here is a URLMap object.

Putting the contents of steps 2 and 3 together into one Python file (wsgi_server.py) gives the following:

import os
from paste import deploy
from wsgiref.simple_server import make_server

class Blog(object):
    def __init__(self):
        print("Init Blog.")

    def __call__(self, environ, start_response):
        status_code = "200 OK"
        response_headers = [("Content-Type", "text/plain")]
        response_body = "This is Blog's response body.".encode('utf-8')

        start_response(status_code, response_headers)
        return [response_body]

    @classmethod
    def factory(cls, global_conf, **kwargs):
        print("Blog factory.")
        return Blog()


class Wiki(object):
    def __init__(self):
        print("Init Wiki.")

    def __call__(self, environ, start_response):
        status_code = "200 OK"
        response_headers = [("Content-Type", "text/plain")]
        response_body = "This is Wiki's response body.".encode('utf-8')

        start_response(status_code, response_headers)
        return [response_body]

    @classmethod
    def factory(cls, global_conf, **kwargs):
        print("Wiki factory.")
        return Wiki()


if __name__ == "__main__":
    app = "main"
    port = 22800
    conf_path = os.path.abspath('paste.ini')

    # load the app
    applications = deploy.loadapp("config:{}".format(conf_path), app)
    server = make_server("localhost", port, applications)

    print('Started web server at port {}'.format(port))
    server.serve_forever()

With everything ready, execute python wsgi_server.py in a terminal to start the web server.

If it starts normally, open a browser and visit the /blog or /wiki path on port 22800.

Note: urlmap is case-sensitive about the URL. For example, if you visit http://127.0.0.1:22800/BLOG, the uppercase BLOG cannot be found in the URL mapping.

At this point, you have learned the simple use of PasteDeploy.

8. The webob.dec.wsgify decorator

After PasteDeploy's routing dispatch, we find the entry point of the application, nova.api.openstack.compute:APIRouterV21.factory. Looking at the code, it actually returns an instance of the APIRouterV21 class.

WSGI requires the application to be a callable object: a function, a method, a class, or an instance. If it is a class instance, as in this case, the class must implement the __call__ method.

APIRouterV21 itself does not implement __call__, but its parent class Router does.

We know that the application must comply with the WSGI specification:

  1. It must accept the two parameters environ and start_response;
  2. It must return an iterable object.

However, looking at the code of Router.__call__, it does not seem to comply with this specification: it does not take these two parameters and it does not return a response, only another callable object. So our attention is redirected once more. That doesn't matter, though; these __call__ methods are just outer coats, and once we peel them off we can see the core app.

The layer responsible for stripping off this coat is the decorator above it, @webob.dec.wsgify. wsgify is a class, and its __call__ is implemented as follows:

As you can see, wsgify wraps the raw environ dict into a webob Request object named req (the environ mentioned in requirement 1). Then the function decorated by wsgify (self._route) is executed, and by peeling back layer after layer we eventually reach the innermost core application.

That covers the first parameter from requirement 1. What about the second parameter, start_response? Where is it defined and passed in?

In fact, we don't need to worry about it: start_response is provided by the WSGI server. If we use the wsgiref library as the server, then start_response is provided by wsgiref.
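
To make that concrete, here is a rough sketch of my own (not wsgiref's actual code) of what a server does when it provides start_response and calls the application:

def run_app(app, environ):
    # server-side sketch: provide start_response, call the app, collect the body
    state = {}

    def start_response(status, response_headers, exc_info=None):
        # the server records the status line and headers for later serialization
        state['status'] = status
        state['headers'] = response_headers

    body = b''.join(app(environ, start_response))  # the returned iterable is the HTTP body
    return state['status'], state['headers'], body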

Back to wsgify: its main job is to wrap WSGI apps and simplify how they are defined and written. It can easily turn a callable function or object into a WSGI app.
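
For example, with webob the following sketch of mine (not Nova's code) turns an ordinary function of a Request into a WSGI application:

from webob import Response
from webob.dec import wsgify

@wsgify
def hello_app(req):
    # req is a webob.Request built from environ; returning a Response lets
    # wsgify take care of status, headers and start_response for us
    return Response('Hello, {}'.format(req.path_info or '/'))

hello_app is now a standard WSGI callable and could be served with wsgiref's make_server just like the earlier examples.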

One question is still left open from above: how does self._route (the RoutesMiddleware object) find the real application?

With this question in mind, let's see how the routes library implements the second layer of routing for us.

9. The second layer of routing: the routes middleware

At the beginning of the article, we drew a picture for everyone.

That picture roughly divides an HTTP request into two stages, but in reality the whole journey is much more involved than that.

In fact, on the way from the WSGI Server to the WSGI Application we add a lot of functionality (such as authentication and URL routing), and the components that implement these functions are called middleware.

To the server side, a middleware looks like an application: a callable object that takes the two WSGI parameters and returns the response. To the application side, it looks like a server: it supplies the parameters and calls the application.
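
The idea fits in a few lines. Here is a minimal middleware sketch of my own (not Nova's code) that wraps any WSGI app and logs each request path before delegating:

class LoggingMiddleware(object):
    # looks like an app to the server, and like a server to the wrapped app
    def __init__(self, app):
        self.app = app  # the inner WSGI application

    def __call__(self, environ, start_response):
        print('incoming request:', environ.get('PATH_INFO'))
        # delegate to the wrapped application, passing the WSGI parameters through
        return self.app(environ, start_response)

Middlewares like this can be stacked one on top of another, which is exactly what a PasteDeploy pipeline section expresses.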

Taking URL routing as today's example, let's look at how middleware works in real production code.

When the server gets the URL requested by the client, different URLs need to be handled by different functions; this mapping is called URL routing.

In Nova, the routes library is used to implement URL routing. Next, I will analyze this process from the source code.

The routes module provides a middleware called routes.middleware.RoutesMiddleware. When it receives a URL, it automatically calls map.match() to match the URL against the routing table, stores the match result in the request environment under environ['wsgiorg.routing_args'], and finally calls self._dispatch (the dispatch returns the real application) to produce the response, which is then returned to the WSGI Server.

The principle of this middleware looks quite simple; there is no complicated logic in it.
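
Stripped of the Nova specifics, using the routes library looks roughly like this. This is a sketch of my own (the dispatch app and the route below are made-up examples, not Nova's code):

from routes import Mapper
from routes.middleware import RoutesMiddleware

mapper = Mapper()
mapper.connect('/blog/{id}', controller='blog', action='show')

def dispatch(environ, start_response):
    # RoutesMiddleware has already run mapper.match() and stored the result here
    url, match = environ['wsgiorg.routing_args']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [repr(match).encode('utf-8')]

# wrap the dispatcher with the routing middleware
app = RoutesMiddleware(dispatch, mapper)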

However, while reading the routes code, I found another point that confused me.

In the self._dispatch function (self.app in the picture above), we see two very important words: app and controller. Are they the application objects I am looking for?

To answer that, we just need to see what match actually is.

The match object is stuffed into req.environ inside RoutesMiddleware.__call__(). What exactly is it? Let me print it out:

{'action': u'detail', 'controller': <nova.api.openstack.wsgi.ResourceV21 object at 0x667bad0>, 'project_id': u'2ac17c7c792d45eaa764c30bac37fad9'}

{'action': u'index', 'controller': <nova.api.openstack.wsgi.ResourceV21 object at 0x6ec8910>, 'project_id': u'2ac17c7c792d45eaa764c30bac37fad9'}

{'action': u'show', 'controller': <nova.api.openstack.wsgi.ResourceV21 object at 0x6ed9710>, 'project_id': u'2ac17c7c792d45eaa764c30bac37fad9', 'id': u'68323d9c-ebe5-499a-92e9-32fea900a892'}

The result is a little disappointing: this controller is not the Controller object we are looking for. Rather, it is an instance of the nova.api.openstack.wsgi.ResourceV21 class; simply put, a Resource object.

Seeing this, I was close to giving up: why have we still not reached the Controller? The OpenStack framework code winds around and around, and it is really hard to read without some patience.

But having started, I had to bite the bullet and keep reading.

Finally I found that when APIRouter is initialized, it registers all the Resources and at the same time hands them to routes.Mapper to manage and to build the routing map. That is how the routes.middleware.RoutesMiddleware mentioned above can, given a URL, get the corresponding Resource from the mapper's match.

From the Nova code we can see that each Resource corresponds to a Controller object, because a Controller is itself the collection of operations on one resource.

From the log output you can see just how many Resource objects Nova manages:

os-server-groups
os-keypairs
os-availability-zone
remote-consoles
os-simple-tenant-usage
os-instance-actions
os-migrations
os-hypervisors
diagnostics
os-agents
images
os-fixed-ips
os-networks
os-security-groups
os-security-groups
os-security-group-rules
flavors
os-floating-ips-bulk
os-console-auth-tokens
os-baremetal-nodes
os-cloudpipe
os-server-external-events
os-instance_usage_audit_log
os-floating-ips
os-security-group-default-rules
os-tenant-networks
os-certificates
os-quota-class-sets
os-floating-ip-pools
os-floating-ip-dns
entries
os-aggregates
os-fping
os-server-password
os-flavor-access
consoles
os-extra_specs
os-interface
os-services
servers
extensions
metadata
metadata
limits
ips
os-cells
versions
tags
migrations
os-hosts
os-virtual-interfaces
os-assisted-volume-snapshots
os-quota-sets
os-volumes
os-volumes_boot
os-volume_attachments
os-snapshots

You must be curious about how these routes are created. The key code is the call below. If you want to know more about the route-creation process, take a look at this article (a summary of Python routes), which is well written.

routes.mapper.connect("server",
               "/{project_id}/servers/list_vm_state",
               controller=self.resources['servers'],
               action='list_vm_state',
               conditions={'method': ['GET']})

After all the twists and turns, we have finally found the Controller object, and we now know how the WSGI server, after receiving a request, finds the corresponding Controller according to the URL (via the routes.Mapper routing map).

But soon you will ask again: a resource has many operations, such as create, delete, update, and so on.

Different operations need to execute different functions in the Controller:

If it is a new resource, call create()

If it is to delete the resource, call delete()

If it is to update the resource, call update()

How does the code know which function to execute?

Taking a /servers/xxx/action request as an example, the function to call is actually carried in the body of the request.

After the __call__ of routes.middleware.RoutesMiddleware has done its analysis, the Resource to be called (that is, the Resource constructed from the Controller of some module) has already been determined, and the action parameter is "action". Next, inside Resource's __call__, because action == "action", it starts to parse the content of the body and look for the corresponding method in the Controller.

Under the influence of its metaclass, the Controller collects all of its action methods into a dictionary when it is constructed: each key is the name given to an _action_xxx method by the @wsgi.action('xxx') decorator, and each value is the name of that _action_xxx method (so the convention is: prefix the method name requested in the body with _action_ to get the corresponding method in the Controller).

Later, when the Controller is used to construct the Resource object, the contents of this dictionary are registered with the Resource. In this way, as long as the request body gives the key of the method to call, the mapped method can be found, and that Controller method is finally invoked inside Resource's __call__.
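
The mechanism is easier to see in a stripped-down sketch. The following is my own simplification of the idea (a decorator records action names, and a Resource-like dispatcher looks them up); it is not Nova's actual code:

def action(name):
    # mark a controller method as the handler for a given action name
    def decorator(func):
        func.wsgi_action = name
        return func
    return decorator

class Controller(object):
    @action('os-start')
    def _action_start(self, req, body):
        return {'result': 'started'}

class Resource(object):
    def __init__(self, controller):
        # collect every method marked with an action name into a lookup table
        self.wsgi_actions = {}
        for attr in dir(controller):
            method = getattr(controller, attr)
            if callable(method) and hasattr(method, 'wsgi_action'):
                self.wsgi_actions[method.wsgi_action] = method

    def dispatch(self, req, body):
        # the request body names the action; look up and call the mapped method
        action_name = list(body)[0]
        return self.wsgi_actions[action_name](req, body)

res = Resource(Controller())
print(res.dispatch(req=None, body={'os-start': None}))  # -> {'result': 'started'}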

In fact, when I printed the match object above, the corresponding function name was already printed.

Let's take nova show (showing a resource) as an example to understand this.

When you run the nova show [uuid] command, novaclient sends an HTTP request to nova-api:

nova show 1c250b15-a346-43c5-9b41-20767ec7c94b

The match object obtained by printing is as follows:

{'action': u'show', 'controller': <nova.api.openstack.wsgi.ResourceV21 object at 0x667bad0>, 'project_id': u'2ac17c7c792d45eaa764c30bac37fad9'}

The action is the name of the corresponding handler function, the controller is the corresponding Resource object, and project_id is the tenant id (which you can ignore).

Continue looking at the __call__ code of the ResourceV21 class.

As shown in the figure, you can see the specific code that gets the action out of environ.

Printing action_args here gives:

{'action': 'show', 'project_id': '2ac17c7c792d45eaa764c30bac37fad9', 'id': '1c250b15-a346-43c5-9b41-20767ec7c94b'}

Here action is still the name of the function, and id is the unique id of the resource to operate on.

At the end of __call__, the _process_stack method is called.

As shown in the figure, get_method obtains the handler function object based on action (the function name):

meth :<bound method ServersController.show of <nova.api.openstack.compute.servers.ServersController object at 0x7be3750>>

Finally, this function is executed to obtain action_result, which _process_stack then wraps into a response.

The response is then returned to wsgify, which performs the final encapsulation and returns it to the client.

At this point, a request has completed its journey from being sent to being answered.



Origin blog.csdn.net/weixin_36338224/article/details/109330348