[Learn Python from zero] 87. Manually building an HTTP server in Python with multi-process concurrent handling

Import the necessary modules

import re
import socket
from multiprocessing import Process

We import the re module for regular-expression matching, the socket module for network communication, and the Process class from the multiprocessing module for creating child processes.

Define the WSGIServer class

class WSGIServer():
    def __init__(self, server, port, root):
        self.server = server
        self.port = port
        self.root = root
        # Create a TCP socket, allow the address to be rebound right after
        # a restart, bind it, and listen with a backlog of 128 connections.
        self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_socket.bind((self.server, self.port))
        self.server_socket.listen(128)

In the initialization method __init__, we take the server address server, the port number port, and the root directory root as parameters. We then create a TCP socket object server_socket and enable the SO_REUSEADDR option so the address can be rebound immediately after a restart. We bind the server address and port to server_socket with the bind method, and call the listen method (with a backlog of 128 pending connections) to start listening for connection requests.
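To see what SO_REUSEADDR buys us, here is a minimal standalone sketch (an illustration, not part of the server itself): after the server is stopped, the old connection can linger in TIME_WAIT, and an immediate restart without the option can fail to bind.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Without the next line, bind() right after a restart may raise
#   OSError: [Errno 98] Address already in use
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('0.0.0.0', 8899))
s.listen(128)   # backlog: up to 128 pending, not-yet-accepted connections
s.close()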

Handle client requests

def handle_socket(self, client_socket):
    # Read the request and keep only the request line,
    # e.g. "GET /index.html HTTP/1.1".
    request_line = client_socket.recv(1024).decode('utf-8').splitlines()[0]
    file_name = re.match(r'[^/]+(/[^ ]*)', request_line)[1]

    if file_name == '/':
        file_name = self.root + '/index.html'
    else:
        file_name = self.root + file_name

    try:
        file = open(file_name, 'rb')
    except IOError:
        response_header = 'HTTP/1.1 404 NOT FOUND\r\n'
        response_header += '\r\n'
        response_body = '========Sorry, file not found======='.encode('utf-8')
    else:
        response_header = 'HTTP/1.1 200 OK\r\n'
        response_header += '\r\n'
        response_body = file.read()
        file.close()
    finally:
        client_socket.send(response_header.encode('utf-8'))
        client_socket.send(response_body)
        client_socket.close()

Next, the handle_socket method handles a single client request. We receive the request data with client_socket.recv, decode it, and keep only the first line, the HTTP request line. A regular expression then extracts the requested path from it. If the path is '/', we map it to the index.html file in the root directory; otherwise we join it onto the root directory path. We then try to open the corresponding file: if it does not exist, we prepare a 404 status line and an error message as the response body; if it exists, we prepare a 200 status line, use the file contents as the response body, and close the file.
Finally, in the finally block, we send the response header and response body to the client with client_socket.send and close the connection.
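To make the extraction step concrete, here is a small standalone sketch (the request lines are illustrative inputs, not from the article) showing what the regular expression pulls out:

import re

request_line = 'GET /index.html HTTP/1.1'
print(re.match(r'[^/]+(/[^ ]*)', request_line)[1])   # -> /index.html

# A request for the site root yields just '/', which the handler
# maps to <root>/index.html:
print(re.match(r'[^/]+(/[^ ]*)', 'GET / HTTP/1.1')[1])   # -> /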

Continuously listen for connection requests

def forever_run(self):
    while True:
        # Block until a client connects, then hand the connection
        # to a child process so multiple clients can be served at once.
        client_socket, client_addr = self.server_socket.accept()
        p = Process(target=self.handle_socket, args=(client_socket,))
        p.start()
        # The child received its own copy of the connection (via fork),
        # so the parent closes its copy here.
        client_socket.close()

In the WSGIServer class, the forever_run method keeps the server accepting connection requests. In the loop, the accept method blocks until a client connects, and we then create a Process child process to handle that connection, giving us multi-process concurrency: each request is served in its own process. Because the child gets its own copy of the connection's file descriptor, the parent can safely close its copy of the client socket right away. (Passing a socket to a child process like this relies on the fork start method, the default on Linux.)
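If you want thread-based concurrency instead, as the article's original title suggests, here is a minimal sketch of a thread-per-connection variant (an alternative under that assumption, not the article's code); only forever_run changes:

import threading

def forever_run(self):
    while True:
        client_socket, client_addr = self.server_socket.accept()
        t = threading.Thread(target=self.handle_socket, args=(client_socket,))
        t.start()
        # Unlike the process version, do NOT close client_socket here:
        # the thread shares this exact socket object, and closing it
        # would cut the connection off mid-request.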

Main program entry

if __name__ == '__main__':
    ip = '0.0.0.0'
    port = 8899
    server = WSGIServer(ip, port, './pages')
    print('server is running at {}:{}'.format(ip, port))
    server.forever_run()

In the main program, we instantiate the WSGIServer class, passing in the server address, port number, and root directory. We then print the address the server is running at, and finally call the forever_run method to start accepting connection requests.
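As a quick smoke test, assuming the server above is running and a ./pages/index.html file exists (both the directory and the page are assumptions for illustration), you can fetch the home page with the standard library:

from urllib.request import urlopen

# Fetch the home page from the local server started above.
with urlopen('http://127.0.0.1:8899/') as resp:
    print(resp.status)        # expect 200 if ./pages/index.html exists
    print(resp.read()[:80])   # the first bytes of index.html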

