Import the necessary modules
import re
import socket
from multiprocessing import Process
We import the re module for regular-expression matching, the socket module for network communication, and the Process class from the multiprocessing module for creating child processes.
Define the WSGIServer class
class WSGIServer:
    def __init__(self, server, port, root):
        self.server = server
        self.port = port
        self.root = root
        self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_socket.bind((self.server, self.port))
        self.server_socket.listen(128)
In the initialization method __init__, we take the server address server, the port number port, and the document root root as parameters. We then create a socket object server_socket, enable the SO_REUSEADDR option so the address can be rebound immediately after a restart, bind the server address and port with the bind method, and call listen to wait for incoming connections.
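The socket setup above can be tried in isolation. A minimal sketch; it binds 127.0.0.1 on port 0 (an ephemeral port, chosen here as an assumption so the snippet runs anywhere without clashing with a busy port):

```python
import socket

# The same setup the server uses, on an ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_REUSEADDR lets a restarted server rebind the address immediately
# instead of waiting for the old socket to leave the TIME_WAIT state.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 0))   # port 0: the OS picks a free port
sock.listen(128)              # allow up to 128 pending connections
host, port = sock.getsockname()
sock.close()
```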
Handle client requests
    def handle_socket(self, client_socket):
        # Keep only the first line of the request, e.g. 'GET /index.html HTTP/1.1'.
        data = client_socket.recv(1024).decode('utf-8').splitlines()[0]
        # Skip the method, then capture the path from '/' up to the next space.
        match = re.match(r'[^/]+(/[^ ]*)', data)
        file_name = match.group(1) if match else '/'
        if file_name == '/':
            file_name = self.root + '/index.html'
        else:
            file_name = self.root + file_name
        try:
            file = open(file_name, 'rb')
        except IOError:
            response_header = 'HTTP/1.1 404 Not Found\r\n'
            response_header += '\r\n'
            response_body = '========Sorry, file not found======='.encode('utf-8')
        else:
            response_header = 'HTTP/1.1 200 OK\r\n'
            response_header += '\r\n'
            response_body = file.read()
            file.close()
        finally:
            client_socket.send(response_header.encode('utf-8'))
            client_socket.send(response_body)
            client_socket.close()
Next, the handle_socket method handles a client request. First we receive the request data with recv, decode it, and keep only the first line (the request line). A regular expression then extracts the requested path from the request line. If the path is '/', we map it to the index.html file in the root directory; otherwise we concatenate the path with the root directory. We then try to open the corresponding file: if it does not exist, we respond with a 404 status line and an error message as the body; if it does, we respond with a 200 status line and the file content as the body.
Finally, in the finally block, we send the response header and response body to the client with send and close the connection.
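The path extraction can be checked on its own. A small sketch, using hypothetical request lines (the extract_path helper is just a name chosen here for illustration):

```python
import re

def extract_path(request_line):
    """Skip the method ([^/]+), then capture from '/' up to the next space."""
    match = re.match(r'[^/]+(/[^ ]*)', request_line)
    return match.group(1) if match else None

path = extract_path('GET /static/logo.png HTTP/1.1')
root_path = extract_path('GET / HTTP/1.1')
```

For 'GET / HTTP/1.1' the capture group matches just '/', which is exactly the case the method maps to index.html.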
Continuously listen for connection requests
    def forever_run(self):
        while True:
            client_socket, client_addr = self.server_socket.accept()
            p = Process(target=self.handle_socket, args=(client_socket,))
            p.start()
            client_socket.close()
In the WSGIServer class, the forever_run method continuously listens for connection requests. In the loop, we accept a client connection with the accept method and create a child Process to handle it, which gives us multi-process concurrency. The parent then closes its copy of the client socket; the child process holds its own copy, so the connection stays open until handle_socket is done with it.
Main program entry
if __name__ == '__main__':
    ip = '0.0.0.0'
    port = 8899
    server = WSGIServer(ip, port, './pages')
    print('server is running at {}:{}'.format(ip, port))
    server.forever_run()
In the main program, we instantiate the WSGIServer class, passing in the server address, port number, and root directory. We then print the address the server is running at and call forever_run to start accepting connections.
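Once running, the server can be exercised end to end by sending a raw request over a socket. A sketch with a hypothetical fetch helper; the one-shot stub server below stands in for WSGIServer so the snippet does not depend on a ./pages directory existing on disk:

```python
import socket
import threading

def fetch(host, port, path='/'):
    """Send a bare GET request and read the reply until the server closes."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall('GET {} HTTP/1.1\r\n\r\n'.format(path).encode('utf-8'))
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:        # empty read: the server closed the connection
                break
            chunks.append(data)
    return b''.join(chunks)

# One-shot stub that answers a single request the same way WSGIServer would.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('127.0.0.1', 0))          # port 0: the OS picks a free port
listener.listen(1)
_, stub_port = listener.getsockname()

def serve_once():
    conn, _ = listener.accept()
    conn.recv(1024)
    conn.sendall(b'HTTP/1.1 200 OK\r\n\r\nhello')
    conn.close()

t = threading.Thread(target=serve_once)
t.start()
reply = fetch('127.0.0.1', stub_port)
t.join()
listener.close()
```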