[Learn Python from zero] 90. Using if to manage request paths

file structure:

├── server.py
├── utils.py
├── pages
│   └── index.html
└── templates
    └── info.html

utils.py file:

PAGE_ROOT = './pages'
TEMPLATE_ROOT = './templates'

def load_html(file_name, start_response, root=PAGE_ROOT):
    """
    Load an HTML file.
    :param file_name: name of the HTML file to load
    :param start_response: callable used to set the response headers. If the
        file is found the status is set to 200, otherwise to 410
    :param root: directory containing the HTML file. The default PAGE_ROOT
        holds static HTML files; TEMPLATE_ROOT holds template files
    :return: the file content on success; otherwise a message saying the
        resource was deleted
    """

    file_name = root + file_name

    try:
        # Use a with-block so the file is always closed after reading
        with open(file_name, 'rb') as file:
            content = file.read()
    except IOError:
        start_response('410 GONE', [('Content-Type', "text/html;charset=utf-8")])
        return ['资源被删除了'.encode('utf-8')]

    start_response('200 OK', [('Content-Type', "text/html;charset=utf-8")])
    return [content]

def load_template(file_name, start_response, **kwargs):
    """
    Load a template file.
    :param file_name: name of the template file to load
    :param start_response: callable used to set the response headers. If the
        file is found the status is set to 200, otherwise to 410
    :param kwargs: values for the placeholders in the template
    :return: the rendered content on success; otherwise a message saying the
        resource was deleted
    """

    content = load_html(file_name, start_response, root=TEMPLATE_ROOT)
    html = content[0].decode('utf-8')

    if html.startswith('<!DOCTYPE html>'):
        return [html.format(**kwargs).encode('utf-8')]
    else:
        return content
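load_template renders the template with str.format, so the placeholders in info.html must use the {name} style. The article does not show the actual templates/info.html, so the contents below are only a hypothetical illustration of that placeholder style:

```python
# Hypothetical contents of templates/info.html (the real file is not shown
# in the article); {name} and {age} are str.format placeholders.
template = (
    '<!DOCTYPE html>\n'
    '<html><body><p>Name: {name}, Age: {age}</p></body></html>'
)

# load_template ultimately performs exactly this substitution:
rendered = template.format(name='张三', age=18)
print(rendered)
```

Note that any literal { or } in a template (for example inline CSS) must be doubled as {{ and }}, because str.format treats single braces as placeholders.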

server.py file:

from wsgiref.simple_server import make_server
from utils import load_html, load_template

def show_home(start_response):
    return load_html('/index.html', start_response)

def show_test(start_response):
    start_response('200 OK', [('Content-Type', "text/html;charset=utf-8")])
    return ['我是一段普通的文字'.encode('utf-8')]

def show_info(start_response):
    return load_template('/info.html', start_response, name='张三', age=18)

def application(environ, start_response):
    path = environ.get('PATH_INFO')

    # Handle the home page request (load an HTML file)
    if path == '/' or path == '/index.html':
        return show_home(start_response)
    # Handle the test.html request (return a plain string)
    elif path == '/test.html':
        return show_test(start_response)
    # Handle the info.html request (load a template and return it)
    elif path == '/info.html':
        return show_info(start_response)
    # Any other request cannot be handled yet; return 404
    else:
        start_response('404 NOT FOUND', [('Content-Type', "text/html;charset=utf-8")])
        return ['页面未找到'.encode('utf-8')]

httpd = make_server('', 8000, application)
print("Serving HTTP on port 8000...")
httpd.serve_forever()
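The if/elif dispatch in application() can be exercised without starting a server: a WSGI app is just a callable that takes an environ dict and a start_response callable. A minimal sketch (demo_app re-creates a trimmed-down version of the dispatch inline so the snippet stays self-contained; stub_start_response is a hypothetical test helper, not part of the article's code):

```python
# A trimmed-down copy of the dispatch pattern from application() above,
# inlined here so the snippet runs on its own.
def demo_app(environ, start_response):
    if environ.get('PATH_INFO') == '/test.html':
        start_response('200 OK', [('Content-Type', 'text/html;charset=utf-8')])
        return ['我是一段普通的文字'.encode('utf-8')]
    start_response('404 NOT FOUND', [('Content-Type', 'text/html;charset=utf-8')])
    return ['页面未找到'.encode('utf-8')]

# Stub that records the status line instead of sending real headers
captured = {}
def stub_start_response(status, headers):
    captured['status'] = status

# Call the app directly with a hand-built environ dict
body = demo_app({'PATH_INFO': '/test.html'}, stub_start_response)
print(captured['status'])  # 200 OK
```

The same trick works against application() itself, which makes it easy to check each branch of the if/elif chain before opening a browser.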


Origin blog.csdn.net/qq_33681891/article/details/132477236