Design and implementation of a simple Python3 interface automated testing framework

1. Development environment

  • Operating system: Ubuntu 18
  • Development tools: IntelliJ IDEA + PyCharm plug-in
  • Python version: 3.6

2. Modules used

  • requests: sends HTTP requests
  • xlrd: reads the Excel files in which test cases are organized
  • smtplib, email: send the test report by mail
  • logging: log tracking
  • json: data formatting
  • Django: used to develop the interfaces under test
  • configparser: reads configuration files
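
For example, configparser can keep environment details such as the host address and token out of the code. A minimal sketch, assuming a hypothetical config/config.ini with a [server] section (the file path and keys are illustrative, not part of the original project):

import configparser

# Read connection details from an assumed config/config.ini
config = configparser.ConfigParser()
config.read("../config/config.ini")
base_url = config.get("server", "base_url")  # e.g. http://127.0.0.1:9000
token = config.get("server", "token")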

3. Framework design

3.1. Process

Interface test cases are organized in Excel, with columns such as URL and Request Body defined for each case. The execution process is as follows:

  • Use xlrd to read the case information from Excel and splice it into complete requests (see the reading sketch after this list).
  • The interface request class executes the requests one by one. This process is logged so that each execution is traceable.
  • Backfill the test results, send the report by email, and archive the results of each run. An even better approach is to build a report of historical runs, which is more intuitive.
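
As a sketch of the first step, the cases could be read with xlrd like this; the workbook path and the column order (name, method, URL, request body, expected result) are assumptions for illustration:

import xlrd

# Read all test cases from an assumed workbook layout:
# column 0 = case name, 1 = method, 2 = URL, 3 = request body, 4 = expected result
def read_cases(path="../testcase/cases.xls"):
    sheet = xlrd.open_workbook(path).sheet_by_index(0)
    cases = []
    for row in range(1, sheet.nrows):  # row 0 is the header
        cases.append({
            "name": sheet.cell_value(row, 0),
            "method": sheet.cell_value(row, 1),
            "url": sheet.cell_value(row, 2),
            "data": sheet.cell_value(row, 3),
            "expected": sheet.cell_value(row, 4),
        })
    return cases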

Advantages:

  • Use cases are organized in Excel, so no code needs to be written and it is easy to get started.
  • When the number of use cases is small, development is fast.

Disadvantages:

  • Dependencies between use cases are a pain point.
  • Only interface automation use cases are supported.
  • Use cases in Excel cannot be validated in advance; errors only surface when they are run.
  • It does not scale to a large number of use cases and does not support team collaboration. It is, however, a good choice for an individual doing regression or smoke testing after a release.

Comparing the advantages and disadvantages makes it clear that this framework has quite a few shortcomings. That is why mature automated testing frameworks, whether open source or developed in-house, rarely organize use cases in Excel. It is worth mentioning that some company-built automation frameworks are very hard to use, or are just a loose bundle of tools that does nothing to improve a team's productivity. Good products, however, are not built overnight; they need continuous optimization. So the Excel-based framework above is still worth tinkering with; let's call it apitest for now. Well-established automated testing frameworks today include unittest, TestNG, pytest, and others.

3.2. Project structure

  • testcase: JSON files that store test cases/requests.
  • config: configuration files.
  • report: test reports, log files, and their archives.
  • untils: the tool set. send_request sends requests; email_tool sends emails; excel_tool reads data from Excel; check_result verifies results; run_main is the use case execution entry point; log_trace tracks logs. (A layout sketch follows this list.)
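
Put together, the project layout might look like this (the .py file names are inferred from the tool names above):

apitest/
├── testcase/        # JSON files with test cases/requests
├── config/          # configuration files
├── report/          # test reports, logs, and archives
└── untils/
    ├── log_trace.py
    ├── send_request.py
    ├── excel_tool.py
    ├── check_result.py
    ├── email_tool.py
    └── run_main.py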

4. Log printing

Use the built-in logging module to record run logs and set the log level.
log_trace.py:

import logging

# Write the run log to the report directory; filemode='w' starts a fresh file each run
filename = "../report/test_case_run.log"
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(filename)s [line:%(lineno)d] %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename=filename,
                    filemode='w')
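
Any module can then import log_trace and write to the shared log. A quick usage check (the message text here is just an illustration):

from untils.log_trace import *

# Messages land in ../report/test_case_run.log in the format configured above
logging.info("run started")

Note that filemode='w' truncates the log file on every run, which is why the report directory also keeps archives of earlier runs.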

5. Interface request class encapsulation

Install the third-party module requests:

pip install requests

Define the function send_request, which dispatches to the get, post, delete, or put method of requests according to the incoming method type. send_request.py:

import requests
from untils.log_trace import *

# Send a GET request
def get_request(url, data=None, headers=None):
    res = requests.get(url=url, data=data, headers=headers)
    return res

# Send a POST request
def post_request(url, data, headers=None):
    res = requests.post(url=url, data=data, headers=headers)
    return res

# Send a DELETE request
def del_request(url, data=None, headers=None):
    res = requests.delete(url, data=data, headers=headers)
    return res

# Send a PUT request (PUT is rarely used, so it is left unimplemented for now)
def put_request(url, data, headers=None):
    pass

def send_request(method, url, data=None, headers=None):
    try:
        logging.info(headers)
        if headers:
            if method == "GET":
                return get_request(url, data, headers=headers)
            if method == "POST":
                return post_request(url, data=data, headers=headers)
            if method == "DELETE":
                return del_request(url, data=data, headers=headers)
            if method == "PUT":
                return put_request(url, data=data, headers=headers)
        else:
            logging.info("Header is null")
    except Exception as e:
        logging.error("send request fail: %s" % e)

Write a test for the send_request method in untils_test.py. The code is as follows:

# coding:utf-8
from untils.send_request import send_request

def test_send_request():
    url = "http://127.0.0.1:9000/articles/"
    headers = {
        "X-Token": "0a6db4e59c7fff2b2b94a297e2e5632e"
    }
    res = send_request("GET", url, headers=headers)
    print(res.json())

if __name__ == "__main__":
    test_send_request()


Run result:

/usr/bin/python3.6 /home/stephen/IdeaProjects/apitest/untils/untils_test.py
{'status': 'BS.200', 'all_titles': {'amy1': 'alive', 'modifytest': 'alive', 'addTest': 'alive'}, 'msg': 'query articles sucess.'}

Process finished with exit code 0
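
A response like the one above is what the check_result tool would verify against the expected result from Excel. A minimal sketch, assuming the expected value is stored as a JSON string and compared field by field (this layout is an illustration, not the project's actual implementation):

import json
import logging

# Compare an expected JSON string (e.g. from Excel) against the actual
# response, field by field; log the first mismatch found
def check_result(expected, response):
    try:
        expected = json.loads(expected)
        actual = response.json()
        for key, value in expected.items():
            if actual.get(key) != value:
                logging.info("check fail: %s expected %s, got %s" % (key, value, actual.get(key)))
                return False
        return True
    except Exception as e:
        logging.error("check result fail: %s" % e)
        return False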
