
 1 Introduction

  1.1 Document version

Version | Author | Approver | Remarks
V1.0    | XXXX   |          | Created the test plan document

  1.2 Project situation

Project name    | XXX
Project version | V1.0
Project manager | XX
Testers         | XXXXX, XXX
Department      | XX
Remarks         |

  1.3 Document purpose

    This document guides the development of automated testing for the commonly used interfaces of the XXX-YY project. Its main purpose is to present the technical approach, implementation plan, and schedule for interface test automation.

2. Implementation goals of interface automation

  2.1 Implementation Principles

    The XXX-YY project adopts interface automation testing mainly to handle the repetitive work in iterative version testing, aiming to achieve the following results:

  • Reduce testing costs
  • Improve testing efficiency
  • Cover important interfaces more frequently
  • Provide greater accuracy and consistency
  • Save time and labor

  Although the above benefits are achievable, note that in practice, efficient interface automation places higher demands on the system under test and must follow a sound process, summarized as follows:

  • Interface automation should target highly repetitive work in the testing process, much of which is regression testing of existing functional interfaces; otherwise the effort invested will exceed the benefit. Do not blindly pursue automation of every interface or function.
  • The version under test needs a certain degree of stability. Overly frequent interface changes make subsequent automation harder to implement and raise script maintenance costs.
  • Implement interface automation in phases: first cover the stable and most important interfaces during testing, then gradually extend to regression of the whole project.
  • Interface automation testing is a long-term effort. As the project iterates, its interfaces will keep being optimized or newly developed, so maintaining and tuning the automated test scripts is itself a considerable workload.

  2.2 Interface automation test scope

    System scope:

Automation phase | Module under test                 | Functional interface scope
Phase 1          | Login (obtain token), YY tab page |
Phase 2          |                                   |

    Stage scope:

      Priority is given to the login interface and some of the interfaces for the insert-XXX functions.

  2.3 Interface automated testing tasks

  • Develop a test plan

Before writing script code, you need an overall grasp of the project and a reasonable estimate of the number and complexity of its interfaces. Combined with the version iteration schedule, estimate the script development time and formulate the corresponding interface automation test plan.

  • Extract analysis test points

Based on the interface automation test scope defined above, analyze the test points of each interface, including request method, input parameters, request headers, response status, and response data. During this process, communicate with the corresponding developers to clarify the details of the interfaces in scope, verify each call in Postman in advance, and, where necessary, generate the corresponding test documents or write them into test cases.

  • Build a test framework

    This interface automation testing framework uses Python as the script development language and unittest as the test framework. The goals are configurability, automatic script execution, automatic generation of test reports, and delivery of the generated reports to specified email addresses.

  • Write script code

    The first version of the script does not need to cover every interface; the plan is to select several important interfaces for coverage first. Once the overall framework is built and the end-to-end process is confirmed to work, the scripts will be maintained and extended to cover more functional interfaces.

  • Continuous integration

    Following on from the above, once the initial script code is complete, the existing automation scripts need to be brought into continuous integration, with the remaining uncovered interfaces steadily completed and added to the automated testing scope. This deepens the overall degree of automation and saves more labor and time.

  • Script maintenance

    Script maintenance means that, after each stage of the automation scripts is completed, the generated deliverables are archived and handed to the responsible person for management, with periodic updates, organization, and maintenance. This includes the interfaces changed during routine iterative maintenance of the project and automated coverage of newly added interfaces in the future.

3. Interface automation technology selection

  3.1 Overall system

    Considering the test pyramid (from bottom to top: unit tests, service tests, user interface tests) and the process characteristics of the XXX-YY project itself, this automation effort takes the form of interface automation. The scripts are written in Python 3.x, with the requests library as the core HTTP mechanism, unittest for test organization, HTMLTestRunner to generate the final test report, and Jenkins for continuous integration.

  3.2 Core Technology

    3.2.1 Interface automation execution library--Requests

    Requests is an HTTP library written in Python, built on urllib and released under the Apache 2.0 license. It is more convenient and concise than urllib and similar libraries, saves a great deal of work, and fully meets the needs of HTTP testing. In one sentence: Requests is an easy-to-use HTTP library implemented in Python. It is also very easy to install and import:

    pip3 install requests    # install the Requests library

    import requests          # import Requests into the script

    We can use this library to implement the following various methods:

    requests.get("https://url.cn")              # GET request

    requests.post("http://url.cn")              # POST request

    requests.put("http://url.cn")               # PUT request

    requests.delete("http://url.cn")            # DELETE request

    requests.head("http://url.cn")              # HEAD request

    requests.options("http://url.cn")           # OPTIONS request
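As a minimal sketch of how such a call is assembled, the request below is built with requests.Request and prepare(), which needs no network access; the endpoint URL, payload, and header are illustrative assumptions, not real APIs:

```python
import requests

# Sketch: build a POST login request offline via requests.Request/prepare().
# "https://api.example.com/login" and the credentials are hypothetical.
req = requests.Request(
    "POST",
    "https://api.example.com/login",
    json={"username": "tester", "password": "secret"},
    headers={"X-Request-Id": "demo-001"},
)
prepared = req.prepare()

print(prepared.method)                   # POST
print(prepared.url)                      # https://api.example.com/login
print(prepared.headers["Content-Type"])  # application/json
```

Passing the payload via json= lets Requests serialize the body and set the Content-Type header automatically, which is how most JSON interface cases are written.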

    3.2.2 Test organization and assertion mechanism --unittest

    The unittest module is Python's built-in unit testing module; we use it for test organization and assertions at the interface automation code level. Its more important components are TestCase, TestSuite, TestLoader, TextTestRunner, and TextTestResult.

    TestCase: used to write individual test cases; it is the base class of all test cases and the most basic unit in the unittest module. One test case is a complete test process: setUp prepares the environment before the test, the test code runs with its assertions, and tearDown restores the environment after the case finishes.

    TestSuite: multiple TestCases form a TestSuite (a set of test cases), which stores a collection of test cases.

    TestLoader: loads TestCases one by one into a TestSuite. It offers several loading methods: it finds the individual cases in the script project, creates their instances, assembles them into a TestSuite, and returns that TestSuite instance.

    TextTestRunner: executes the test cases in a TestSuite.

    TextTestResult: stores the test results, including how many cases were run and the numbers of successes and failures, among other information.
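A minimal sketch tying these components together; the test body and token value are illustrative stand-ins for a real interface call:

```python
import io
import unittest

class LoginApiTest(unittest.TestCase):
    """Illustrative TestCase; the token is a stand-in for a real login response."""

    def setUp(self):
        # Prepare fixtures before each test (e.g. obtain a token)
        self.token = "fake-token"

    def test_token_is_set(self):
        self.assertEqual(self.token, "fake-token")

    def tearDown(self):
        # Restore the environment after each test
        self.token = None

# TestLoader collects the cases into a TestSuite, TextTestRunner executes it,
# and the returned result object summarizes the outcome.
suite = unittest.TestLoader().loadTestsFromTestCase(LoginApiTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

print(result.testsRun)         # 1
print(result.wasSuccessful())  # True
```

The stream=io.StringIO() argument only redirects the text report away from the console; in a real run you would let it print or hand the suite to HTMLTestRunner instead.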


    3.2.3 Test report generation --HTMLTestRunner

    After the interface cases in a TestSuite are executed in batch, the generated test report is displayed as plain text in the console, which is not very intuitive. Here we introduce a third-party module, HTMLTestRunner, which produces the test report as an HTML web page instead.

    3.2.4 Continuous Integration Mechanism--Jenkins

    We choose Jenkins for continuous integration of the scripts. With Jenkins, the automated test scripts can run automatically on a schedule, and the test report generated by each run can be sent to a specified mailbox.

  3.3 Framework idea

    3.3.1 Encapsulation idea

    The entire interface automation test script adopts object-oriented encapsulation, extracting configurable modules separately wherever possible to ease later operation and configuration, make the overall project more flexible, and facilitate iterative maintenance and secondary development. This encapsulation is mainly reflected in a configurable test environment, importable test cases, separation of test data from scripts, and file paths expressed as relative paths, all of which appear in the layering of the actual code.
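As a minimal sketch of the "configurable test environment" idea, the environment details can live in a config file read with the standard configparser module; the [env] section name, keys, and URL below are illustrative assumptions:

```python
import configparser

# Sketch: environment settings kept separate from the test scripts.
# In practice this text would live in a file such as config.ini and be
# loaded with config.read(); the section and key names are hypothetical.
CONFIG_TEXT = """
[env]
base_url = https://test.example.com
timeout = 10
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

base_url = config.get("env", "base_url")
timeout = config.getint("env", "timeout")

print(base_url)  # https://test.example.com
print(timeout)   # 10
```

Switching between test and production environments then only requires editing the config file, not the scripts.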

    3.3.2 Data-driven implementation

    For data-driven testing we use the Python decorator library DDT (Data-Driven Tests). With it, we can reuse our script code to achieve data-driven testing. The official description of DDT is: DDT allows you to run a test case with different test data and make it appear as multiple test cases.

    Here is a brief summary of the official DDT introduction. DDT mainly consists of a class decorator, @ddt (applied to our TestCase subclass), and two method decorators for the test data to be processed in batches: @data (where the number of arguments supplied must match the number of parameters the test method expects) and @file_data (which loads test data from a JSON or YAML file). The arguments passed via @data are each passed in as a whole; if an argument is itself a tuple or similar, you must break it apart before passing it into your test, or use the additional decorator @unpack to unpack it into multiple arguments.

4. Test environment requirements

  4.1 Hardware environment

  Currently no performance-related or distributed execution is involved, so the hardware requirements are modest; ordinary office hardware is sufficient. If performance-related work is added later, the hardware environment will be specified in a separate performance test plan.

  4.2 Software environment

Software | Version   | Remarks
Python   | v3.7      | The script coding language is Python 3.x
PyCharm  | v2016.3.3 |

5. Personnel schedule arrangement

  5.1 Assignment of Responsibilities

Group/Person | Responsibility | Remarks

  5.2 Schedule

Test task                                       | Owner | Start time | Remarks
Automated test plan formulation                 |       |            |
Interface test case writing                     |       |            |
Automated test environment setup                |       |            |
Automated test framework construction           |       |            |
Automated script coding                         |       |            |
Continuous integration implementation           |       |            |
Test report output                              |       |            |
Secondary maintenance and development of scripts|       |            |

  5.3 Deliverables Management

Deliverable                 | Owner | Remarks
"Automated Test Solution"   |       |
Automation framework        |       |
Automation script code      |       |
Test execution report       |       |


Origin blog.csdn.net/xiao1542/article/details/130186682