【HttpRunner】Interface automation testing framework

Introduction

At the 2018 Python Developer Conference, I learned about the HttpRunner open source automated testing framework. It manages test cases in YAML/JSON format, can record traffic and convert it into test cases, fully separates test cases from test code, and makes maintaining test scenario data simpler than using Excel. Here I use my spare time to study this framework and implement automated interface testing.

HttpRunner is a general-purpose open source testing framework for the HTTP(S) protocol. By writing and maintaining a single set of YAML/JSON scripts, it can cover automated testing, performance testing, online monitoring, continuous integration, and other testing requirements.

Mainstream interface automation tool framework:

Among these mainstream tools and frameworks, technology selection generally favors Requests+Python and HttpRunner. Requests+Python has already been discussed in the articles of the automated testing summary category, so here we explore HttpRunner.

Framework process

Main features:

Inherits all the features of Requests, making it easy to implement all kinds of HTTP(S) testing requirements
Uses YAML/JSON to describe test scenarios, ensuring that test case descriptions are uniform, reproducible, and maintainable
With the help of auxiliary functions (debugtalk.py), complex dynamic calculation logic can easily be implemented in test scripts
Supports a complete test case layering mechanism, fully enabling the reuse of test cases
Supports complete hook mechanisms before and after test execution
Response results support a rich set of validation mechanisms
Implements interface recording and test case generation based on HAR files (har2case)
Combined with the Locust framework, distributed performance testing can be implemented without extra work
Execution is driven from the CLI, so it integrates seamlessly with continuous integration tools such as Jenkins
Test result reports are concise and clear, with detailed statistics and log records
Highly extensible, making secondary development and Web platformization easy

 Environment installation:

HttpRunner is a testing framework developed in Python and runs on macOS, Linux, and Windows. It supports Python 3.4 and above, which is also the recommended environment.

Install it with pip: pip3 install httprunner [because my environment has two Python versions and HttpRunner is installed under Python 3.6 here, the pip3 command is used; with a Python 2.7 environment you would use the pip command instead]
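
For reference, the installation and a later upgrade (using pip's standard -U flag) look like this:

    pip3 install httprunner        # install HttpRunner
    pip3 install -U httprunner     # upgrade an existing installation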

Verify after installation is complete:
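
For example, a minimal check of the CLI entry points (the exact version numbers printed will depend on your installation):

    hrun -V          # prints the HttpRunner version
    har2case -V      # prints the har2case version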

If the version numbers are displayed correctly, the installation was successful.

Basic functions

  1. Record and generate use cases

Before converting and generating test cases, the captured packets need to be exported as a HAR file. With the Charles proxy packet-capture tool, the procedure is: select the interface(s) to be converted (you can select several, or all of them), right-click, choose [Export...] from the context menu, select HTTP Archive (.har) as the format, and save the file. Assume the file is saved as test.har.

 Convert test cases

Run the har2case command in a command-line terminal to convert test.har into an HttpRunner test case file. By default, har2case converts the script to JSON format.

Add the -2y/--to-yml parameter to convert it to YAML format instead.

The two formats are completely equivalent. The YAML format is more concise, and the JSON format supports more tools. You can choose according to your personal preferences.
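
For example, assuming test.har is in the current directory, the two conversions look like this (the output file is typically written next to the .har file with the corresponding extension):

    har2case test.har        # default: generates a JSON test case, e.g. test.json
    har2case test.har -2y    # generates a YAML test case instead, e.g. test.yml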

The main fields of the generated use case are described below (the JSON and YAML outputs carry the same content):

config: used as a global configuration item for the entire test case set
test: corresponding to a single test case
name: the name of this test
request: the details of the HTTP request sent by this test, including:
    url: the request path (if base_url is defined in the config, the full path is base_url + url)
    method: the request method, such as POST or GET
    data: the request parameter values
validate: the content to be verified after the request completes; the test is considered passed only if all of the validation checks pass, otherwise it fails
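
Putting these fields together, a minimal YAML test case might look like the sketch below; the URL, parameters, and expected values here are hypothetical and only illustrate the structure:

    - config:
        name: demo testset
        base_url: http://example.com           # hypothetical base URL

    - test:
        name: login                            # name of this test
        request:
            url: /api/login                    # full path = base_url + url
            method: POST                       # request method
            data:                              # request parameters
                username: demo
                password: "123456"
        validate:
            - eq: ["status_code", 200]         # every check must pass for the test to pass
            - eq: ["content.success", true]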




  2. Run the test

Test cases are executed with the hrun command. If you want the run to stop at the first failure instead of continuing with subsequent test cases, add --failfast to the command. For example: hrun test.yaml --failfast

If you need to view more detailed information, such as request parameters and response details, you can set the log level to DEBUG, that is, add --log-level debug to the command. For example: hrun test.yaml --log-level debug

In order to facilitate problem location, you can specify the --save-tests parameter when running the test to save the intermediate data during the running process as a log file.
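
These options can be combined as needed; for example, the following illustrative run enables debug logging and saves the intermediate data at the same time:

    hrun test.yaml --log-level debug --save-tests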

The log files will be saved in the logs folder in the project root directory. The generated files are as follows (XXX is the test case name):

XXX.loaded.json: the data structure content after the test case is loaded, covering all loaded project files, including the test case files (YAML/JSON), debugtalk.py, .env, etc., e.g. test-quickstart-6.loaded.json
XXX.parsed.json: the data structure content after the test case is parsed, e.g. test-quickstart-6.parsed.json
XXX.summary.json: the data structure content before the test report is generated, e.g. test-quickstart-6.summary.json

  3. Test report

By default, the generated test report file is located in the reports folder in the project root directory, and the file name is the timestamp of when the test started. HttpRunner ships with a default report template in Jinja2 format.

The test report format is as follows:

In the Summary, the overall information of this test will be listed, including the test start time, total running time, running Python version and system environment, and running result statistics.

In Details, the running results of each test case will be displayed in detail.

Click the log button corresponding to the test case, and the detailed data of the test case execution will be displayed in the pop-up box, including request headers and body, response headers and body, verification results, response, response time (elapsed) and other information.

By default, the generated test report files will be located in the reports folder in the project root directory. If you need to specify the path to generate the report, you can use the --report-dir parameter.

For example: hrun test.yaml --report-dir g:\home

For other advanced features such as data parameterization, use case layering, environment variables, etc., please refer to the official Chinese manual for detailed introduction.

  4. Create a project

Creating a project is similar to Django: you only need to specify the name of the new project with --startproject. For example: hrun --startproject httpapidemo

After running, the directory structure of the new project will be generated in the specified directory. Next, we can add use case description information to it according to the interface-module-scenario layering principle of the test case.

Note that when organizing the directory structure of the test case descriptions, we follow the principle of convention over configuration:

API interface definitions must be placed in the api directory
Module definitions must be placed in the suite directory
Test scenario files must be placed in the testcases directory
Related function definitions are placed in debugtalk.py

A specific example of the new use case directory structure is as follows:
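
The exact scaffold generated by --startproject may vary between HttpRunner versions, but a layout following the conventions above would look roughly like this (file and directory names are illustrative):

    httpapidemo/
        api/                 # API interface definitions
        suite/               # module definitions assembled from the APIs
        testcases/           # test scenario files
        debugtalk.py         # project-level helper functions
        .env                 # environment variables (optional)
        reports/             # generated test reports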


