Let small scripts become your own efficient testing tool (detailed version)

Table of contents

Tools you will encounter in testing

Common open source testing tools

Python

Mobile/UI automation testing platform

What the platform needs to do

Technology needed for self-build

Interface test platform

Operation and maintenance

Django operation and maintenance deployment framework

Unit Testing Overview

What is unit testing

What does unit testing do?

Who is responsible for unit testing?

Points to note in unit testing

Unit test coverage types

Python unit testing frameworks

unittest

pytest testing framework


Tools you will encounter in testing

If software testing is done purely by hand, we soon find many places where manual effort alone cannot get the job done. Although some open source testing tools are already available, company-specific business scenarios often require us to extend open source tools or build our own test tools.

Common open source testing tools

  • Dead-link detection tool Xenu home.snafu.de/tilman/xenu… Xenu has been used by individual webmasters to detect dead links on their own websites, and it is also useful in testing
  • SoloPi github.com/alipay/Solo… Alipay's open source wireless, non-intrusive Android automation tool, with recording and playback, performance testing, and other functions
  • AppCrawler github.com/seveniruby/… a monkey-style traversal tool open sourced by Sihan (seveniruby), which can be used to stress test an app
  • httprunner github.com/httprunner/… debugtalk's open source interface testing framework, which can serve as the bottom layer of a company's internal automated testing
  • STF github.com/openstf/stf can be used for multi-device testing (similar to the cloud testing services offered by cloud providers)
  • ....

Python

Python's ease of use has made it the language of choice for most testers, and naturally most of the open source tools in the community are based on Python. Although test development is widely practiced in the software testing industry, its return on investment (ROI) in enterprises is often not ideal, especially in small and medium-sized companies. The reasons include hard-to-maintain automation scripts, large up-front investment, unintuitive automated use cases, ill-fitting automation frameworks, high rates of unstable false positives, and large, frequent front-end UI changes. As a result, automation in many small and medium-sized companies is still stuck at the exploratory demo stage.

My view is: first write some tools that relieve testers of repetitive, low-value work, prefer open source technology wherever an open source solution exists, and only consider building a comprehensive platform when individual tools can no longer improve testing efficiency.

In fact, the so-called test platform is just a toolbox containing the tools used in testing.

Data types

Strings, JSON, lists and tuples, dictionaries, and sets all come up in test scripts. A server IP and port that should not be modifiable through an interface can be stored in a tuple (immutable); use a list when the values must be modifiable. A dictionary {'key': value} can be modified dynamically, and a set {1, 2, 3} is useful for deduplication and relationship tests.

  • Deduplication
  • Relationship Testing: Intersection Union
# Deduplication
list1 = [1, 2, 3, 3]
print(set(list1))

# Intersection and union
set1 = {1, 2, 3, 4}
set2 = {4, 5, 6, 7}
# Intersection
print(set1 & set2)
# Union
print(set1 | set2)
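As a quick illustration of the tuple-versus-dictionary point above (a minimal sketch; the variable names are made up):

server = ('127.0.0.1', 9999)   # tuple: the IP/port pair cannot be modified in place
# server[1] = 8888             # would raise TypeError, tuples are immutable

config = {'host': '127.0.0.1', 'port': 9999}
config['port'] = 8888          # dict: values can be changed dynamically
print(server, config)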

Socket programming technology

Socket practice

one-on-one chat

ip_port = ('127.0.0.1', 9999)
import socket
# Create the socket object
sk = socket.socket()
# Bind the ip and port
sk.bind(ip_port)
# Start listening
sk.listen()
print('------ Server started -------')
# Block and wait for a connection
conn, addr = sk.accept()
print('Client address:', addr)
# Receive data from the client
client_data = conn.recv(1024).decode('utf-8')
print('The client says:', client_data)

# Send data
send_data = input('Please enter: ')
conn.sendall(send_data.encode('utf-8'))
# Close the socket
conn.close()


import socket
# Create the socket object
sk = socket.socket()
# Connect to the server
sk.connect(('127.0.0.1', 9999))

# 3- Send data
send_data = input('Please enter: ')
sk.sendall(send_data.encode('utf-8'))  # must be bytes

# 4- Receive data from the server
server_data = sk.recv(1024).decode('utf-8')
print('The server says:', server_data)

# Close the socket
sk.close()

one-to-many chat

import socketserver
# Inherit from socketserver.BaseRequestHandler
class sqServer(socketserver.BaseRequestHandler):
    def handle(self):
        print('---- Chat server is online ----')
        # Logic
        while True:
            # Receive data
            client_data = self.request.recv(1024)
            print(client_data.decode('utf-8'))
            # if xxx: break
            # Send data
            send_data = input('Please enter >>> ')
            self.request.sendall(send_data.encode('utf-8'))
        self.request.close()

# 2- Create the server
server = socketserver.ThreadingTCPServer(('127.0.0.1', 8888), sqServer)
# 3- Stay online
server.serve_forever()

import socket
# 1- Create the socket object
sk = socket.socket()
# 2- Connect to the server
sk.connect(('127.0.0.1', 8888))

# 3- Send data
while True:
    send_data = '\na: ' + input('Please enter >>> ')
    sk.sendall(send_data.encode('utf-8'))  # must be bytes

    # 4- Receive data from the server
    server_data = sk.recv(1024).decode('utf-8')
    print('Received from the server >>> ', server_data)

# 7- Close the socket (unreachable while the loop runs)
sk.close()

import socket
# 1- Create the socket object
sk = socket.socket()
# 2- Connect to the server
sk.connect(('127.0.0.1', 8888))

# 3- Send data
while True:
    send_data = '\nb: ' + input('Please enter >>> ')
    sk.sendall(send_data.encode('utf-8'))  # must be bytes

    # 4- Receive data from the server
    server_data = sk.recv(1024).decode('utf-8')
    print('Received from the server >>> ', server_data)

# 7- Close the socket (unreachable while the loop runs)
sk.close()




Mobile/UI automation testing platform

What the platform needs to do

Basic management: project management, personnel management, test environment management

Test management: use case management, test execution management, test report management

Test capability integration:

  • Automated testing, automatic traversal testing: functional regression and exploration
  • User experience testing: performance, robustness, weak network, power consumption, security, etc.
  • Compatibility testing: user-side acceptance testing on various devices

Data analysis: test data storage, test data analysis

Technology needed for self-build

  • Frontend: Vue + Bootstrap
  • Backend: Django
  • Test execution: Jenkins
  • Test data storage: MySQL, Elasticsearch, RethinkDB
  • Data analysis: Kibana, ECharts, D3.js (reusable open source technologies)
  • Front- and back-end test case management: pytest, Git
  • Test device management: STF, Sonic

Interface test platform

System architecture: front-end and back-end separation. The front end is developed with HTML, CSS, JS, and Vue, with Nginx as the front-end web server. The back end is developed with Django and hosted by the uWSGI service. Operations use Docker containerization together with a Jenkins pipeline for continuous deployment.

Job responsibilities: responsible for the overall business and database structure design of the test platform; responsible for the back-end API design and development of all platform modules; cooperating with front-end development on the joint debugging between the pages and the back-end interfaces; continuous integration and continuous deployment of the project.

Experience summary: through this project, Django web development was mastered at a deeper level, along with the mechanisms and development skills of front/back-end separation. View development uses techniques such as templates and reflection to implement generic views and reduce the amount of code. In interface test management, the association between use cases and interface APIs, and between test plans and reports, has been completed. The overall business flow works, but the details still need optimization; pre/post-condition functions and parameterization functions will be iterated later. Since the interface follows the RESTful style, Django REST framework could be used instead for more efficient development.

Operation and maintenance

So-called operation and maintenance means deploying the system on a server and connecting it to the public network. Throughout this process the system must receive external requests reliably and keep working normally. There is usually no single solution or tool for this; each technology stack has its own techniques. Automating operation and maintenance is as difficult as test development.

Django operation and maintenance deployment framework

Overall deployment architecture: Linux + MySQL + Nginx + uWSGI


Unit Testing Overview

What is unit testing

A unit test is a small piece of code written by a developer to verify that a small, well-defined piece of functionality of the code under test is correct. Generally speaking, a unit test judges the behavior of a specific function under a specific condition (or scenario).

What does unit testing do?

The earlier unit testing happens, the more it benefits later integration testing

Who is responsible for unit testing?

The developer is responsible

Points to note in unit testing

A major premise of unit testing is that you must clearly know the expected input and output of the program block you want to test, and then write the cases according to that expectation and the program logic. The expected results here must be written against the logic of the requirements/design, not against the program implementation; otherwise the unit test loses its meaning, since a case designed around a wrong implementation may itself be wrong. Code coverage is also used in automated and manual testing to measure how thorough the testing is, and applying coverage thinking strengthens test case design.

Unit test coverage types

code snippet to test

def demo_method(a, b, x):
    if a > 1 and b == 0:
        x = x / a
    if a == 2 or x > 1:
        x = x + 1
    return x

statement coverage

  • Definition: by designing a certain number of test cases, guarantee that every line of the method under test is executed at least once. The lines of code hit while a test case runs are called covered statements
  • A single test case achieves line coverage: a=2, b=0, x=3. Vulnerability (and -> or): change the and in the first if to or, and this test case still passes line coverage. This is the most basic coverage method, but also the weakest; relying completely on line coverage leaves serious gaps in decision testing.

decision coverage

  • Definition: every outcome of each decision statement is hit at least once while the test cases run

Vulnerability: most decision statements are composed of multiple logical conditions. If only the final result of the whole decision is checked and the result of each individual condition is ignored, part of the test paths will inevitably be missed, e.g. a == 2 or x > 1 ---> a == 2 or x < 1

condition coverage

  • Definition: condition coverage is similar to decision coverage, but decision coverage focuses on the result of the whole decision statement while condition coverage focuses on each individual decision condition

Test case: each condition in if (a > 1 and b == 0) must take both true and false values

  Defect: the number of test cases grows exponentially (2 ** number of conditions)

path coverage

  • Definition: cover all possible execution paths of the code under test

Test cases can use stub technology to help achieve path coverage
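As a sketch of how these coverage levels differ for demo_method above (pytest-style cases; importing from a module named demo is an assumption):

import pytest
from demo import demo_method  # assumed: demo_method saved in demo.py

def test_statement_coverage():
    # a=2, b=0, x=3 executes every statement: x -> 3/2 = 1.5, then a == 2 adds 1
    assert demo_method(2, 0, 3) == 2.5

@pytest.mark.parametrize("a,b,x,expected", [
    (2, 0, 3, 2.5),                   # both ifs true
    (3, 0, 1, pytest.approx(1 / 3)),  # first if true, second false
    (1, 0, 3, 4),                     # first if false, second true (x > 1)
    (1, 1, 1, 1),                     # both ifs false
])
def test_path_coverage(a, b, x, expected):
    assert demo_method(a, b, x) == expected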

Python unit testing frameworks

  • unittest: Python's built-in standard library; its API is compatible with Java's JUnit, .NET's NUnit, and C++'s CppUnit
  • pytest: a rich and flexible testing framework with simple syntax, which can be combined with Allure to generate a polished test report
  • Nose: an extension of unittest that makes Python testing easier
  • Mock: unittest.mock is a library for testing in Python; it has been part of the standard library since Python 3.3

unittest

unittest is Python's built-in unit testing framework. Besides unit testing, it is commonly used in automated testing to organize and execute use cases, and it provides rich assertion (verification) functions; combined with HTMLTestRunner it can generate HTML reports.

unittest writing specification

  • unittest provides test case, test suite, test fixture, and test runner components
  • The test module first imports unittest
  • The test class must inherit from unittest.TestCase
  • Test method names must start with "test_"
  • There are no special requirements for module or class names
  • setUp is used to prepare the test environment and tearDown is used to clean it up
  • If you want to prepare the environment once before all cases execute and clean it up once after they all finish, use setUpClass() and tearDownClass()
  • To leave some methods out of a given run, use @unittest.skip

Various ways to execute: a single use case or all use cases
import unittest
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print("Run once before any test in the class")
    def setUp(self):
        print("Run before every test method")
    def tearDown(self):
        print("Run after every test method")
    def test_first(self):
        print("This is test method 1")
        self.assertEqual(1, 1)
    @unittest.skip("Do not run this test this time")
    def test_second(self):
        print("This is test method 2")
        self.assertEqual(1, '1', "1 is not equal to '1'")
    @classmethod
    def tearDownClass(cls):
        print("Run once after all tests in the class")
if __name__ == '__main__':
    unittest.main()

assert assertions in Python 3 --- official documentation: docs.python.org/3/library/u…

unittest executes test cases: a collection of multiple test cases is a test suite, and multiple test cases are managed through the test suite.

Execution method 1: unittest.main()

Execution method 2: add the cases to a suite container and execute it

suite = unittest.TestSuite()

suite.addTest(TestMethod("test_01"))

suite.addTest(TestMethod("test_02"))

unittest.TextTestRunner().run(suite)

Execution method 3: this usage can test multiple classes at the same time

suite1 = unittest.TestLoader().loadTestsFromTestCase(TestCase1)

suite2 = unittest.TestLoader().loadTestsFromTestCase(TestCase2)

suite = unittest.TestSuite([suite1, suite2])

unittest.TextTestRunner(verbosity=2).run(suite)

Execution method 4: match all .py files starting with test in a given directory, and execute all test cases in those files

test_dir = "./test_case"

discover = unittest.defaultTestLoader.discover(test_dir, pattern="test*.py")

discover can load multiple scripts at once

test_dir is the path of the scripts under test

pattern is the script-name matching rule
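Putting method 4 together as a runnable script (a minimal sketch; it assumes a ./test_case directory of test*.py files):

import unittest

# Discover every test*.py file under ./test_case and run all cases found
test_dir = "./test_case"
discover = unittest.defaultTestLoader.discover(test_dir, pattern="test*.py")

if __name__ == '__main__':
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(discover)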

Test case execution process

  1. Write a good TestCase
  2. TestLoader loads the TestCase into a TestSuite
  3. TextTestRunner runs the TestSuite and saves the results in a TextTestResult. The whole process is integrated in unittest.main. There can be multiple TestCases and multiple TestSuites.

Generating test reports: HTMLTestRunner for Python 2: tungwaiyip.info/software/HT… HTMLTestRunner for Python 3: github.com/huilansame/…




pytest testing framework

pytest framework introduction: pytest is a very mature, full-featured Python test framework that is simple, flexible, and easy to use. It supports parameterized test cases, skip and xfail, and automatic retries on failure; it handles simple unit tests as well as complex functional tests, and can also drive automated testing with selenium/appium and interface automation (pytest + requests). pytest has many third-party plugins and supports custom extensions; good ones such as allure-pytest integrate well with Jenkins. Official documentation: Full pytest documentation — pytest documentation

Third-party libraries: search results on PyPI

pytest simple exercises

# file name: test_simple.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5  # fails on purpose: func(3) is 4, showing pytest's assertion output

pytest .\test_simple.py

pip install -U pytest   (-U means upgrade)
pip install pytest-sugar
pip install pytest-rerunfailures
pip install pytest-xdist
pip install pytest-assume
pip install pytest-html

pip list to view installed packages; pytest -h for help

Identification and execution of test cases

test file

  • test_*.py
  • *_test.py

use case identification

  • Methods named test_* inside Test* classes (test classes cannot have an __init__ method)
  • All test_* functions that are not inside a class

pytest can also execute use cases and methods written with the unittest framework

Terminal execution

pytest / py.test

pytest -v   (verbose, the highest level of detail) print detailed run logs

pytest -v -s filename   (-s also prints console output)

pytest filename.py   run a single pytest module

pytest filename.py::ClassName   run one class inside a module

pytest filename.py::ClassName::method_name   run one method of a class inside a module

pytest -v -k "expression"   run only the cases whose names match the expression (class and/or method names; use not to skip a case)

pytest -m [mark_name]   run the test cases marked with @pytest.mark.[mark_name]

pytest -x filename   stop the whole run at the first failure

pytest --maxfail=[num]   stop the run once the number of failures reaches num

pytest execution: rerun on failure

Scenario: after a failure the test needs to be rerun n times, with a delay added between reruns so they run at an interval of n seconds. Install: pip install pytest-rerunfailures

pytest -v --reruns 3 -s test_class.py

pytest -v --reruns 5 --reruns-delay 1

Running multiple assertions even if one fails

Scenario: when several assertions are written in one method, normally the first failure stops the rest from executing. We want the error to be reported while the remaining assertions still run.

pip install pytest-assume

pytest.assume(1==4)

pytest.assume(2==4)
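A minimal sketch of pytest-assume in a test (requires the plugin installed above; the values are illustrative):

import pytest

def test_multiple_assumptions():
    pytest.assume(1 == 4)   # fails, but execution continues
    pytest.assume(2 == 4)   # also evaluated and reported
    pytest.assume(3 == 3)   # passes; the test as a whole still fails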

Configuring and running the pytest framework in PyCharm

pytest framework structure: import pytest. pytest has setup/teardown hooks similar to unittest's, but more flexible:

  • Module level (setup_module/teardown_module): at the beginning and end of the module, global (highest priority)

  • Function level (setup_function/teardown_function): only for function cases (not in classes)

  • Class level (setup_class/teardown_class): runs only once per class (inside the class)

  • Method level (setup_method/teardown_method): at the beginning and end of each method (in the class)

  • setup/teardown in the class: run before and after each method call, as sketched below
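A small sketch showing where these hooks fire (run with pytest -s to watch the print order; names and messages are illustrative):

def setup_module():
    print("setup_module: once at the start of the module")

def teardown_module():
    print("teardown_module: once at the end of the module")

def setup_function(function):
    print("setup_function: before each function outside a class")

def test_standalone():
    print("test_standalone")

class TestHooks:
    @classmethod
    def setup_class(cls):
        print("setup_class: once for the class")

    def setup_method(self, method):
        print("setup_method: before each method in the class")

    def test_one(self):
        print("test_one")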

pytest-fixture usage scenarios

  • Use case 1 needs to log in first, use case 2 does not need to log in, use case 3 needs to log in

Put @pytest.fixture() in front of the method. Example 1: an application scenario in front-end automation: when test cases are executed, some cases need to log in first and some do not. setup and teardown cannot express this, but a fixture can; the default scope is function.

  1. import pytest
  2. Add @pytest.fixture() to the login function
  3. Pass the login function's name into the test methods that need it, and the login runs first
  4. If it is not passed in, the test method runs directly without logging in
  5. In team testing, public methods can be written in conftest.py (see the sketch after the conftest notes below)

Example 2: application in front-end automation --- conftest

Scenario: When developing together with other test engineers, the public modules should be in different files, and they should be in a place that everyone can access

  1. conftest.py This file is for data sharing, and it can be placed in different locations to play different scope sharing functions
  2. When pytest resolves the parameter login, it first checks whether a fixture with that name exists in the test file, and then looks in conftest.py

step:

Write the login module with @pytest.fixture in conftest.py

Points to note about conftest.py:

The conftest file name cannot be changed

conftest.py and the running use cases should be under the same package, with an __init__.py

There is no need to import conftest.py; pytest finds it automatically

Global configuration and pre-work can be written here; placed under a package, it shares data within that package
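A minimal sketch of this conftest.py pattern (the fixture name login and the messages are illustrative):

# conftest.py -- shared fixtures; pytest discovers this file automatically
import pytest

@pytest.fixture()
def login():
    print("\nlogging in first")
    return "session-token"

# test_demo.py -- in the same package as conftest.py
def test_needs_login(login):   # requests the fixture, so login runs first
    print(f"logged in with {login}")

def test_no_login():           # does not request it, so it runs directly
    print("no login needed")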

Example 3: Applying -yield in front-end automation

Scenes:

The dependencies that must run before a test method are already solved; how do we destroy and clean up data after the test method?

solve

Add the yield keyword in the fixture: the code before yield runs on the first call and its value is returned; the code after yield runs once the test is finished

step

In @pytest.fixture(scope="module")

Add yield in the login method, followed by the destroy/cleanup steps. (A yield fixture does not use return; if you want a return value together with finalization, use request.addfinalizer)
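A minimal sketch of a yield fixture (names and messages are illustrative):

import pytest

@pytest.fixture(scope="module")
def login():
    print("\nlogin: set up once for the module")
    yield "session-token"   # handed to the test; everything below runs at teardown
    print("logout: clean up after the last test in the module")

def test_use_login(login):
    assert login == "session-token"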

Automatic application of fixtures

Scenes

If you do not want any changes in the source test methods, and the fixture should be applied automatically to all of them (whenever no return value is needed), you can choose automatic application

solve

Implemented with autouse=True in the fixture

step

Add @pytest.fixture(autouse=True) to the method

(Alternatively, without autouse, apply it explicitly by adding @pytest.mark.usefixtures("start") to the test method)

From the output we can see that every test method runs the start fixture first
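A minimal autouse sketch (names are illustrative):

import pytest

@pytest.fixture(autouse=True)
def start():
    print("\nstart: runs automatically before every test")

def test_one():
    print("test_one")

def test_two():
    print("test_two")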

fixture passing with parameters

Scenes

Testing is inseparable from data. For the sake of data flexibility, data is generally passed in through parameters

Solution: Fixture passed through fixed parameter request

step

Add @pytest.fixture(params=[1, 2, 3, 'linda']) to the fixture and write request as the fixture function's parameter

import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+5", 7), ("7+5", 30)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected  # the last case fails on purpose: 7+5 != 30

import pytest

test_user_data = ['Toma', 'Jerry']

@pytest.fixture(scope="module")
def login_r(request):
    # receive the parameter passed in by parametrize
    user = request.param
    print(f"\nOpen the home page and prepare to log in, login user: {user}")
    return user

# indirect=True passes each value through the login_r fixture instead of straight to the test
@pytest.mark.parametrize("login_r", test_user_data, indirect=True)
def test_login(login_r):
    a = login_r
    print(f"Return value of login in the test case: {a}")
    assert a != ""

skip

xfail

skip and xfail in mark

skip usage scenario

During debugging you do not want to run this use case

Flag beta features that don't work on some platforms

Executed in some versions, skipped by others

Skip when an external resource is currently unavailable (for example, if test data comes from a database, skip when the database connection does not succeed; otherwise the execution would simply error out)

solve

@pytest.mark.skip skips the test case unconditionally; add a condition with skipif when the test should run only under certain conditions and be skipped otherwise

Xfail scenario

For functionality not yet implemented or bugs not yet fixed. When a test marked as expected-to-fail (pytest.xfail) actually passes, it is an xpass and is reported in the test summary

You want the test to fail due to a condition

solve

@pytest.mark.xfail
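A minimal sketch of skip, skipif, and xfail together (the reasons and the platform condition are illustrative):

import sys
import pytest

@pytest.mark.skip(reason="not running this case in this round")
def test_debug_later():
    assert False

@pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
def test_not_on_windows():
    assert True

@pytest.mark.xfail(reason="bug not fixed yet")
def test_known_bug():
    assert 1 == 2  # reported as xfail rather than a failure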

Use a custom mark to execute only some use cases

Scenes

Execute only the part of the use cases that meets a requirement. A web project can be divided into multiple modules, and you then specify a module name to execute

For app automation, if Android and iOS share one code base, the marking function can mark which cases are Android and which are iOS; just specify the mark name when running

solve

Add @pytest.mark.webtest on the test case method

implement

-s parameter: output the print information of all tests

-m: execute the use cases with the given custom mark

pytest -s test_mark_zi_09.py

pytest -s test_mark_zi_09.py -m webtest

pytest -s test_mark_zi_09.py -m apptest

pytest -s test_mark_zi_09.py -m "not ios"
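A minimal sketch of custom marks (the module, mark, and test names are illustrative):

# test_mark_demo.py
import pytest

@pytest.mark.webtest
def test_web_login():
    print("web case")

@pytest.mark.android
def test_android_login():
    print("android case")

@pytest.mark.ios
def test_ios_login():
    print("ios case")

# Register the marks in pytest.ini so pytest does not warn about unknown marks:
# [pytest]
# markers =
#     webtest: web cases
#     android: android cases
#     ios: ios cases

Then pytest -s test_mark_demo.py -m "not ios" would run the web and android cases only.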

Multi-threaded parallel execution and distributed execution

Scenes

Suppose there are 1000 use cases and each takes 1 minute to execute: one tester needs 1000 minutes. Usually we trade labor cost for time cost and add people to execute together, shortening the time: 10 people executing together need only 100 minutes. This is parallel testing, the distributed scenario

solve

pytest distributed plugin: pytest-xdist, using multiple CPUs or hosts. Prerequisite: use cases are independent and have no ordering; they can be executed randomly and rerun repeatedly without affecting other use cases

Install:

pip3 install pytest-xdist

Run use cases in parallel on multiple CPUs by adding -n; the number is the degree of parallelism: pytest -n 3

Execute together under multiple terminals

pytest-html generate report

Install

pip install pytest-html

pypi.org/project/pyt…

generate html report

pytest -v --html=report.html --self-contained-html

Parameterized use cases: pytest data parameterization

@pytest.mark.parametrize(argnames, argvalues)

argnames: the variables to parameterize; a string (comma separated), list, or tuple

use a string

@pytest.mark.parametrize("a,b", [(10, 20), (10, 30)])
def test_param(self, a, b):
    print(a + b)

use a list

@pytest.mark.parametrize(["a", "b"], [(10, 20), (10, 30)])
def test_param(self, a, b):
    print(a + b)

use a tuple

@pytest.mark.parametrize(("a", "b"), [(10, 20), (10, 30)])
def test_param(self, a, b):
    print(a + b)

class Testdata:
    # @pytest.mark.parametrize("a,b", [(10, 20), (10, 5), (3, 9)])
    # @pytest.mark.parametrize(("a", "b"), [(10, 20), (10, 5), (3, 9)])
    @pytest.mark.parametrize(["a", "b"], [(10, 20), (10, 5), (3, 9)])  # "a,b", ("a","b") and ["a","b"] are interchangeable
    def test_data(self, a, b):
        print(a + b)

argvalues: the parameterized values; a list or list[tuple]

Basic use of yaml

yaml implements list

list:

- 10
- 20
- 30

yaml implements a dictionary

dict:

by: id
locator: name
action: click

yaml for nesting

- by: id
  locator: name
- action: click

yaml for more complex nesting

companies:
  - id: 1
    name: company1
    price: 200w
  - id: 2
    name: company2
    price: 500w

The equivalent inline form: companies: [{id: 1, name: company1, price: 200w}, {id: 2, name: company2, price: 500w}]

load yaml file

yaml.safe_load(open("./data.yaml"))

import pytest
import yaml

class Testdata_yaml:
    @pytest.mark.parametrize(("a", "b"), yaml.safe_load(open("./data.yaml")))
    def test_data_yml(self, a, b):
        print(a + b)

pip install PyYAML

data.yml contents and the run result (shown as screenshots in the original post)
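For the parametrize call above to work, data.yaml needs to hold a list of [a, b] pairs, for example (an assumed sample, not taken from the original screenshots):

- [10, 20]
- [10, 5]
- [3, 9]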




Introduction to Data Driven

Data-driven testing means that changes in data drive the execution of automated tests and ultimately change the test results (an application of parameterization)

Test cases with a large amount of data can store the data in a structured file (yml, JSON) and read it inside the test case

simple exercise

import pytest
import yaml

class TestDemo:
    @pytest.mark.parametrize("env", yaml.safe_load(open("./env.yml")))
    def test_demo(self, env):
        if "test" in env:
            print("This is the test environment")
            print("The IP of the test environment is:", env["test"])
        elif "dev" in env:
            print("This is the development environment")
            print("The IP of the development environment is:", env["dev"])

    def test_yaml(self):
        print(yaml.safe_load(open("./env.yml")))
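An env.yml that would drive the case above might look like this (an assumed sample; the IPs are placeholders):

- test: 127.0.0.1
- dev: 192.168.1.100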

Application Scenario

App, Web, interface automation testing

Data-driven test steps

Data Driven by Test Data

Configuration Data Driven

Test Report Beautification: Allure Introduction

Allure is a lightweight, flexible, multilingual test reporting tool

Multi-platform, luxurious report framework

Can provide detailed test report and test step log for dev/qa

It can also provide high level statistical reports for management

Developed in Java; supports running with pytest, JavaScript, PHP, Ruby, and more

Can be integrated into Jenkins

allure install

Universal installation method for Windows/mac

Releases · allure-framework/allure2 · GitHub

Unzip ---> enter the bin directory ---> run allure.bat

Add the bin directory to the PATH environment variable

Mac can use brew to install

brew install allure

Official website: Allure Framework (qameta.io)

Use allure2 to generate beautiful reports

Install allure-pytest

pip install allure-pytest

run:

Collect results during test execution

pytest [test file] -s -q --alluredir=./result/   (--alluredir specifies the directory where test results are stored)

View test report

After the test completes, view the report online; the default browser opens directly and displays the current report

allure serve ./result/

Generate a report from the results (this starts a Tomcat-like service) and requires two steps: generate the report, then open it

Generate report: allure generate ./result/ -o ./report/ --clean (note: --clean overwrites the existing report path)

Open report: allure open -h 127.0.0.1 -p 8883 ./report/

Common features of Allure

Scenes

Scenario: you want the report to show features, sub-functions or scenarios, and test steps, including attached test information

solve:

@allure.feature, @allure.story, @allure.step, @allure.attach

step:

import allure

Add @allure.feature("feature name") to the function

Add @allure.story("sub-function name") to the sub-function

Add @allure.step("step details") to the step

@allure.attach("specific text information"), additional information is required, which can be data, text, pictures, video web pages

If you only want to run certain features, you can add restriction filtering:

pytest filename --allure-features "shopping cart feature" --allure-stories "add to cart"
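Putting these decorators together (a minimal sketch; the feature/story names echo the filter example above):

import allure

@allure.feature("shopping cart feature")
class TestCart:

    @allure.story("add to cart")
    def test_add_to_cart(self):
        with allure.step("open the home page"):
            pass
        with allure.step("add an item"):
            allure.attach("item id = 42", "added item", allure.attachment_type.TEXT)
        assert True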

Allure features: feature/story

Note the relationship between @allure.feature and @allure.story

feature is equivalent to a function or a large module; it classifies cases under a feature, displayed under Behaviors in the report, and is equivalent to a testsuite

story corresponds to the different scenarios under that function or module; a branch function belongs to the structure under the feature and is displayed inside the feature in the report, equivalent to a testcase

feature and story form a parent-child relationship

Allure features: step

Each step in the test process is generally placed in a specific logic method

Can be placed in the key steps, displayed in the report

In app and web automated testing, it is recommended that each switch to a new page be regarded as a step

usage:

@allure.step() can only be used as a decorator on a class or method

with allure.step(): can be placed inside a test case method, but the code of the test step must be inside the with block

Allure features: issue and testcase links

Associated test cases (you can directly link to the test case address)

Associated bugs

When executing, you need to add a parameter

--allure-link-pattern=issue:www.mytesttracker.com/issue{}

Code:

@allure.issue('140', 'Pytest-flaky test retries shows like steps')
def test_with_issue_link():
    pass

TEST_CASE_LINK = 'github.com/qameta/allu ...'

@allure.testcase(TEST_CASE_LINK, 'Test case title')
def test_with_testcase_link():
    pass

Perform a certain range of tests according to importance level

Scenes

Usually tests include P0, smoke tests, verification tests, and online tests, executed separately according to importance level. For example, the main flow and important modules must be run once before going online.

solve

Mark via pytest.mark

via allure.feature, allure.story

Severity marks can also be attached via allure.severity

Levels: Trivial: trivial, Minor: not important, Normal: a normal problem, Critical: serious, Blocker: blocking

Blocker level: Interruption defect (the client program is unresponsive and cannot perform the next step)

Critical level: critical defect (missing function point)

Minor level: Minor defects (interface errors do not match UI requirements)

 - Trivial level: slight defect (a required field has no prompt, or the prompt is not standardized)

step

Add the decorator above methods, functions, and classes:

@allure.severity(allure.severity_level.TRIVIAL)

At execution time:

pytest -s -v filename --allure-severities normal,critical

Front-end automated testing---screenshot

Scenes

Front-end automated tests often need to attach pictures or HTML, taking a screenshot at the right place and time

solve

allure.attach(body (content), name, attachment_type, extension)

allure.attach('Homepage', 'This is the result of the error page', allure.attachment_type.HTML)

Attach pictures to test reports:

allure.attach.file(source,name,attachment_type,extension):

allure.attach.file("./result/b.png",attachment_type=allure.attachment_type.PNG)



