[Automated testing] UI automation with Pytest + Appium + Allure

This article records some workflows and lessons learned from doing UI automation with Pytest + Allure + Appium.

Main tools used:

Python3
Appium
Allure-pytest
Pytest
Uncommon but handy Appium methods
Executing adb shell commands directly through Appium
# Start Appium with the --relaxed-security flag so that it can execute adb shell-like commands
> appium -p 4723 --relaxed-security

# Usage
def adb_shell(self, command, args, includeStderr=False):
    """
    Requires Appium to be started with --relaxed-security.
    Example: adb_shell('ps', ['|', 'grep', 'android'])

    :param command: the shell command to execute
    :param args: list of command arguments
    :param includeStderr: if True, stderr output is included in the result
    :return: stdout of the command
    """
    result = self.driver.execute_script('mobile: shell', {
        'command': command,
        'args': args,
        'includeStderr': includeStderr,
        'timeout': 5000
    })
    return result['stdout']
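A quick usage sketch, assuming the helper above lives on a class that holds the Appium driver as self.driver (the dumpsys call is just an illustrative command):

# Usage sketch: read battery state via adb shell (illustrative command)
output = self.adb_shell('dumpsys', ['battery'])
print(output)  # raw stdout of "adb shell dumpsys battery"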

Capturing a screenshot of an element directly with Appium
# Requires: from io import BytesIO; from PIL import Image
element = self.driver.find_element_by_id('cn.xxxxxx:id/login_sign')
pngbyte = element.screenshot_as_png
image_data = BytesIO(pngbyte)
img = Image.open(image_data)
img.save('element.png')
# This captures just the login button area directly, no cropping needed
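If you are attaching evidence to Allure anyway, the same bytes can go straight into the report without touching the disk; a minimal sketch:

# Sketch: attach the element screenshot to the Allure report directly
import allure
allure.attach(pngbyte, 'login button', allure.attachment_type.PNG)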

Fetching device logs directly through Appium
# Calling get_log clears the device's logcat buffer and starts recording afresh
# It is recommended to fetch logs after each test case; this keeps stale output out of the log you save when an error occurs
# Android
logcat = self.driver.get_log('logcat')

# iOS requires libimobiledevice: brew install libimobiledevice
logcat = self.driver.get_log('syslog')

# web: get the browser console log
logcat = self.driver.get_log('browser')

c = '\n'.join([i['message'] for i in logcat])
allure.attach(c, 'APPlog', allure.attachment_type.TEXT)
# write the log into the Allure test report
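Following the advice above (drain the log after every test case), the collection can be automated with an autouse fixture; a minimal sketch, assuming the test instance exposes the Appium driver as self.Action.driver as in the sections later in this article:

# Minimal sketch: drain logcat after every test and attach it to Allure
# (assumes request.instance.Action.driver is the Appium driver, as below)
import allure
import pytest

@pytest.fixture(autouse=True)
def attach_applog(request):
    yield  # the test body runs here
    action = getattr(request.instance, 'Action', None)
    if action is not None:
        logs = action.driver.get_log('logcat')
        allure.attach('\n'.join(i['message'] for i in logs),
                      'APPlog', allure.attachment_type.TEXT)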

Transferring files to and from the device with Appium
# Send a file
# Android
driver.push_file('/sdcard/element.png', source_path=r'D:\works\element.png')

# Pull a file from the phone (returned as base64; requires import base64)
png = driver.pull_file('/sdcard/element.png')
with open('element.png', 'wb') as png1:
png1.write(base64.b64decode(png))

# Pull a folder from the phone; the result is a base64-encoded zip archive
folder = driver.pull_folder('/sdcard/test')
with open('test.zip', 'wb') as folder1:
folder1.write(base64.b64decode(folder))
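Since pull_folder returns a base64-encoded zip archive, the standard zipfile module can inspect it in memory before (or instead of) writing it out; a small sketch:

# Sketch: list the pulled folder's contents without writing to disk
import base64
import io
import zipfile

data = base64.b64decode(folder)
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    print(zf.namelist())  # file names inside /sdcard/test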

# iOS
# Requires ifuse
# > brew install ifuse (or > brew cask install osxfuse, or look up an installation method yourself)

driver.push_file('/Documents/xx/element.png', source_path=r'D:\works\element.png')

# Send files to the App sandbox
# On iOS 8.3 and later, the app must enable the UIFileSharingEnabled permission, otherwise an error is raised
bundleId = 'cn.xxx.xxx'  # the app's bundle ID
driver.push_file('@{bundleId}/Documents/xx/element.png'.format(bundleId=bundleId), source_path=r'D:\works\element.png')

Differences between Pytest and unittest initialization
Many people have used unittest, so here are some differences between the hook methods of pytest and unittest.

1. Pytest's setup/teardown hooks are similar to unittest's, with some differences; the following is the Pytest style
class TestExample:
    def setup(self):
        print("setup class:TestStuff")

    def teardown(self):
        print("teardown class:TestStuff")

    def setup_class(cls):
        print("setup_class class:%s" % cls.__name__)

    def teardown_class(cls):
        print("teardown_class class:%s" % cls.__name__)

    def setup_method(self, method):
        print("setup_method method:%s" % method.__name__)

    def teardown_method(self, method):
        print("teardown_method method:%s" % method.__name__)

2. Using pytest.fixture()
@pytest.fixture()
def driver_setup(request):
    request.instance.Action = DriverClient().init_driver('android')
    def driver_teardown():
        request.instance.Action.quit()
    request.addfinalizer(driver_teardown)
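The same fixture reads a bit more idiomatically with yield, which pytest treats as the setup/teardown boundary; an equivalent sketch:

# Equivalent yield-style fixture: code after yield is the teardown
@pytest.fixture()
def driver_setup(request):
    request.instance.Action = DriverClient().init_driver('android')
    yield
    request.instance.Action.quit()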

Initializing the driver instance
1. Calling it from setup_class
class Singleton(object):
    """Singleton holder for the driver.
    ElementActions is the author's own wrapper class for element operations."""
    Action = None

    def __new__(cls, *args, **kw):
        if not hasattr(cls, '_instance'):
            desired_caps = {}
            host = "http://localhost:4723/wd/hub"
            driver = webdriver.Remote(host, desired_caps)
            Action = ElementActions(driver, desired_caps)
            orig = super(Singleton, cls)
            cls._instance = orig.__new__(cls, *args, **kw)
            cls._instance.Action = Action
        return cls._instance

class DriverClient(Singleton):
    pass

Called in the test case

class TestExample:
    def setup_class(cls):
        cls.Action = DriverClient().Action

    def teardown_class(cls):
        cls.Action.clear()

    def test_demo(self):
        self.Action.driver.launch_app()
        self.Action.set_text('123')

2. Calling it through pytest.fixture()
class DriverClient():

    def init_driver(self, device_name):
        desired_caps = {}
        host = "http://localhost:4723/wd/hub"
        driver = webdriver.Remote(host, desired_caps)
        Action = ElementActions(driver, desired_caps)
        return Action

# This function needs to live in conftest.py; pytest picks it up automatically
@pytest.fixture()
def driver_setup(request):
    request.instance.Action = DriverClient().init_driver('android')
    def driver_teardown():
        request.instance.Action.clear()
    request.addfinalizer(driver_teardown)

Called in the test case

# The decorator injects the driver_setup fixture into the class
@pytest.mark.usefixtures('driver_setup')
class TestExample:

    def test_demo(self):
        self.Action.driver.launch_app()
        self.Action.set_text('123')

Pytest parameterization
1. The parametrize decorator
@pytest.mark.parametrize('keywords', [u"Xiaoming", u"Xiaohong", u"Xiaobai"])
def test_keywords(self, keywords):
    print(keywords)

# multiple parameters
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),
("6*9", 42),
])
def test_eval(test_input, expected):
assert eval(test_input) == expected
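Optional ids give each parameter set a readable name, which also makes report entries easier to scan; a small sketch based on the case above (the id strings are arbitrary):

# Sketch: custom ids replace the auto-generated parameter ids in test names
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
], ids=["three-plus-five", "two-plus-four"])
def test_eval_ids(test_input, expected):
    assert eval(test_input) == expected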

2. Batch parameterization via a pytest hook
# conftest.py
def pytest_generate_tests(metafunc):
    """
    Use this hook to add parameters to test cases.
    metafunc.cls.params corresponds to the params attribute on the test class.
    """
    try:
        if metafunc.cls.params and metafunc.function.__name__ in metafunc.cls.params:  # matches TestClass.params
            funcarglist = metafunc.cls.params[metafunc.function.__name__]
            argnames = list(funcarglist[0])
            metafunc.parametrize(argnames, [[funcargs[name] for name in argnames] for funcargs in funcarglist])
    except AttributeError:
        pass

# test_demo.py
class TestClass:
    """
    params corresponds to metafunc.cls.params in the hook above
    """
    # params = Parameterize('TestClass.yaml').getdata()

    params = {
        'test_a': [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}],
        'test_b': [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}],
    }

    def test_a(self, a, b):
        assert a == b

    def test_b(self, a, b):
        assert a == b
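The commented-out Parameterize('TestClass.yaml').getdata() line suggests the params dict was loaded from YAML; a minimal sketch of what such a loader could look like with PyYAML (the file layout below is an assumption):

# Minimal loader sketch (requires PyYAML; the YAML layout is assumed)
# TestClass.yaml:
#   test_a:
#     - {a: 1, b: 2}
#     - {a: 1, b: 2}
import yaml

def load_params(path):
    with open(path, encoding='utf-8') as f:
        return yaml.safe_load(f)  # yields a dict shaped like params above

# params = load_params('TestClass.yaml')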

Pytest test-case dependencies
Use the pytest-dependency plugin to declare dependencies between tests.
When an upstream test fails, the tests that depend on it are skipped automatically, and the filtering works across classes.
If you need dependencies across .py files, you have to patch site-packages/pytest_dependency.py, changing the default scope as shown below:

class DependencyManager(object):
    """Dependency manager, stores the results of tests.
    """

    ScopeCls = {'module': pytest.Module, 'session': pytest.Session}

    @classmethod
    def getManager(cls, item, scope='session'):  # change the default scope to 'session' here
        # ... (rest of the method unchanged)

> pip install pytest-dependency

class TestExample(object):

    @pytest.mark.dependency()
    def test_a(self):
        assert False

    @pytest.mark.dependency()
    def test_b(self):
        assert False

    @pytest.mark.dependency(depends=["TestExample::test_a"])
    def test_c(self):
        # If TestExample::test_a fails, this test is skipped
        # Filtering works across classes
        print("Hello I am in test_c")

    @pytest.mark.dependency(depends=["TestExample::test_a", "TestExample::test_b"])
    def test_d(self):
        print("Hello I am in test_d")

pytest -v test_demo.py
2 failed
- test_1.py:6 TestExample.test_a
- test_1.py:10 TestExample.test_b
2 skipped

Filtering test cases with custom Pytest marks
1. Use @pytest.mark to mark classes or functions so they can be filtered when running tests
@pytest.mark.webtest
def test_webtest():
    pass


@pytest.mark.apitest
class TestExample(object):
    def test_a(self):
        pass

    @pytest.mark.httptest
    def test_b(self):
        pass

Run only the tests marked webtest

pytest -v -m webtest

Results (0.03s):
1 passed
2 deselected

Run tests with multiple marks

pytest -v -m "webtest or apitest"

Results (0.05s):
3 passed

Run only the tests not marked webtest

pytest -v -m "not webtest"

Results (0.04s):
2 passed
1 deselected

Exclude multiple marks

pytest -v -m "not webtest and not apitest"

Results (0.02s):
3 deselected

2. Selecting tests by node ID
pytest -v Test_example.py::TestClass::test_a
pytest -v Test_example.py::TestClass
pytest -v Test_example.py Test_example2.py
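Besides node IDs, the -k option selects tests by a keyword expression matched against test names:

pytest -v -k "api and not http"
pytest -v -k TestClass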

3. Batch-marking test cases with a pytest hook
# conftest.py

def pytest_collection_modifyitems(items):
    """
    Inspect each collected test's name and add a mark when it contains a given substring
    """
    for item in items:
        if "http" in item.nodeid:
            item.add_marker(pytest.mark.http)
        elif "api" in item.nodeid:
            item.add_marker(pytest.mark.api)

class TestExample(object):
    def test_api_1(self):
        pass

    def test_api_2(self):
        pass

    def test_http_1(self):
        pass

    def test_http_2(self):
        pass

    def test_demo(self):
        pass

Run only the tests marked api

pytest -v -m api
Results (0.03s):
2 passed
3 deselected
As you can see, after batch marking only the test methods whose names contain api were executed.

Capturing screenshots, app logs, etc. on test failure
1. Using a Python decorator

# Requires: import datetime, allure; from functools import wraps
def monitorapp(function):
    """
    Test-case decorator: takes a screenshot, saves logs, decides whether to skip, etc.
    Fetches the system log: logcat on Android, syslog on iOS
    """

    @wraps(function)
    def wrapper(self, *args, **kwargs):
        try:
            allure.dynamic.description('Test start time: {}'.format(datetime.datetime.now()))
            function(self, *args, **kwargs)
            self.Action.driver.get_log('logcat')
        except Exception as E:
            f = self.Action.driver.get_screenshot_as_png()
            allure.attach(f, 'Failure screenshot', allure.attachment_type.PNG)
            logcat = self.Action.driver.get_log('logcat')
            c = '\n'.join([i['message'] for i in logcat])
            allure.attach(c, 'APPlog', allure.attachment_type.TEXT)
            raise E
        finally:
            if self.Action.get_app_pid() != self.Action.Apppid:
                raise Exception('The app process ID changed; a crash may have occurred')
    return wrapper

2. Using a pytest hook (choose either this or the decorator)

# conftest.py
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    Action = DriverClient().Action
    outcome = yield
    rep = outcome.get_result()
    if rep.when == "call" and rep.failed:
        f = Action.driver.get_screenshot_as_png()
        allure.attach(f, 'Failure screenshot', allure.attachment_type.PNG)
        logcat = Action.driver.get_log('logcat')
        c = '\n'.join([i['message'] for i in logcat])
        allure.attach(c, 'APPlog', allure.attachment_type.TEXT)
        if Action.get_app_pid() != Action.apppid:
            raise Exception('The app process ID changed; a crash may have occurred')

Other useful Pytest hooks
1. Custom command-line options
> pytest -s --all

# content of conftest.py
def pytest_addoption(parser):
    """
    Custom command-line option
    """
    parser.addoption("--all", action="store_true", help="run all combinations")

def pytest_generate_tests(metafunc):
    if 'param' in metafunc.fixturenames:
        if metafunc.config.option.all:  # the custom option is available here
            paramlist = [1, 2, 3]
        else:
            paramlist = [1, 2, 4]
        metafunc.parametrize("param", paramlist)  # parameterize the test case

# How to get custom parameters in test cases?
# content of conftest.py
def pytest_addoption(parser):
    """
    Custom command-line option
    """
    parser.addoption("--cmdopt", action="store", default="type1", help="my option: type1 or type2")


@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")


# test_sample.py
def test_sample(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 1

> pytest -q --cmdopt=type2
second
.
1 passed in 0.09 seconds

2. Filtering the directories pytest collects
# Filter out folders or file names that pytest should not execute
def pytest_ignore_collect(path, config):
    if 'logcat' in path.dirname:
        return True  # returning True means the path will not be collected
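The hook is not the only option: pytest also reads a module-level collect_ignore list from conftest.py, and --ignore does the same thing from the command line:

# conftest.py alternative: collection ignore lists
collect_ignore = ["logcat"]
collect_ignore_glob = ["*_backup.py"]

> pytest --ignore=logcat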

Other common Pytest features
Test-case ordering (e.g. making login run first)
> pip install pytest-ordering

@pytest.mark.run(order=1)
class TestExample:
    def test_a(self):
        ...

Retrying failed test cases
# Built-in options
pytest -s test_demo.py
pytest -s --lf test_demo.py  # on the second run, only the failed tests are executed
pytest -s --ff test_demo.py  # on the second run, all tests are executed, but the previously failed ones run first

# Third-party plugin
pip install pytest-rerunfailures
pytest --reruns 2  # retry each failed test up to 2 times

Other common Pytest options
pytest --maxfail=10  # stop the run after 10 failures
pytest -x test_demo.py  # stop at the first failure
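Since all the screenshots and logs above are attached to Allure, the report itself is produced in two steps: write raw results during the run with --alluredir (provided by the allure-pytest plugin), then render them with the Allure command-line tool:

> pytest --alluredir=./allure-results
> allure serve ./allure-results
# or build a static report: allure generate ./allure-results -o ./report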

