A Complete Introduction to Interface Automation Testing (2023)

1. What is interface testing

As the name suggests, interface testing means testing the interfaces between systems or components. It mainly verifies data exchange, data transfer, control flow, and the logical dependencies between the two sides. Interface protocols include HTTP, WebService, Dubbo, Thrift, Socket, and others; test types mainly include functional testing, performance testing, stability testing, and security testing.

In the "pyramid" model of layered testing, interface testing belongs to the category of second-tier service integration testing. Compared with UI layer (mainly WEB or APP) automated testing, interface automated testing has greater benefits, is easy to implement, has low maintenance costs, and has a higher input-output ratio. It is the first choice for every company to carry out automated testing.

Below, we take an HTTP interface as an example and walk through the whole interface automation testing process, from requirements analysis and use case design to script development, test execution, and result analysis, providing a complete use case design and test scripts along the way.

2. Basic process

The basic interface function automation testing process is as follows:

Requirements Analysis -> Use Case Design -> Script Development -> Test Execution -> Result Analysis

2.1 Example interface

Interface name: Douban Movie Search

The API address would be blocked if included here, so it is given at the end of the article.

3. Demand Analysis

Requirements analysis means studying documents such as the requirements and design specs. Beyond understanding the requirements themselves, you need to be clear about the internal implementation logic, and at this stage you can already point out unreasonable or missing parts of the requirements and design.

For example, for the Douban movie search interface, my understanding of the requirement is that it should support searching by film title, cast member, and tag, and return the search results in pages.
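As a rough illustration (the concrete values below are examples, not from the original requirement), these search dimensions map onto the request parameters used later in this article: q and tag for searching, start and count for paging:

# The three search dimensions plus pagination, expressed as request parameters
params_by_title = {'q': '大话西游'}    # search by film title
params_by_cast = {'q': '周星驰'}       # search by cast member
params_by_tag = {'tag': '喜剧'}        # search by tag ("comedy")
params_paged = {'tag': '喜剧', 'start': 20, 'count': 20}  # second page of results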

4. Use case design

Use case design builds on this understanding of the interface requirements. Using mind-mapping software such as MindManager or XMind, write up the test case design; the main content covers parameter verification, functional verification, business scenario verification, and security and performance verification. Commonly used design methods include equivalence class partitioning, boundary value analysis, scenario analysis, cause-effect diagrams, and orthogonal arrays.

For the functional testing part of the Douban movie search interface, we design test cases mainly from three angles: parameter verification, functional verification, and business scenario verification. As a concrete example, a boundary-value sketch for the pagination parameter follows.
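For instance, applying boundary value analysis to the count pagination parameter might produce a value set like this (a sketch: the default page size of 20 comes from the result analysis later in the article; the other boundaries are illustrative assumptions):

# Boundary values for the count parameter (illustrative):
# negative, just below zero, zero, minimal, default, and a large value
counts = [-10, -1, 0, 1, 20, 100]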

5. Script development

Based on the test case design above, we use Python with the nose (nosetests) framework to write the automated test scripts. Together they fully cover interface test automation: automatic execution and emailing the test report.

5.1 Related lib installation

The necessary libraries are as follows; install them with pip:

pip install nose
pip install nose-html-reporting
pip install requests

5.2 Interface call

Using the requests library, we can easily write a call to the interface above (for example, searching for q=刘德华 (Andy Lau); sample code below):

# coding=utf-8
import json

import requests

url = 'https://api.douban.com/v2/movie/search'
params = dict(q='刘德华')
r = requests.get(url, params=params)
print('Search Params:\n', json.dumps(params, ensure_ascii=False))
print('Search Response:\n', json.dumps(r.json(), ensure_ascii=False, indent=4))

When writing the actual automated test scripts, we need some encapsulation. In the code below, the Douban movie search call is wrapped in a search method, and the test_q method uses nose's generator-test (yield) mechanism to run every test set in the list qs very conveniently:

# coding=utf-8
import json
from functools import partial

import requests


class test_doubanSearch(object):

    @staticmethod
    def search(params, expectNum=None):
        url = 'https://api.douban.com/v2/movie/search'
        r = requests.get(url, params=params)
        print('Search Params:\n', json.dumps(params, ensure_ascii=False))
        print('Search Response:\n', json.dumps(r.json(), ensure_ascii=False, indent=4))
        # Verify the response (check_response is defined in section 5.3 below)
        check_response.check_result(r.json(), params, expectNum)

    def test_q(self):
        # Check the search condition q: film titles (白夜追凶 "Day and Night",
        # 大话西游 "A Chinese Odyssey"), people (周星驰 Stephen Chow, 张艺谋 Zhang Yimou,
        # 吴孟达 Ng Man-tat, 巩俐 Gong Li, 潘粤明 Pan Yueming), and combinations
        qs = ['白夜追凶', '大话西游', '周星驰', '张艺谋',
              '周星驰,吴孟达', '张艺谋,巩俐', '周星驰,大话西游', '白夜追凶,潘粤明']
        for q in qs:
            params = dict(q=q)
            f = partial(test_doubanSearch.search, params)
            f.description = json.dumps(params, ensure_ascii=False)
            yield (f,)

We follow the test case design and write the automated test scripts for each function in turn.

5.3 Result verification

When testing an interface manually, we judge whether the test passes from the results the interface returns, and automated testing is no different.

For this interface: when searching with q=刘德华 (Andy Lau), we need to check whether the returned results contain Andy Lau as a cast member or in the film title; when searching with tag=喜剧 (comedy), we need to check whether the genre of the returned movies is comedy; and when results are paginated, we need to verify that the number of returned results is correct. The complete result verification code is as follows:

from nose.tools import eq_, ok_


class check_response(object):

    @staticmethod
    def check_result(response, params, expectNum=None):
        # The search results contain fuzzy matches, so to keep things simple
        # we only verify the correctness of the first returned result
        if expectNum is not None:
            # When an expected number of results is given, only check the result count
            eq_(expectNum, len(response['subjects']),
                '{0}!={1}'.format(expectNum, len(response['subjects'])))
        else:
            if not response['subjects']:
                # Empty result: fail directly
                assert False
            else:
                # Non-empty result: check the first entry
                subject = response['subjects'][0]
                # First check the search condition tag
                if params.get('tag'):
                    for word in params['tag'].split(','):
                        genres = subject['genres']
                        ok_(word in genres, 'Check {0} failed!'.format(word))
                # Then check the search condition q: the check passes if the title,
                # any director, or any cast member contains the search word
                elif params.get('q'):
                    for word in params['q'].split(','):
                        title = [subject['title']]
                        casts = [i['name'] for i in subject['casts']]
                        directors = [i['name'] for i in subject['directors']]
                        total = title + casts + directors
                        ok_(any(word.lower() in i.lower() for i in total),
                            'Check {0} failed!'.format(word))

    @staticmethod
    def check_pageSize(response):
        # Check that the number of returned results matches the pagination parameters
        count = response.get('count')
        start = response.get('start')
        total = response.get('total')
        diff = total - start
        if diff >= count:
            expectPageSize = count
        elif count > diff > 0:
            expectPageSize = diff
        else:
            expectPageSize = 0
        eq_(expectPageSize, len(response['subjects']),
            '{0}!={1}'.format(expectPageSize, len(response['subjects'])))
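With check_pageSize in place, a pagination test can follow the same generator pattern as test_q. The sketch below is a method one might add to test_doubanSearch; the boundary values are illustrative assumptions, not from the original script:

    def test_count(self):
        # Check the pagination parameter count (boundary values here are illustrative)
        counts = [-10, -1, 0, 1, 20, 100]
        for count in counts:
            params = dict(tag='喜剧', count=count)

            def case(p=params):
                r = requests.get('https://api.douban.com/v2/movie/search', params=p)
                check_response.check_pageSize(r.json())

            case.description = json.dumps(params, ensure_ascii=False)
            yield (case,)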

5.4 Executing tests

For the test scripts above, we can conveniently run the automated tests with the nosetests command and use the nose-html-reporting plugin to generate an HTML test report.

Run the command as follows:

nosetests -v test_doubanSearch.py:test_doubanSearch --with-html --html-report=TestReport.html
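If you prefer to launch the tests from Python (for example, so the email step in the next section can run in the same script), nose also exposes a run function. A minimal sketch, assuming the nose-html-reporting plugin is installed and registered:

import nose

if __name__ == '__main__':
    # Equivalent to the nosetests command line above
    nose.run(argv=['nosetests', '-v', 'test_doubanSearch.py:test_doubanSearch',
                   '--with-html', '--html-report=TestReport.html'])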

5.5 Send email report

After the tests finish, we can use the smtplib module to send the HTML test report by email. The basic flow is: read the test report -> add the email body and attachment -> connect to the mail server -> send the email -> quit. Sample code below:

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


def send_mail():
    # Read the test report content
    with open(report_file, 'r', encoding='utf-8') as f:
        content = f.read()
    msg = MIMEMultipart('mixed')
    # Add the email body
    msg_html = MIMEText(content, 'html', 'utf-8')
    msg.attach(msg_html)
    # Add the report as an attachment
    msg_attachment = MIMEText(content, 'html', 'utf-8')
    msg_attachment['Content-Disposition'] = 'attachment; filename="{0}"'.format(report_file)
    msg.attach(msg_attachment)
    msg['Subject'] = mail_subject
    msg['From'] = mail_user
    msg['To'] = ';'.join(mail_to)
    try:
        # Connect to the mail server
        s = smtplib.SMTP(mail_host, 25)
        # Log in
        s.login(mail_user, mail_pwd)
        # Send the email
        s.sendmail(mail_user, mail_to, msg.as_string())
        # Quit
        s.quit()
    except Exception as e:
        print('Exception:', e)
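The configuration names used above (report_file, mail_subject, mail_user, mail_pwd, mail_to, mail_host) are assumed to be defined elsewhere in the script; a sketch with placeholder values:

# Placeholder configuration values; substitute your own server and account details
report_file = 'TestReport.html'
mail_subject = 'Interface automation test report'
mail_host = 'smtp.example.com'
mail_user = 'tester@example.com'
mail_pwd = '********'
mail_to = ['dev@example.com']

if __name__ == '__main__':
    send_mail()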

6. Results Analysis

Open the test report generated after the nosetests run completes. It shows that 51 test cases were executed in this run: 50 passed and 1 failed.

For the failed case, the report shows that the input parameters were {"count": -10, "tag": "comedy"}, and the number of returned results did not match our expectation (for a negative count, we expect the interface either to report an error or to fall back to the default page size of 20, but it actually returned 189 results). Time to file a bug with Douban - -
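This boundary case can be reproduced outside the test run with a few lines (a sketch; it reuses check_pageSize from section 5.3):

import requests

r = requests.get('https://api.douban.com/v2/movie/search',
                 params={'tag': '喜剧', 'count': -10})
# Fails: the API ignores the invalid count and returns far more results than expected
check_response.check_pageSize(r.json())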

7. Test report

Finally, the test report is sent by email; a screenshot is shown below:

8. Resource sharing

Finally, to make self-study of software testing easier, I have prepared a 13 GB package of practical learning materials for you, covering all the testing knowledge points.

For friends who want to move up into automated testing, these materials should be the most comprehensive and complete preparation library available. This library also accompanied me through the hardest part of my own journey, and I hope it can help you too! Everything is best done early, especially in this technical industry, where we must keep improving our skills. I hope this helps……
