What is interface testing and how do we implement interface testing?

1. What is interface testing?

As the name suggests, interface testing tests the interfaces between systems or components, mainly verifying the exchange and transfer of data, control and management processes, and the logical dependencies between the two sides. Common interface protocols include HTTP, WebService, Dubbo, Thrift, Socket, and others. The main test types are functional testing, performance testing, stability testing, security testing, etc.

In the "pyramid" model of layered testing, interface testing belongs to the second layer of service integration testing. Compared with UI layer (mainly WEB or APP) automated testing, interface automated testing has greater benefits, is easy to implement, has low maintenance costs, and has a higher input-output ratio. It is the first choice for every company to carry out automated testing.

Below, we take an HTTP interface as an example and walk through the whole interface automation testing process, from requirements analysis to use case design, and from script development and test execution to result analysis, providing a complete use case design and test scripts.

2. Basic process

The basic process of automated interface functional testing is as follows:

Requirements analysis -> Use case design -> Script development -> Test execution -> Result analysis

2.1 Sample interface

Interface name: Douban movie search

Interface document address: https://developers.douban.com/wiki/?title=movie_v2#search

Interface calling example:

1) Search by cast or crew: https://api.douban.com/v2/movie/search?q=张艺谋 (Zhang Yimou)

2) Search by movie title: https://api.douban.com/v2/movie/search?q=大话西游 (A Chinese Odyssey)

3) Search by genre tag: https://api.douban.com/v2/movie/search?tag=喜剧 (comedy)

3. Requirements analysis

Requirements analysis means studying documents such as the requirements and design specifications. Beyond understanding the requirements themselves, you also need to understand the internal implementation logic; at this stage you can point out unreasonable or overlooked parts of the requirements and design.

For example, for the Douban movie search interface, my understanding of the requirement is that it should support searching by movie title, cast and crew, and tags, and return the search results in pages.
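Pagination is presumably controlled by start and count query parameters; this is an assumption drawn from the interface document and the page-size check in section 5.3. A quick sketch of a paged call:

# A minimal sketch of a paged search call, assuming start/count parameters
# (inferred from the interface document and the page-size check in 5.3)
import requests

params = {'tag': u'喜剧', 'start': 20, 'count': 10}  # third page of 10 results
r = requests.get('https://api.douban.com/v2/movie/search', params=params)
print(r.json()['total'])  # total number of matches across all pages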

4. Use case design

Use case design means, on the basis of understanding the interface testing requirements, writing up the test case design with mind mapping software such as MindManager or XMind. The main contents include parameter verification, function verification, business scenario verification, security verification, performance verification, etc. Commonly used design methods include equivalence class partitioning, boundary value analysis, scenario analysis, cause-effect graphs, orthogonal arrays, etc.

For the functional testing part of the Douban movie search interface, we mainly focus on three aspects: parameter verification, function verification, and business scenario verification. The designed test cases are sketched below.
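The original mind map is not reproduced here; as an illustration, a few of the designed cases might look like the following, where the concrete values are assumptions based on the test data used later in this article:

# A sampling of the designed cases, written as (description, params, expectation)
# tuples; the concrete values are illustrative assumptions
cases = [
    ('search by movie title', dict(q=u'大话西游'), 'first result matches the title'),
    ('search by cast or crew', dict(q=u'张艺谋'), 'first result names the director or an actor'),
    ('search by tag', dict(tag=u'喜剧'), 'genres of the first result include the tag'),
    ('paging: custom page size', dict(tag=u'喜剧', count=10), 'at most 10 results returned'),
    ('boundary: negative count', dict(tag=u'喜剧', count=-10), 'error or default page size of 20'),
]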

5. Script development

Based on the test case design above, we use Python with the nose framework to write the automated test scripts. Together they implement automated interface testing, automatic execution, and emailing of the test report.

5.1 Installing the required libraries

The required libraries are listed below; you can install them with pip:

pip install nose
pip install nose-html-reporting
pip install requests

5.2 Interface call

Using the requests library, we can easily write a call to the interface above (for example, searching for q=刘德华 (Andy Lau); sample code below):

# coding=utf-8
import json

import requests

url = 'https://api.douban.com/v2/movie/search'
params = dict(q=u'刘德华')
r = requests.get(url, params=params)
print('Search Params:\n', json.dumps(params, ensure_ascii=False))
print('Search Response:\n', json.dumps(r.json(), ensure_ascii=False, indent=4))
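The response is a JSON object. An abridged sketch of its shape, based on the fields used by the verification code in section 5.3 (the values here are placeholders, not real output):

# Abridged shape of the JSON response (illustrative sketch; placeholder values)
response = {
    'count': 20,    # requested page size (defaults to 20)
    'start': 0,     # offset of the first returned result
    'total': 94,    # total number of matches (placeholder value)
    'subjects': [   # the matched movies
        {
            'title': '...',
            'genres': ['...'],
            'casts': [{'name': '...'}],
            'directors': [{'name': '...'}],
        },
    ],
}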

When writing the actual automated test scripts, we need some encapsulation. In the code below, we wrap the Douban movie search interface, and the test_q method uses the test generator (yield) feature provided by nose to conveniently loop through every test set in the list qs:

import json
from functools import partial

import requests


class test_doubanSearch(object):

    @staticmethod
    def search(params, expectNum=None):
        url = 'https://api.douban.com/v2/movie/search'
        r = requests.get(url, params=params)
        print('Search Params:\n', json.dumps(params, ensure_ascii=False))
        print('Search Response:\n', json.dumps(r.json(), ensure_ascii=False, indent=4))

    def test_q(self):
        # Verify the search parameter q
        qs = [u'白夜追凶', u'大话西游', u'周星驰', u'张艺谋', u'周星驰,吴孟达', u'张艺谋,巩俐', u'周星驰,大话西游', u'白夜追凶,潘粤明']
        for q in qs:
            params = dict(q=q)
            f = partial(test_doubanSearch.search, params)
            f.description = json.dumps(params, ensure_ascii=False)
            yield (f,)

Following the test case design, we can write an automated test script for each function in turn; see the sketch below for one example.
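For instance, verification of the tag parameter can reuse the same generator pattern (a sketch; the tag values are illustrative assumptions, and the method sits in the same class as test_q):

    def test_tag(self):
        # Verify the search parameter tag (illustrative test data)
        tags = [u'喜剧', u'动作', u'喜剧,动作']
        for tag in tags:
            params = dict(tag=tag)
            f = partial(test_doubanSearch.search, params)
            f.description = json.dumps(params, ensure_ascii=False)
            yield (f,)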

5.3 Result verification

When testing an interface manually, we judge whether the test passes based on the results the interface returns; automated testing works the same way.

For this interface: when we search for q=刘德华 (Andy Lau), we need to check that the returned results contain 刘德华 in the cast or in the movie title; when we search for tag=喜剧 (comedy), we need to check that the genres of the returned movies include 喜剧; and when paging the results, we need to verify that the number of returned results is correct. The complete result verification code is as follows:

from nose.tools import eq_, ok_


class check_response():

    @staticmethod
    def check_result(response, params, expectNum=None):
        # Since search results may be fuzzy matches, keep it simple and only
        # verify the correctness of the first returned result
        if expectNum is not None:
            # When the expected number of results is given, only check the result count
            eq_(expectNum, len(response['subjects']), '{0}!={1}'.format(expectNum, len(response['subjects'])))
        else:
            if not response['subjects']:
                # Empty result: fail directly
                assert False
            else:
                # Non-empty result: verify the first entry
                subject = response['subjects'][0]
                # First check the search parameter tag
                if params.get('tag'):
                    for word in params['tag'].split(','):
                        genres = subject['genres']
                        ok_(word in genres, 'Check {0} failed!'.format(word))

                # Then check the search parameter q
                elif params.get('q'):
                    # Check whether the title, casts or directors contain the
                    # search word; pass if any one of them does
                    for word in params['q'].split(','):
                        title = [subject['title']]
                        casts = [i['name'] for i in subject['casts']]
                        directors = [i['name'] for i in subject['directors']]
                        total = title + casts + directors
                        ok_(any(word.lower() in i.lower() for i in total),
                            'Check {0} failed!'.format(word))

    @staticmethod
    def check_pageSize(response):
        # Check that the number of results on this page is correct
        count = response.get('count')
        start = response.get('start')
        total = response.get('total')
        diff = total - start

        if diff >= count:
            expectPageSize = count
        elif count > diff > 0:
            expectPageSize = diff
        else:
            expectPageSize = 0

        eq_(expectPageSize, len(response['subjects']), '{0}!={1}'.format(expectPageSize, len(response['subjects'])))
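To wire this verification into the test flow, the search method from section 5.2 can call check_response after each request. A sketch of how the pieces fit together (the full script on GitHub may differ in detail):

    @staticmethod
    def search(params, expectNum=None):
        url = 'https://api.douban.com/v2/movie/search'
        r = requests.get(url, params=params)
        print('Search Params:\n', json.dumps(params, ensure_ascii=False))
        print('Search Response:\n', json.dumps(r.json(), ensure_ascii=False, indent=4))
        # Verify the returned data against the request parameters
        check_response.check_result(r.json(), params, expectNum)
        # Verify the number of results on this page
        check_response.check_pageSize(r.json())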

5.4 Executing tests

With the test script above, we can use the nosetests command to run the automated tests, and the nose-html-reporting plugin to generate an HTML test report.

Run the command as follows:

nosetests -v test_doubanSearch.py:test_doubanSearch --with-html --html-report=TestReport.html

5.5 Sending the test report by email

After the test finishes, we can use the smtplib module to send the HTML test report by email. The basic flow is: read the test report -> add the email body and attachment -> connect to the mail server -> send the email -> quit. Sample code below:

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

# Mail configuration (placeholder values; fill in your own)
report_file = 'TestReport.html'
mail_host = 'smtp.example.com'
mail_user = 'tester@example.com'
mail_pwd = 'password'
mail_to = ['dev@example.com']
mail_subject = 'Interface automation test report'


def send_mail():
    # Read the content of the test report
    with open(report_file, 'r', encoding='utf-8') as f:
        content = f.read()

    msg = MIMEMultipart('mixed')
    # Add the email body
    msg_html = MIMEText(content, 'html', 'utf-8')
    msg.attach(msg_html)

    # Add the attachment
    msg_attachment = MIMEText(content, 'html', 'utf-8')
    msg_attachment["Content-Disposition"] = 'attachment; filename="{0}"'.format(report_file)
    msg.attach(msg_attachment)

    msg['Subject'] = mail_subject
    msg['From'] = mail_user
    msg['To'] = ';'.join(mail_to)
    try:
        # Connect to the mail server
        s = smtplib.SMTP(mail_host, 25)
        # Log in
        s.login(mail_user, mail_pwd)
        # Send the email
        s.sendmail(mail_user, mail_to, msg.as_string())
        # Quit
        s.quit()
    except Exception as e:
        print("Exception:", e)

6. Result analysis

Open the test report generated by the nosetests run. We can see that 51 test cases were executed in this run: 50 passed and 1 failed.

For the failed case, we can see that the parameters passed in were {"count": -10, "tag": "喜剧"}, and the number of returned results does not match our expectation: when count is negative, we expect the interface either to report an error or to fall back to the default page size of 20, but it actually returned 189 results. Time to report a bug to Douban :-)
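For reference, this boundary case can be expressed in the same generator style as the other tests (a sketch; the method name is hypothetical, and the expectNum convention follows the check_result code above):

    def test_count_boundary(self):
        # Boundary case: a negative count should make the interface report an
        # error or fall back to the default page size of 20, so we pass
        # expectNum=20 and let check_result flag any mismatch
        params = dict(tag=u'喜剧', count=-10)
        f = partial(test_doubanSearch.search, params, expectNum=20)
        f.description = json.dumps(params, ensure_ascii=False)
        yield (f,)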

7. Complete script

I have uploaded the complete automated test script for the Douban movie search interface to GitHub. Download address: https://github.com/lovesoo/test_demo/tree/master/test_douban

After the download is complete, use the following command to run the complete interface automation test and send the final test report by email:

python test_doubanSearch.py
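For this one-command entry point to work, the script presumably ends with a main block that invokes nosetests and then sends the report. A sketch under that assumption (the actual script on GitHub may differ):

if __name__ == '__main__':
    import os
    # Run the tests and generate the HTML report, then email it out
    os.system('nosetests -v test_doubanSearch.py:test_doubanSearch --with-html --html-report=TestReport.html')
    send_mail()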

Finally, the test report email is sent. (Screenshot of the report email omitted.)
