Test real interview questions (1)

A fund management company tests offline interview questions

Public account: Meng Wuji’s road to test development

The test questions are as follows

Try writing the answers yourself first, then compare with the reference solutions below ~

1. Write a piece of code to square the numbers in the list (no language restrictions)

ListA = [1, 3, 5, 7, 9, 11]

2. Use Python language to write a log decorator

3. What is the difference between process, thread and coroutine?

4. Please draw the working principle of the Selenium framework (Appium is also available)

5. What are the key nodes in the landing automation test project? Please give an example

6. Please draw the Django framework request process (that is, the life cycle of the request). If you can write out what is the function call link? (If you have not used Django, you can draw the framework you have used)

7. What is the function of wsgiref?

8. What middleware does Django have? List 5 methods and application scenarios of middleware?

9. Please briefly describe the difference between the three concepts WSGI / uwsgi / uWSGI. Why do you need nginx when you have uWSGI?

10. Please list several MySQL storage engines. What are their advantages and disadvantages?

11. Please draw the Docker C/S architecture diagram

12. Please use the docker command to operate

​ a) Create a volume named kuma

​ b) Start a container named yapi, execute it in the background, map the 5000 port of the host to the 3000 port in the container, and use the volume created above to mount it to the /data/db directory

The problem-solving reference is as follows

1. Write a piece of code to square the numbers in the list (no language restrictions)

Input: ListA = [1, 3, 5, 7, 9, 11]

Output: [1, 9, 25, 49, 81, 121]

Java code:

// Method 1: loop and collect
import java.util.ArrayList;
import java.util.List;

public class SquareList {

    public static void main(String[] args) {
        List<Integer> listA = new ArrayList<Integer>();
        listA.add(1);
        listA.add(3);
        listA.add(5);
        listA.add(7);
        listA.add(9);
        listA.add(11);

        List<Integer> squaredList = new ArrayList<Integer>();
        for (int num : listA) {
            squaredList.add(num * num);
        }

        System.out.println(squaredList);
    }
}

// Method 2: Java 8 Stream API
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SquareList {

    public static void main(String[] args) {
        List<Integer> listA = Arrays.asList(1, 3, 5, 7, 9, 11);

        List<Integer> squaredList = listA.stream()
                .map(num -> num * num)
                .collect(Collectors.toList());

        System.out.println(squaredList);
    }
}

By using the Stream API introduced in Java 8, the code becomes cleaner. In the code above, we convert listA to a stream, use the map() operation to square each element, and finally collect the results into a new list with collect(). This gives us the squared list, which is then printed.

Go code:

package main

import (
	"fmt"
)

func main() {
	listA := []int{1, 3, 5, 7, 9, 11}
	squaredList := make([]int, len(listA))

	for i, num := range listA {
		squaredList[i] = num * num
	}

	fmt.Println(squaredList)
}

Python code:

# Method 1: list comprehension
listA = [1, 3, 5, 7, 9, 11]
squared_list = [num**2 for num in listA]
print(squared_list)  # [1, 9, 25, 49, 81, 121]

# Method 2: map()
listA = [1, 3, 5, 7, 9, 11]
squared_list = list(map(lambda num: num**2, listA))
print(squared_list)

The map() function takes a function and an iterable as arguments and applies the function to each element of the iterable.

# Method 3: a plain for loop (probably not what the interviewer wants to see, but the easiest to understand)
listA = [1, 3, 5, 7, 9, 11]
squared_list = []
for i in listA:
    squared_list.append(i*i)  # i**2 works too
print(squared_list)

**2 raises a number to the power of 2, i.e. squares it.

2. Use Python language to write a log decorator

Method 1: Simple version

def log_decorator(func):
    def wrapper(*args, **kwargs):
        print("Calling function:", func.__name__)
        print("Arguments:", args, kwargs)
        result = func(*args, **kwargs)
        print("Return value:", result)
        return result
    return wrapper

@log_decorator
def add(a, b):
    return a + b

result = add(3, 5)
print("Final result:", result)  # Final result: 8

Method 2: Slightly less simple version, using the logging module

import logging

logging.basicConfig(level=logging.INFO)

def log_decorator(func):
    def wrapper(*args, **kwargs):
        logger = logging.getLogger()
        logger.info("Calling function: %s", func.__name__)
        logger.info("Arguments: %s %s", args, kwargs)
        result = func(*args, **kwargs)
        logger.info("Return value: %s", result)
        return result
    return wrapper

@log_decorator
def add(a, b):
    return a + b

result = add(3, 5)
print("Final result:", result)
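
One refinement worth mentioning in an interview: wrapping the inner function with functools.wraps preserves the decorated function's metadata (its __name__, docstring, etc.), which the versions above lose. A minimal sketch:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def log_decorator(func):
    @functools.wraps(func)  # preserve the wrapped function's metadata
    def wrapper(*args, **kwargs):
        logger = logging.getLogger(__name__)
        logger.info("Calling %s args=%s kwargs=%s", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logger.info("%s returned %s", func.__name__, result)
        return result
    return wrapper

@log_decorator
def add(a, b):
    return a + b

print(add(3, 5))     # 8
print(add.__name__)  # add  (without functools.wraps this would be "wrapper")
```

Without functools.wraps, introspection tools (and debugging in general) see every decorated function as "wrapper", which is why interviewers often ask about it.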

The code needs little explanation.

More formats can be added to the log; for reference, see the logging section of my previous article: python command line or console or log output with color (qq.com)

3. What is the difference between process, thread and coroutine?

Processes, threads and coroutines are concepts used to implement concurrency and parallelism in computers. The differences between them are as follows:

  1. Process:

    • The process is the basic unit for resource allocation and scheduling by the operating system.
    • Each process has its own independent address space, stack and data segment, and does not share memory with each other.
    • Communication between processes requires inter-process communication (IPC) mechanisms, such as pipes, signals, message queues, etc.
    • Switching between processes is expensive and consumes a lot of resources.
    • Processes are independent of each other, and crashes or exceptions will not affect other processes.
  2. Thread:

    • Threads are independent streams of execution that execute within a process.
    • Threads in the same process share resources, including memory, file handles, etc.
    • The overhead of switching between threads is relatively small and consumes less resources.
    • Threads communicate through shared memory, but thread synchronization and mutual exclusion issues need to be paid attention to.
    • A thread crash or exception may cause the entire process to crash.
  3. Coroutine:

    • A coroutine is a lightweight thread in user mode, also called a micro-thread.
    • The scheduling of the coroutine is controlled by the programmer himself, and the context can be saved and restored during execution through keywords such as yield/yield from.
    • The switching overhead between coroutines is very small, which can efficiently perform asynchronous operations and improve the concurrent performance of the program.
    • Coroutines are suitable for processing IO-intensive tasks, but for computing-intensive tasks, they need to be used in conjunction with multi-threading or multi-processing.

What scenarios are suitable for using processes?

Computation-intensive tasks (such as: large-scale data calculation and processing)

What scenarios are suitable for using threads?

IO-intensive tasks (for example: tasks with many file reads and writes, and many network requests)

What scenarios are suitable for using coroutines?

IO-intensive tasks that require high concurrency (for example, locust uses coroutines for stress testing). In practice, however, real high-concurrency businesses rarely choose the Python language.

In summary, the process is the basic unit of resource allocation and scheduling in the operating system, the thread is an independent execution stream executed within the process, and the coroutine is a lightweight thread in user mode. They are different in terms of resource occupation, switching overhead, and communication methods, and the appropriate concurrent implementation method should be selected according to the specific situation.
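
To make the coroutine idea concrete, here is a minimal asyncio sketch (not tied to any framework mentioned above): two IO-bound tasks run concurrently in a single thread, with the event loop switching between them at each await.

```python
import asyncio

async def fetch(name, delay):
    # await yields control back to the event loop, letting other coroutines run
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # both coroutines run concurrently: total time is ~0.2s, not 0.3s
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

print(asyncio.run(main()))  # ['a done', 'b done']
```

asyncio.gather preserves the order of its arguments, so the results list is deterministic even though the two tasks finish at different times.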

Later I will write several articles specifically about processes, threads, and coroutines.

4. Please draw the working principle of the Selenium framework (Appium is also available)

How selenium works


  • The selenium client (an automated test script written in Python or another language) initializes a service and starts the browser driver (e.g. chromedriver.exe) through WebDriver
  • It sends an HTTP request to the browser driver through RemoteWebDriver. The driver parses the request, opens the browser, and returns a sessionid; every subsequent browser operation must carry this id
  • The browser is opened and bound to a specific port, and the started browser acts as the remote server for WebDriver
  • After the browser is open, every selenium operation (visiting a URL, finding elements, etc.) goes to the remote server through RemoteConnection: the execute method calls the _request method, which sends the request via urllib3
  • The browser performs the corresponding action based on the request content
  • The browser then returns the result of the action to the test script through the browser driver
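
The client/driver traffic described above is plain HTTP with JSON bodies. As an illustration (the payload shapes below follow the W3C WebDriver protocol, not anything specific to this article), creating a session and navigating look roughly like this:

```python
import json

# Hypothetical payloads in the style of the W3C WebDriver protocol.
# POST /session -- create a session; the driver responds with a sessionId
new_session = json.dumps({
    "capabilities": {"alwaysMatch": {"browserName": "chrome"}}
})

# POST /session/<session-id>/url -- navigate; the sessionId is carried in the URL path
navigate = json.dumps({"url": "https://example.com"})

print(new_session)
print(navigate)
```

Every later command (find element, click, quit) is another HTTP request routed by that sessionId, which is why losing the id means losing control of the browser.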

How Appium works

(Diagram of how Appium works — the original image is missing.)

5. What are the key nodes in the landing automation test project? Please give an example

This question is very broad, and there are many factors that need to be considered. It can be explained in combination with your resume and work experience. The following are key points for reference.

1. Filter automated test cases out of the functional test cases

2. Research, practice, and discuss which test cases are feasible to automate

3. Schedule, expectations, and outlook of the automation solution

4. Choose an automated testing framework, or build a suitable one yourself

5. Automated scripting

6. Continuous integration and automated builds

7. Regular maintenance and updates

8. Automation implementation (the most important)

6. Please draw the Django framework request process (that is, the life cycle of the request). If you can write out what is the function call link? (If you have not used Django, you can draw the framework you have used)


  1. User sends request through browser
  2. The request reaches the request middleware, and the middleware preprocesses the request or directly returns the response.
  3. If no response is returned, the urlconf route will be reached and the corresponding view function will be found.
  4. The view function performs corresponding preprocessing or directly returns the response.
  5. Methods in View can selectively access underlying data through Models
  6. After getting the corresponding data, return to the Django template system. Templates renders the data to the template through filters or tags.
  7. Return the response to the browser and display it to the user

7. What is the function of wsgiref?

wsgiref is a module in the Python standard library that provides a simple and effective implementation of WSGI (Web Server Gateway Interface) server and middleware. It is mainly divided into five modules: simple_server, util, headers, handlers, validate.

wsgiref source code address: https://pypi.python.org/pypi/wsgiref
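
As a minimal sketch of what wsgiref provides: the WSGI app below is exercised with wsgiref.util.setup_testing_defaults, so no real server is needed; the same app could be served over HTTP with wsgiref.simple_server.make_server.

```python
from wsgiref.util import setup_testing_defaults

def simple_app(environ, start_response):
    # A minimal WSGI application: callable(environ, start_response) -> iterable of bytes
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello, WSGI!"]

# Fill in a fake but spec-compliant environ instead of running a real server
environ = {}
setup_testing_defaults(environ)

captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(simple_app(environ, start_response))
print(captured["status"], body)  # 200 OK b'Hello, WSGI!'
```

This is exactly the contract that WSGI (question 9 below) standardizes: the server builds environ, the application calls start_response and returns the body.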

8. What middleware does Django have? List 5 methods and application scenarios of middleware?

Django provides a lot of built-in middleware for handling requests and responses. The following are 5 commonly used middleware and their application scenarios:

  1. SessionMiddleware: Middleware that handles session state. It supports session management by adding a session object during request handling. Application scenarios include user authentication and user status tracking functions.

  2. AuthenticationMiddleware: Middleware that handles user authentication. It is responsible for checking the user's authentication status during each request processing and adding the user's authentication information to the request object. Application scenarios include user login, permission control and authentication.

  3. CsrfViewMiddleware: Middleware that handles cross-site request forgery (CSRF) protection. It automatically generates a CSRF token for each POST request and verifies the token's validity when submitting the form. Application scenarios include protecting form submissions from CSRF attacks.

  4. GZipMiddleware: Middleware for handling compressed responses. It GZip-compresses the content before sending the response, thereby reducing the size of the data transfer. Application scenarios include improving website performance and reducing bandwidth consumption.

  5. LocaleMiddleware: Middleware that handles multi-language support. It does this by setting the appropriate locale based on the language preference provided with the request and applying it to the request's response. Application scenarios include multilingual websites and internationalized applications.

These middlewares provide a series of commonly used functions and handlers that can be easily integrated into Django applications, simplifying the work of developers. According to specific requirements, these middleware can be enabled and configured as needed to implement different functions and processing logic.
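
Custom middleware follows the same pattern as the built-ins above. The sketch below mimics Django's callable-middleware style without importing Django; the dict-based request/response stand-ins are hypothetical, purely for illustration.

```python
import time

class TimingMiddleware:
    """Django-style middleware: runs code before and after the view."""
    def __init__(self, get_response):
        self.get_response = get_response  # the next middleware, or the view itself

    def __call__(self, request):
        start = time.monotonic()
        response = self.get_response(request)  # hand the request down the chain
        # annotate the response on the way back up
        response["X-Elapsed"] = f"{time.monotonic() - start:.4f}s"
        return response

# Hypothetical stand-ins so the sketch runs without Django
def view(request):
    return {"body": f"hello {request['user']}"}  # a dict playing the role of a Response

handler = TimingMiddleware(view)
response = handler({"user": "kuma"})
print(response["body"])          # hello kuma
print("X-Elapsed" in response)   # True
```

Stacking middleware is just nesting these callables, which is why request processing runs top-down and response processing runs bottom-up in Django's middleware ordering.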

9. Please briefly describe the difference between the three concepts WSGI / uwsgi / uWSGI. Why do you need nginx when you have uWSGI?

WSGI

WSGI (Web Server Gateway Interface): WSGI is a widely accepted and used standard interface between Python web applications and servers. It defines the rules of communication between the Web server and the Web application, enabling the server to understand and interact with the application. The WSGI specification allows developers to write web applications in a uniform way without worrying about server-specific details.

In short, it is a protocol that describes how web servers (such as nginx, uWSGI and other servers) communicate with web applications (such as programs written in Django and Flask frameworks) .

uwsgi protocol

It is the uWSGI server's own protocol, a wire protocol rather than a communication protocol, used to define the type of information being transmitted: the first 4 bytes of each uwsgi packet describe the type of the information that follows. It is used to communicate with proxy servers such as nginx, and is an entirely different thing from WSGI.

uWSGI

It is a web server that implements both the uwsgi and WSGI protocols.

Why do we need nginx when we have uWSGI?

This is because the roles of Nginx and uWSGI are different. Nginx mainly serves as a front-end server, reverse proxy and load balancer. It can handle static resources and a large number of concurrent connections, and forward requests to the back-end uWSGI process to handle dynamic requests. uWSGI focuses on processing requests from Web applications. It supports the WSGI protocol and is responsible for parsing and executing application code. Therefore, through the combination of Nginx and uWSGI, the performance, reliability and security of the system can be improved, and better load balancing and higher concurrent processing capabilities can be achieved.

10. Please list several MySQL storage engines. What are their advantages and disadvantages?

MySQL provides a variety of storage engines, each with its own unique features and applicable scenarios. Here are some common MySQL storage engines and their pros and cons:

  1. InnoDB:

    • Advantages: Support transaction processing and foreign key constraints, provide high concurrency performance and data integrity. It has row-level locking and multi-version concurrency control (MVCC) support, and is suitable for high concurrent writing and a large number of mixed reading and writing scenarios.
    • Disadvantages: Compared to other storage engines, InnoDB's storage and reading speeds are relatively slow. Because it supports transactions and ACID properties, it requires more disk space.
  2. MyISAM:

    • Advantages: It has high reading performance and is suitable for a large number of read-only operations and full-text search. Storing and indexing data is very compact and takes up less disk space.
    • Disadvantages: Transactions and foreign key constraints are not supported. It does not have row-level locking and only supports table-level locking, so the performance is poor in concurrent writing scenarios. It is prone to table damage and does not have the ability to recover from failures.
  3. Memory:

    • Pros: Data is stored entirely in memory, very fast to read and write. Suitable for scenarios such as cached tables, temporary tables, and high-speed data capture.
    • Disadvantages: Can only be stored in memory, power outage or restart will cause data loss. Does not support transaction processing and is not suitable for long-term storage.
  4. Archive:

    • Advantages: It is suitable for archiving and historical data storage, with very high storage and compression efficiency, and takes up little disk space. It is suitable for scenarios where data is sparsely inserted and updated infrequently.
    • Disadvantages: Indexing and transaction processing are not supported. It can only perform append operations, and is not suitable for regular query and update operations.
  5. NDB Cluster:

    • Advantages: Suitable for high-availability and high-capacity distributed systems, supporting data sharding and automatic failure recovery. It has transaction processing and ACID characteristics and is suitable for high concurrent reading and writing and real-time application scenarios.
    • Disadvantages: Relatively complex, requiring special configuration and management, and high hardware requirements. Not suitable for stand-alone and small-scale applications.

The selection of these storage engines should be determined based on actual needs and application scenarios, weighing the advantages and disadvantages of each storage engine, and determining the most suitable storage engine based on the read and write requirements, data consistency and availability requirements of the specific scenario.

11. Please draw the Docker C/S architecture diagram

(Docker C/S architecture diagram — the original image is missing.)

In the Docker C/S architecture, there are the following key components:

  1. Docker Host: It is a physical or virtual machine running the Docker engine. The Docker host is responsible for managing the creation, running, and destruction of containers, as well as resource management and isolation of containers.
  2. Docker Engine: It is the core component of Docker, responsible for receiving and processing commands from the Docker client, and performing operations such as creating, running, and stopping containers. The Docker engine consists of the Docker daemon (Docker Daemon) and the Docker REST API.
  3. Docker Client: It is the user interface that communicates with the Docker engine. You can use command-line tools (such as the docker command) or graphical tools to interact with the Docker engine, sending it commands to create, run, and manage containers.
  4. Docker Image: It is the basis of containers, a template used to create them. A Docker image contains a complete file system with all the files and configuration required to run. You can download an existing image from Docker Hub or a private image repository, or build your own image using a Dockerfile.
  5. Docker Registry: It is a central repository for storing and sharing Docker images. Docker Hub is the default public repository, providing a large number of official and community-maintained images for users to use. Users can also deploy private Docker repositories to save and manage their own images.

12. Please use the docker command to operate

a) Create a volume named kuma

docker volume create kuma

b) Start a container named yapi, execute it in the background, map the 5000 port of the host to the 3000 port in the container, and use the volume created above to mount it to the /data/db directory

docker run -d --name yapi -p 5000:3000 -v kuma:/data/db <yapi_image_name>

<yapi_image_name> needs to be replaced with the actual yapi image name.

This mounts the volume kuma created in step (a) to the /data/db directory inside the container, and maps host port 5000 to container port 3000. The container is named yapi and runs in background mode.

Some of the references in this article are as follows:

https://blog.csdn.net/baidu_36943075/article/details/107671011
https://www.cnblogs.com/jiangchunsheng/p/8986532.html
https://www.cnblogs.com/MrYuChen-Blog/p/15571639.html
https://www.cnblogs.com/sunsky303/p/8274586.html
https://blog.csdn.net/weixin_45455015/article/details/100113330


Origin blog.csdn.net/qq_46158060/article/details/132430219