Sharing a very unpleasant test-development interview experience

A follow-up to my previous interview post. This one was at the end of last month, a pure KPI interview, and the worst interview experience I have had in social recruiting.

I can't remember whether the business unit worked on drones or autonomous vehicles; it was probably drones.

Since my resume had been fished out of the pool, the interviewer first asked whether I knew this was a testing role and whether I was open to testing work, and then the interview officially began.

The interviewer was an older woman with disheveled hair, eyes that could barely stay open, and a look of bitterness and disdain.

Project-related questions first, with some specific in-depth follow-ups.

MySQL four isolation levels

MySQL provides four isolation levels, namely:

  1. READ UNCOMMITTED: The lowest isolation level; it allows a transaction to read data that another transaction has not yet committed. This can lead to dirty reads (reading uncommitted data) and non-repeatable reads (reading the same row twice and getting different values). This level is rarely used in MySQL.

  2. READ COMMITTED: Allows a transaction to read only data that other transactions have already committed. This avoids dirty reads, but still permits non-repeatable reads and phantom reads (seeing rows newly inserted by another transaction). Note that READ COMMITTED is the default in databases such as Oracle and PostgreSQL, but not in MySQL.

  3. REPEATABLE READ: Guarantees that reading the same data several times within one transaction always returns the same result. It avoids dirty reads and non-repeatable reads; phantom reads are still possible in principle, though InnoDB largely prevents them with next-key locks. This is MySQL's default isolation level.

  4. SERIALIZABLE: The highest isolation level. It avoids dirty reads, non-repeatable reads, and phantom reads by forcing transactions to execute serially: only one transaction can access the data at a time while others wait. This guarantees data integrity but has a large performance cost, so it is not commonly used in MySQL.
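As a hedged illustration, the four levels map onto constants in the JDK's own java.sql API; the sketch below (the class and method names are mine, and the Connection is assumed to come from an existing DataSource, since no real database is opened here) shows how a level would be applied to a JDBC connection:

```java
import java.sql.Connection;

public class IsolationDemo {
    // Map an isolation-level name to the corresponding java.sql.Connection constant.
    public static int levelOf(String name) {
        switch (name) {
            case "READ UNCOMMITTED": return Connection.TRANSACTION_READ_UNCOMMITTED;
            case "READ COMMITTED":   return Connection.TRANSACTION_READ_COMMITTED;
            case "REPEATABLE READ":  return Connection.TRANSACTION_REPEATABLE_READ;
            case "SERIALIZABLE":     return Connection.TRANSACTION_SERIALIZABLE;
            default: throw new IllegalArgumentException(name);
        }
    }

    // Apply an isolation level to an open JDBC connection (conn is assumed
    // to be obtained elsewhere; this method only sets the session level).
    public static void apply(Connection conn, String name) throws Exception {
        conn.setTransactionIsolation(levelOf(name));
    }
}
```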

Say something about MVCC

MVCC (multi-version concurrency control) is a technique for implementing database concurrency control. It addresses problems that can occur under concurrent access, such as lost updates, non-repeatable reads, and phantom reads.

In MVCC, every modification produces a new version of the data instead of overwriting the original. A transaction reading data sees only versions created before the transaction started, not later modifications. This isolates concurrent transactions from one another, so each transaction sees a consistent view of the data while it executes.

MVCC is typically implemented by attaching a version number or timestamp to each row, plus a deletion marker indicating whether the row has been deleted. When a transaction starts, it takes a snapshot recording the visible data versions and deletion markers. If other transactions later modify or delete some rows, those rows are only marked as deleted rather than removed outright, so the current transaction can still read the old versions.

MVCC improves concurrency because multiple transactions can read the database simultaneously without blocking each other. One caveat: with many long-running transactions, a large amount of version data accumulates, increasing storage use and query time, so the isolation level and concurrency-control approach should be chosen to fit the actual workload.
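The versioning idea can be sketched with a toy class. This is an illustration of the principle only, not InnoDB's actual implementation, and all names are invented: each write appends a version stamped with the writing transaction's id, and a reader with snapshot id S sees only the newest version whose id is at most S.

```java
import java.util.ArrayList;
import java.util.List;

public class MvccStore {
    private static class Version {
        final long txnId; final String value;
        Version(long txnId, String value) { this.txnId = txnId; this.value = value; }
    }
    private final List<Version> versions = new ArrayList<>();

    public void write(long txnId, String value) {
        versions.add(new Version(txnId, value)); // append; never overwrite in place
    }

    public String readAt(long snapshotTxnId) {
        String result = null;
        for (Version v : versions) {
            if (v.txnId <= snapshotTxnId) result = v.value; // newest visible version wins
        }
        return result; // null means "no version existed yet at that snapshot"
    }
}
```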

Have you used database and table sharding (分库分表)? Under what circumstances?

Usually, when an application's data volume reaches a certain scale, sharding across databases and tables is used to cope with high concurrency and large data volumes.

Sharding splits one large database into several smaller databases (sub-databases), and splits each large table into several smaller tables (sub-tables), so that data is stored and queried in a distributed way.

The main situations where sharding is used include:

  1. Database performance bottleneck: when the load on a single database reaches its limit, the data can be split across multiple databases to improve concurrent processing capacity.

  2. Database capacity limitation: when a single database cannot store all the data, splitting it across multiple databases increases storage capacity.

  3. Data access hotspots: when one table receives a very large share of accesses, its data can be spread across multiple tables to reduce the load on any single table and improve query performance.

  4. Business requirements: when the business itself calls for multiple databases, for example when user data from different regions must be stored in different databases, sharding makes this possible.

Note that although sharding improves database performance and scalability, it also brings challenges such as distributed transactions, data consistency, data migration, and backup and recovery. When adopting sharding, its impact on the system must be considered carefully, and appropriate sharding strategies and tools chosen.
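A minimal sketch of shard routing, assuming a simple hash scheme with invented database/table names (real systems usually delegate this to sharding middleware rather than hand-rolled code):

```java
public class ShardRouter {
    private final int dbCount, tableCount;

    public ShardRouter(int dbCount, int tableCount) {
        this.dbCount = dbCount;
        this.tableCount = tableCount;
    }

    // Route a user id to an illustrative "db_X.user_Y" location by hashing.
    public String route(long userId) {
        long h = Long.hashCode(userId) & 0x7fffffffL; // non-negative hash
        int db = (int) (h % dbCount);                 // which sub-database
        int table = (int) ((h / dbCount) % tableCount); // which sub-table within it
        return "db_" + db + ".user_" + table;
    }
}
```

Routing is deterministic, so the same user always lands on the same shard; the trade-off is that changing dbCount or tableCount later requires data migration.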

MySQL optimization

MySQL optimization includes the following aspects:

  1. Hardware configuration optimization: including the configuration and optimization of hardware resources such as CPU, memory, and disk, such as increasing CPU cores, expanding memory, and using solid-state hard disks.

  2. Database configuration optimization: including MySQL configuration parameter optimization, such as cache size, number of concurrent connections, thread pool size, etc.

  3. Index optimization: Indexes are an important means to optimize database performance. You can use appropriate indexes to improve query speed and reduce query disk I/O times.

  4. SQL statement optimization: SQL statement is an important means of operating the database. By optimizing the structure and syntax of SQL statements, the amount of query data can be reduced and the query logic can be optimized to improve query performance.

  5. Partitioned tables: a partitioned table is a table structure supported by MySQL that splits a large table into multiple partitions, which can improve the performance of queries and updates.

  6. Database architecture optimization: Database architecture design and optimization are also important means to improve database performance. You can improve query performance and reduce data redundancy by optimizing database design and adjusting data models.

Note that different optimization techniques target different problems and scenarios, and should be chosen based on concrete business requirements and the database's actual bottlenecks. Optimization is also not a one-time task; it requires continuous observation and adjustment as business needs and data volume change.

Redis cache penetration

Redis cache penetration means that requests query data that exists neither in the cache nor in the database, so every such request falls through to the database and increases its load.

Cache penetration usually occurs in the following situations:

  1. Querying data that does not exist, such as a record with a negative ID.

  2. Malicious attacks or illegal access, such as brute-force enumeration of all possible IDs in order to exhaust system resources.

In order to avoid cache penetration, the following methods can be used:

  1. Bloom filter: a Bloom filter is a space-efficient probabilistic data structure for testing whether an element is in a set. All potentially valid keys can be loaded into the filter in advance; when a query arrives, first check the filter. If the key is not in it, return an empty result immediately; otherwise query the cache or database. (A Bloom filter can report false positives but never false negatives, so a "not present" answer is always safe.)

  2. Cache empty values: store a placeholder for non-existent keys in the cache with a short TTL so it expires automatically. The next query for the same missing key then hits the cached empty value instead of going to the database.

  3. Data preheating: When the system starts, load commonly used data into the cache to reduce the pressure on the database during the first query.

  4. Limit access frequency: Limit access frequency can effectively prevent malicious attacks and illegal access.

It should be noted that the above methods are only some common means to avoid cache penetration, but they are not absolute solutions. In practical applications, it is necessary to choose an appropriate method according to the specific situation to ensure the security and stability of the system.
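The "cache empty value" approach can be sketched as follows. All names here are invented, an in-process map stands in for Redis, and the sketch omits the short TTL a production version would set on the placeholder:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class NullCachingLoader {
    private static final String NULL_MARKER = "__NULL__"; // sentinel for "known missing"
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> db; // the backing database lookup
    public int dbHits = 0; // counts how often we fell through to the database

    public NullCachingLoader(Function<String, String> db) { this.db = db; }

    public String get(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return NULL_MARKER.equals(cached) ? null : cached;
        }
        dbHits++; // cache miss: go to the database
        String fromDb = db.apply(key);
        cache.put(key, fromDb == null ? NULL_MARKER : fromDb); // cache misses too
        return fromDb;
    }
}
```

Repeated lookups of a nonexistent key hit the cached marker, so the database is queried only once per missing key (until the placeholder expires).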

Cache avalanche

Cache avalanche means that a large amount of data stored in the cache expires within the same period of time, causing a large number of requests to fall directly to the back-end database, causing database performance problems.

Cache avalanches typically occur in the following situations:

  1. The cache server is down or restarted.

  2. The cache data expiration time setting is unreasonable, causing a large amount of cache data to expire and become invalid at the same time.

  3. The data stored in the cache is concentrated on a certain hot key. When the hot key expires, a large number of requests fall directly to the back-end database.

To avoid cache avalanche, there are several methods that can be used:

  1. Set a random value for the expiration time: You can add a random value to the expiration time of the cached data, so that the expiration time of the cached data can be spread out, preventing a large amount of data from expiring at the same time.

  2. Build a cluster: Build a cache server into a cluster, share requests through multiple cache servers, and reduce the risk of single point failure.

  3. Hot data never expires: setting no expiration time on hot keys keeps them always valid and avoids the avalanche caused by a hot key expiring; the application must then refresh these values itself.

  4. Rate limiting and degradation: for requests beyond the system's capacity, rate limiting or service degradation can be used to protect the system's stability.

It should be noted that the above methods are just some common means to avoid cache avalanche, but they are not absolute solutions. In practical applications, it is necessary to choose an appropriate method according to the specific situation to ensure the security and stability of the system.
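Method 1 above (spreading out expirations) is essentially a one-liner; here is a sketch with invented names, where the TTL is a base value plus random jitter:

```java
import java.util.Random;

public class JitteredTtl {
    // Base TTL plus up to `jitter` extra seconds, so keys written together
    // do not all expire at the same moment.
    public static int ttlSeconds(int base, int jitter, Random rng) {
        return base + rng.nextInt(jitter + 1);
    }
}
```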

Redis persistence mechanisms

Redis provides two persistence mechanisms, RDB and AOF.

RDB

RDB is Redis's snapshot persistence mode: it saves the data at a point in time to a file. Its advantage is low impact on Redis, because (with BGSAVE) the snapshot is written by a forked child process while the main process keeps serving requests. The RDB file is also compact, since it contains only the data as of a single point in time.

RDB persistence can be triggered manually or automatically. The SAVE command forces a synchronous snapshot and blocks the server while it is written; BGSAVE forks a child process and writes the snapshot in the background, so the main thread is not blocked. Automatic snapshots are configured with save rules of the form "save <seconds> <changes>", meaning: take a snapshot if at least that many keys changed within that many seconds.

AOF

AOF (Append Only File) is Redis's other persistence mode. Unlike RDB, AOF saves the log of write operations to a file; replaying that log restores the state of the database. AOF has the following advantages:

  • Operational logs are more detailed, reducing the possibility of data loss.

  • It can ensure that the Redis database will not be damaged as much as possible when it crashes unexpectedly.

Compared with RDB, AOF is more resource-intensive because it needs to write all Redis operations to disk. At the same time, the AOF file is relatively large because it contains all the operation records of the Redis database.

Like RDB, AOF supports manual and automatic triggering of rewrites. The BGREWRITEAOF command forces Redis to rewrite (compact) the AOF file. Automatic rewriting is configured with auto-aof-rewrite-percentage and auto-aof-rewrite-min-size: the former triggers a rewrite when the AOF file has grown by the given percentage relative to its size after the last rewrite (default 100%), and the latter sets the minimum file size below which no automatic rewrite happens.
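For reference, the settings above correspond to redis.conf directives like the following (the specific values are illustrative, not recommendations):

```conf
# RDB: snapshot if at least N changes within M seconds
save 900 1
save 300 10

# AOF
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```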

Sentinel working mechanism

Sentinel is one of Redis's high-availability solutions. It runs as an independent process that monitors the health of the master node and automatically promotes one of the slave nodes to master to keep the system available.

Sentinel works as follows:

  1. Sentinels periodically send PING commands to the master node to check the health of the master node.

  2. If the sentinel does not receive the PONG response from the master node for several consecutive times, it is considered that the master node has been down.

  3. After the master node goes down, Sentinel will select one of all the slave nodes as the new master node and upgrade it to the master node.

  4. Sentinel will post a notification to all clients of the new master node address and port number.

  5. Sentinel will send a configuration command to the new master node, let it configure all slave nodes as its own slave nodes, and update the master node address and port number of all slave nodes.

  6. When the original master node restarts, it will become a new slave node and replicate data synchronously according to the new master node.

Note that a Sentinel itself can also fail, so in practice Sentinels are deployed on different servers to improve availability. Multiple Sentinels also monitor one another, which improves the fault tolerance of the system.
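A minimal sentinel.conf for the mechanism above might look like this ("mymaster" and the address are placeholders, and the timeout values are illustrative):

```conf
# Monitor a master; 2 sentinels must agree before declaring it down (quorum)
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```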

JVM memory partition

JVM memory is divided into the following areas:

1. Program Counter

The program counter is a small area of memory that can be thought of as an indicator of the bytecode line number being executed by the current thread. Each thread has its own independent program counter, so that execution can resume at the correct position after a thread switch. Because the program counter is thread-private, it has no thread-safety issues.

2. Java virtual machine stack

The Java virtual machine stack, also known as the Java method stack, is used to store information such as the local variable table, operand stack, dynamic link, and method exit of each method during execution. Similar to the program counter, the virtual machine stack is also thread-private, and each thread has its own independent virtual machine stack.

3. Native method stack

The native method stack is similar to the virtual machine stack, except that the native method stack is used to store the information of the JNI method (that is, the method implemented in C language or other native languages). The virtual machine specification does not specify the specific implementation of the native method stack, but generally speaking, it is similar to the virtual machine stack.

4. Java heap

The Java heap is the largest memory area in the Java virtual machine, used to store object instances and arrays. The Java heap is shared by all threads, thus creating thread safety issues.

5. Method area

The method area stores class metadata, constant pools, static variables, and code compiled by the just-in-time compiler. In the HotSpot VM it was historically implemented as the "permanent generation"; in JDK 8 the permanent generation was removed and replaced by Metaspace, which is allocated in native memory.

6. Runtime constant pool

The runtime constant pool is used to store various literals and symbol references generated during compilation, and it is part of the method area. Similar to the Java heap, the runtime constant pool is also shared by threads.

Garbage collection mechanism

The Java garbage collection mechanism is one of the important features of the Java language, which improves the reliability and maintainability of programs by automatically managing memory. The Java garbage collection mechanism mainly includes the following aspects:

  1. Object creation: Java programs will continuously create new objects during runtime, and these objects are allocated in the heap memory of the Java virtual machine.

  2. Reachability analysis: the JVM determines which objects are garbage by tracing reachability from GC roots (such as local variables on thread stacks and static fields). It does not rely on simple reference counting, which cannot reclaim objects that reference each other in a cycle.

  3. Garbage collector: The Java virtual machine has a variety of built-in garbage collectors, which are used to periodically recycle objects that are no longer referenced and release occupied memory.

  4. Mark-and-sweep algorithm: the collector first marks all live objects reachable from the roots, then sweeps (frees) all unmarked objects. A related copying algorithm instead divides a region into two halves and copies live objects from one half to the other, leaving the garbage behind.

  5. Generational collection: the JVM divides the heap into a young generation, for newly created objects, and an old generation, for objects that have survived long enough. Collecting objects according to their age improves garbage-collection efficiency, because most objects die young.

  6. Triggering of garbage collection: The Java virtual machine triggers garbage collection according to the usage of heap memory. When the heap memory usage exceeds a certain threshold, the garbage collector will be started to recycle objects that are no longer referenced.

Note that although garbage collection relieves Java programmers of manual memory management, overly frequent collections hurt program performance; GC strategy and parameters can be tuned to improve throughput and response time.

OOM

OOM is the abbreviation of OutOfMemoryError, which means out of memory error. An OOM error is thrown when the Java virtual machine can no longer allocate memory. This condition is usually caused by:

  1. Memory Leak: There are a large number of objects in the program that are not released in time, resulting in high memory usage.

  2. Memory Overflow (Memory Overflow): The program uses too much memory space, exceeding the maximum memory limit set by the JVM.

  3. Excessive creation of large objects: the application frequently creates large objects, exhausting memory quickly.

  4. Too many concurrent threads: Too many threads are enabled in the application, causing the JVM internal thread stack to occupy too much memory.

When an OOM error occurs, we can solve the problem through the following steps:

  1. Find memory leaks: Use memory analysis tools to check which objects in the program have not been released, and make corresponding adjustments.

  2. Increase the JVM memory limit: If the application really needs a lot of memory space, you can increase the JVM memory limit by modifying the JVM startup parameters.

  3. Optimize the program: Avoid creating large objects frequently as much as possible, and when writing code, pay attention to memory usage. The number of concurrent threads should also be properly controlled.

  4. Optimize code: When using collection classes, pay attention to delete useless elements in time, avoid using static variables as much as possible, and reduce memory overhead.

In short, in application development, avoiding OOM errors requires us to have some basic knowledge of memory management, and to adopt corresponding solutions for different situations.
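Step 2 above (raising the JVM memory limit), plus capturing a heap dump for the leak analysis in step 1, are set via JVM startup flags, for example (app.jar and the sizes are placeholders):

```shell
java -Xms512m -Xmx2g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/heapdump.hprof \
     -jar app.jar
```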

Parent delegation: are there examples of breaking the parent delegation model?

In Java, the parent delegation model is a class-loading mechanism: every class loader except the top-level bootstrap loader has a parent, and when a loader is asked to load a class, it first delegates the request to its parent. Only if the parent cannot complete the task (i.e. cannot find the class within its search scope) does the loader try to load the class itself. This avoids loading the same class more than once and protects the security and stability of the Java core library.

But some scenarios must break parent delegation, such as the OSGi framework and web containers. In OSGi, each bundle has its own class loader and class loading between bundles is independent, which requires breaking the model. In web containers such as Tomcat, each web application gets its own class loader that loads application classes child-first, so different applications can use different versions of the same class. The SPI mechanism (for example, JDBC drivers loaded via the thread context class loader) is another classic example of working around strict parent delegation.

Therefore, in these special cases, the parent delegation model is broken by writing a custom class loader, or by using a special-purpose class loader, to meet the specific need.
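A child-first loader can be sketched as follows. The class name and package prefix are invented, and findPluginClass is a stub; a real loader would read the class bytes and call defineClass:

```java
public class ChildFirstLoader extends ClassLoader {
    private final String prefix;

    public ChildFirstLoader(String prefix, ClassLoader parent) {
        super(parent);
        this.prefix = prefix;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (name.startsWith(prefix)) {
            // Child-first: try ourselves BEFORE delegating to the parent.
            Class<?> c = findLoadedClass(name);
            if (c == null) c = findPluginClass(name);
            if (resolve) resolveClass(c);
            return c;
        }
        return super.loadClass(name, resolve); // everything else: normal parent delegation
    }

    private Class<?> findPluginClass(String name) throws ClassNotFoundException {
        // Placeholder: a real implementation would load bytes and defineClass() them.
        throw new ClassNotFoundException(name);
    }
}
```

Core classes like java.lang.String still resolve through the parent chain, which is why breaking delegation for one package does not endanger the JDK itself.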

Talk about the three-way handshake

TCP is a connection-oriented protocol: it uses a three-way handshake to establish a connection and a four-way handshake to close it. The three-way handshake proceeds as follows:

  1. The client sends a SYN segment to the server with the SYN flag set to 1, indicating a request to establish a connection.

  2. On receiving the SYN segment, if the server agrees to establish the connection, it replies with a SYN-ACK segment in which both the SYN and ACK flags are set to 1, indicating that the server agrees to the connection and telling the client it may proceed.

  3. On receiving the SYN-ACK, the client sends an ACK segment with the ACK flag set to 1, confirming that the connection is established. At this point the connection between client and server is up and data transfer can begin.

The purpose of the three-way handshake is to make sure both sides can send and receive, and to avoid invalid connections. Specifically: the first handshake is the client's request, the second is the server's acknowledgment and agreement, and the third is the client's confirmation of the server's response, after which the client is ready to send data. This establishes a reliable connection over which normal data transfer can proceed.
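In Java, the handshake itself is performed by the OS kernel inside the Socket constructor's connect. A minimal loopback sketch (class and method names are mine):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static boolean connectOnce() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {         // OS picks a free port
            // Constructing the Socket blocks until the three-way handshake completes.
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 Socket accepted = server.accept()) {             // connection established
                return client.isConnected() && accepted.isConnected();
            }
        }
    }
}
```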

TCP and UDP

TCP and UDP are the two main transport-layer protocols of the Internet. TCP (Transmission Control Protocol) is connection-oriented and reliable: it establishes a connection with a three-way handshake, transfers data, and then closes the connection. UDP (User Datagram Protocol) is connectionless and simple: it does not guarantee reliable delivery, but it is fast.

TCP is widely used on the Internet, such as web browsing, mail sending and other applications all use the TCP protocol. Because TCP can guarantee the reliable transmission of data and ensure that data will not be lost, damaged or duplicated, it can meet the requirements of these applications for data transmission. In addition, TCP also has a congestion control mechanism, which can avoid network congestion caused by excessive data packet accumulation.

UDP is suitable for applications with high real-time requirements, such as online games, streaming media transmission, etc. Since UDP does not guarantee the reliable transmission of data, problems such as data packet loss and disorder may occur during the transmission process, but its transmission speed is fast, which can meet the needs of applications with high real-time requirements.

Why is TCP reliable transmission

The reason why the TCP protocol can guarantee reliable transmission is mainly because it has the following mechanisms:

  1. Establish a connection: Before data transmission, TCP needs to perform a "three-way handshake" to establish a connection to confirm that the communication between the two ends is normal and ensure that both parties can receive and send data.

  2. Confirmation response: When TCP sends data, it will wait for the receiver's confirmation response to ensure that the data has arrived correctly. If no acknowledgment is received, TCP will attempt to retransmit the data.

  3. Sliding window: TCP uses a sliding window to control the flow of data. The sender and receiver need to dynamically coordinate the rate of sending data to avoid network congestion and packet loss. The sender controls the rate of sending data by adjusting the sliding window, and the receiver controls the rate of receiving data by confirming the response.

  4. Timeout retransmission: During network transmission, data packets may be lost or damaged. In order to ensure the reliable transmission of data, TCP will set a timeout timer. If no confirmation response is received within the specified time, the data will be retransmitted.

  5. Congestion control: When the network is congested, TCP will perform congestion control through mechanisms such as slow start, congestion avoidance, and fast recovery to ensure the reliability and stability of the network.

To sum up, TCP achieves reliable transmission because it combines connection establishment, acknowledgments, sliding windows, timeout retransmission, and congestion control. Together these mechanisms effectively avoid data loss, corruption, and duplication, and ensure that data arrives correctly.

Can you introduce the TCP segment?

The TCP segment is the data transmission unit in the TCP protocol, which is controlled and transmitted at the transport layer. A TCP segment consists of header and data. The specific structure is as follows:

  1. Header: at least 20 bytes (up to 60 bytes with options), containing control information such as the source port, destination port, sequence number, acknowledgment number, and flag bits.

  2. Data: zero or more bytes of payload as required. The amount of data per segment is bounded by the MSS (maximum segment size), which is negotiated from the link MTU; the enclosing IP packet is capped at 65,535 bytes in total.

The main functions of the TCP segment are:

  1. Acknowledging received data: the acknowledgment number field tells the peer which data has been received.

  2. Segmentation and reassembly: When the amount of data is too large, TCP divides it into several segments for transmission, and reassembles them at the receiving end.

  3. Flow control: Through the window field, inform the peer end how much data it can receive, so as to control the amount of data at the sender end.

  4. Congestion control: the sender adjusts its sending rate via the congestion window it maintains (sender-side state, not a header field) to avoid congesting the network.

  5. Data guarantee reliability: Through retransmission mechanism, timeout retransmission mechanism, cumulative confirmation mechanism, etc., the reliable transmission of data is ensured.

Compared with UDP datagrams, TCP imposes no message-size limit on the application, since large payloads are segmented and reassembled, but its header is larger and it consumes more network resources. In exchange, TCP offers reliability, flow control, and congestion control, making it better suited to applications that need reliable delivery and control of network load.

There are several ways to achieve multi-threading, let me talk about the advantages and disadvantages

Multithreading can be implemented in the following ways:

  1. Inherit the Thread class: create a new thread by subclassing Thread and overriding run(). This is simple to use, but since Java does not support multiple inheritance, the subclass cannot extend any other class.

  2. Implement the Runnable interface: Create a new thread by implementing the Runnable interface and passing it as a parameter to the constructor of the Thread class. This method is more flexible and can avoid the single inheritance restriction caused by class inheritance.

  3. Implement the Callable interface: similar to Runnable, except that Callable can return a value and throw checked exceptions; the task is typically wrapped in a FutureTask (or submitted to an ExecutorService) to obtain the result.

  4. Create a thread pool: Manage threads through the thread pool to avoid frequent creation and destruction of threads. The thread pool can control the number of threads to avoid invalid thread overhead.

  5. Use the Fork/Join framework: The Fork/Join framework was introduced in Java 7, which is used for concurrent processing of divide-and-conquer tasks, splitting large tasks into several small tasks, assigning a thread to execute each small task, and summarizing the results.

All of the above approaches implement multithreading; which to choose depends on the scenario. Subclassing Thread and implementing Runnable are the simplest and fit straightforward concurrent tasks. Callable additionally yields a result and exception information, which is more flexible. A thread pool avoids the cost of frequently creating and destroying threads and improves efficiency. The Fork/Join framework suits large-scale parallel, divide-and-conquer computations.
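The Runnable and Callable approaches above can be sketched as follows (class and method names are mine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class ThreadCreationDemo {
    // Runnable: no return value, so the result must be smuggled out via shared state.
    public static int runnableResult() throws Exception {
        final int[] box = new int[1];
        Thread t = new Thread(() -> box[0] = 21 * 2);
        t.start();
        t.join(); // wait for the worker; join also gives the needed visibility
        return box[0];
    }

    // Callable + FutureTask: the task returns a value and may throw checked exceptions.
    public static int callableResult() throws Exception {
        Callable<Integer> task = () -> 21 * 2;
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();
        return future.get(); // blocks until the task finishes
    }
}
```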

Have you worked with thread pools? What have you used them for, which parameters have you used, and what blocking queues are commonly used with thread pools?

I'm familiar with thread pool implementations. A thread pool is a technique for executing tasks concurrently; it improves performance and scalability by managing and reusing threads.

In the thread pool, the following parameters need to be specified:

  • corePoolSize: The number of core threads in the thread pool, that is, the number of threads that keep running.

  • maximumPoolSize: The maximum number of threads in the thread pool, that is, the maximum number of threads allowed to be created.

  • keepAliveTime: When the number of threads in the thread pool exceeds the number of core threads, the survival time of redundant idle threads.

  • unit: The time unit of keepAliveTime, given as a TimeUnit value.

  • workQueue: A blocking queue that stores tasks that have not yet been executed.

  • threadFactory: A factory method for creating new threads.

  • handler: The RejectedExecutionHandler strategy applied when the pool can no longer accept tasks (for example, AbortPolicy or CallerRunsPolicy).
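To make the parameter list concrete, here is a minimal sketch of a ThreadPoolExecutor constructed with all seven parameters (the pool sizes, queue capacity, and class name are arbitrary choices of mine):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    // Spell out all seven constructor parameters listed above
    static ThreadPoolExecutor buildPool() {
        return new ThreadPoolExecutor(
                2,                                // corePoolSize
                4,                                // maximumPoolSize
                60L,                              // keepAliveTime for excess idle threads
                TimeUnit.SECONDS,                 // unit of keepAliveTime
                new LinkedBlockingQueue<>(100),   // workQueue, bounded to 100 waiting tasks
                Executors.defaultThreadFactory(), // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection handler
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = buildPool();
        Future<Integer> result = pool.submit(() -> 6 * 7); // a Callable task
        System.out.println("task result: " + result.get());
        pool.shutdown();
    }
}
```

CallerRunsPolicy is used here only as an example: when both the queue and the maximum pool are full, the submitting thread runs the task itself instead of having it rejected with an exception.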

Commonly used blocking queues for thread pools include:

  1. ArrayBlockingQueue: A blocking queue backed by an array; its capacity must be specified at construction. When the queue is full, insertions block until other tasks are taken out.

  2. LinkedBlockingQueue: A blocking queue backed by a linked list. A capacity may be specified; if it is not, it defaults to Integer.MAX_VALUE (effectively unbounded). When a bounded instance is full, insertions block until other tasks are taken out.

  3. SynchronousQueue: A blocking queue with no capacity; each insert operation waits for a corresponding take, and vice versa. It is therefore often used to pair producers directly with consumers.

The above are the thread pool implementations and commonly used blocking queues that I understand.
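A tiny sketch of the bounded-queue behavior described above (the capacity and element values are arbitrary): an ArrayBlockingQueue fixes its capacity at construction and serves elements in FIFO order.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    // Fill a bounded queue and take one element back in FIFO order
    static String drainOne() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // capacity fixed here
        queue.put("task-1");
        queue.put("task-2");   // queue is now full; a third put() would block
        return queue.take();   // FIFO: returns the oldest element, "task-1"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drainOne()); // prints task-1
    }
}
```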

Let's talk about ThreadLocal

ThreadLocal is Java's thread-confinement mechanism: a class for creating thread-local variables. When ThreadLocal is used, each thread holds its own independent copy of the variable, so data in different threads does not interfere.

When using ThreadLocal, we call the set() method to store a value for the current thread and the get() method to read it back. Since each thread has its own independent copy, a value set by one thread is never seen by another, which avoids thread-safety issues.
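A minimal sketch of that isolation (the class name is mine; withInitial is just a convenient way to give each thread a starting value alongside set()/get()):

```java
public class ThreadLocalDemo {
    // Each thread lazily receives its own StringBuilder via the initializer
    static final ThreadLocal<StringBuilder> LOCAL =
            ThreadLocal.withInitial(StringBuilder::new);

    // Append a name inside a fresh thread and return what that thread saw
    static String runInThread(String name) throws InterruptedException {
        String[] seen = new String[1];
        Thread t = new Thread(() -> {
            LOCAL.get().append(name);       // touches only this thread's copy
            seen[0] = LOCAL.get().toString();
        });
        t.start();
        t.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        LOCAL.get().append("main");
        System.out.println(runInThread("worker")); // prints worker, not mainworker
        System.out.println(LOCAL.get());           // the main thread's copy: main
    }
}
```

The worker thread never sees the "main" prefix because get() in each thread resolves to that thread's private copy.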

Test case design: send files to a WeChat group

Based on the scenario of "sending files to a WeChat group", the test cases include but are not limited to the following:

  1. Test file upload functionality. Steps:

    • Confirm that you can select the file to upload.

    • Verify that after clicking the send button, the file is successfully uploaded to the WeChat server.

    • Verify that the uploaded file is displayed correctly in the chat history.

  2. Test uploads of different file types. Steps:

    • Try uploading various types of files such as text files, images, video, audio, etc.

    • The specific verification method is the same as step 1.

  3. Test large file uploads. Steps:

    • Select a large file to upload.

    • Confirm whether the file can be uploaded successfully, and whether the speed is within a reasonable range.

    • Verify whether the uploaded file can be displayed normally in the chat history.

  4. Test uploading multiple files at the same time. Steps:

    • Select multiple files for upload operation.

    • Confirm whether the file can be uploaded successfully and whether the speed is as expected.

    • Verify that all files are properly displayed in the chat history after they have been uploaded.

  5. Test file download functionality. Steps:

    • Choose a file that has been uploaded to the chat history.

    • Click the download button and confirm that the file downloads successfully to the local machine.

    • Verify that the contents of the downloaded file are the same as when uploaded.

  6. Test failed file uploads. Steps:

    • Try uploading a file that is too large or in an unsupported file format.

    • Confirm that the upload operation is correctly rejected and an appropriate error message is given.

  7. Test concurrent uploads and downloads. Steps:

    • Upload and download multiple files at the same time.

    • Verify that all files can be processed without any conflicts or errors.

The above are some test cases that may be used for the "send files to a WeChat group" scenario. The cases should be further designed and refined according to the specific requirements and functionality of the software.

Hand-written code: convert Arabic numerals to Chinese characters (the interviewer who set the question had stepped away, and I didn't know the upper or lower limit on the number of digits, so I wrote it with a look of confusion)

def num_to_chinese(num):
    # Numeral and unit tables (units stop at 千, so this supports 0-9999)
    chinese_num = ['零', '一', '二', '三', '四', '五', '六', '七', '八', '九']
    chinese_unit = ['', '十', '百', '千']

    if num == 0:
        return chinese_num[0]

    # Split the number into digits, ones place first
    digits = [int(x) for x in str(num)]
    digits.reverse()

    # Build the result from low digit to high, collapsing runs of zeros
    result = ''
    zero_pending = False
    for i, digit in enumerate(digits):
        if digit == 0:
            if result:            # skip trailing zeros entirely
                zero_pending = True
        else:
            if zero_pending:      # emit a single 零 for any run of zeros
                result = chinese_num[0] + result
                zero_pending = False
            result = chinese_num[digit] + chinese_unit[i] + result
    return result

# e.g. num_to_chinese(1001) -> '一千零一', num_to_chinese(2023) -> '二千零二十三'

Question back to the interviewer: I asked whether she had been doing work related to test development, and whether she thought test development requires any special abilities compared with pure development. The interviewer said that had nothing to do with me.

I said no problem.

Perhaps because my resume had been fished out for referral, it felt like the lady spent the whole interview looking up questions on the computer next to her. I would give an answer, she would frown, check it against the reference answer, record it, and then look for the next question. There was no interaction at all. The stereotyped questions weren't difficult, there were plenty of questions about the project, and I wrote out the hand-written code too, but the interview experience was very poor.

First, the lady seemed completely absent-minded throughout the interview: she looked away whenever she asked a question, and her tone wasn't pleasant. So although I felt my answers were quite good, I still received the rejection letter right on schedule on Friday. Fishing out my resume presumably just padded someone's KPI and performance numbers. Fortunately I hadn't had high expectations at the time, otherwise I'd have been down about it for a long time.


Origin blog.csdn.net/weixin_50829653/article/details/130506205