NetEase Java back-end interview experience (first round)

This is a first-round Java back-end interview at NetEase; the questions are fairly basic.


1. How to deal with session expiration?

Session expiration usually means that the user has not performed any operation for some time, so the session becomes invalid. The following measures can be taken:
1. On the front end, prompt the user that the session is about to expire and remind them to log in again or refresh the page to keep the session alive;
2. On the back end, add a check in the program: if the session has expired, return an error message or ask the user to start a new session (see the sketch below);
3. Add a mechanism that automatically refreshes the session, for example a timer that renews it shortly before it expires.
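
As a rough illustration of the back-end check, here is a minimal Servlet filter sketch (assuming the javax.servlet API; the 401 response and the 30-minute timeout are arbitrary choices for the example):

```java
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.IOException;

// Hypothetical filter that rejects requests whose session has already expired.
public class SessionCheckFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        HttpSession session = request.getSession(false); // do not create a new session
        if (session == null) {
            // Session missing or expired: ask the client to log in again
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Session expired, please log in again");
            return;
        }
        session.setMaxInactiveInterval(30 * 60); // renew the idle timeout on every request
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```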

2. How do you set an expiration time in Redis, and how do you renew it?

The relevant basic commands:

The Redis expiration time can be set using the command `EXPIRE key seconds`, where `key` is the name of the key whose expiration time needs to be set, and `seconds` is the expiration time in seconds.
For example: to set the `mykey` key to expire after 10 seconds, the command is `EXPIRE mykey 10`.

To renew (extend) a key's lifetime, the simplest option is to call `EXPIRE key seconds` again with a new duration, which resets the TTL. Alternatively, `EXPIREAT key timestamp` sets an absolute expiration point, where `timestamp` is a Unix timestamp in seconds.
For example: to make the `mykey` key expire at a specific moment, the command is `EXPIREAT mykey 1626914136`, where `1626914136` is the Unix timestamp (in seconds) at which the key should expire.
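
In Java this might look roughly as follows, assuming the Jedis client (the key name, values, and TTLs are just examples):

```java
import redis.clients.jedis.Jedis;

public class RedisExpireDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("mykey", "hello");
            jedis.expire("mykey", 10);   // EXPIRE mykey 10 -> expires 10 seconds from now
            jedis.expire("mykey", 60);   // renew: calling EXPIRE again resets the TTL
            // or set an absolute expiration point (EXPIREAT, Unix time in seconds)
            jedis.expireAt("mykey", System.currentTimeMillis() / 1000 + 60);
            System.out.println(jedis.ttl("mykey")); // remaining time to live in seconds
        }
    }
}
```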

3. Can multiple consumers subscribe to the same queue in RabbitMQ? Describe the implementation steps.

Yes, the steps are as follows: 

1. Create a RabbitMQ message queue.

2. Create multiple consumers and read messages from the queue respectively.

3. Bind the queue to an exchange of the appropriate type (direct, fanout, topic, etc.) and have all of the consumers subscribe to that same queue, so that they all receive messages from the same queue.

4. When a message is published to the exchange and routed to the queue, it is delivered to one of the consumers (round-robin by default) for processing.

5. After processing the message, the consumer acknowledges it to the queue so that RabbitMQ can safely delete the message.
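
A minimal sketch with the official RabbitMQ Java client (the queue name, host, and number of consumers are assumptions made up for the example):

```java
import com.rabbitmq.client.*;

public class MultiConsumerDemo {
    private static final String QUEUE = "order_queue"; // hypothetical queue name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();

        // Several consumers read from the same queue; RabbitMQ dispatches each message to one of them
        for (int i = 1; i <= 3; i++) {
            final String name = "consumer-" + i;
            final Channel channel = connection.createChannel();
            channel.queueDeclare(QUEUE, true, false, false, null);
            channel.basicQos(1); // at most one unacknowledged message per consumer
            DeliverCallback onMessage = (consumerTag, delivery) -> {
                System.out.println(name + " got: " + new String(delivery.getBody(), "UTF-8"));
                // Manual ack tells RabbitMQ the message can be deleted
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume(QUEUE, false, onMessage, consumerTag -> { });
        }
    }
}
```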

4. How do you deal with duplicate orders with RabbitMQ?

If duplicate orders may arrive through RabbitMQ, you can handle them as follows:

1. Check the idempotency of the order request on the producer side, i.e. determine whether the same order request has already been submitted. If it has, ignore the request and do not send a message to RabbitMQ (a sketch of such a check is given after this list).

2. When the consumer receives the order request, it checks the order's status. If the order already exists, it does not process it again and simply reports back that the order already exists; if the order does not exist, it performs the business processing (creating the order, updating its status, and so on) and reports back that the order was created successfully.

3. If the system needs strong consistency, a distributed lock can be used to ensure that only one consumer creates or updates a given order at a time, which avoids several consumers processing the same order request simultaneously.
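
One common way to implement the idempotency check mentioned above is to record each order ID in Redis with SET NX, so that a given order is processed only once. A sketch assuming the Jedis client (the key prefix, TTL, and order IDs are made up for illustration):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class OrderIdempotencyDemo {
    // Returns true only the first time a given order ID is seen.
    static boolean tryAcquire(Jedis jedis, String orderId) {
        // SET key value NX EX 86400: succeeds only if the key does not exist yet
        String result = jedis.set("order:dedup:" + orderId, "1",
                SetParams.setParams().nx().ex(86400));
        return "OK".equals(result);
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            System.out.println(tryAcquire(jedis, "10001")); // true: process the order
            System.out.println(tryAcquire(jedis, "10001")); // false: duplicate, skip it
        }
    }
}
```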

 5. Redis data types, and the implementation principle of zset?

Redis supports a variety of data types, including string, hash, list, set, and sorted set (zset).

The zset type, also known as a sorted set, is a set in which every element carries a weight (score), and the elements are ordered by this score. When using a sorted set, note the following points:

1. The elements in an ordered set must be unique.
2. The storage order of elements is sorted by score from small to large.
3. Elements can be range searched based on score.

In Redis, the underlying implementation of zset is a skip list together with a hash table that maps members to their scores (very small sorted sets use a compact ziplist/listpack encoding instead). A skip list is a linked-list-based data structure whose query and insertion time complexity is O(log n), which makes it efficient. Each node in the skip list holds a score and a member: the score is used for comparison and ordering during insertion and lookup, and the member is the element's value.

When an element is added to a zset, Redis inserts it into the skip list at the position determined by its score. If two elements have the same score, they are ordered lexicographically by member; since the members of a set are unique, two entries can never be completely identical.

When a range query by score is performed, Redis traverses the skip list to collect the elements that fall in the range. In an ordinary linked list such a lookup takes O(n), but because a skip list maintains several levels of forward pointers, large stretches of the list can be skipped during the search, so the query time complexity is O(log n).
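
A small usage example, assuming the Jedis client (the key, members, and scores are arbitrary):

```java
import redis.clients.jedis.Jedis;

public class ZsetDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // ZADD: each member carries a score that determines its order
            jedis.zadd("leaderboard", 100, "alice");
            jedis.zadd("leaderboard", 250, "bob");
            jedis.zadd("leaderboard", 180, "carol");
            // Range query by score, ascending (ZRANGEBYSCORE leaderboard 150 300)
            System.out.println(jedis.zrangeByScore("leaderboard", 150, 300));
            // Rank of a member, 0-based, lowest score first (ZRANK)
            System.out.println(jedis.zrank("leaderboard", "bob"));
        }
    }
}
```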

6. Redis eviction policies

 Redis can be configured with one of the following eviction policies:

1. noeviction (no eviction): when memory is full, Redis write operations return an out-of-memory error. Suitable for scenarios where data must not be lost and brief write failures can be tolerated.

2. volatile-lru (least recently used): evicts the least recently used keys, but only among keys that have an expiration time set; keys without a TTL are never evicted. Suitable for caching scenarios.

3. volatile-ttl (shortest time to live): evicts the keys that are closest to expiring, again only among keys that have an expiration time set. Suitable for caching scenarios.

4. volatile-random (random eviction): evicts keys at random, but only among keys with an expiration time set. Suitable when precise control over what gets evicted is not required.

5. allkeys-lru (least recently used): evicts the least recently used keys regardless of whether an expiration time is set. Suitable for caching scenarios.

6. allkeys-random (random eviction): evicts keys at random regardless of whether an expiration time is set. Suitable when precise control over what gets evicted is not required.

Choosing the eviction policy that fits the scenario improves the performance and stability of Redis. (Redis 4.0 additionally introduced volatile-lfu and allkeys-lfu, which evict the least frequently used keys.)
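
The policy is controlled by the `maxmemory-policy` setting (together with `maxmemory`) in redis.conf, and it can also be changed at runtime; a sketch assuming the Jedis client:

```java
import redis.clients.jedis.Jedis;

public class EvictionPolicyDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Equivalent to: CONFIG SET maxmemory 100mb / CONFIG SET maxmemory-policy allkeys-lru
            jedis.configSet("maxmemory", "100mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}
```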

7. MySQL storage engine

 MySQL's storage engines are a modular part of the MySQL architecture; each engine provides its own way of storing data and handling reads and writes.

MySQL ships with several built-in storage engines, such as InnoDB, MyISAM, and MEMORY, and each engine has its own characteristics, advantages, and disadvantages.

For example, MyISAM (the default engine in older MySQL versions) is well suited to read-heavy workloads, but it lacks transactions and row-level locking, so it is a poor fit for highly concurrent writes. InnoDB (the default engine since MySQL 5.5) supports transactions, row-level locking, and foreign keys, which makes it suitable for highly concurrent read and write workloads. The MEMORY engine keeps data in memory, which makes access very fast, but the data is lost if the server loses power or is restarted.
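
The available engines can be inspected and chosen per table; a rough JDBC sketch (the connection URL, credentials, and table are placeholders):

```java
import java.sql.*;

public class StorageEngineDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder connection details
        try (Connection conn = DriverManager.getConnection(url, "root", "password");
             Statement stmt = conn.createStatement()) {
            // SHOW ENGINES lists the engines this server supports and which one is the default
            try (ResultSet rs = stmt.executeQuery("SHOW ENGINES")) {
                while (rs.next()) {
                    System.out.println(rs.getString("Engine") + " -> " + rs.getString("Support"));
                }
            }
            // The engine can be chosen explicitly when creating a table
            stmt.execute("CREATE TABLE IF NOT EXISTS demo_orders (id INT PRIMARY KEY) ENGINE=InnoDB");
        }
    }
}
```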

8. Transaction isolation level 

A transaction isolation level is a feature of a database system that controls the visibility and scope of influence between multiple concurrent transactions. The four common isolation levels are:

1. Read Uncommitted: a transaction may read data that other transactions have not yet committed, which can cause "dirty reads".

2. Read Committed: a transaction only reads committed data, but because other transactions may modify and commit the data between two reads, "non-repeatable reads" can occur.

3. Repeatable Read: repeated reads of the same data within one transaction must return consistent results; the data the transaction has read is not affected by changes made by other transactions until the current transaction commits. This is InnoDB's default isolation level.

4. Serializable: transactions are executed as if one after another, accessing the database serially. This isolation level avoids "phantom reads" but reduces concurrency and performance.
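
In JDBC the isolation level can be set per connection; a sketch (the connection details and the `account` table are placeholders):

```java
import java.sql.*;

public class IsolationDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "root", "password")) {
            conn.setAutoCommit(false);
            // Alternatives: TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_SERIALIZABLE
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("UPDATE account SET balance = balance - 100 WHERE id = 1");
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```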

 9. The roles of the heap, stack, and method area in the JVM

In the Java virtual machine, the heap, the stack, and the method area are three memory-management areas, and each has a different role.

1. Heap

The heap is the area of the Java virtual machine used for storing Java objects. Heap memory is allocated automatically when the JVM starts and holds all Java objects and arrays. When an object is created, a block of memory is allocated for it on the heap, and the Java virtual machine is responsible for reclaiming memory that is no longer used. The heap is dynamically allocated and can grow or shrink in size as needed.

2. Stack

The stack is another storage area in the Java virtual machine, used to store local variables and the execution context of method calls. Whenever a method is called, the Java virtual machine allocates a new stack frame to hold the method's local variables, operand stack, and return address. When the method finishes executing, the stack frame is popped and the space can be reused by subsequent calls. Stack allocation is fast, but each thread's stack size is fixed and does not grow dynamically.

3. Method area

The method area is another storage area of the Java virtual machine, used to store class metadata, constants, static variables, JIT-compiled code, and so on. It is shared by all threads and is only rarely garbage-collected (mainly when classes are unloaded); since Java 8 it is implemented as Metaspace in native memory, which can grow up to a configurable limit. Its contents are relatively stable, but it can occupy a significant amount of memory and needs to be watched.
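
These areas are typically sized with JVM flags such as `-Xms`/`-Xmx` (heap), `-Xss` (per-thread stack), and `-XX:MaxMetaspaceSize` (metaspace on Java 8+). A tiny sketch that prints the current heap limits at runtime:

```java
public class JvmMemoryDemo {
    public static void main(String[] args) {
        // Run with, for example: java -Xms256m -Xmx512m -Xss1m JvmMemoryDemo
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap   = " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap = " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap  = " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```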

 10. Ways to create threads

Two common ways to create a thread are:

1. Inherit the Thread class and override the run() method. Create a custom thread class object and call the start() method to start the thread.

Sample code:

class MyThread extends Thread {
    public void run() {
        System.out.println("MyThread is running");
    }
}

public class Main {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        myThread.start();
    }
}

2. Implement the Runnable interface and override the run() method. Create an instance of the implementing class, pass it as a constructor argument to a Thread object, and call the Thread object's start() method to start the thread.

Sample code:

class MyRunnable implements Runnable {
    public void run() {
        System.out.println("MyRunnable is running");
    }
}

public class Main {
    public static void main(String[] args) {
        MyRunnable myRunnable = new MyRunnable();
        Thread thread = new Thread(myRunnable);
        thread.start();
    }
}

11. What to do when the thread pool queue is full?

 When the queue of the thread pool is full, there are usually the following processing methods:

1. Throw an exception: when a task cannot be added to the queue, throw a runtime exception so the caller can handle it (this is what the JDK's default AbortPolicy does, throwing RejectedExecutionException).

2. Block: the newly submitted task waits until tasks in the queue have been executed and space frees up before being enqueued. This approach suits situations with few tasks and short execution times.

3. Reject: the currently submitted task is turned away. This prevents the thread pool from being overwhelmed by too many tasks and lets the tasks already accepted continue to run. Different rejection strategies can be chosen according to business needs, such as throwing an exception, running the task in the caller's thread, or handing it to another queue.

4. Expand: increase the queue capacity or the number of threads in the pool to raise the task-processing throughput and keep the queue from staying full.
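
In the JDK these options map onto `ThreadPoolExecutor` and its built-in `RejectedExecutionHandler` implementations (`AbortPolicy` throws, `CallerRunsPolicy` runs the task in the submitting thread, `DiscardPolicy`/`DiscardOldestPolicy` drop tasks). A sketch with a deliberately small bounded queue (the pool sizes and task count are arbitrary):

```java
import java.util.concurrent.*;

public class ThreadPoolRejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                       // core and maximum pool size
                60, TimeUnit.SECONDS,                       // idle-thread keep-alive time
                new ArrayBlockingQueue<>(10),               // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // fallback when the queue is full

        for (int i = 0; i < 50; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown();
    }
}
```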

12. The life cycle of a bean in Spring

 In Spring, the Bean life cycle can be divided into the following 8 phases:

1. Loading configuration files : The Spring container loads the Bean configuration files at startup and parses them into corresponding Bean definitions (BeanDefinition).

2. Instantiate the Bean : The Spring container uses the Java reflection mechanism to instantiate the Bean according to the Bean definition.

3. Set Bean properties : The Spring container uses the JavaBean setter methods or reflection to populate the Bean's property values.

4. Pre-processing of BeanPostProcessor : If any BeanPostProcessor beans are registered in the container, Spring automatically calls their postProcessBeforeInitialization method on the Bean after it has been instantiated and its properties have been set.

5. Initialize Bean : If the Bean implements the InitializingBean interface, the Spring container will automatically call its afterPropertiesSet method after the Bean property is set, or use the initialization method specified by the init-method attribute of the <bean> element.

6. Post-processing of BeanPostProcessor : If any BeanPostProcessor beans are registered, Spring automatically calls their postProcessAfterInitialization method on the Bean after initialization.

7. Use Bean : At this point, the Bean has been fully assembled and can be called or injected into other objects by other objects.

8. Destroy the Bean : If the Bean implements the DisposableBean interface, the Spring container will automatically call its destroy method when the container is closed, or use the destruction method specified by the destroy-method attribute of the <bean> element.

It should be noted that steps 1 to 6 take place when the container starts up (or, for lazily initialized and prototype beans, when the Bean is first requested), step 7 covers the Bean's normal use, and step 8 takes place when the container is shut down.
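
A small sketch of the main callbacks, assuming the Spring Framework (the bean and configuration class are made up for illustration):

```java
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class DemoBean implements InitializingBean, DisposableBean {
    @Override
    public void afterPropertiesSet() {
        System.out.println("initialization callback (afterPropertiesSet)");
    }

    @Override
    public void destroy() {
        System.out.println("destruction callback (destroy)");
    }
}

@Configuration
class AppConfig {
    @Bean
    public DemoBean demoBean() {
        System.out.println("bean instantiated");
        return new DemoBean();
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class); // load config, instantiate and initialize beans
        ctx.getBean(DemoBean.class); // the bean is now ready for use
        ctx.close();                 // closing the container triggers the destruction callbacks
    }
}
```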

 13. LeetCode 3: Longest Substring Without Repeating Characters

class Solution {
public:
    int lengthOfLongestSubstring(string s) {
        vector<int> f(128, 0);         // count of each ASCII character inside the current window
        int i, j, ans = 0;
        for (i = 0, j = 0; i < s.size(); i++) {
            f[s[i]]++;
            // shrink the window from the left until s[i] appears only once
            while (f[s[i]] > 1) {
                f[s[j++]]--;
            }
            ans = max(ans, i - j + 1); // window [j, i] contains no repeated characters
        }
        return ans;
    }
};
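
For consistency with the rest of this (Java) interview, an equivalent Java version of the same sliding-window idea:

```java
class Solution {
    public int lengthOfLongestSubstring(String s) {
        int[] count = new int[128]; // occurrences of each ASCII character inside the window
        int ans = 0;
        for (int i = 0, j = 0; i < s.length(); i++) {
            count[s.charAt(i)]++;
            // shrink the window from the left until s.charAt(i) is unique again
            while (count[s.charAt(i)] > 1) {
                count[s.charAt(j++)]--;
            }
            ans = Math.max(ans, i - j + 1);
        }
        return ans;
    }
}
```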

Origin blog.csdn.net/m0_62600503/article/details/131107299