Java backend mock interview, question set ①

1. Spring bean life cycle

  1. Instantiation
  2. Property population
  3. Initialization
  4. Destruction

2. At which stage of the bean life cycle is the Spring AOP proxy created?

(Diagram omitted here; it was reproduced from "The complete life cycle of a Spring Bean (with flow chart, easy to remember)".)
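The short answer behind that diagram: the AOP proxy is created during the initialization phase, by a BeanPostProcessor (Spring's AbstractAutoProxyCreator does this in postProcessAfterInitialization). Below is a minimal, purely illustrative post-processor that wraps beans in a JDK dynamic proxy at the same lifecycle point; it only demonstrates the idea, it is not how the framework's own code looks:

```java
import java.lang.reflect.Proxy;
import org.springframework.beans.factory.config.BeanPostProcessor;

// Illustrative only: wraps beans in a proxy after initialization,
// which is the lifecycle point where Spring AOP creates its proxies.
public class LoggingProxyPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        Class<?>[] interfaces = bean.getClass().getInterfaces();
        if (interfaces.length == 0) {
            return bean; // JDK proxies need an interface; Spring would fall back to CGLIB here
        }
        return Proxy.newProxyInstance(
                bean.getClass().getClassLoader(),
                interfaces,
                (proxy, method, args) -> {
                    System.out.println("before " + beanName + "." + method.getName());
                    return method.invoke(bean, args);
                });
    }
}
```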

3. How does MQ ensure that messages are not lost even in the event of a power outage?

By persisting messages to the hard disk, i.e. writing them to disk rather than keeping them only in memory.
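A small Spring AMQP sketch of that idea (the queue name is a placeholder; in a real application these would be @Bean definitions): the queue is declared durable and messages are marked persistent, so both survive a broker restart or power loss.

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageDeliveryMode;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

public class DurableMessagingExample {

    // Durable queue: its definition survives a broker restart
    public Queue orderQueue() {
        return QueueBuilder.durable("order.queue").build();
    }

    // Persistent message: the broker writes it to disk, not just to memory
    public Message persistentMessage(byte[] body) {
        MessageProperties props = new MessageProperties();
        props.setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        return new Message(body, props);
    }
}
```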

4. How does MQ ensure message reliability?

From the message producer to the exchange, the confirm callback is used to guarantee delivery. This reliability is tied to the individual message: a CorrelationData object is attached to each message when it is sent;

From the exchange to the queue, the return callback is used to catch messages that cannot be routed to any queue. This is configured at the template level, i.e. registered once on the RabbitTemplate;

In addition, the queue is declared durable and the messages in it are persisted to disk;

The above covers reliability on the sending side.

To make consumption reliable as well, we configure retries: retrying the send on the producer side, retrying on the consumer side, and writing the message to the database after the retries are exhausted.
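A minimal sketch of the producer-side callbacks described above, assuming Spring AMQP 2.3+ with publisher confirms and returns enabled (spring.rabbitmq.publisher-confirm-type=correlated, spring.rabbitmq.publisher-returns=true); the handling logic is illustrative only:

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ReliablePublisherConfig {

    public void configure(RabbitTemplate rabbitTemplate) {
        // Producer -> exchange: confirm callback tells us whether the broker accepted the message
        rabbitTemplate.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                // re-send or log; correlationData identifies the original message
                System.err.println("Broker did not confirm message: " + cause);
            }
        });

        // Exchange -> queue: return callback fires when the message could not be routed to any queue
        rabbitTemplate.setMandatory(true);
        rabbitTemplate.setReturnsCallback(returned ->
                System.err.println("Unroutable message: " + returned.getReplyText()));
    }
}
```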

5. What problems did you encounter in the project and how did you solve them?

One business module, the logistics information module, needed to be optimized.

We optimized it in five ways: switching the storage to MongoDB, solving the cache avalanche, cache penetration and cache breakdown problems, and tuning the Bloom filter.

Detailed answer:

OK, the logistics information module I built implements the user's need to check delivery progress. The progress entries the user sees are appended one by one. For a parcel going from Guangzhou to Changsha, it starts at the collection point, then goes to the district sorting center, then the Guangzhou transfer center, then the Changsha transfer center, the Yuelu District sorting center, and finally the destination service point. The user first sees "the parcel has been collected", then "the parcel has arrived at a district sorting center in Guangzhou", and so on. In short, one user's parcel, i.e. one order, can produce many tracking records. This is the first place we optimized. If we stored what the user sees in MySQL, each order would inevitably generate multiple rows. With MongoDB, multiple fields can be added to a single document, so one document can hold all the information for an order.
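A rough sketch of that one-document-per-order model, assuming Spring Data MongoDB; the class and field names are made up for illustration:

```java
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// One document per order; each scan/arrival event is appended to the events array,
// instead of inserting a new row per event as a relational design would.
@Document("logistics_track")
public class LogisticsTrack {

    @Id
    private String orderId;

    private List<TrackEvent> events;

    public static class TrackEvent {
        private String status;   // e.g. "collected", "arrived at Guangzhou sorting center"
        private String location;
        private long timestamp;
    }
}
```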

The second optimization targets cache avalanche. Cache avalanche means that a large number of Redis keys expire within a short window, or even at the same moment; the traffic then hits the database directly and may put enormous pressure on it or even bring it down. To deal with this, we add a random offset to each key's expiration time, and we also apply service degradation: access to non-core data is temporarily suspended and a predefined fallback response is returned.
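A small sketch of the random-expiration idea with Spring Data Redis (key names and TTL values are arbitrary):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TrackCacheWriter {

    private final StringRedisTemplate redisTemplate;

    public TrackCacheWriter(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Base TTL plus a random offset, so keys written together do not all expire together
    public void cache(String key, String json) {
        long ttlSeconds = 1800 + ThreadLocalRandom.current().nextLong(0, 300);
        redisTemplate.opsForValue().set(key, json, ttlSeconds, TimeUnit.SECONDS);
    }
}
```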

Another situation is that Redis itself goes down. For that case we build a multi-level cache with Caffeine (local) and Redis.
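A minimal sketch of such a two-level cache, with Caffeine as the in-process L1 and Redis as the shared L2 (sizes and TTLs are placeholder values):

```java
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TwoLevelCache {

    // L1: in-process Caffeine cache, still serves hot data if Redis is unavailable
    private final Cache<String, String> local = Caffeine.newBuilder()
            .maximumSize(10_000)
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .build();

    private final StringRedisTemplate redisTemplate; // L2: shared Redis cache

    public TwoLevelCache(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public String get(String key) {
        return local.get(key, k -> {
            String fromRedis = redisTemplate.opsForValue().get(k);
            return fromRedis != null ? fromRedis : loadFromDatabase(k);
        });
    }

    private String loadFromDatabase(String key) {
        return null; // placeholder for the real DB lookup
    }
}
```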

The third optimization point is the possible cache penetration problem. If a piece of data exists neither in the multi-level cache nor in the database, every request for it falls through to the database, and enough such requests can crash the server. This is cache penetration. To solve it we use a Bloom filter. A Bloom filter uses several hash functions to map the existing data onto a very long bitmap, flipping the corresponding bits from 0 to 1. On the next request we first check the queried key against the Bloom filter; if the result is 0, the data definitely does not exist and we return immediately.
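For illustration, a Bloom filter over order ids using Guava; the capacity and false-positive rate are example values, not the project's actual settings:

```java
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class OrderBloomFilter {

    // Sized for ~1M order ids with a ~1% false-positive rate; Guava derives the
    // bitmap size and number of hash functions from these two parameters.
    private final BloomFilter<String> filter =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    public void add(String orderId) {
        filter.put(orderId);
    }

    public boolean mightExist(String orderId) {
        // false -> the id is definitely absent, skip the cache and the database
        // true  -> the id probably exists (false positives are possible)
        return filter.mightContain(orderId);
    }
}
```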

The fourth optimization point: even when the Bloom filter returns 1, the data is not guaranteed to exist, because two different values can produce the same hash positions (a false positive). We can add more hash functions to reduce this, but each extra hash costs time, and too many of them makes the filter very inefficient. So the number of hash functions should be chosen appropriately, neither too few nor too many.
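For reference, the standard Bloom filter math behind this trade-off (textbook formulas, not something specific to this project): with a bitmap of m bits, n inserted elements and k hash functions, the false-positive rate is roughly (1 − e^(−kn/m))^k, and the optimal choice is k = (m/n)·ln 2. This is why libraries such as Guava derive k from the target false-positive rate rather than letting you add hash functions freely.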

The fifth optimization point is cache breakdown. If a hot key expires under high concurrency, all the requests for it hit the database at once; this is cache breakdown. We handle it with a lock, so that only one request rebuilds the cache.
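A sketch of the locking approach, here using a simple Redis SETNX-style mutex so that only one request rebuilds the expired hot key (the lock key, TTLs and retry strategy are illustrative):

```java
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class HotKeyLoader {

    private final StringRedisTemplate redisTemplate;

    public HotKeyLoader(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public String get(String key) {
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        String lockKey = "lock:" + key;
        // Only one request acquires the lock and rebuilds the cache
        Boolean locked = redisTemplate.opsForValue().setIfAbsent(lockKey, "1", 10, TimeUnit.SECONDS);
        if (Boolean.TRUE.equals(locked)) {
            try {
                String value = loadFromDatabase(key);
                redisTemplate.opsForValue().set(key, value, 30, TimeUnit.MINUTES);
                return value;
            } finally {
                redisTemplate.delete(lockKey);
            }
        }
        // Lost the race: wait briefly, then read the freshly rebuilt cache
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return get(key);
    }

    private String loadFromDatabase(String key) {
        return ""; // placeholder for the real DB lookup
    }
}
```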

For more information, see my earlier post "Interview draft ⑦: project 1, part 3" on my CSDN blog (zrc007007).

6. How does MySQL optimize queries when dealing with large amounts of data?

We discuss three situations:

  1. A single SQL statement runs slowly
  2. Some SQL statements run slowly
  3. All SQL runs slowly

Detailed answer:

Take MySQL as an example. For a database with slow queries that needs optimizing, we discuss three situations:

  1. A single SQL statement runs slowly
  2. Some SQL statements run slowly
  3. All SQL runs slowly

A single SQL statement runs slowly

In the first case, if a single SQL statement runs slowly, there are generally two common reasons:

  1. Index not created or used properly
  2. There is too much data in the table

First, we check whether the index was created normally;

Then, check whether the index query is triggered normally.

The following situations prevent the index from being used and should be avoided:

  1. When using the != or <> operator in the where clause, the engine abandons the index and performs a full table scan;

  2. A leading-wildcard fuzzy query triggers a full index scan or a full table scan, because the index order cannot be used and rows have to be checked one by one. In other words, patterns such as '%XX' or '%XX%' should not be used in queries;

  3. With an or condition, if not every column in the or has an index, no index is used. An index must exist on every column involved for the indexed query to be triggered;

  4. Perform expression operations on fields in the where clause.

The following tips can optimize the speed of index queries:

  1. Prefer primary key queries over other indexes, because a primary-key lookup does not need the extra "back to the table" lookup that secondary indexes require;

  2. The query statement should be as simple as possible, and large statements should be split into smaller statements to reduce lock time;

  3. Try to use numeric fields. If fields contain only numerical information, try not to design them as character fields;

  4. Use exists instead of in query;

  5. Avoid using is null and is not null on indexed columns.

Secondly, when the amount of data in a table is simply too large, we can split the data.

Data splitting is divided into vertical splitting and horizontal splitting.

Vertical splitting is generally done like this:

  • Put rarely used fields in a separate table;

  • Split out large fields such as text and blob into a separate auxiliary table;

  • Columns that are often queried together are placed in the same table.

When a table exceeds roughly 2 million rows, queries generally start to slow down. At that point the data can be split across multiple tables; this is horizontal splitting.
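For illustration, horizontal splitting needs a routing rule that maps a key to one of the physical tables; a trivial sketch (table names and shard count are made up):

```java
// Picks the physical table for an order id when one logical table is split into N shards
public class OrderTableRouter {

    private static final int TABLE_COUNT = 4; // e.g. order_0 .. order_3

    public static String tableFor(long orderId) {
        return "order_" + Math.floorMod(Long.hashCode(orderId), TABLE_COUNT);
    }
}
```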

Some SQL statements run slowly

To locate these slow statements, we can enable MySQL's slow query log and analyze it.

All SQL runs slowly

We can separate reading and writing.
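One common way to implement read/write splitting in a Java service is Spring's AbstractRoutingDataSource; a minimal sketch, assuming two DataSources registered under the keys "master" and "replica" (not necessarily how this project does it):

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes each connection request to the master or a read replica based on a thread-local flag.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT = ThreadLocal.withInitial(() -> "master");

    public static void useReplica() { CURRENT.set("replica"); }
    public static void useMaster()  { CURRENT.set("master"); }

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT.get();
    }
}
```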

For more information, see my earlier post "Interview draft ③: professional skills, part 3" on my CSDN blog (zrc007007).

7. How is the token authenticated in your project?

We use AOP. (Details to be added later.)
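Since the post defers the details, here is only a rough sketch of what a token check via AOP could look like; the com.example.controller package and TokenService are hypothetical names, and the example assumes a Spring Boot 3 / Jakarta servlet environment:

```java
import jakarta.servlet.http.HttpServletRequest;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

@Aspect
public class TokenCheckAspect {

    private final TokenService tokenService; // hypothetical helper that validates the token

    public TokenCheckAspect(TokenService tokenService) {
        this.tokenService = tokenService;
    }

    // Intercepts every method in a (hypothetical) controller package and checks the token first
    @Around("execution(* com.example.controller..*(..))")
    public Object checkToken(ProceedingJoinPoint joinPoint) throws Throwable {
        HttpServletRequest request =
                ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest();
        String token = request.getHeader("Authorization");
        if (token == null || !tokenService.isValid(token)) {
            throw new IllegalStateException("missing or invalid token");
        }
        return joinPoint.proceed(); // token is valid, continue to the controller method
    }

    public interface TokenService {
        boolean isValid(String token);
    }
}
```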

8. Talk about what you know about JVM and how to tune it

First, we add the -Xms and -Xmx parameters and set them to the same value, namely half of the server's total memory in GB minus one. This is a basic tuning step;
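(For example, on a server with 16 GB of RAM that rule gives 16 / 2 − 1 = 7, i.e. -Xms7g -Xmx7g.)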

Then, on JDK 1.8, the default parallel garbage collector can be switched to a concurrent collector (for example CMS via -XX:+UseConcMarkSweepGC, or G1 via -XX:+UseG1GC).

For more advanced tuning, use jstat -gc and jstat -gcutil to watch heap usage and GC behavior, and check which large objects dominate the heap (for example with jmap -histo or a heap dump) to see whether they can be optimized.

For more information, see my earlier post "Interview draft ①: professional skills, part 1" on my CSDN blog (zrc007007).

9. Do you understand how Spring Boot auto-configuration works?

Will add later.

Origin: blog.csdn.net/m0_46948660/article/details/133501021