The latest collection of Java interview questions (11)

Mybatis

125. What is the difference between #{} and ${} in mybatis?

  • #{} is precompiled parameter binding; ${} is plain string substitution.
  • When MyBatis processes #{}, it replaces #{} in the SQL with a ? placeholder and calls the set methods of PreparedStatement to assign the value (see the mapper sketch below).
  • When MyBatis processes ${}, it simply substitutes the variable's value into the SQL text.
  • Using #{} effectively prevents SQL injection and improves system security.
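A minimal sketch of the difference, using an annotation-based mapper (the mapper, table, and column names here are hypothetical and only illustrate the two placeholder styles):

```java
import java.util.List;
import java.util.Map;

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper used only to contrast #{} and ${}.
public interface PlaceholderDemoMapper {

    // #{}: MyBatis turns this into "SELECT ... WHERE name = ?" and binds the
    // value through PreparedStatement set methods, so SQL injection is blocked.
    @Select("SELECT id, name FROM user WHERE name = #{name}")
    List<Map<String, Object>> findByName(@Param("name") String name);

    // ${}: the value is spliced into the SQL text as-is before execution.
    // Handy for identifiers (e.g. the ORDER BY column), but the caller must
    // make sure the value is safe (whitelisted), otherwise injection is possible.
    @Select("SELECT id, name FROM user ORDER BY ${column}")
    List<Map<String, Object>> findAllOrderedBy(@Param("column") String column);
}
```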

126. How many paging methods does mybatis have?

  • Array paging (query everything, then slice the result list in memory)
  • SQL paging (add LIMIT/OFFSET or the database's equivalent directly in the SQL)
  • Interceptor (plug-in) paging
  • RowBounds paging (see the sketch after this list)
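A short sketch of the RowBounds approach; the statement id and the SqlSessionFactory setup are assumptions for illustration:

```java
import java.util.List;

import org.apache.ibatis.session.RowBounds;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class RowBoundsPagingExample {

    // sqlSessionFactory is assumed to be built elsewhere from mybatis-config.xml,
    // and "com.example.UserMapper.selectAll" is a hypothetical statement id.
    public List<Object> findPage(SqlSessionFactory sqlSessionFactory, int pageNo, int pageSize) {
        int offset = (pageNo - 1) * pageSize;
        try (SqlSession session = sqlSessionFactory.openSession()) {
            // RowBounds paging is "logical": the query itself is not limited;
            // MyBatis skips offset rows and stops after pageSize rows while
            // reading the ResultSet.
            return session.selectList("com.example.UserMapper.selectAll",
                                      null,
                                      new RowBounds(offset, pageSize));
        }
    }
}
```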

128. What is the difference between mybatis logical paging and physical paging?

  • Physical paging (limiting rows in the SQL itself) is not necessarily faster than logical paging (slicing the full result set in memory), and logical paging is not necessarily faster than physical paging.
  • Physical paging is almost always preferable to logical paging: there is no reason to shift pressure off the database onto the application side, and even where logical paging has a speed advantage, the other advantages of physical paging more than make up for that shortcoming (a physical-paging sketch follows this list).
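For contrast with the RowBounds example above, a hedged sketch of physical paging, where the database itself limits the rows (MySQL LIMIT syntax; the mapper and table are hypothetical):

```java
import java.util.List;
import java.util.Map;

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper: the LIMIT clause is executed by the database, so only
// the requested page of rows ever leaves the database server (physical paging).
public interface UserPageMapper {

    @Select("SELECT id, name FROM user ORDER BY id LIMIT #{offset}, #{pageSize}")
    List<Map<String, Object>> selectPage(@Param("offset") int offset,
                                         @Param("pageSize") int pageSize);
}
```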

129. Does mybatis support lazy loading? What is the principle of lazy loading?

MyBatis only supports lazy loading of association objects and collection objects: association covers one-to-one relationships and collection covers one-to-many queries. Whether lazy loading is enabled is controlled in the MyBatis configuration file with lazyLoadingEnabled=true|false.
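Lazy loading is usually switched on in mybatis-config.xml; a rough sketch of the equivalent programmatic configuration (the environment, data source, and mapper registration are omitted here):

```java
import org.apache.ibatis.session.Configuration;

public class LazyLoadingConfig {

    // Builds a Configuration with lazy loading enabled. A real application
    // would also set an Environment (data source, transaction factory) and
    // register its mappers.
    public static Configuration lazyConfiguration() {
        Configuration configuration = new Configuration();
        // Equivalent of <setting name="lazyLoadingEnabled" value="true"/>.
        configuration.setLazyLoadingEnabled(true);
        // Load each lazy property only when it is actually touched,
        // instead of loading all lazy properties on first access.
        configuration.setAggressiveLazyLoading(false);
        return configuration;
    }
}
```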

The principle of lazy loading is to use CGLIB to create a proxy of the target object. When a method on the target is called, the interceptor method runs. For example, when a.getB().getName() is called, the interceptor's invoke() method finds that a.getB() is null; it then separately issues the SQL that was saved in advance for the association, queries the B object, and calls a.setB(b). Now the b property of a has a value, and the call a.getB().getName() can complete. This is the basic principle of lazy loading.

Of course, this is not unique to MyBatis: almost all frameworks, including Hibernate, implement lazy loading on the same principle.

130. Tell me about the first level cache and the second level cache of mybatis?

First-level cache: a PerpetualCache-based local HashMap cache whose scope is the Session. After the Session is flushed or closed, all caches in that Session are cleared. The first-level cache is enabled by default.

The second-level cache has the same mechanism as the first-level cache: by default it also uses PerpetualCache with HashMap storage. The difference is that its scope is the Mapper (namespace), and the storage source can be customized, for example Ehcache. The second-level cache is not enabled by default. To enable it, the classes stored in the cache must implement the Serializable interface (so that object state can be saved), and the cache must be declared in the corresponding mapping file (see the sketch below).
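With annotation-based mappers the second-level cache can be enabled with @CacheNamespace, the annotation counterpart of <cache/> in an XML mapping file; a minimal sketch (the entity, table, and columns are assumed):

```java
import java.io.Serializable;
import java.util.List;

import org.apache.ibatis.annotations.CacheNamespace;
import org.apache.ibatis.annotations.Select;

// The entity stored in the second-level cache must implement Serializable.
class User implements Serializable {
    private static final long serialVersionUID = 1L;
    private Long id;
    private String name;
    // getters/setters omitted for brevity
}

// @CacheNamespace turns on the second-level cache for this mapper's namespace.
@CacheNamespace
interface CachedUserMapper {

    @Select("SELECT id, name FROM user")
    List<User> selectAll();
}
```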

As for the cache update mechanism: whenever a C/U/D operation is performed within a given scope (Session for the first-level cache, namespace for the second-level cache), by default all cached select results in that scope are cleared.

131. What are the differences between mybatis and hibernate?

(1) Unlike Hibernate, MyBatis is not a full ORM framework, because it requires programmers to write the SQL statements themselves.
(2) MyBatis works with hand-written, native SQL, which gives strict control over SQL execution performance and high flexibility. It suits projects that do not place high demands on the relational data model, because requirements in such projects change frequently and each change must be delivered quickly. The price of this flexibility is that MyBatis cannot provide database independence: supporting multiple databases means maintaining multiple sets of SQL mapping files, which is a lot of work.
(3) Hibernate has strong object/relational mapping capabilities and good database independence. For software with high demands on the relational model, developing with Hibernate saves a lot of code and improves efficiency.

132. What executors (Executor) does mybatis have?

Mybatis has three basic executors (Executor):

  1. SimpleExecutor: every time an update or select is executed, a new Statement object is opened and closed immediately after use.
  2. ReuseExecutor: when executing an update or select, it looks up the Statement object using the SQL as the key, uses it if it exists, and creates it if it does not. After use the Statement object is not closed but kept in a Map for the next use. In short, it reuses Statement objects.
  3. BatchExecutor: executes updates (not selects; JDBC batching does not support select). It adds all SQL to the batch (addBatch()) and waits to execute them together (executeBatch()). It caches multiple Statement objects; each one has addBatch() applied and then waits for executeBatch() to run the batch. This is the same as JDBC batch processing (see the sketch after this list).
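The executor type is chosen when opening a SqlSession. A short sketch using ExecutorType.BATCH; the nested mapper and table are hypothetical, and SIMPLE or REUSE are selected the same way:

```java
import java.util.List;

import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class BatchInsertExample {

    // Hypothetical mapper used only for illustration.
    public interface UserMapper {
        @Insert("INSERT INTO user(name) VALUES(#{name})")
        int insert(@Param("name") String name);
    }

    // Opens a session backed by BatchExecutor.
    public void batchInsert(SqlSessionFactory factory, List<String> names) {
        try (SqlSession session = factory.openSession(ExecutorType.BATCH)) {
            UserMapper mapper = session.getMapper(UserMapper.class);
            for (String name : names) {
                mapper.insert(name);       // each call is queued via addBatch()
            }
            session.flushStatements();     // executeBatch() for the queued statements
            session.commit();
        }
    }
}
```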

133. What is the implementation principle of the mybatis paging plug-in?

The basic principle of the paging plug-in is to use the plug-in (Interceptor) interface provided by MyBatis to implement a custom plug-in: in the plug-in's intercept method, intercept the SQL that is about to be executed, then rewrite it, appending the physical paging statement and paging parameters that match the configured database dialect.
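A heavily simplified sketch of such a plug-in, intercepting StatementHandler#prepare (MyBatis 3.4+ signature) and appending a MySQL-style LIMIT clause via reflection. It is only an illustration of the interception point: a real paging plug-in such as PageHelper also rewrites the count query, handles multiple dialects, reads page parameters per request, and skips non-select statements.

```java
import java.lang.reflect.Field;
import java.sql.Connection;
import java.util.Properties;

import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;

// Intercepts StatementHandler#prepare so the SQL can be rewritten before the
// PreparedStatement is created.
@Intercepts(@Signature(type = StatementHandler.class, method = "prepare",
        args = {Connection.class, Integer.class}))
public class SimplePagePlugin implements Interceptor {

    // Fixed page for brevity; a real plug-in would take offset/limit from a
    // ThreadLocal or from the query parameters.
    private static final int OFFSET = 0;
    private static final int LIMIT = 10;

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        StatementHandler handler = (StatementHandler) invocation.getTarget();
        BoundSql boundSql = handler.getBoundSql();
        String pagedSql = boundSql.getSql() + " LIMIT " + OFFSET + ", " + LIMIT;

        // BoundSql keeps the SQL in a private field, so this sketch rewrites it
        // via reflection; dialect handling is omitted.
        Field sqlField = BoundSql.class.getDeclaredField("sql");
        sqlField.setAccessible(true);
        sqlField.set(boundSql, pagedSql);

        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
        // no configurable properties in this sketch
    }
}
```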


Kafka

134. Can kafka be used separately from zookeeper? why?

Kafka cannot be used on its own without ZooKeeper, because Kafka uses ZooKeeper to manage and coordinate its broker nodes.

135. How many data retention strategies does Kafka have?

Kafka has two data retention strategies: retention by expiration time and retention by the size of the stored messages.

136. Kafka is configured to clear data both after 7 days and at 10 GB. On the fifth day the messages reach 10 GB. What will Kafka do at this point?

At this point Kafka will clear the data: whichever condition is met first, time or size, triggers the cleanup.
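A hedged sketch of setting both limits on a topic with the Kafka AdminClient (the topic name, partition/replica counts, and broker address are assumptions); note that retention.bytes applies per partition:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class RetentionConfigExample {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            // Time-based retention: 7 days in milliseconds.
            configs.put(TopicConfig.RETENTION_MS_CONFIG,
                    String.valueOf(7L * 24 * 60 * 60 * 1000));
            // Size-based retention: 10 GB per partition.
            configs.put(TopicConfig.RETENTION_BYTES_CONFIG,
                    String.valueOf(10L * 1024 * 1024 * 1024));

            NewTopic topic = new NewTopic("demo-topic", 3, (short) 1).configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```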

137. What conditions will cause Kafka to run slower?

  • CPU performance bottleneck
  • Disk read/write bottleneck
  • Network bottleneck

138. What should I pay attention to when using a kafka cluster?

The number of nodes in the cluster is not "the more the better"; it is best not to exceed 7, because the more nodes there are, the longer message replication takes and the lower the throughput of the whole cluster.

The number of nodes is best kept odd, because the cluster cannot be used once more than half of the nodes fail, and an odd number gives a higher fault-tolerance rate.
