100 Common Mistakes in Java Business Development -- Study Notes

Opening words

Two characteristics of business development

Because schedules are tight and the logic is complex, developers tend to focus on getting the main process logic right while neglecting the non-main-process logic, as well as the safeguard, compensation, and consistency logic; they also often lack the closed loop of detailed design, monitoring, and capacity planning. The result is that all kinds of accidents happen as the business grows.

Reading this, I feel the first point especially deeply. I vaguely remember the night before the acceptance of our Introduction to Software Engineering project, when we hurriedly started testing. Every feature completed its expected task as long as we followed the normal logic, but once we deviated from the main flow, the bugs became endless. For example, the phone number field had no validation on either the front end or the back end, so in the end a phone number written in Chinese characters was accepted without any error. Looking back, this kind of mistake is quite naive; unfortunately I had no development experience at the time and only cared about getting the mainstream logic right.

As for the second point, my assessment is that, compared with the documents in a real production environment, the documents we wrote before can only be called electronic waste; by comparison there is really no design in them at all. So I think I still need to read more documents written by experienced people in order to complete the transition from demo to project.

01 | Using the concurrency utility class library, is thread safety guaranteed?

ThreadLocal
Watch out for thread reuse when ThreadLocal is combined with a thread pool: use try/finally to manually clear the ThreadLocal at the end of the operation, so that the next time the thread is reused it does not read the previous request's data.
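
A minimal sketch of that cleanup pattern, assuming a hypothetical per-request user holder:

private static final ThreadLocal<String> currentUser = new ThreadLocal<>();

public String handleRequest(String userId) {
    try {
        currentUser.set(userId);  // bind data to the current (possibly pooled) thread
        return "current user: " + currentUser.get();
    } finally {
        currentUser.remove();     // clear it so a reused thread cannot see stale data
    }
}
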
ConcurrentHashMap
Interviews often ask about this, e.g. "Is HashMap thread-safe?" or "Name the thread-safe data structures you know", etc.
ConcurrentHashMap is a thread-safe data structure, but its thread safety is limited to single atomic read and write operations. That is to say, methods such as size and isEmpty may return an intermediate state, and relying on them in a check-then-act sequence can still lead to concurrency issues.
computeIfAbsent
Both HashMap and ConcurrentHashMap have this method. My understanding is that it first looks up the key and, if it is absent, computes and inserts a value. That doesn't feel entirely accurate; I will fill in this hole later. In ConcurrentHashMap, this method is backed by CAS at the bottom of the JVM, which guarantees atomic writes at the virtual-machine level and is much more efficient than locking.
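
A small sketch of the typical usage, assuming we are counting key occurrences with a LongAdder per key (the names are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

// atomically create the counter on first access, then increment it
counters.computeIfAbsent("itemA", k -> new LongAdder()).increment();
counters.computeIfAbsent("itemA", k -> new LongAdder()).increment();
System.out.println(counters.get("itemA").sum()); // 2
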
ForkJoinPool
I have never seen this before; it looks like an upgraded version of the thread pool. Another hole to fill later.

02 | Code lock: Don't let the "lock" thing become an annoyance

Before locking, make sure the lock and the object it protects are at the same level of granularity.
Lock only the necessary code block; this improves the program's efficiency.
To avoid deadlock, first consider eliminating circular waits, as in the sketch below.
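
A minimal sketch of the "fixed lock order" idea for avoiding circular waits (the Account class and transfer method are hypothetical):

class Account {
    final int id;
    int balance;
    Account(int id, int balance) { this.id = id; this.balance = balance; }
}

// always acquire the two locks in a fixed global order (here: by id), so two
// concurrent transfers can never wait on each other in a cycle
void transfer(Account from, Account to, int amount) {
    Account first = from.id < to.id ? from : to;
    Account second = from.id < to.id ? to : from;
    synchronized (first) {
        synchronized (second) {
            from.balance -= amount;  // only the shared-state mutation sits inside the locks
            to.balance += amount;
        }
    }
}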

03 | Thread pool: the most commonly used and most error-prone component of business code

Why is it not recommended to use the Executors factory methods to create a thread pool?

  1. We need to evaluate the core parameters of the thread pool according to our own scenario and concurrency, including the core pool size, the maximum pool size, the thread recycling (keep-alive) policy, the type of work queue, and the rejection policy, to ensure that the pool's behavior meets our requirements. In general, set a bounded work queue and a controllable number of threads (quoted from the book). See the sketch after this list.
  2. Whenever possible, give your custom thread pool a meaningful name to make troubleshooting easier. When problems occur, such as a sudden surge in the number of threads, thread deadlocks, threads occupying a large amount of CPU, or exceptions during thread execution, we usually grab a thread dump, and meaningful thread names make it much easier to locate the problem. (I don't quite get this: why can't the pools created by the Executors shortcuts be named?)
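
A minimal sketch of such a custom pool; the pool sizes, queue capacity, and the "order-pool" name are made-up values for illustration:

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

AtomicInteger threadNumber = new AtomicInteger(1);
ThreadPoolExecutor orderPool = new ThreadPoolExecutor(
        2, 4,                                  // core and maximum pool size
        60, TimeUnit.SECONDS,                  // keep-alive for threads beyond the core size
        new ArrayBlockingQueue<>(10),          // bounded work queue
        r -> new Thread(r, "order-pool-" + threadNumber.getAndIncrement()), // meaningful thread names
        new ThreadPoolExecutor.CallerRunsPolicy()); // explicit rejection policy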

Choice of thread pool

Reusing a thread pool does not mean the application always uses the same pool. We should choose different thread pools according to the nature of the task, paying special attention to the different preferences of IO-bound tasks and CPU-bound tasks for thread pool settings. To reduce mutual interference between tasks, consider using isolated thread pools on demand.

This advice sounds very reasonable, but I have not encountered such a problem in the development I have been exposed to so far. Another hole to fill.
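
As a rough sketch of the idea (the sizing below is a common rule of thumb, not from the book): a CPU-bound pool sized around the number of cores, and a separate, larger pool for IO-bound work.

import java.util.concurrent.*;

int cores = Runtime.getRuntime().availableProcessors();

// CPU-bound tasks: roughly one thread per core; more threads mostly add contention
ExecutorService cpuPool = new ThreadPoolExecutor(cores, cores,
        0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100),
        r -> new Thread(r, "cpu-pool"), new ThreadPoolExecutor.AbortPolicy());

// IO-bound tasks: threads spend most of their time waiting, so use a larger, isolated pool
ExecutorService ioPool = new ThreadPoolExecutor(cores * 2, cores * 4,
        60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1000),
        r -> new Thread(r, "io-pool"), new ThreadPoolExecutor.CallerRunsPolicy());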

When using a thread pool, make sure to monitor it properly, to avoid serious losses when the pool breaks down.

04 | Connection pool: Don't let the connection pool backfire on you

I haven't come across this before and I'm confused, so I skipped it.

05 | HTTP call: Have you considered timeout, retry, and concurrency?

I looked at it for a long time, but I have never encountered this kind of problem, so I really can't relate.

06 | The Spring declarative transactions in 20% of business code may not be handled correctly

Configuration issues when enabling declarative transactions with the @Transactional annotation

  1. Only @Transactional defined on public methods takes effect, because the annotation is implemented with dynamic proxies by default, and a dynamic proxy cannot proxy private methods. (There are exceptions under special configurations, for example using AspectJ static weaving to implement AOP; I didn't really understand that part.)
  2. The target method takes effect only when it is called from outside through the proxied class; self-invocation within the same class bypasses the proxy. See the sketch after this list.
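
A minimal sketch of the self-invocation pitfall; UserService and its methods are hypothetical names:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    // called as this.createUser(...), so the Spring proxy is bypassed
    // and no transaction is opened for createUser
    public void register(String name) {
        createUser(name);
    }

    @Transactional
    public void createUser(String name) {
        // database writes that we expect to run inside a transaction
    }
}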

Problems caused by incorrect exception handling

A method annotated with @Transactional rolls back when a RuntimeException or Error propagates out of it. If our method catches the exception itself, we need to trigger the transaction rollback manually in code.
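
A minimal sketch of the two usual remedies, assuming a hypothetical createUser method: declare rollbackFor so checked exceptions also roll back, and mark the transaction rollback-only when the exception is swallowed.

import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.interceptor.TransactionAspectSupport;

@Transactional(rollbackFor = Exception.class)  // also roll back on checked exceptions
public void createUser(String name) {
    try {
        // database writes ...
    } catch (Exception ex) {
        // the exception no longer propagates, so ask Spring to roll back explicitly
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}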

The Propagation attribute of the @Transactional annotation
When there are multiple database operations that should be committed or rolled back as independent transactions, we need to consider transaction propagation. I haven't used it yet, so this is another hole to fill.
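
A minimal sketch of the common case, using Propagation.REQUIRES_NEW so a sub-operation commits or rolls back independently of the caller's transaction (the method name is hypothetical):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// runs in its own transaction: its commit or rollback does not depend on the caller's transaction
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void writeAuditLog(String message) {
    // insert the audit record ...
}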

07 | Database Indexes: Indexes Are Not a Panacea

Additional costs of secondary indexes

  1. Maintenance cost: every time data is inserted or deleted, all of the indexes involved have to change accordingly.
  2. Space cost: a secondary index also takes up extra space.
  3. Cost of going back to the table: a secondary index does not store all of the columns, so in some cases we have to go back to the clustered index by primary key to fetch the original data.

Index failure
  1. In fuzzy matching, the index cannot be used for suffix matching (a pattern with a leading wildcard).
  2. When the query condition applies a function to the indexed column, the index cannot be used.
  3. The leftmost-prefix matching principle: when using a composite index, the condition must include the leftmost field of the index. Thanks to the optimizer, where that leftmost-field condition appears in the WHERE clause does not matter.

How the database selects an index
The database chooses an index based on the table's IO cost (the number of pages occupied by the clustered index, used to estimate the cost of reading data) and CPU cost (the number of records in the table, used to estimate the cost of searching). Of course, the IO cost and CPU cost are not computed in real time; the database maintains statistics about the table instead. When those statistics are wrong or the estimate is inaccurate, we can also force the use of a particular index.

08 | Equality checks: how do you determine that "you are you" in a program?

  • Pay attention to the difference between equals and ==: use == to compare primitive types and equals to compare reference types (for reference types, == only compares the references, i.e. the pointers), as in the sketch after this list.
  • The JVM caches Integer values in the range [-128, 127].
  • The JVM's string constant pool mechanism.
  • When using custom types, make sure equals, hashCode, and compareTo stay consistent with each other.
  • When Lombok's @EqualsAndHashCode annotation generates equals and hashCode, it uses all non-static, non-transient fields of the type by default and ignores the parent class. You can use @EqualsAndHashCode.Exclude to exclude certain fields, and set callSuper = true to make the subclass's equals and hashCode call the corresponding methods of the parent class. (Haven't run into this yet.)
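
A minimal sketch of == versus equals and the Integer cache; the specific values are only illustrative:

Integer a = 127, b = 127;
Integer c = 128, d = 128;
System.out.println(a == b);        // true: both boxed values come from the [-128, 127] cache
System.out.println(c == d);        // false: 128 is outside the cache, so these are two objects
System.out.println(c.equals(d));   // true: equals compares the values

String s1 = "hello";
String s2 = "hello";
String s3 = new String("hello");
System.out.println(s1 == s2);      // true: both literals point to the same constant-pool entry
System.out.println(s1 == s3);      // false: new String creates a separate object
System.out.println(s1.equals(s3)); // true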

09 | Numerical Computing: Beware of Precision, Rounding, and Overflow Issues

Accurate representation of floating-point numbers: BigDecimal
When we use float and double, the results inevitably lose precision. If we want to represent and operate on decimal values exactly, we need to use BigDecimal.
How does BigDecimal guarantee precision? It is actually very simple: converting a floating-point number to binary loses precision, but decimal integers do not. So we only need to scale the number up by some power of ten and do the arithmetic on integers to effectively avoid the precision problem.
Avoiding floating-point pitfalls: things to watch out for

  1. Use BigDecimal to represent and calculate floating-point values, and be sure to initialize BigDecimal with the string constructor.
    Why? I don't know; I couldn't find the reason. Another hole to fill (+1).
System.out.println(new BigDecimal(0.1).add(new BigDecimal(0.2)));     // imprecise
System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // precise

If all we have is a Double object, how do we convert it to BigDecimal?
Pass Double.toString(...) to the constructor directly? (×)
BigDecimal has the concepts of scale and precision: scale is the number of digits to the right of the decimal point, and precision is the number of significant digits.
The scale and precision obtained via Double.toString differ from those obtained by passing in the string directly, so the precision of the two cases will differ. There are concrete examples and explanations in the book, so I won't copy them here.
If you must initialize a BigDecimal from a Double, you can use the BigDecimal.valueOf method to keep its behavior consistent with the string-based constructor.
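
A small sketch of the scale/precision difference and of valueOf; 100 and 0.1 are just example values, and the comments show what I would expect to be printed:

BigDecimal fromString = new BigDecimal("100");
BigDecimal fromDoubleToString = new BigDecimal(Double.toString(100)); // built from "100.0"

System.out.println(fromString.scale() + " / " + fromString.precision());                 // 0 / 3
System.out.println(fromDoubleToString.scale() + " / " + fromDoubleToString.precision()); // 1 / 4

// valueOf avoids the long binary tail that new BigDecimal(0.1) carries
System.out.println(new BigDecimal(0.1));     // 0.1000000000000000055511151231257827...
System.out.println(BigDecimal.valueOf(0.1)); // 0.1
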
  2. String formatting of floating-point numbers should also be done through BigDecimal.
    Simply put, due to precision loss, rounding a float or double sometimes does not produce the expected result, so use BigDecimal, which also comes with several rounding modes.

BigDecimal num1 = new BigDecimal("3.35");
BigDecimal num2 = num1.setScale(1, RoundingMode.DOWN);
System.out.println(num2); // 3.3
BigDecimal num3 = num1.setScale(1, RoundingMode.HALF_UP);
System.out.println(num3); // 3.4

  3. Watch out for overflow when doing numeric calculations. Overflow does not throw an exception, but the result it produces is completely wrong.
Looking at all this makes me a bit dizzy. In a word: in financial, scientific-computing, and similar scenarios, use BigDecimal and BigInteger as much as possible.
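
A minimal sketch of silent overflow and two common fixes, Math.addExact and BigInteger:

long l = Long.MAX_VALUE;
System.out.println(l + 1); // -9223372036854775808: the value silently wraps around

// option 1: fail fast instead of silently wrapping
try {
    System.out.println(Math.addExact(l, 1));
} catch (ArithmeticException e) {
    System.out.println("long overflow"); // addExact throws instead of wrapping
}

// option 2: use BigInteger, which is not limited to 64 bits
System.out.println(BigInteger.valueOf(l).add(BigInteger.ONE)); // 9223372036854775808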

10 | Collection classes: List operations are full of pits

Three pits when using Arrays.asList to convert an array into a List

  1. Arrays of primitive types cannot be converted directly using Arrays.asList
// Failure case: the whole int[] becomes a single element, so the List has size 1
int[] arr = {1, 2, 3};
List list = Arrays.asList(arr);

// Correct approach 1: use Arrays.stream (JDK 8 or later)
int[] arr1 = {1, 2, 3};
List list1 = Arrays.stream(arr1).boxed().collect(Collectors.toList());

// Correct approach 2: declare the array as Integer[]
Integer[] arr2 = {1, 2, 3};
List list2 = Arrays.asList(arr2);
  2. The List returned by Arrays.asList does not support add or remove operations,
    because the List it returns is Arrays' internal ArrayList class rather than the familiar java.util.ArrayList (see the sketch after this list).

  3. Modifications to the original array are reflected in the List we obtained. The solution is very simple: create a new ArrayList to decouple them.

List list3 = new ArrayList<>(Arrays.asList(arr2)); // copies the elements, decoupling list3 from arr2
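
A standalone sketch of pits 2 and 3; arr3 here is just an illustrative Integer array:

Integer[] arr3 = {1, 2, 3};
List<Integer> view = Arrays.asList(arr3);

// pit 2: Arrays' internal ArrayList is fixed-size, so add/remove fail
try {
    view.add(4);
} catch (UnsupportedOperationException e) {
    System.out.println("add is not supported");
}

// pit 3: the List is backed by the array, so changing the array changes the List
arr3[0] = 100;
System.out.println(view.get(0)); // 100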

The OOM caused by using List.subList for slicing
This starts with the implementation of subList: subList does not really copy the list, it is only a view over the original list. For example, if we slice a small part out of a list of 100,000 elements, all 100,000 elements are in fact still retained, and operations on the slice also interact with the original list.
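
A minimal sketch of the usual fix: copy the slice into a new ArrayList so that the large original list is no longer referenced (the sizes are just for illustration):

List<Integer> huge = new ArrayList<>();
for (int i = 0; i < 100_000; i++) {
    huge.add(i);
}

// subList is only a view: holding it keeps all 100,000 elements reachable
List<Integer> slice = huge.subList(0, 10);

// copying the slice decouples it from the original list, which can then be garbage-collected
List<Integer> copy = new ArrayList<>(huge.subList(0, 10));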

Make sure the right data structure does the right thing

  • HashMap
    HashMap lookups are O(1), so if you need to search a very large ArrayList repeatedly, you can consider using a HashMap to optimize performance, at the cost of extra space.
  • LinkedList
    Not recommended. For lists dominated by insertion and deletion, some may think of LinkedList, but its O(1) insertion assumes we already hold a reference to the insertion point, and obtaining that reference is O(n) again. In comprehensive tests, LinkedList's performance in various scenarios is hard to match against ArrayList; even its creator does not recommend using LinkedList.


Origin blog.csdn.net/m0_51561690/article/details/131411033