Java optimization - refactoring code to make it more beautiful and concise

In short

In project work, optimization comes up constantly: SQL optimization, project structure optimization, business layer optimization, code structure optimization, and so on. All of it serves the same goal of making the system easy to maintain, understand, and extend. Below I share some of my personal experience. I believe every programmer should aspire to think like an architect.

In the past, I felt I only needed to put my time into the business: finish the feature, pass my own self-test, hand it over to the QA colleagues, and once the product owner accepted it, the job was done. Gradually I found myself pursuing something better and looking at problems from a higher vantage point, slowly becoming a veteran: thinking about how to improve code quality, and how to set standards for the team so that everyone develops a shared awareness and spontaneously considers the extensibility and robustness of the code.

1. Code optimization is painful

Why do I say this?

Code optimization usually means working over other people's code, or your own, that was written against earlier business requirements, and proposing ideas for improving it. By definition, the readability, maintainability, or design direction of the current code is unfriendly, and the code or the architecture needs to be restructured on top of the existing business. Before optimizing, you have to understand the existing code logic, the intent of the previous designers, and the business it was developed for, and combine all of that to refactor the code.

2. Types of Refactoring

  • Small refactoring
    This is refactoring of code details, mainly at the level of classes, functions, and variables. Common examples are standardizing names (renaming variables that do not convey their meaning), breaking up oversized functions, and eliminating duplicate code. Changes of this type are concentrated, relatively simple, low-impact, and quick, so the difficulty is low and they can be carried out alongside normal version development.

  • Large-scale refactoring
    This is refactoring of the upper layers of the code, including the system structure, module structure, code structure, and class relationships. The methods generally adopted are service layering, business modularization, componentization, and abstracting code for reuse. Refactoring of this type may require redefining design principles and patterns, or even redefining the business. It involves many code adjustments and modifications, so its impact is relatively large, it takes a long time, and it carries relatively high risk (project delays, code bugs, business loopholes). It requires experience with large-scale refactoring; otherwise it is easy to make mistakes, and in the end the losses outweigh the gains.

In fact, most people dislike refactoring work, just as nobody wants to clean up someone else's mess. They may have the following concerns:

  • Not knowing how to refactor: lacking refactoring experience and methodology makes it easy to introduce mistakes during the refactoring.
  • The benefits are hard to see in the short term, so why put in the effort now for gains that only show up in the long run? By the time the project reaps those benefits, you may no longer be responsible for it.
  • Refactoring can break the existing program and introduce unexpected bugs, and nobody wants to own those bugs.
  • Refactoring means extra work for you, and the code that needs refactoring may not even have been written by you.

3. Why refactoring is needed

Programs have two kinds of value: "what they can do for you today" and "what they can do for you tomorrow." Most of the time we focus only on what we want the program to do today. Whether fixing bugs or adding features, it is all about making the program more capable and more valuable today. So why do I still advocate refactoring code at the right time? The main reasons are as follows:

  • Keep the software architecture well designed at all times. Improving our software design lets the architecture evolve in a favorable direction, keep providing stable service to the outside world, and calmly absorb all kinds of unexpected problems.
  • Improve maintainability and reduce maintenance cost, a virtuous cycle for both the team and the individual, and make the software easier to understand. Whether successors read code written by their predecessors or authors review their own code later, they can quickly grasp the whole logic, clarify the business, and maintain the system with ease.
  • Increase development speed and reduce labor cost. You probably know the feeling: when a system is newly launched, adding a feature is fast; but if code quality is neglected, adding a small feature later may take a week or more. Code refactoring is an effective means of ensuring code quality, and good design is fundamental to sustaining development speed. Refactoring helps you develop software faster because it prevents the system from decaying, and it can even improve the quality of your design.

4. How to Refactor

Small refactoring

Most small refactorings are carried out during daily development, with our development specifications and guidelines as the reference standard. The purpose is to eliminate the bad smells in the code. Let's take a look at the common ones.

Too many if-else

Example code:

if (...) {
    if (...) {
        if (...) {
            // ...
        } else if (...) {
            // ...
        }
    } else if (...) {
        // ...
    }
} else if (...) {
    // ...
}

It is recommended that if-else nesting not exceed three levels.
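As a sketch of one common fix, inverting each condition and returning early flattens the nesting. The class, method names, and thresholds below are made up for illustration:

class ScoreGrader {

    // Before: three levels of nesting to classify a score.
    String levelNested(int score) {
        if (score >= 0) {
            if (score >= 60) {
                if (score >= 90) {
                    return "excellent";
                } else {
                    return "pass";
                }
            } else {
                return "fail";
            }
        } else {
            return "invalid";
        }
    }

    // After: guard clauses return early, so no branch nests deeper than
    // one level and the method reads top to bottom.
    String levelFlat(int score) {
        if (score < 0) return "invalid";
        if (score < 60) return "fail";
        if (score < 90) return "pass";
        return "excellent";
    }
}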

Deeply nested for loops have the same problem as the if-else example above.
Duplicate code

When the same calculation appears in multiple places in the project, such as a remainder computed here and there, we can encapsulate it in a function and call that one method everywhere; similar code fragments can likewise be extracted into a single shared method.
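A minimal sketch of that extraction, assuming the duplicated calculation is a table-sharding remainder; ShardUtil and TABLE_COUNT are hypothetical names:

public final class ShardUtil {

    private static final int TABLE_COUNT = 64; // hypothetical shard count

    private ShardUtil() { }

    // The one shared remainder calculation, replacing the copies
    // previously scattered across the project.
    public static int tableIndex(String key) {
        return (key.hashCode() & 0x7fffffff) % TABLE_COUNT; // mask keeps the value non-negative
    }
}

Every caller now goes through ShardUtil.tableIndex(orderId), so changing the shard count or the hashing rule happens in exactly one place.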

Function too long

A good function satisfies the single responsibility principle: it is short, concise, and does only one thing. Overly long function bodies and methods that juggle several tasks hurt both readability and code reuse.
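A sketch of splitting one long "do everything" method into single-responsibility steps; Order and every method name here are hypothetical:

class OrderHandler {

    void placeOrder(Order order) {
        validate(order);    // 1. parameter and state checks only
        save(order);        // 2. persistence only
        notifyUser(order);  // 3. side effects kept separate
    }

    private void validate(Order order) { /* ... */ }

    private void save(Order order) { /* ... */ }

    private void notifyUser(Order order) { /* ... */ }
}

class Order { /* fields omitted */ }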

Naming conventions

A good name is worthy of what it names and understandable at a glance: straightforward and without ambiguity.
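A minimal illustration:

// Hard to read: the name carries no meaning.
int d = 7;

// Self-explanatory: the names state the quantity and the intent.
int elapsedDays = 7;
boolean hasUnpaidOrders = false;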

Unreasonable comments

Comments are a double-edged sword: a good comment gives useful guidance, while a bad one only misleads. Whenever we change code we must update its comments at the same time, otherwise the comments and the logic drift apart. Besides, if the code already expresses its intent clearly, a comment is redundant.

Useless code

Useless code comes in two forms. The first is code with no usage scenario: if it is not a utility method or utility class but simply unused business code, it should be deleted and cleaned up promptly. The second is code blocks wrapped in comments: code should be deleted outright at the moment it is commented out, not left behind.

Class that is too large

If a class does too many things and maintains too many functions, its readability and maintainability both suffer. For example, order-related functions are put into class A, product inventory functions also go into class A, loyalty-points functions also go into class A... Just imagine all those unrelated code blocks piled into a single class; readability goes out the window. Code should be divided into different classes according to single responsibilities.
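A sketch of that split, with hypothetical names:

// Before (sketch): class A mixed orders, inventory, and points together.
// After: one class per business responsibility.
class OrderService  { void createOrder(long userId, long skuId) { /* ... */ } }

class StockService  { void deductStock(long skuId, int count) { /* ... */ } }

class PointsService { void addPoints(long userId, int points) { /* ... */ } }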

These are the more common code "bad smells". Of course there are others in actual development, such as confusing code, unclear logic, and tangled class relationships. Whenever we smell one of these "bad smells", we should try to resolve it rather than let it slide.

Large refactoring

Compared with small refactorings, a large-scale refactoring has far more to consider. It needs a well-set rhythm and step-by-step execution, because in a large refactoring the situation changes constantly.

Putting an elephant into a refrigerator famously takes three steps: 1) open the refrigerator door (beforehand); 2) push the elephant in (during); 3) close the refrigerator door (afterwards). Most everyday matters can be handled with a three-step approach, and refactoring is no exception.

Beforehand

Preparation is the first step of refactoring. The work involved here is the most complex and also the most important: if the preparation is insufficient, the results produced during execution, or after the refactoring goes live, are very likely to diverge from expectations.

This stage can be roughly divided into three steps:

  • Clarify the content, purpose, direction, and goals of the refactoring
    The most important thing in this step is to make the direction clear: a direction that can withstand everyone's scrutiny and will hold for at least the next three to five years. The other thing is the goal of this particular refactoring. Because of technical limitations, historical baggage, and other constraints, this goal may not be the final goal; in that case it is necessary to clarify what the final goal is, and since the road from this refactoring's goal to the final goal is long, it is best to be clear about what still needs to be done.

  • Organize the materials
    This step sorts out the existing business and architecture involved in the refactoring: which service layer of the system the refactored content belongs to, which business module it sits in, who the depending and depended-on parties are, what business scenarios exist, and what the data inputs and outputs of each scenario are. This stage produces outputs, generally design diagrams and documents covering project deployment, business architecture, technical architecture, upstream and downstream service dependencies (strong and weak), the project's internal service layering model, content and function dependency models, and input/output data flows.

  • Project kickoff
    Kickoff is generally done in a meeting: all departments or groups involved in the refactoring are briefed on the work, given the approximate schedule (a rough estimate), and the main owner in each group is clearly designated. Everyone also needs to know which businesses and scenarios the refactoring touches, the rough refactoring approach, the possible business impact, the difficulties, and the steps where bottlenecks may appear.

During

The work involved in this execution step is the most arduous and takes the largest share of the time.

  • Architecture design and review
    This mainly covers the design and review of the standard business architecture, technical architecture, and data architecture; the review is how architectural and business problems are surfaced. It is generally an in-team review. If a review finds that the architecture design cannot be settled, adjustments must be made until the team agrees on the design; only then can work proceed to the next step, and once the review passes, the results should be circulated to the participants by email.

The outputs of this stage: the refactored service deployment, system architecture, business architecture, standard data flows, service layering model, UML diagrams of the functional modules, and so on.

  • Detailed implementation plan and review
    The implementation plan is the most important plan of the execution phase. It governs the subsequent coding, self-testing and joint debugging, integration with dependent parties, QA testing, the offline release-and-rollout plan, the online release-and-rollout plan, the concrete workload, the difficulties, and the bottlenecks. The detailed plan needs to go deep into the whole development, offline testing, and go-live process and into the details of the grayscale scenarios, including the AB grayscale program and the AB verification program.

The most important pieces of the plan are the AB verification program and the AB verification switches, which are the standard basis for judging whether our refactoring is complete. A typical AB verification program looks roughly like this:

At the data entry point, the same input is used to initiate processing requests to both the new and the old flow. When processing completes, each flow writes its result to the log, and an offline program then compares whether the two results agree. The principle to follow: with identical input parameters, the responses must also be identical.
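A sketch of that dual-run entry point; all class and method names are hypothetical, and log stands for any SLF4J-style logger:

public Result handle(Request request) {
    Result oldResult = oldFlow.process(request);       // the old flow stays authoritative
    try {
        Result newResult = newFlow.process(request);   // shadow call into the new flow
        log.info("AB-COMPARE key={} old={} new={}",
                request.getKey(), oldResult, newResult); // the offline job diffs these lines
    } catch (Exception e) {
        log.warn("AB-COMPARE new flow failed key={}", request.getKey(), e);
    }
    return oldResult; // callers only ever see the old flow's answer during verification
}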

The AB program involves two switches. The grayscale switch: only when it is on is the request also sent into the new flow for execution. The execution switch: if the new flow involves write operations, this switch controls whether the write happens in the new flow or in the old one. Before forwarding, the grayscale switch and the execution switch (generally configured in the configuration center so they can be adjusted at any time) need to be written into the thread context, to avoid inconsistent results when the switch is changed in the configuration center mid-request.
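A sketch of pinning both switches into the thread context at the entry point; the Config type stands in for whatever configuration-center client is actually used:

public final class AbContext {

    private static final ThreadLocal<AbContext> CTX = new ThreadLocal<>();

    final boolean grayEnabled;   // route this request into the new flow?
    final boolean writeEnabled;  // let the new flow perform the write operations?

    private AbContext(boolean grayEnabled, boolean writeEnabled) {
        this.grayEnabled = grayEnabled;
        this.writeEnabled = writeEnabled;
    }

    // Read both switches exactly once per request, so a mid-request
    // config change cannot split one request between the two flows.
    public static void snapshot(Config config) {
        CTX.set(new AbContext(config.getBoolean("gray.switch", false),
                              config.getBoolean("write.switch", false)));
    }

    public static AbContext current() { return CTX.get(); }

    public static void clear() { CTX.remove(); } // always call in a finally block
}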

  • Coding, testing, and offline rollout
    This step carries out the coding, unit testing, joint debugging, functional testing, business testing, and QA testing according to the detailed design plan. After everything passes, rehearse the go-live process and the switch-flipping sequence offline, verify the AB program, and check whether the results meet expectations and whether the new flow's code coverage meets the bar for going live. If the offline data samples are too few to cover all scenarios, construct traffic so that every scenario is covered and can be shown to behave as expected. Once offline coverage reaches the target and the AB verification program detects no anomalies, the go-live can proceed.

Afterwards

This stage takes the rehearsed offline process online, in several phases: go-live, ramp-up, fixes, retiring the old logic, and review. The most important and most energy-consuming phase is the ramp-up.

  • The grayscale ramp-up process
    Traffic is gradually shifted to the new flow for observation. The volume can be increased along the steps 1%, 5%, 10%, 20%, 40%, 80%, 100%, so that the new flow progressively covers the code logic (a routing sketch follows this list). Note that at this stage the execution switch stays off, so the new flow performs no real write operations. Once the logic coverage of the new flow meets the requirements and the AB verification results match expectations, the write-operation switch can be turned on step by step to execute the real business.

  • The business execution switch process
    Once the grayscale phase of the new flow meets expectations, the write-operation switch can be turned on gradually, again ramping up by proportion. After the write switch is on, only the new logic performs write operations and the old logic's writes are shut off. During this phase, watch online errors, metric anomalies, user feedback, and other signals closely to make sure the new flow has no problems.
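A sketch of the percentage routing used during such a ramp-up; the names are hypothetical, and grayPercent would come from the configuration center:

boolean routeToNewFlow(String userId, int grayPercent) {
    // Hashing a stable key keeps each user in the same bucket across requests,
    // so raising grayPercent only ever adds users to the new flow.
    int bucket = (userId.hashCode() & 0x7fffffff) % 100; // stable bucket 0..99
    return bucket < grayPercent;                         // steps: 1, 5, 10, 20, 40, 80, 100
}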

Once the ramp-up is complete and a version has stabilized, the old logic and the AB verification program can be taken offline, and the refactoring is done. If possible, hold a refactoring retrospective: check whether each participant met the standards the refactoring required, review the problems encountered along the way and how they were solved, and distill the lessons into methodology so that similar problems are avoided in later work.

Summary

Coding skills

  • Follow the basic principles when writing code, such as the single responsibility principle, and depend on interfaces/abstractions rather than concrete implementations.
  • Strictly follow the coding standards, and use TODO, FIXME, and XXX to flag special comments.
  • Unit tests, functional tests, interface tests, and integration tests are essential tools for writing code.
  • We are the authors of the code, and those who come after us are its readers. Review your code constantly as you write it: be the predecessor who plants trees for successors to rest under, not the one who digs pits for successors to fall into.
  • Be the first person to stop the broken-window effect. Don't think "the code is already bad, there's no point changing it" and just keep piling code on top. If you do, one day someone else's code will disgust you in turn; sooner or later the debt comes due.

Refactoring skills

  • Model and analyze from top to bottom and from the outside in; clarifying the various relationships is the top priority of a refactoring.
  • Refine classes, reuse functions, and focus on core capabilities so that module responsibilities are clear.
  • Depending on an interface beats depending on an abstract class, and depending on an abstraction beats depending on an implementation. If a class relationship can be expressed with composition, there is no need for inheritance.
  • When designing classes, interfaces, and abstract classes, consider the scope modifiers: which members may be overridden, which must not be, and whether the generic bounds are precise.
  • For a large-scale refactoring, prepare designs and plans for every scenario and simulate them offline. An AB verification program must be a hard requirement for going live, so that you can switch between the new and old flows at any time.

Origin: blog.csdn.net/weixin_43829047/article/details/128532537