This is indeed the "Java performance optimization practice document" recommended by an Alibaba P8 expert, and it is genuinely comprehensive.

performance optimization

As we all know, Alibaba is extremely strong at performance optimization, and they have many unique optimization schemes of their own. After leaving Alibaba, this P8 boss (with 9 years of development experience) brought out these internal Java performance optimization practice notes. The content covers design optimization, Java program optimization, parallel program development and optimization, JVM tuning, Java performance tuning tools, and more, and these optimization solutions have also been compiled into a book by this P8 boss. Free for everyone for a limited time!

The PDF shared this time was written by three foreign authors, who have worked hard to present a more complete knowledge framework for Java performance optimization.

To borrow a phrase from the PDF: "You don't have to be an engineer to be a racing driver, but you do have to have mechanical sympathy." The phrase "mechanical sympathy" comes from the great racing driver Jackie Stewart, a three-time Formula One World Champion. He believed that the best drivers understand enough about how the machine works that they can work in harmony with the car. You don't have to be intimately familiar with The Java Language Specification, and you don't have to be R大 (R大 is a god among us), but for the JVM, you must know how it compiles, runs, and collects garbage.

It's an exciting time for Java developers: there have never been more opportunities to build efficient, responsive applications on the Java platform. Let's get started.

P.S.: Due to space limitations, not all of the notes can be shown here, so the main content is presented in the form of screenshots. Those who need the full version can get it for free at the end of the article.

First, look at the table of contents

Second, look at the main content

Chapter 1 addresses optimization and performance. Optimizing Java (or any other language) code for performance is often treated as a dark art. There's something mystical about performance analysis: it is often seen as a craft practiced by a lone hacker after long, deep thought. (The lone hacker is also one of Hollywood's favorite depictions of computers and the people who operate them.) The picture goes like this: a single person can gain deep insight into a system and come up with a magical solution that makes the computer run faster.

Accompanying this picture is an unfortunate but common situation: software teams don't take performance seriously enough. The scenario that then emerges is that the team analyzes the system only when it is already in trouble, so a performance "hero" is needed to save the day. The reality, however, is a little different.

The truth is, performance analysis is an odd mix of hard empiricism and soft human psychology. What matters is, on the one hand, the absolute numbers from observable measurements, and on the other, how those numbers are perceived by end users and stakeholders. How to resolve this apparent paradox is the subject of the remainder of this article.

This chapter begins with a discussion of what Java performance is and isn't; it then introduces the basic topics of empirical science and measurement, and the basic vocabulary, observables, and cases that a good performance practice will use. Next, we'll start discussing some of the major aspects of the JVM and get ready to understand what makes JVM-based performance optimization so complex.

Chapter 2: An Overview of the JVM; this chapter introduces how the JVM executes Java, laying the groundwork for later chapters that explore these topics in more depth. In particular, Chapter 9 covers bytecode in depth. Readers can choose to read this chapter now, or reread it in conjunction with Chapter 9 after understanding the other topics.

This chapter briefly introduces the overall structure of the JVM. While we've only been able to touch on some of the most important topics, the fact is that almost every topic mentioned here has rich and complete content behind it that deserves further study.

Chapter 3 discusses some details of how the operating system and hardware work. This provides the necessary context for Java performance analysts to understand the observations. We'll also look at the timing subsystem in more detail, which will serve as a complete example of how the virtual machine and native subsystems interact.

Chapter 3 Hardware and Operating Systems; In the Java world, the design of the JVM allows it to use additional processor cores, even for single-threaded application code. This means that Java applications have gained a significant performance advantage from hardware trends compared to other environments.

With Moore's Law dead, attention has once again turned to the relative performance of software. Performance-conscious engineers need to understand at least the basics of modern hardware and operating systems to ensure they get the most out of the hardware, not the other way around.
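To make the interaction between the JVM and native subsystems a little more concrete, here is a minimal sketch (not from the notes) contrasting the JVM's two main clock calls, both of which ultimately delegate to OS-level timing facilities; the granularity you observe will vary by platform.

```java
// A minimal sketch of the two main JVM clock calls, which delegate to
// native OS timing facilities.
public class ClockProbe {
    public static void main(String[] args) {
        // Wall-clock time: milliseconds since the Unix epoch; can jump if
        // the system clock is adjusted (NTP, manual changes).
        long wallClock = System.currentTimeMillis();

        // Monotonic time: nanoseconds from an arbitrary origin; suitable
        // for measuring elapsed time, not for dates.
        long t0 = System.nanoTime();
        long t1 = System.nanoTime();

        System.out.println("currentTimeMillis() = " + wallClock);
        // The smallest observable delta hints at the clock granularity the
        // OS exposes to the JVM; the actual value varies by platform.
        System.out.println("nanoTime() delta between two calls = " + (t1 - t0) + " ns");
    }
}
```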

The next chapter will introduce the core methodology of performance testing, and discuss the main types of performance testing, the tasks that need to be undertaken, and the entire life cycle of performance work. We'll also enumerate some common best practices and anti-patterns in the performance analysis world.

Chapter 4 Performance Testing Patterns and Antipatterns; the second half of this chapter outlines some common antipatterns that can plague performance testing or teams, and illustrates solutions for how to refactor to prevent them from becoming problems for your team.

When evaluating performance results, be sure to process the data in an appropriate manner and avoid getting bogged down in unscientific, subjective thinking. This chapter describes some types of testing, testing best practices, and accompanying antipatterns in performance analysis.

The next chapter examines low-level performance measurements, pitfalls of microbenchmarking, and some statistical techniques for processing raw results from the JVM.

Chapter 5 Microbenchmarking and Statistics; this chapter considers the specifics of directly measuring Java performance numbers. The dynamic nature of the JVM means that performance numbers are often more difficult to work with than many developers expect. As a result, many inaccurate or misleading performance numbers appear on the Internet.

A major goal of this chapter is to make sure you are aware of these possible pitfalls and only generate performance numbers that you and others can rely on. In particular, measuring small chunks of Java code (microbenchmarking) is subtle and difficult to get right, and that's what this chapter explores, along with how performance engineers should use it correctly.
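As a taste of what careful microbenchmarking looks like, here is a minimal sketch using JMH (the OpenJDK Microbenchmark Harness), the de facto standard harness for this kind of measurement; the benchmark itself (SumBenchmark) is an invented example and assumes the JMH dependencies and annotation processor are on the build path.

```java
// A minimal JMH sketch. The harness handles warmup, forking, and
// statistics, which hand-rolled timing loops almost always get wrong.
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SumBenchmark {

    private int[] data;

    @Setup
    public void setUp() {
        data = new int[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public long sumArray() {
        long sum = 0;
        // Returning the result lets JMH consume it, so the JIT cannot
        // eliminate the loop as dead code.
        for (int value : data) {
            sum += value;
        }
        return sum;
    }
}
```

JMH benchmarks are normally run through the org.openjdk.jmh.Main runner or a build-tool plugin rather than a plain main method.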

Chapter 6, Understanding Garbage Collection; Garbage collection has been a hot topic of discussion within the Java community since the platform's inception. This chapter introduces the key concepts a performance engineer needs to understand in order to work effectively with the JVM's garbage collection subsystem. These concepts include:

  • mark-and-sweep collection;
  • the runtime representation of objects inside HotSpot;
  • the weak generational hypothesis;
  • HotSpot's memory subsystem and its implementation;
  • the parallel collectors;
  • allocation and its central role.

The next chapter discusses tuning, monitoring, and profiling garbage collection. Some of its topics have already appeared in this chapter, especially allocation and effects such as premature promotion. These topics are also particularly important for the goals and topics that follow, so it may be helpful to refer back to this chapter often.
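As a tiny illustration of these ideas, the hedged sketch below (not from the notes) allocates mostly short-lived objects while retaining a few long-lived ones; run it with -verbose:gc (or -Xlog:gc on JDK 9 and later) to watch young collections and, eventually, promotion into the old generation.

```java
// A small illustrative sketch: most objects here die young, while a few
// survive and are eventually promoted to the old generation.
import java.util.ArrayList;
import java.util.List;

public class GenerationalDemo {
    public static void main(String[] args) {
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            // Short-lived garbage: becomes unreachable immediately
            // (weak generational hypothesis: most objects die young).
            byte[] temp = new byte[1024];

            // A small fraction of objects is retained; if they survive
            // enough young collections they are promoted to the old gen.
            if (i % 10_000 == 0) {
                survivors.add(temp);
            }
        }
        System.out.println("Retained " + survivors.size() + " long-lived objects");
    }
}
```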

Chapter 7 Advanced Topics of Garbage Collection; The previous chapter introduced the basic theory of Java Garbage Collection. Using this as a starting point, this chapter further examines the theory of modern Java garbage collectors. There are many unavoidable tradeoffs in this area that can guide an engineer in how to choose a collector.

First, this chapter introduces and provides insight into the other collectors provided by the HotSpot JVM, including the low-pause, mostly concurrent collector (CMS) and the modern general-purpose collector (G1).

In addition, some less common collectors will be considered, including:

  • Shenandoah
  • C4
  • the Balanced collector
  • legacy HotSpot collectors

Not all of these collectors run on the HotSpot virtual machine; we will also discuss the collectors of two other virtual machines introduced in Section 2.6: IBM J9 (a JVM from IBM that was previously closed source and is gradually being open sourced) and Azul Zing (a proprietary JVM).

Chapter 8 Garbage Collection Logging, Monitoring, Tuning, and Tools; this chapter scratches the surface of the art of garbage collection tuning. Most of the techniques demonstrated here are specific to individual collectors, but there are some basic techniques that are generally applicable. This chapter also introduces some basic principles and some useful tools for dealing with garbage collection logs.
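As a complement to log-based analysis, the following minimal sketch shows the in-process view using the standard java.lang.management API; the collector names it prints depend on which collector the JVM was started with (for example, when running with -XX:+UseG1GC), and the class name GcStats is just an illustrative choice.

```java
// A minimal sketch of in-process GC monitoring using the standard
// java.lang.management API.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Allocate a little garbage so at least one collection is likely.
        for (int i = 0; i < 100_000; i++) {
            String s = new String("garbage-" + i);
        }

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```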

The next chapter discusses another major subsystem of the JVM: the execution of application code. We'll start with an overview of interpreters, and then build on that to discuss JIT compilation, including its relationship to standard (or AOT) compilation.

Chapter 9, Code Execution on the JVM; for many applications, the simple tuning techniques for code caching demonstrated in this chapter are sufficient. But for applications that are particularly performance-sensitive, a deeper exploration of JIT behavior may be required. The next chapter describes some tools and techniques for tuning more demanding applications.
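For readers who want to peek at code cache occupancy from inside an application, here is a small sketch using the standard memory pool MXBeans; note that the matched pool names are an assumption that varies by JVM version (a single "Code Cache" pool on older HotSpot builds, segmented "CodeHeap" pools on newer ones), and the cache itself can be sized with -XX:ReservedCodeCacheSize.

```java
// A small sketch that prints the usage of the JIT code cache memory
// pool(s); pool names differ across JVM versions.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            // Matches "Code Cache" (older HotSpot) and "CodeHeap ..." (newer).
            if (name.contains("Code")) {
                long used = pool.getUsage().getUsed();
                long max = pool.getUsage().getMax();
                System.out.printf("%s: %d / %d bytes used%n", name, used, max);
            }
        }
    }
}
```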

Chapter 10, Understanding Just-in-Time Compilation; this chapter provides an in-depth look at the inner workings of the JIT compiler in the JVM. Most of the content applies directly to HotSpot, but is not guaranteed to be consistent with other JVM implementations.

As we have mentioned, the research related to JIT compilation is quite deep. Not only the JVM but also many other modern programming environments use JIT compilation, so many of these JIT techniques apply to other JIT compilers as well.
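As a simple, hedged illustration of how to observe the JIT at work on HotSpot, the sketch below (an invented example) runs a small method hot enough to be compiled; start it with -XX:+PrintCompilation to see the method appear in the compilation log, and add -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining to see inlining decisions.

```java
// A minimal sketch for observing JIT behavior: run with
//   java -XX:+PrintCompilation HotMethodDemo
// and watch computeHash appear in the compilation log once it becomes hot.
public class HotMethodDemo {

    // Small, frequently called method: a typical candidate for JIT
    // compilation and inlining after enough invocations.
    static int computeHash(int x) {
        return (x * 31) ^ (x >>> 7);
    }

    public static void main(String[] args) {
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) {
            acc += computeHash(i);
        }
        // Use the result so the work cannot be optimized away entirely.
        System.out.println("acc = " + acc);
    }
}
```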

Chapter 11 Java Language Performance Techniques; this chapter discusses some of the performance issues of the standard Java Collections API, as well as key concerns when dealing with domain objects.

Finally, we explore two other application performance considerations that are closely tied to the platform level: finalization and method handles. Although many developers do not encounter these two concepts in their daily work, for engineers concerned about performance, understanding and mastering them can enrich their technical toolbox.
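For readers unfamiliar with method handles, here is a minimal sketch of the standard java.lang.invoke API; the lookup target (String.length) is chosen purely for illustration.

```java
// A minimal sketch of the method handles API (java.lang.invoke): looks up
// String.length reflectively, but with a strongly typed handle.
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandle length = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));

        // invokeExact requires the call site signature to match exactly:
        // receiver String, return int.
        int len = (int) length.invokeExact("performance");
        System.out.println("length = " + len);
    }
}
```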

The next chapter continues with a discussion of several important open source libraries, including those that provide alternatives to the standard collection classes, as well as logging and related issues.

Chapter 12 Concurrency Performance Techniques; throughout the history of computing so far, software developers have typically written code in a sequential style, and programming languages and hardware generally only provided the ability to process one instruction at a time. In many cases, people enjoyed the so-called "free lunch" of improving application performance simply by buying the latest hardware: the growing number of transistors available on a chip led to better, more powerful processors for processing those instructions.

Many readers have encountered situations where moving software to a larger or newer machine can solve a capacity problem without spending money to find the underlying problem or consider a different programming paradigm.

Chapter 13, Profiling; the term profiling is not very uniformly used in the programmer community. In fact, there are many possible profiling methods, the most common of which are the following two:

  • execution profiling
  • allocation (memory) profiling

This chapter will cover both topics. Focusing first on execution profiling, we will use this topic to introduce the tools available for profiling programs. Afterwards, memory profiling will be introduced to see how various tools provide this capability.
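To make the idea of execution (sampling) profiling concrete, here is a deliberately toy sketch, not a real profiler: it periodically captures stack traces with Thread.getAllStackTraces() and counts the topmost frames. Production profilers handle overhead, accuracy, and safepoint bias far more carefully.

```java
// A toy sampling sketch (illustrative only): periodically captures stack
// traces of a worker thread and counts the topmost frames.
import java.util.HashMap;
import java.util.Map;

public class MiniSampler {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> hotFrames = new HashMap<>();

        // Some busy work to sample, running in a background thread.
        Thread worker = new Thread(() -> {
            double x = 0;
            for (long i = 0; i < 500_000_000L; i++) {
                x += Math.sqrt(i);
            }
            System.out.println("work done: " + x);
        }, "worker");
        worker.start();

        // Take a sample every 10 ms while the worker is alive.
        while (worker.isAlive()) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (e.getKey() == worker && stack.length > 0) {
                    String frame = stack[0].getClassName() + "." + stack[0].getMethodName();
                    hotFrames.merge(frame, 1, Integer::sum);
                }
            }
            Thread.sleep(10);
        }

        hotFrames.forEach((frame, count) ->
                System.out.println(frame + " sampled " + count + " times"));
    }
}
```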

Chapter 14, High-Performance Logging and Messaging Systems; this chapter begins by asking to what extent Java and the JVM can be used for high-throughput applications. Writing low-latency, high-throughput applications in any language is very difficult, but of all the languages available, Java offers the best tooling and productivity. At the same time, Java and the JVM add another level of abstraction that we need to manage and, in some cases, circumvent. It's also important to consider hardware, JVM performance, and lower-level issues.
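One basic habit from this space can be shown in a short, hedged sketch: avoid building log messages (and the allocation that goes with them) on the hot path unless the level is actually enabled. java.util.logging is used here only as a stand-in for whichever logging framework an application actually uses, and handleOrder is an invented example.

```java
// A small sketch of one basic low-latency logging habit: don't build the
// log message, and don't allocate, unless the level is actually enabled.
import java.util.logging.Level;
import java.util.logging.Logger;

public class HotPathLogging {
    private static final Logger LOG = Logger.getLogger(HotPathLogging.class.getName());

    static void handleOrder(long orderId, double price) {
        // Guard first: if FINE is disabled, no string concatenation or
        // boxing happens on this latency-sensitive path.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("order " + orderId + " priced at " + price);
        }
        // ... actual order handling would go here ...
    }

    public static void main(String[] args) {
        for (long i = 0; i < 5; i++) {
            handleOrder(i, i * 1.5);
        }
    }
}
```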

Chapter 15 Java 9 and the Future Direction of Java; Java/JVM performance is a very dynamic field, and in this chapter we saw that progress is still being made in many areas. There are many other projects that we don't have time to mention, including Java/native code interaction (Project Panama) and new garbage collectors (such as Oracle's ZGC).

Therefore, this article is not exhaustive, as performance engineers still have a lot to learn. Nevertheless, we hope that it can help readers understand the world of Java performance, and also provide some signposts for readers' performance journey.

P.S.: Due to space limitations, not all of the notes can be shown here, so the main content is presented in the form of screenshots. Friends who need the full version can scan the code below to get it for free.


Origin blog.csdn.net/Trouvailless/article/details/131187100