[Quarkus Technology Series] Practice of Building a Quarkus-Based Cloud Native Microservice Framework (1)

Prerequisites

This series of articles explains how to build and develop "a Java microservice framework optimized for Kubernetes" based on Quarkus. You will learn how to set up a Quarkus microservice environment and scaffolding, develop Quarkus endpoint services, work with system-level and application-level configuration, analyze the Quarkus programming model, build Quarkus application uber-jar files, and integrate the application into a Kubernetes environment.

  1. Learn how to build and develop Quarkus cloud-native microservices from scratch
  2. Analyze the Quarkus programming model and integrate with the Kubernetes environment

Target Audience

Java software developers, system architects, microservice development enthusiasts, operations and deployment engineers, etc.

Current State

In recent years, with the popularization of cloud-native technology, more and more users have begun to run microservice applications in containers. With the rapid growth of microservices, the Spring ecosystem has become the de facto standard Java framework: the Spring Framework and Spring Boot for individual applications, and Spring Cloud for service governance between microservices. The ecosystem is complete, and new components keep emerging.

Java Cloud Native Pain Points

  • The arrival of lightweight container technology makes JVM services look bloated
    • The microservice architecture makes service granularity smaller and smaller, and lightweight, fast-starting applications are better suited to containerized environments. Taking a typical Spring Boot application as an example, the jar package of an ordinary RESTful service is about 30 MB; packaging the JDK and related dependencies into a Docker image produces an image of roughly 140 MB.
    • The image built from an ordinary Go executable is usually no larger than 50 MB. How to slim down a bloated Java application and make it easy to containerize has become a problem that must be solved for Java applications to go cloud native.
  • The arrival of lightweight container technology exposes excessive JVM memory usage
    • The JVM's memory footprint keeps growing, which can lead to frequent full GCs or even OOM errors.
  • Spring Boot microservice applications start more and more slowly (JVM startup speed)
    • From JVM startup to actual application execution, the VM itself and the bytecode files must be loaded; to improve efficiency, the JVM uses JIT (just-in-time) compilation to optimize hot paths in the interpreted bytecode and generate native code through the compiler, and time spent in the JVM's internal garbage collection adds to this as well.

The startup time of a typical Java application is usually measured in seconds, and it is normal for a relatively large application to take several minutes to start. In the past, because Java applications were rarely restarted, the problem of long startup times was rarely exposed.

  • However, in cloud-native scenarios, service granularity becomes very fine and deployments become very frequent
    • Applications are restarted constantly for rolling upgrades or in serverless scenarios, so the long startup time of Java applications has become an urgent problem to solve for Java to go cloud native.

Introduction to Quarkus

  • Quarkus is positioned as a Kubernetes Native Java framework tailored for GraalVM and OpenJDK HotSpot.
  • Quarkus is a Red Hat open source project. With the help of the open source community, it provides an end-to-end Java cloud-native application solution by adapting frameworks widely used in the industry to the characteristics of cloud-native applications.
  • Although it has been open source for a relatively short time, its ecosystem has already reached a usable state. It includes an extension framework and supports frameworks such as Netty, Undertow, Hibernate, and JWT, which is sufficient for developing enterprise-level applications; users can also write their own extensions on top of the extension framework (a minimal endpoint sketch follows this list).
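
As a first taste of the development model covered later in the series, here is a minimal sketch of a Quarkus HTTP endpoint. The package name org.acme, the class name, and the /hello path are placeholder choices, and the jakarta.ws.rs imports assume a recent Quarkus release (older releases use javax.ws.rs):

    package org.acme;

    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    // A minimal JAX-RS style endpoint: there is no explicit server bootstrap code,
    // Quarkus wires up the HTTP layer and discovers the resource at build time.
    @Path("/hello")
    public class GreetingResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String hello() {
            return "Hello from Quarkus";
        }
    }

Started in dev mode (for example with the Maven wrapper: ./mvnw quarkus:dev), the service answers GET requests on /hello with a plain-text greeting.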

Towards Native

For applications that run for a long time, hot code is, thanks to sufficient warm-up, accurately located by HotSpot's detection mechanism and compiled into machine code that the physical hardware can execute directly. In such applications, Java's execution efficiency largely depends on the quality of the code produced by the just-in-time compiler.

The HotSpot virtual machine contains two just-in-time compilers: the client compiler (C1), which compiles quickly but produces less optimized code, and the server compiler (C2), which compiles more slowly but produces higher-quality code. Under the tiered compilation mechanism they usually cooperate with the interpreter, together forming the execution subsystem of the HotSpot virtual machine.
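
As a rough way to observe this cooperation on a HotSpot JDK, the small sketch below makes one method hot enough to be JIT-compiled; running it with the standard diagnostic flag -XX:+PrintCompilation prints each compilation event (compilation levels 1-3 correspond to C1, level 4 to C2):

    public class JitWarmup {

        // A deliberately hot method; once its invocation and loop counters pass
        // HotSpot's thresholds it is compiled first by C1 and later by C2.
        static long sum(int n) {
            long s = 0;
            for (int i = 0; i < n; i++) {
                s += i;
            }
            return s;
        }

        public static void main(String[] args) {
            long total = 0;
            for (int i = 0; i < 200_000; i++) {
                total += sum(1_000);
            }
            System.out.println(total);
        }
    }

    // Example run: java -XX:+PrintCompilation JitWarmup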

New generation just-in-time compiler (Graal VM)

Since JDK 10, HotSpot has included a brand-new just-in-time compiler: the Graal compiler. As the name suggests, it comes from the GraalVM mentioned earlier, and it debuted in the role of a replacement for the C2 compiler.

Problems with the C2 compiler

C2 has a very long history, dating back to Cliff Click's doctoral work. Although this compiler, written in C++, is still effective, it has become so complicated that even Cliff Click himself is unwilling to continue maintaining it.

The Graal compiler itself is written in Java. Its implementation deliberately uses the same "Sea of Nodes" high-level intermediate representation (HIR) as C2, which makes it easier to inherit C2's strengths.

The Graal compiler arrived about 20 years later than C2 and enjoys a substantial latecomer's advantage. While producing compiled code of comparable quality, its development efficiency and extensibility are significantly better than C2's. This means that the excellent code optimization techniques in C2 can easily be ported to Graal, whereas effective optimizations in Graal are extremely difficult to retrofit into C2.

Graal Compiler

In just a few years, Graal's compilation quality caught up with C2, and in some benchmarks it has gradually begun to surpass the C2 compiler.

Graal can perform more complex optimizations than C2, such as partial escape analysis; it also has an aggressive speculative optimization strategy that is easier to apply than C2's, and it supports customizable speculative assumptions.
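
To make escape analysis concrete, here is a contrived sketch: the temporary Point below never escapes the method, so an optimizing JIT (C2 or Graal) may apply scalar replacement and eliminate the allocation entirely; partial escape analysis in Graal can do this even when the object escapes only on a rarely taken branch:

    public class EscapeDemo {

        static final class Point {
            final int x;
            final int y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        // The Point instance is used only inside this method and never escapes,
        // so the compiler is free to replace it with plain local variables.
        static long distanceSquared(int x, int y) {
            Point p = new Point(x, y);
            return (long) p.x * p.x + (long) p.y * p.y;
        }

        public static void main(String[] args) {
            long acc = 0;
            for (int i = 0; i < 1_000_000; i++) {
                acc += distanceSquared(i, i + 1);
            }
            System.out.println(acc);
        }
    }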

The Graal compiler is still young and has not been validated by enough real-world practice, so it still carries the "experimental" label and must be enabled with switch parameters. This is reminiscent of the JDK 1.3 era, when the newly born HotSpot virtual machine also had to be enabled with a switch and likewise started out as a replacement for the Classic VM.
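
For reference, on JDK 10+ HotSpot builds that ship the Graal JIT (availability varies by JDK vendor and version, and later JDK builds no longer include it), the experimental switches look roughly like this; my-app.jar is a placeholder:

    java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -jar my-app.jar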

The future of the Graal compiler is promising. As the newest engine for executing Java virtual machine code, its continuous improvement will inject faster and stronger driving force into both HotSpot and GraalVM.

Summary analysis of GraalVM

GraalVM: to improve efficiency, the JVM uses JIT (just-in-time) compilation to optimize hot interpreted bytecode and generate native code through the compiler, thereby improving application execution efficiency.

GraalVM is a new-generation multi-language JVM and just-in-time compiler developed by Oracle Labs, offering better performance and multi-language interoperability. Compared with the Java HotSpot VM, Graal can improve performance by a factor of 2 to 5 with the help of inlining, escape analysis, and other optimization techniques.
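
As an illustration of the multi-language interoperability (a sketch that assumes the program runs on a GraalVM distribution with the JavaScript language installed), the GraalVM polyglot API can evaluate code from another language inside a Java program:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class PolyglotDemo {
        public static void main(String[] args) {
            // Create a polyglot context and evaluate a JavaScript expression.
            try (Context context = Context.create()) {
                Value result = context.eval("js", "6 * 7");
                System.out.println(result.asInt()); // prints 42
            }
        }
    }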

The static (ahead-of-time) compilation provided by GraalVM can only optimize the closed world visible at compile time; it is powerless against code that relies on reflection, dynamic class loading, or dynamic proxies.

  • To allow everyday Java applications to run correctly, the frameworks and class libraries used by the application must be modified and adapted accordingly (see the sketch after this list).
  • Because Java code uses a large number of class libraries, this workload is considerable. Although GraalVM has been available for more than a year, large-scale Java applications migrated to this platform are still rare.
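
As an illustration of the kind of adaptation involved (a sketch; PayloadDto is a hypothetical class reached only via reflection), Quarkus lets you declare reflectively accessed classes so that the closed-world native-image analysis keeps and registers them:

    import io.quarkus.runtime.annotations.RegisterForReflection;

    // Without this hint, a class that is reached only through reflection
    // (for example via Class.forName("com.example.PayloadDto")) may not be
    // registered for reflection by the native-image build.
    @RegisterForReflection
    public class PayloadDto {
        public String id;
        public String name;
    }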

Source: blog.csdn.net/star20100906/article/details/132257726