Spring Boot (XIV): Reactive Programming and Spring Boot WebFlux Quick Start

1. What is reactive programming

In computing, reactive programming (English: Reactive programming) is a programming paradigm oriented around data streams and the propagation of change. This means that static or dynamic data streams can be expressed easily in the programming language, and the underlying computational model automatically propagates changed values through those data streams.

For example, in an imperative programming environment, a = b + c means that the result of the expression is assigned to a, and changing b or c afterwards has no effect on a. In reactive programming, however, the value of a is updated whenever b or c is updated.
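As a concrete illustration, here is a minimal Reactor sketch (using the Sinks API from recent Reactor versions; the class and variable names are made up for this example) in which a is recomputed and pushed to its subscriber every time b or c emits a new value:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class ReactiveSumDemo {
    public static void main(String[] args) {
        // b and c are streams of values rather than plain variables
        Sinks.Many<Integer> b = Sinks.many().replay().latest();
        Sinks.Many<Integer> c = Sinks.many().replay().latest();

        // a = b + c, recomputed whenever b or c changes
        Flux<Integer> a = Flux.combineLatest(b.asFlux(), c.asFlux(), Integer::sum);
        a.subscribe(v -> System.out.println("a = " + v));

        b.tryEmitNext(1);
        c.tryEmitNext(2);   // prints: a = 3
        b.tryEmitNext(10);  // prints: a = 12
    }
}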

Reactive programming is asynchronous, non-blocking and event-driven; it scales with only a small number of threads started inside the program, rather than scaling horizontally across a cluster.

Imagine a scenario in which everything from the underlying database driver, through the persistence layer, the service layer and the Model in the MVC layer, up to the elements of the front-end user interface, is built with a declarative programming paradigm into a pipeline that can propagate changes. Then we only need to update the data in the database and the corresponding change appears on the user interface, with no need for the front end to poll for the latest data.

Simply put, the programs we used to write were blocking: when a request arrived, a thread was tied up until the task completed and the result was returned. With reactive programming, when a request arrives other threads take care of the processing, and the caller is notified asynchronously once the task has finished.

2. Why use reactive programming

In today's Internet era, web applications constantly face the core challenges of high concurrency and massive data, and performance always has to be considered.

Blocking is one of the killers of performance.

Most people do not consider blocking a big problem; at least, apart from network I/O, reading and writing files and databases feels fast enough, and many of us have been writing blocking code all along.

So exactly how slow are I/O operations?

2.1 Time in the eyes of the CPU

The following content is adapted from https://blog.csdn.net/get_set/article/details/79466402

The CPU could fairly be nicknamed "Mr. Lightning", because it works to its own clock. The hero of our story is a CPU clocked at 2.5 GHz. If it had its own notion of a "second", one tick of its clock would feel like about one second. What does time look like through the eyes of this CPU (a single core)?

Mr. CPU works in a hardware unit called the computing group. A few close colleagues manage to keep up with his rhythm:

  • Mr. CPU himself is extremely quick: a simple instruction takes about one second, and a complex operation may take several instructions.
  • Fortunately his "personal secretary", the L1 cache, reacts fast enough to understand what Mr. CPU means within seconds.
  • The "department secretary", the L2 cache, needs a dozen or so seconds to fetch what Mr. CPU is after, but that is still not too slow.
  • Cooperation with the memory group has become routine: a request for data in memory typically takes 4-5 minutes to locate (memory addressing), which is also bearable, because the L1 cache already holds 80% of the data he wants and the L2 cache covers most of the rest, so the long waits are rare.

Mr. CPU is a typical workaholic: however many tasks pile up, he will work through the night without complaint, but having to wait on someone else is what he dreads. The other groups, especially the disks and network cards in the I/O group, work at an efficiency that is ridiculously low by comparison:

  • Mr. CPU has been complaining about his I/O colleagues for a long time. Every time he asks the SSD for something, it takes 4-5 days just to find it (addressing), and by the time the data has been transferred a few weeks have passed. The mechanical disk is downright outrageous: on average it takes 10 months to locate the data he wants, and reading 1 MB of data can take 20 months. How has such an employee not been laid off yet?!
  • As for the network card, Mr. CPU knows it is trying its best; after all, gigabit networking is expensive. Chatting with colleagues in the same machine room over the gigabit network counts as smooth: a 1 KB letter to a CPU friend on another machine takes seven or eight hours to arrive at best. But once that 1 KB letter is wrapped in layer after layer of envelopes, there is not much room left for actual words. Worse still, the network card's procedure is cumbersome: every conversation starts with a "Hello, can you hear me? - I can hear you, can you hear me? - Yes, let's go!" handshake that takes a very long time, and since they cannot talk face to face there is no way around it. And that is the good case; the truly frightening thing is communicating with colleagues in other cities, where delivering a single message can sometimes take years!

So it is not easy for Mr. CPU to keep himself filled with useful work. Fortunately, his colleagues in the memory group help move data in batches between the caches and the I/O group, which eases the tension somewhat.

On a chart of these times drawn to a linear scale, only the I/O bars are even visible; switching to a logarithmic scale gives a clearer picture:

The ratios in that chart are no longer intuitive, because every tick on the horizontal axis is an order of magnitude, and that is exactly the point: I/O is slower than the CPU and memory by several orders of magnitude. For large web applications under high concurrency, this is why caching matters so much; a higher cache hit rate directly translates into better performance.

Faced with this gap, there are broadly two ways to improve performance:

  1. Parallelization: use more threads and more hardware resources;
  2. Asynchronization: improve efficiency on top of the existing resources.

3. Basic concepts

Before getting to the main topic, a few concepts need to be introduced:

3.1 Backpressure (back pressure)

Backpressure is a common flow-control strategy: it ensures that a publisher which produces elements too quickly does not overwhelm its subscribers, by letting a subscriber signal how many elements it is currently able to buffer and process.
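To make this concrete, here is a minimal Reactor sketch (the class name is made up for this example) in which the subscriber uses request(n) to tell the publisher how many elements it is willing to receive at a time:

import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class BackpressureDemo {
    public static void main(String[] args) {
        Flux.range(1, 10).subscribe(new BaseSubscriber<Integer>() {
            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                request(2);          // initially ask for only 2 elements
            }

            @Override
            protected void hookOnNext(Integer value) {
                System.out.println("received: " + value);
                request(1);          // pull the next element only after this one is handled
            }
        });
    }
}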

3.2 Reactive Streams

A reactive stream generally consists of the following roles (the corresponding interfaces are sketched below):

  • Publisher: publishes elements to subscribers
  • Subscriber: consumes elements
  • Subscription: created by the Publisher at subscription time and shared with the Subscriber
  • Processor: sits between publisher and subscriber and processes the elements flowing through
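For reference, these roles correspond to the four interfaces of the Reactive Streams specification (package org.reactivestreams), shown here in slightly abbreviated form:

// Publisher pushes elements to a Subscriber once it subscribes
public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

// Subscriber consumes elements and is notified of completion or errors
public interface Subscriber<T> {
    void onSubscribe(Subscription s);
    void onNext(T t);
    void onError(Throwable t);
    void onComplete();
}

// Subscription links the two; request(n) is where backpressure happens
public interface Subscription {
    void request(long n);
    void cancel();
}

// Processor is both a Subscriber and a Publisher, sitting in the middle of a pipeline
public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}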

3.3 Mono and Flux

  • Mono: an implementation of Publisher that emits 0 or 1 element
  • Flux: an implementation of Publisher that emits 0 to N elements (see the example below)
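A minimal sketch of creating and subscribing to both types (the values are chosen arbitrarily):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class MonoFluxDemo {
    public static void main(String[] args) {
        Mono<String> mono = Mono.just("hello webflux");   // at most one element
        Flux<Integer> flux = Flux.just(1, 2, 3)           // zero to N elements
                .map(i -> i * 2);                         // operators transform the stream

        mono.subscribe(System.out::println);              // prints: hello webflux
        flux.subscribe(System.out::println);              // prints: 2, 4, 6
    }
}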

4. Spring WebFlux

Spring WebFlux is built on top of Reactor. Spring Boot 2.0 includes the new spring-webflux module, which contains support for reactive HTTP and WebSocket clients as well as for REST, HTML, WebSocket and other styles of interaction. Generally speaking, Spring MVC is used for synchronous processing and Spring WebFlux for asynchronous processing.

Spring WebFlux offers two programming models: one is annotation-based and resembles Spring MVC; the other uses functional endpoints (a small sketch of that style follows).
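The quick start below uses the annotation model. For comparison, a functional-endpoint route might look roughly like this (the /hello route and the class name are made up for this sketch):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;
import static org.springframework.web.reactive.function.server.ServerResponse.ok;

@Configuration
public class HelloRouter {

    // routes GET /hello to a handler function, no @Controller needed
    @Bean
    public RouterFunction<ServerResponse> helloRoute() {
        return route(GET("/hello"),
                request -> ok().body(Mono.just("hello webflux"), String.class));
    }
}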

4.1 Applicability

The official diagram makes it very clear that WebFlux and MVC have an overlapping range of applicability. A few things to keep in mind:

  • If Spring MVC already meets the scenario, there is no need to switch to WebFlux.
  • Pay attention to container support; see the embedded containers listed below.
  • In a microservice architecture, WebFlux and MVC can be mixed. IO-intensive services in particular are good candidates for WebFlux.

4.2 Embedded containers

Applications are launched in the same way as with the rest of Spring Boot, but WebFlux starts on Netty by default and uses the default port 8080. Support for Jetty, Undertow and other containers is also provided; developers add the corresponding starter dependency to configure and use the desired embedded container.

Note, however, that a Servlet container must support Servlet 3.1+, such as Tomcat or Jetty; alternatively a non-Servlet runtime such as Netty or Undertow can be used.

4.3 Databases

At the moment, the only databases with reactive support are MongoDB, Redis, Cassandra and Couchbase.

4.4 Quick Start

Project dependencies

Listing: spring-boot-webflux/pom.xml
***

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

Service class

Listing: springboot-webflux/src/main/java/com/springboot/springbootwebflux/service/impl/UserServiceImpl.java
***

package com.springboot.springbootwebflux.service.impl;

import java.util.HashMap;
import java.util.Map;

import org.springframework.stereotype.Service;

import reactor.core.publisher.Mono;
// imports for User and UserSerivice depend on where those classes live in the project

@Service
public class UserServiceImpl implements UserSerivice {

    // an in-memory "database" used in place of a real reactive datastore
    private static Map<Long, User> map = new HashMap<>();

    static {
        map.put(1L, new User(1L, "www.geekdigging.com", 18));
        map.put(2L, new User(2L, "极客挖掘机", 28));
    }

    @Override
    public Mono<User> getUserById(Long id) {
        // justOrEmpty avoids the NullPointerException that Mono.just(null) would throw for an unknown id
        return Mono.justOrEmpty(map.get(id));
    }
}
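The User entity and the UserSerivice interface are not shown in the original listings; shapes consistent with how they are used above would look something like this (the field names are assumptions):

// UserSerivice.java (interface name kept as spelled in the original listings)
import reactor.core.publisher.Mono;

public interface UserSerivice {
    Mono<User> getUserById(Long id);
}

// User.java
public class User {
    private Long id;
    private String name;
    private int age;

    public User(Long id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public Long getId() { return id; }        // getters are needed so the JSON
    public String getName() { return name; }  // serializer can write the response body
    public int getAge() { return age; }
}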

Controller class

Listing: springboot-webflux/src/main/java/com/springboot/springbootwebflux/controller/UserController.java
***

package com.springboot.springbootwebflux.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
// imports for User and UserSerivice depend on the project's package layout

@RestController
public class UserController {

    @Autowired
    UserSerivice userSerivice;

    // returns Mono<User> instead of a plain User, so the response is written reactively
    @GetMapping("/getUserById/{id}")
    public Mono<User> getUserById(@PathVariable Long id) {
        return userSerivice.getUserById(id);
    }
}

As the example above shows, the development style is not very different from Spring MVC as we used it before; the main difference is in the return types of the methods, which are now Mono or Flux.
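One way to verify the endpoint without blocking is WebFlux's WebTestClient; a minimal sketch (the test class, and the expected name field from the assumed User shape above, are not part of the original post) might be:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.reactive.AutoConfigureWebTestClient;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.reactive.server.WebTestClient;

@SpringBootTest
@AutoConfigureWebTestClient
class UserControllerTest {

    @Autowired
    private WebTestClient webTestClient;

    @Test
    void getUserById() {
        // call the reactive endpoint and assert on the JSON response
        webTestClient.get().uri("/getUserById/{id}", 1L)
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .jsonPath("$.name").isEqualTo("www.geekdigging.com");
    }
}

Alternatively, simply start the application and open http://localhost:8080/getUserById/1 in a browser to see the JSON response.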

5. Sample Code

Sample Code - GitHub

Sample Code - Gitee

6. References

https://blog.csdn.net/get_set/article/details/79466402

http://www.ityouknow.com/springboot/2019/02/12/spring-boot-webflux.html

https://www.cnblogs.com/limuma/p/9315442.html
