Java Advanced 3 - Error-prone Knowledge Points (to be updated)
This chapter is the continuation of Java Advanced 2 - Error-prone Knowledge Points; the previous chapter introduced framework- and middleware-related interview questions, and this chapter mainly records common interview questions about project-deployment middleware and network performance optimization.
Article directory
15. Docker
Refer to Common Docker Commands (taking Anaconda as an example to build an environment).
-
[Q] How does docker pull the image?
Note : docker pull <image>:<tag>, e.g. docker pull continuumio/anaconda3 # pulls the latest version by default
-
[Q] How does docker view, find, and delete local images?
Note : docker images -a # list local images; docker search <image> # search for an image; docker rmi <image> # delete a local image
-
[Q] How does docker update the local image?
Note : docker run -t -i <image>:<tag> /bin/bash, e.g. docker run -t -i ubuntu:15.10 /bin/bash
-
[Q] How does docker instantiate a container from an image? (the docker pull <image>:<tag> step can be skipped if the image is already local); refer to Note on modifying the mapped IP address / domain / port number of a Docker container:
docker run -it --name <container> <image>:<tag> /bin/bash, e.g. docker run -it --name="anaconda" -p 8888:8888 continuumio/anaconda3 /bin/bash
The meanings of each parameter are as follows:
- -i : interactive operation.
- -t : allocate a terminal.
- --name="anaconda" : give the container a name.
- -p 8888:8888 : map container port 8888 to 0.0.0.0:8888 on the local machine (note that a docker container has no independent ip; its ip is the same as the host's).
- /bin/bash : the command placed after the image name; here we want an interactive shell, so we use /bin/bash.
-
[Q] How does docker start and stop the container?
Note : start a container: docker start <container name/id>; stop a container: docker stop <container name/id>
-
Enter a running container: docker exec -it <container id> /bin/bash
-
[Q] How does docker implement data transfer between the host machine and the docker container?
Note :
- Copy a file from the docker container to the host (copies test.json from container b7200c1b6150 to /tmp/ on the host; entered on the Ubuntu command line):
$ docker cp b7200c1b6150:/opt/gopath/src/github.com/hyperledger/fabric/test.json /tmp/
- Copy a file from the host to the docker container (copies requirements.txt from the host into /home of container 9cf7b3c196f3; entered on the host command line):
$ docker cp /home/wangxiaoxi/Desktop/requirements.txt 9cf7b3c196f3:/home/
-
[Q] How does docker view and delete local containers?
Note : delete a container: docker rm -f <container id>; list all containers: docker ps -a
-
[Q] How does docker package containers into images?
Note : docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
# OPTIONS:
# -a : author of the committed image
# -c : apply Dockerfile instructions to create the image
# -m : commit message
# -p : pause the container while committing
e.g. docker commit -a "wangxiaoxi" -m "fallDetection_toolkit_env" 2b1ad7022d19 fall_detection_env:v1
-
[Q] How does docker export containers as tar and import them from tar?
Note : export a container snapshot: docker export 1e560fca3906 > ubuntu.tar; import a container snapshot: cat docker/ubuntu.tar | docker import - test/ubuntu:v1
Here docker import imports ubuntu.tar as the image test/ubuntu:v1.
-
[Q] How to write a docker Dockerfile, and how to use a Dockerfile to build a local image? Refer to Creating Docker images with commit and dockerfile, and Dockerfile explained in detail.
Note :
- Common commands:
- FROM : base image; which image the current new image is based on
- MAINTAINER : the name and email address of the image maintainer
- RUN : commands that need to be run when the image is built
- EXPOSE : the port the current container exposes to the outside world
- WORKDIR : the default working directory the terminal lands in after the container is created
- ENV : used to set environment variables during the image build
- ADD : copy files from the host directory into the image; the ADD command automatically handles URLs and decompresses tar archives
- COPY : similar to ADD; copies files and directories into the image (COPY src dest or COPY ["src", "dest"])
- VOLUME : container data volume, used for data storage and persistence
- CMD : specify a command to run when the container starts; there can be multiple CMD instructions in a Dockerfile, but only the last one takes effect, and CMD is replaced by the arguments passed after docker run
- ENTRYPOINT : specify a command to run when the container starts; like CMD, ENTRYPOINT specifies the container's startup program and arguments
- ONBUILD : a command run when building a Dockerfile that inherits from this image; the parent image's ONBUILD is triggered after the parent is inherited by a child
- Building an image from a Dockerfile: docker build -f mydockerfile -t mycentos:0.1 . (-f gives the dockerfile path, -t gives the image tag; '.' is the build context path - never forget the trailing '.', otherwise docker reports "docker build" requires exactly 1 argument.)
-
[Q] How does docker package the image into a tar package?
Note : package an image: docker save -o <file>.tar <image>, e.g. docker save -o /media/wangxiaoxi/新加卷/docker_dir/docker/test.tar hello-world; restore an image: docker load -i <backup file>.tar
16. Netty (core: channelPipeline, a doubly linked list (chain of responsibility); each node of the list uses promise (wait/notify event listeners))
Refer to Dark Horse Netty Notes, Shang Silicon Valley Netty Notes, [Hard Core] Live Netty Knowledge Points for a Month, [Reading Notes] Java Game Server Architecture in Action; code reference: kebukeYi / book-code
-
Note:
-
Basic concept :
-
Blocking : Wait for the data in the buffer to be ready before processing other things, otherwise wait there all the time.
-
Non-blocking : When our process accesses the data buffer, if the data is not ready it returns immediately without waiting; if the data is ready, it likewise returns immediately.
-
Synchronization : When a process/thread is executing a certain request , if the request takes a while to return information , then the process/thread will wait until the return information is received before continuing to execute.
-
Asynchronous : The process does not need to wait for the processing result of a certain request , but continues to perform the following operations. When the request is processed, the process can be notified to process through the callback function .
Blocking vs. synchronous (and non-blocking vs. asynchronous) describe the same situations, but with different emphasis: blocking emphasizes state, while synchronization emphasizes process.
-
-
Blocking IO and non-blocking IO: ( BIO vs NIO )
-
BIO(Blocking IO):
-
The traditional java.io package , which is implemented based on the stream model, provides some of the IO functions we are most familiar with, such as File abstraction, input and output streams, and so on. The interactive mode is synchronous and blocking . That is to say, when reading the input stream or writing the output stream, the thread will be blocked there until the reading and writing actions are completed, and the calls between them are in a reliable linear order.
-
In Java network communication, the Socket and ServerSocket sockets are implemented based on blocking mode.
-
-
NIO(Non-Blocking IO):
-
The java.nio package introduced in Java 1.4 provides abstractions such as Channel, Selector, and Buffer, supporting buffer-oriented, channel-based I/O operations.
-
NIO provides two socket channel implementations, SocketChannel and ServerSocketChannel, corresponding to Socket and ServerSocket in the traditional BIO model; both support blocking and non-blocking modes. For high-load, high-concurrency (network) applications, NIO's non-blocking mode should be used for development.
-
-
BIO compared with NIO:
- communication: BIO is stream-oriented, NIO is buffer-oriented
- processing: BIO uses blocking IO, NIO uses non-blocking IO
- event trigger: BIO has none, NIO uses a Selector
-
-
-
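To make the blocking/non-blocking contrast above concrete, here is a small self-contained sketch (not from the original notes) using a plain java.nio Pipe in place of a network socket: in non-blocking mode, read() on a channel with no data returns 0 immediately instead of parking the thread.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class NonBlockingReadDemo {
    // returns {bytes read before any data was written, bytes read after "hi" was written}
    public static int[] demo() {
        try {
            Pipe pipe = Pipe.open();                 // in-memory channel pair, stands in for a socket
            Pipe.SourceChannel source = pipe.source();
            source.configureBlocking(false);         // non-blocking mode: read() never waits

            ByteBuffer buf = ByteBuffer.allocate(16);
            int before = source.read(buf);           // no data yet -> returns 0 immediately

            pipe.sink().write(ByteBuffer.wrap("hi".getBytes()));
            int after = 0;
            while (after < 2) {                      // poll until both bytes have arrived
                after += source.read(buf);
            }
            source.close();
            pipe.sink().close();
            return new int[]{before, after};
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println("read before write: " + r[0] + " bytes, after write: " + r[1] + " bytes");
    }
}
```

A blocking-mode channel would instead park the thread inside read() until data arrived, which is exactly the behaviour BIO forces onto one thread per connection.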
Note:
-
The management of the number of threads in the game business processing framework needs to consider the type of task: I/O-intensive, computing-intensive, or both; if there are multiple types of tasks, you need to consider using multiple thread pools :
-
Business processing is computationally intensive (such as the calculation of total combat power in the game, battle report inspection, and business logic processing. If there are N processors, it is recommended to allocate N+1 threads)
-
Database operations are IO-intensive , such as database and Redis read and write, network I/O operations (communication between different processes), disk I/O operations (log file writing), etc.
Allocating two independent thread pools can make business processing unaffected by database operations
-
-
When using threads, the corresponding thread pool must be used strictly according to the task type. In game-server development it is strictly regulated that developers cannot create new threads at will; special cases require an explicit explanation and a usability evaluation, to prevent thread creation from spreading to so many places that it becomes uncontrollable.
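The "separate pools per task type" advice above can be sketched as follows; the pool sizes and task bodies are illustrative assumptions, not values from the original notes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SeparatePools {
    static final int N = Runtime.getRuntime().availableProcessors();

    // submit one CPU-bound and one IO-bound task, each to its own dedicated pool
    public static String runOnce() {
        ExecutorService cpuPool = Executors.newFixedThreadPool(N + 1); // compute-bound: ~N+1 threads
        ExecutorService ioPool = Executors.newFixedThreadPool(2 * N);  // IO-bound: more threads, they mostly wait
        try {
            Future<Long> power = cpuPool.submit(() -> {
                long sum = 0;                        // stand-in for a combat-power calculation
                for (int i = 0; i < 1_000_000; i++) sum += i;
                return sum;
            });
            Future<String> row = ioPool.submit(() -> {
                Thread.sleep(50);                    // stand-in for a blocking DB/Redis query
                return "player#1";
            });
            return row.get() + " power=" + power.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            cpuPool.shutdown();
            ioPool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce());
    }
}
```

Because the database task runs in its own pool, a slow query never starves the compute pool's workers, which is the point made above.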
-
-
[Q] What is Netty? Why learn Netty? (asynchronous, event-driven network framework)
Note:
-
Netty is an asynchronous, event-driven network application framework built on top of java.nio (the client SocketChannel is wrapped as NioSocketChannel, and the server ServerSocketChannel as NioServerSocketChannel); it is used to quickly develop maintainable, high-performance network servers and clients. Netty's status among Java network application frameworks is like the status of the Spring framework in JavaEE development.
-
To meet their network-communication needs, the following frameworks use Netty:
- Cassandra - nosql database
- Spark - big data distributed computing framework
- Hadoop - big data distributed storage framework
- RocketMQ - Alibaba's open-source message queue
- ElasticSearch - search engine
- gRPC - rpc framework
- Dubbo - rpc framework
- Spring 5.x - the WebFlux API completely abandons tomcat and uses netty as the server side
- Zookeeper - distributed coordination framework
-
-
[Q] What are the core components of Netty? (thread pool + selector + channel (the bottom layer is file cache) + task queue + channelPipeline (responsibility chain, including multiple handlers to handle different events)), refer to how Netty encapsulates Socket client Channel, what are the types of Netty Channel? , Netty's core components , netty execution process and detailed explanation of core modules (must-see series for beginners)
Note:
-
Basic concepts of core components :
-
Event loop group (EventLoopGroup) : An event loop group can be simply understood as a thread pool containing multiple event loop threads (i.e. EventLoop); when initializing the event loop group, you can specify the number of event loops to create.
-
Each event loop thread is bound to a task queue, which is used to process non-IO events such as channel registration, port binding, etc. All threads in the event loop group are active; each EventLoop thread is bound to one selector (Selector), and a selector registers multiple channels (client connections). When a channel generates an event, the event loop thread bound to that selector wakes up and processes the event.
-
For the BossGroup event loop group, the event loops inside only listen for channel connection events (i.e. accept()).
-
For the WorkerGroup event loop group, the event loops inside only listen for read events (read()). A channel connection event (accept()) is handed to an event loop in the BossGroup for processing; after it is processed, the client channel (channel) is generated, registered to an event loop in the WorkerGroup event loop group, and bound to read events. That event loop then monitors read events: when the client initiates a read/write request, this event loop detects and processes it.
-
Selector (selector) : a Selector is bound to one event loop thread (EventLoop), and multiple channels (which can be simply understood as client connections) can be registered on it. The Selector is responsible for monitoring channel events (connect, read, write); when a client initiates a read/write request, the bound event thread (EventLoop) wakes up and reads events from the channel for processing.
-
Task queue and tail task queue : An event loop binds a task queue and tail queue for storing channel events.
-
Channel (channel) : When a Linux program performs any form of IO, it is manipulating files (for example, you can view process status through sed|awk commands, and a process's content is actually a file). Since the UNIX system supports the TCP/IP protocol stack, a new IO operation dedicated to network transmission, Socket IO, was introduced; the Linux system therefore treats a Socket as a kind of file. When we use Socket IO to send data, we are actually manipulating files:
-
First the file is opened and the data is written into it (above the file there is also a layer of cache, called the file cache); the data in the file cache is then copied to the send buffer of the network card;
-
The network card then sends the data in the buffer to the receiving buffer of the peer's network card. After the peer's network card receives the data, the peer opens the file, copies the data into it, copies the data from the file cache to the user cache, and then processes the data.
A Channel is a wrapper around a Socket, so its bottom layer also operates on files: operating a Channel is operating a Socket, and operating a Socket (itself a kind of file) is operating files. Netty re-wraps the JDK's client SocketChannel and server ServerSocketChannel to obtain NioSocketChannel and NioServerSocketChannel respectively.
. -
-
-
Pipeline (ChannelPipeline) : The pipeline is a linked list whose nodes are handlers (such as encoders and decoders); it is used to process client requests, and it is also the place where business logic is actually processed.
-
Processor (ChannelHandler) : A processor is a node of the pipeline, and a client request is usually processed one by one by all handlers in the pipeline.
-
Event key (selectionKey) : When the channel (channel) generates an event, the Selector generates a selectionKey and wakes up the event thread to process the event.
-
Buffer : NIO is block-oriented IO. Data read from a channel is put into a buffer (Buffer), and data written to a channel must first be written into a buffer. In short, data can neither be read directly from a channel nor written directly to one.
-
Buffer pool (BufferPool) : This is a Netty memory optimization that manages fixed-size memory through pooling. (When a thread needs to store data it obtains memory directly from the buffer pool and puts it back when no longer needed, so there is no need to re-apply for memory frequently; allocating memory takes time and affects performance.)
-
ServerBootstrap and Bootstrap : Bootstrap and ServerBootstrap are called bootstrap classes; bootstrapping refers to configuring the application and making it run. Netty handles bootstrapping by isolating your application from the network layer.
-
Bootstrap is the bootstrap class of the client. When bind() (connect UDP) or connect() (connect TCP) is called, a new Channel is created; only a single Channel without a parent is created to carry out all network exchanges.
-
ServerBootstrap is the bootstrap class of the server. When the bind() method is called, a ServerChannel is created to accept connections from clients, and the ServerChannel manages multiple child Channels for communication with the clients.
-
-
ChannelFuture : All I/O operations in Netty are asynchronous, i.e. an operation does not return its result immediately, so a ChannelFuture object serves as the "spokesman" of the asynchronous operation, representing the operation itself. To get the return value of the asynchronous operation, you can add a listener to it through the ChannelFuture's addListener() method, registering a callback that is called and executed as soon as the result comes out. Netty's asynchronous programming model is built on the concepts of Future and callback (call_back()).
-
-
The relationship between components is as follows:
- An event loop group (EventLoopGroup) contains multiple event loops (EventLoop) - 1 ... *;
- A selector (Selector) can only be registered into one event loop (EventLoop) - 1 ... 1;
- An event loop (EventLoop) contains one task queue and one tail task queue - 1 ... 1;
- A channel (channel) can only be registered into one selector (Selector) - 1 ... 1;
- A channel (channel) can only be bound to one pipeline (channelPipeline) - 1 ... 1;
- A pipeline (channelPipeline) contains multiple handlers (channelHandler);
- A Netty channel (NioSocketChannel/NioServerSocketChannel) and a native NIO channel (SocketChannel/ServerSocketChannel) correspond one-to-one and are bound - 1 ... 1;
- A channel can focus on multiple IO events.
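The Buffer component described above follows NIO's write-then-flip-then-read discipline; here is a minimal sketch (not from the original notes) with a plain java.nio.ByteBuffer standing in for Netty's ByteBuf, which avoids the explicit flip() by keeping separate read and write indexes:

```java
import java.nio.ByteBuffer;

public class BufferFlipDemo {
    // write a string into a buffer, flip it into read mode, then drain it back out
    public static String roundTrip(String msg) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(msg.getBytes());          // write mode: position advances as bytes go in
        buf.flip();                       // limit = position, position = 0: switch to read mode
        byte[] out = new byte[buf.remaining()];
        buf.get(out);                     // read mode: consume the readable bytes
        return new String(out);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello netty"));
    }
}
```

Forgetting the flip() call and reading straight after writing is a classic NIO bug; Netty's ByteBuf design removes that failure mode.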
-
-
[Q] What is the execution process of Netty? (Top-down analysis/client-server analysis) , refer to an article to understand the overall process of Netty , netty execution process and detailed explanation of core modules (must-see series for beginners)
Note:
-
Top-down analysis process : NioEventLoopGroup -> NioEventLoop -> selector -> channel; a NioEventLoop listens to different channels (a NioEventLoop in the BossGroup listens for accept, a NioEventLoop in the WorkerGroup listens for read/write events)
-
Netty abstracts two thread pools: BossGroup, which is responsible for accepting connections from clients, and WorkerGroup, which is responsible for network reads and writes; both BossGroup and WorkerGroup are of type NioEventLoopGroup
-
NioEventLoopGroup is equivalent to an event loop group; the group contains multiple event loops, and each event loop is a NioEventLoop;
-
NioEventLoop represents a thread that executes processing tasks in a continuous loop (its selector monitors whether bound events occur). Each NioEventLoop has a Selector used to monitor the network communication of the socket bound to it: for example, the server's NioServerSocketChannel is bound to the selector of a NioEventLoop in the bossgroup, the client's NioSocketChannel is bound to the selector of a NioEventLoop, and each selector then continuously polls for related events in a loop.
-
A NioEventLoopGroup can have multiple threads, i.e. contain multiple NioEventLoops
-
-
Each NioEventLoop under the BossGroup loops over 3 steps:
- poll for accept events
- handle accept events: establish a connection with the client, generate a NioSocketChannel, and register it with the Selector of a NioEventLoop in the workerGroup
- continue to process tasks in the task queue, i.e. runAllTasks
-
-
Each NIOEventLoop under the WorkerGroup loops over these steps:
- poll for read and write events
- handle the I/O events, i.e. the read and write events on the corresponding NioSocketChannel
- process tasks in the task queue, i.e. runAllTasks
-
-
Each NIOEventLoop under the Worker group uses a pipeline when processing business; the pipeline references its channel, i.e. the corresponding channel can be obtained through the pipeline, and many handlers are maintained in the pipeline.
-
NioEventLoop internally adopts a serial design: from message reading -> decoding -> processing -> encoding -> sending, the same NioEventLoop IO thread is responsible throughout.
-
Each NioEventLoopGroup contains multiple NioEventLoops; each NioEventLoop contains one Selector and one taskQueue; each NioEventLoop's Selector can register and listen to multiple NioChannels, while each NioChannel binds to only one unique Selector. Each NioChannel is bound to its own ChannelPipeline: through a NioChannel you can get the corresponding ChannelPipeline, and through a ChannelPipeline you can also get the corresponding NioChannel.
-
-
The client-server analysis process is as follows:
- The Server starts; Netty selects a NioEventLoop from the ParentGroup (BossGroup) to listen on the specified port.
- The Client starts; Netty selects a NioEventLoop from its ParentGroup (BossGroup) to connect to the Server.
- The Client connects to the Server's port, and a Channel is created.
- Netty selects a NioEventLoop from the ChildGroup (WorkGroup) to bind to the channel and handle all operations in that Channel.
- The Client sends data packets to the Server through the Channel.
- The handlers in the Pipeline use the chain-of-responsibility pattern to process the data packets in the Channel.
- When the Server needs to send data to the Client, the data is first processed by the handlers in the pipeline into ByteBuf data packets for transmission.
- The Server sends the packet to the Client via the channel.
- The handlers in the Pipeline use the chain-of-responsibility pattern to process the data packets in the channel.
-
-
-
[Q] How to use Netty to easily realize the communication between multiple clients and servers?
Note:
-
Referring to the Netty execution process of the client/server in the previous question , the following code is given
-
Server:
package org.example.code001_helloworld;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.GenericFutureListener;

public class HelloServer {
    public static void main(String[] args) throws InterruptedException {
        // 1. create the channel via the ServerBootstrap bootstrap class
        ServerBootstrap sb = new ServerBootstrap()
                .group(new NioEventLoopGroup())         // 2. choose NioEventLoopGroup as the event loop group
                .channel(NioServerSocketChannel.class)  // 3. choose NioServerSocketChannel as the channel implementation
                .childHandler(                          // add handlers to each accepted channel
                        new ChannelInitializer<NioSocketChannel>() {  // 4. initializer for the SocketChannel created per client
                            protected void initChannel(NioSocketChannel ch) throws Exception {
                                ch.pipeline().addLast(new StringDecoder());  // 5. handler 1: decode ByteBuf into String
                                ch.pipeline().addLast(new SimpleChannelInboundHandler<String>() {  // 6. handler 2: business handler, consumes the previous handler's result
                                    @Override
                                    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                        System.out.println(msg);  // print the data the client wrote into the NioSocketChannel
                                    }
                                });
                            }
                        }
                );
        ChannelFuture channelFuture = sb.bind("127.0.0.1", 8080);  // listen on the client-facing socket port
        // add a listener on the asynchronous bind operation
        channelFuture.addListener(new GenericFutureListener<Future<? super Void>>() {
            @Override
            public void operationComplete(Future<? super Void> future) throws Exception {
                if (future.isSuccess()) {
                    System.out.println("port bound successfully");
                } else {
                    System.out.println("port binding failed");
                }
            }
        });
        while (true) {
            Thread.sleep(1000);  // sleep 1s
            System.out.println("doing my own work");
        }
    }
}
-
client:
package org.example.code001_helloworld;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringEncoder;

import java.util.Date;

public class HelloClient {
    public static void main(String[] args) throws InterruptedException {
        int i = 3;
        while (i > 0) {
            Channel channel = new Bootstrap()          // client bootstrap class, used to create the channel; Bootstrap extends AbstractBootstrap<Bootstrap, Channel>
                    .group(new NioEventLoopGroup())    // 1. choose NioEventLoopGroup as the event loop group
                    .channel(NioSocketChannel.class)   // 2. choose NioSocketChannel as the socket implementation
                    .handler(new ChannelInitializer<Channel>() {  // 3. add handlers; initChannel is called once the connection is established
                        @Override
                        protected void initChannel(Channel ch) {
                            ch.pipeline().addLast(new StringEncoder());
                        }
                    })
                    .connect("127.0.0.1", 8080)        // 4. connect to the server, returns ChannelFuture
                    .sync()                            // 5. block synchronously until the connection is established
                    .channel();                        // 6. the successfully created channel, i.e. the socket file
            channel.writeAndFlush(new Date() + ": hello world! My name is wang" + i);  // 7. write data into the channel to send to the server
            i--;
        }
    }
}
-
Start the server first, then run the client (which opens three connections); the server listens on port 8080, decodes the data sent by the client with its custom handlers, and prints it to the console.
doing my own work
doing my own work
doing my own work
doing my own work
Thu Nov 10 23:17:03 CST 2022: hello world! My name is wang1
Thu Nov 10 23:17:03 CST 2022: hello world! My name is wang3
Thu Nov 10 23:17:03 CST 2022: hello world! My name is wang2
doing my own work
doing my own work
doing my own work
doing my own work
-
-
A simple analysis of the above code flow:
-
The server first selects a thread from the thread pool to listen on the server's bound ip and port (i.e. 127.0.0.1 and 8080). This port is the only entrance through which clients access the server; when multiple clients send a large number of requests at the same time, receiving each client's request one by one would cause blocking waits.
-
To solve the blocking problem, different client requests are delivered to the port monitored by the server in the form of files through different channels (i.e. file caches); so here the server opens one thread to listen on this port, and that thread is bound to a selector so that the selector carries out the event-handling work.
-
After the channel transfer is complete (the file cache is full and written to the file), the data is passed by the selector through the channelPipeline to the user-defined channelHandlers for processing.
-
-
-
Note:
Below, juc.Future, the Consumer functional interface, and netty's promise are used in turn to simulate a database-query process:
-
While waiting for juc.Future's return result, the main thread is blocked.
package org.example.code000_JUC_test;

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class code001_future_test {
    // simulate a database query
    public String getUsernameById(Integer id) throws InterruptedException {
        Thread.sleep(1000);  // simulate the IO process
        return "wangxiaoxi";
    }

    public static void main(String[] args) {
        final code001_future_test obj = new code001_future_test();
        // the FutureTask's type parameter is the type the Callable returns asynchronously
        FutureTask<String> futureTask = new FutureTask<String>(new Callable<String>() {
            public String call() throws Exception {
                System.out.println("thread1 running asynchronously");
                String username = obj.getUsernameById(1);
                System.out.println("callable over");
                return username;
            }
        });
        // create a thread and execute asynchronously
        Thread thread = new Thread(futureTask);
        thread.start();
        try {
            System.out.println("mainThread started");
            String res = futureTask.get();  // the main thread blocks, waiting synchronously for thread1 to finish and return
            System.out.println("thread1 done, username: " + res);
            int i = 5;
            while (i > 0) {
                System.out.println("mainThread running");
                Thread.sleep(1000);
                i--;
            }
            System.out.println("mainThread finished");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
---
mainThread started
thread1 running asynchronously
callable over
thread1 done, username: wangxiaoxi
mainThread running
mainThread running
mainThread running
mainThread running
mainThread running
mainThread finished
-
The return result is processed through a callback function (Consumer<String> consumer + a Java 8 lambda expression); the main thread can still complete other operations without blocking on other threads' results. But there can be a concurrency problem if the same consumer is used by multiple threads at the same time.
package org.example.code000_JUC_test;

import java.util.function.Consumer;

public class code002_consumer_test {
    // simulate a database query; consumer is the callback, so this method has no return value
    public void getUsernameById(Integer id, Consumer<String> consumer) throws InterruptedException {
        Thread.sleep(1000);  // simulate the IO process
        String username = "wangxiaoxi";
        consumer.accept(username);
    }

    public static void main(String[] args) throws InterruptedException {
        final code002_consumer_test obj = new code002_consumer_test();
        Consumer<String> consumer = ((s) -> {
            System.out.println("thread1 done, username: " + s);
        });
        // execute asynchronously via the callback
        Thread thread = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("thread1 running asynchronously");
                    obj.getUsernameById(1, consumer);  // functional style: consumer takes an argument and returns nothing
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        thread.start();
        System.out.println("mainThread started");
        int i = 5;
        while (i > 0) {
            System.out.println("mainThread running");
            Thread.sleep(1000);
            i--;
        }
        System.out.println("mainThread finished");
    }
}
---
mainThread started
mainThread running
thread1 running asynchronously
mainThread running
thread1 done, username: wangxiaoxi
mainThread running
mainThread running
mainThread running
mainThread finished
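Since JDK 8, the same callback style can also be expressed with CompletableFuture, which avoids hand-rolling the Consumer plumbing; this variant is not in the original notes and the class and method names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class code002b_completable_test {
    // simulate the same database query, but return a CompletableFuture instead of taking a callback
    public static CompletableFuture<String> getUsernameById(Integer id) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100);        // simulate the IO process
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return "wangxiaoxi";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = getUsernameById(1)
                .thenApply(name -> "thread1 done, username: " + name);  // callback runs when the result is ready
        System.out.println("mainThread keeps working without blocking");
        System.out.println(f.join());     // block only at the very end, to keep the demo deterministic
    }
}
```

thenApply/thenAccept chains also sidestep the shared-consumer concurrency problem mentioned above, since each stage receives its own result value.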
-
netty rewrites the juc.Future interface and derives the sub-interface promise from it. By setting a listener on the promise you can monitor whether it has been completed by another thread: the listener blocks via promise.await waiting for the processing result; when another thread has completed the promise, it wakes the listener via promise.notify and notifies it to process the result (future.isSuccess() means processing succeeded, future.isCancelled() means it did not).
package org.example.code000_JUC_test;

import io.netty.util.concurrent.DefaultEventExecutor;
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.GenericFutureListener;
import io.netty.util.concurrent.Promise;

public class code003_netty_test {
    public Future<String> getUsernameById(Integer id, Promise<String> promise) throws InterruptedException {
        // simulate taking a thread from the database thread pool to do the work
        new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("thread2 running asynchronously");
                try {
                    Thread.sleep(1000);  // simulate the IO process
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
                String username = "wangxiaoxi";
                System.out.println("thread2 done");
                promise.setSuccess(username);
            }
        }).start();
        return promise;  // the thread returning the promise and the thread fulfilling it are not the same thread
    }

    public static void main(String[] args) throws InterruptedException {
        code003_netty_test obj = new code003_netty_test();
        EventExecutor executor = new DefaultEventExecutor();  // create a thread via netty
        executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("thread1 running asynchronously");
                // asynchronous result holder (extends netty.Future, so listeners can be attached)
                Promise<String> promise = new DefaultPromise<String>(executor);
                // add a listener that waits until the promise has a result and then processes it
                try {
                    obj.getUsernameById(1, promise).addListener(new GenericFutureListener<Future<? super String>>() {
                        @Override
                        public void operationComplete(Future<? super String> future) throws Exception {
                            System.out.println("thread1.listener triggered");
                            if (future.isSuccess()) {
                                System.out.println("thread1.listener received the promise's result");
                                String username = (String) future.get();
                                System.out.println("thread1 done, username: " + username);
                            }
                        }
                    });
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        System.out.println("mainThread started");
        int i = 5;
        while (i > 0) {
            System.out.println("mainThread running");
            Thread.sleep(1000);
            i--;
        }
        System.out.println("mainThread finished");
    }
}
---
mainThread started
mainThread running
thread1 running asynchronously
thread2 running asynchronously
mainThread running
thread2 done
thread1.listener triggered
thread1.listener received the promise's result
thread1 done, username: wangxiaoxi
mainThread running
mainThread running
mainThread running
mainThread finished
promise combines the advantages of Future and the callback function; the creation and the fulfillment of a promise do not have to happen in the same thread (thread 1 creates the promise, and thread 1's child thread 2 fulfills it, so there is no concurrency problem)
-
-
[Q] What can ChannelFuture and Promise be used for? What is the difference between the two? (`ChannelFuture` and `Promise` both inherit from Netty's `Future` and can both be used to return asynchronous processing results), refer to Netty asynchronous callback mode - Future and Promise analysis.
Note:
-
Netty's `Future` inherits from the JDK `Future`. Through the Object `wait/notify` mechanism it realizes synchronization between threads, and through the observer design pattern it realizes asynchronous non-blocking callback processing. Among them:
- `ChannelFuture` and `Promise` are both subinterfaces of Netty's `Future`;
- `ChannelFuture` is bound to a `Channel` and is used for asynchronously processing that `Channel`'s events; however, the return value of this `Future` cannot be set according to the execution status;
- `Promise` further encapsulates Netty's `Future`, adding the ability to set the return value and the exception message, so the result of the `Future` can be customized according to the outcome of the data processing, for example:

```java
@Override
public void channelRegister(AbstractGameChannelHandlerContext ctx, long playerId, GameChannelPromise promise) {
    // initialize the player's data when the user's GameChannel is registered
    playerDao.findPlayer(playerId, new DefaultPromise<>(ctx.executor())).addListener(new GenericFutureListener<Future<Optional<Player>>>() {
        @Override
        public void operationComplete(Future<Optional<Player>> future) throws Exception {
            Optional<Player> playerOp = future.get();
            if (playerOp.isPresent()) {
                player = playerOp.get();
                playerManager = new PlayerManager(player);
                promise.setSuccess();
                fixTimerFlushPlayer(ctx); // start the timer that periodically persists data to the database
            } else {
                logger.error("player {} does not exist", playerId);
                promise.setFailure(new IllegalArgumentException("Player data not found, playerId:" + playerId));
            }
        }
    });
}
```
When the result is set, the `listener` is notified of the processing result immediately: once `setSuccess(V result)` or `setFailure(Throwable cause)` is called, the threads blocked in `await()` or `sync()` return from waiting.
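The wait/notify-plus-listener mechanism described above can be sketched with a minimal, JDK-only promise. This is an illustrative toy (the class `MiniPromise` and its methods are hypothetical, loosely modeled on Netty's `Promise`, not Netty's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal, illustrative promise built only on Object.wait/notify —
// a sketch of the mechanism DefaultPromise uses, not Netty's implementation.
public class MiniPromise<V> {
    private V result;                                // guarded by "this"
    private boolean done = false;
    private final List<Consumer<V>> listeners = new ArrayList<>();

    // complete the promise: wake up waiters, then fire listeners (observer pattern)
    public synchronized void setSuccess(V value) {
        if (done) throw new IllegalStateException("already completed");
        result = value;
        done = true;
        notifyAll();                                 // threads blocked in await() return from waiting
        for (Consumer<V> l : listeners) l.accept(value);
    }

    // register a callback; if already completed, call it immediately
    public synchronized void addListener(Consumer<V> listener) {
        if (done) listener.accept(result);
        else listeners.add(listener);
    }

    // block until the promise is completed
    public synchronized V await() throws InterruptedException {
        while (!done) wait();
        return result;
    }
}
```

A thread blocked in `await()` is woken by `notifyAll()` inside `setSuccess`, while registered listeners are called back by the completing thread — the same two delivery paths Netty offers.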
- `ChannelPromise` inherits both `ChannelFuture` and `Promise`, and is the writable `ChannelFuture` interface.
-
-
`ChannelFuture` interface: in Netty, all `I/O` operations are asynchronous; operations such as `bind`, `connect` and `write` return a `ChannelFuture`. The asynchronous callback processing mode is used widely in the Netty source code. For example, when binding a port, the asynchronous result can be handled by registering a listener; this process is called passive callback:

```java
...
ChannelFuture channelFuture = sb.bind("127.0.0.1", 8080); // listen on the server socket port
// register a listener
channelFuture.addListener(new GenericFutureListener<Future<? super Void>>() {
    @Override
    public void operationComplete(Future<? super Void> future) throws Exception {
        if (future.isSuccess()) {
            System.out.println("port bound successfully");
        } else {
            System.out.println("port binding failed");
        }
    }
});
...
```
`ChannelFuture` is associated with the `channel` of the IO operation and is used to process channel events asynchronously; it is the most frequently used of these interfaces in practice. Compared with its parent interface `Future`, `ChannelFuture` adds two methods: `channel()` and `isVoid()`. The methods defined by the `ChannelFuture` interface are as follows:

```java
public interface ChannelFuture extends Future<Void> {
    // get the channel
    Channel channel();

    @Override
    ChannelFuture addListener(GenericFutureListener<? extends Future<? super Void>> listener);

    @Override
    ChannelFuture addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners);

    @Override
    ChannelFuture removeListener(GenericFutureListener<? extends Future<? super Void>> listener);

    @Override
    ChannelFuture removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners);

    @Override
    ChannelFuture sync() throws InterruptedException;

    @Override
    ChannelFuture syncUninterruptibly();

    @Override
    ChannelFuture await() throws InterruptedException;

    @Override
    ChannelFuture awaitUninterruptibly();

    // marks whether the Future is void; if the ChannelFuture is a void Future,
    // the addListener(), await() and sync() related methods must not be called
    boolean isVoid();
}
```

`ChannelFuture` has two states: Uncompleted and Completed; Completed covers three cases: execution success, execution failure and task cancellation. Note: execution failure and task cancellation both belong to Completed.
`Promise` interface: `Promise` is a writable `Future`; the interface is defined as follows:

```java
public interface Promise<V> extends Future<V> {
    // mark as succeeded, set the result and notify all listeners;
    // throws an exception if the result has already been set
    Promise<V> setSuccess(V result);

    // try to set the result; returns false if it has already been set
    boolean trySuccess(V result);

    // mark as failed, set the cause and notify all listeners
    Promise<V> setFailure(Throwable cause);

    boolean tryFailure(Throwable cause);

    // mark the Future as uncancellable
    boolean setUncancellable();

    @Override
    Promise<V> addListener(GenericFutureListener<? extends Future<? super V>> listener);

    @Override
    Promise<V> addListeners(GenericFutureListener<? extends Future<? super V>>... listeners);

    @Override
    Promise<V> removeListener(GenericFutureListener<? extends Future<? super V>> listener);

    @Override
    Promise<V> removeListeners(GenericFutureListener<? extends Future<? super V>>... listeners);

    @Override
    Promise<V> await() throws InterruptedException;

    @Override
    Promise<V> awaitUninterruptibly();

    @Override
    Promise<V> sync() throws InterruptedException;

    @Override
    Promise<V> syncUninterruptibly();
}
```
- The `Future` interface only provides the `get()` method to obtain the return value; the return value cannot be set.
- The `Promise` interface, based on `Future`, additionally provides setting the return value and exception information, and immediately notifies the listeners. Moreover, once `setSuccess(...)` or `setFailure(...)` is called, the threads blocked in `await()` or `sync()` return from waiting.

There are two ways of synchronous blocking, `sync()` and `await()`; the difference:
- the `sync()` method rethrows the failure cause as an exception if the `Future` failed; the `await()` method does not do any processing on the exception information, so `await()` can be used when we do not care about the exception.

By reading the source code, we can see that `sync()` actually calls `await()` internally:

```java
// DefaultPromise
@Override
public Promise<V> sync() throws InterruptedException {
    await();
    rethrowIfFailed();
    return this;
}
```
-
-
By inheriting the `Promise` interface, a writable sub-interface of `ChannelFuture`, namely `ChannelPromise`, is obtained. The implementation class of `Promise` is `DefaultPromise`, which realizes thread synchronization through Object `wait/notify` and guarantees visibility between threads through the `volatile` keyword. The implementation class of `ChannelPromise` is `DefaultChannelPromise`, and its inheritance relationship is as follows:
-
-
[Q] What is the execution process of ChannelPipeline? (a ChannelHandler is wrapped into a ChannelHandlerContext inside the ChannelPipeline, and read and write processing is realized through the head and tail sentinel nodes), refer to Netty's TCP sticky packet and unpacking solution, Dark Horse Netty Tutorial
Note:
-
After the `Selector` polls a network IO event, it calls the `ChannelPipeline` corresponding to the `Channel`, which executes its `ChannelHandler`s in turn. The event-driven Netty framework is as follows:
-
We already know the relationship between `ChannelPipeline` and `ChannelHandler`: the `ChannelPipeline` is a container that stores various `ChannelHandler`s. `ChannelHandler`s fall into two categories: `ChannelInboundHandler`s handle link read events, and `ChannelOutboundHandler`s handle link write events. The execution flow of the `ChannelPipeline` is as follows:
- When `NioEventLoop` triggers a read event, it calls the `ChannelPipeline` associated with the `SocketChannel`;
- The messages read in the previous step are processed in turn by the multiple `ChannelInboundHandler`s in the `ChannelPipeline`;
- After the message is processed, the `write` method of `ChannelHandlerContext` is called to send it; this triggers the write event, and the outgoing message likewise passes through the multiple `ChannelOutboundHandler`s in the `ChannelPipeline`.
-
One `channel` is bound to one `channelPipeline`, and the `channelPipeline` can be obtained from the `channel` so that `channelHandler`s can be added to it; the initialization code is as follows:

```java
eventGroup = new NioEventLoopGroup(gameClientConfig.getWorkThreads()); // get the number of business threads from the config
bootStrap = new Bootstrap();
bootStrap.group(eventGroup).channel(NioSocketChannel.class)
        .option(ChannelOption.TCP_NODELAY, true)
        .option(ChannelOption.SO_KEEPALIVE, true)
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, gameClientConfig.getConnectTimeout() * 1000)
        .handler(new ChannelInitializer<Channel>() {
            @Override
            protected void initChannel(Channel ch) throws Exception {
                ch.pipeline().addLast("EncodeHandler", new EncodeHandler(gameClientConfig)); // add the encoder
                ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1024 * 1024 * 4, 0, 4, -4, 0)); // add the frame decoder
                ch.pipeline().addLast("DecodeHandler", new DecodeHandler()); // add the decoder
                ch.pipeline().addLast("responseHandler", new ResponseHandler(gameMessageService)); // turn the response message into the corresponding response object
                // ch.pipeline().addLast(new TestGameMessageHandler()); // test handler
                ch.pipeline().addLast(new IdleStateHandler(150, 60, 200)); // if nothing is written within 60 seconds, fire a write-idle event to trigger the heartbeat
                ch.pipeline().addLast("HeartbeatHandler", new HeartbeatHandler()); // heartbeat handler
                ch.pipeline().addLast(new DispatchGameMessageHandler(dispatchGameMessageService)); // add the business logic
            }
        });
ChannelFuture future = bootStrap.connect(gameClientConfig.getDefaultGameGatewayHost(), gameClientConfig.getDefaultGameGatewayPort());
channel = future.channel();
```
-
The structure of a `ChannelHandler` in the `ChannelPipeline`: before a `ChannelHandler` joins the `ChannelPipeline`, it is wrapped into a `ChannelHandlerContext` node and added to a doubly linked list. Except for the two special `ChannelHandlerContext` implementation classes at the head and the tail, every `ChannelHandler` we customize is eventually wrapped into a `DefaultChannelHandlerContext`.
-
When a read event is triggered, the trigger order of the `ChannelHandler`s (the pipeline filters handlers by type, so only `ChannelInboundHandler`s participate) is `HeadContext` -> `TailContext`;

When a write event is triggered, the trigger order (only `ChannelOutboundHandler`s participate) is the opposite of the read event: `TailContext` -> `HeadContext`.

It can also be seen that nio workers and non-nio workers are each bound to channels (the `LoggingHandler` is executed by a nio worker, while the custom handler is executed by a non-nio worker).
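The traversal rules above can be sketched with a toy pipeline. This is a simplified, hypothetical model (a plain list of handlers instead of Netty's doubly linked `ChannelHandlerContext` nodes):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of how a ChannelPipeline walks its handlers:
// read (inbound) events run head -> tail, write (outbound) events run tail -> head.
// Illustrative only — names and structure are simplified, not Netty's API.
public class ToyPipeline {
    interface InboundHandler  { String onRead(String msg); }
    interface OutboundHandler { String onWrite(String msg); }

    private final List<Object> handlers = new ArrayList<>();

    public ToyPipeline addLast(Object handler) { handlers.add(handler); return this; }

    // read event: traverse from head to tail; only inbound handlers participate
    public String fireRead(String msg) {
        for (Object h : handlers)
            if (h instanceof InboundHandler) msg = ((InboundHandler) h).onRead(msg);
        return msg;
    }

    // write event: traverse from tail to head; only outbound handlers participate
    public String fireWrite(String msg) {
        for (int i = handlers.size() - 1; i >= 0; i--)
            if (handlers.get(i) instanceof OutboundHandler)
                msg = ((OutboundHandler) handlers.get(i)).onWrite(msg);
        return msg;
    }
}
```

Handlers of the wrong direction are simply skipped, which is exactly the "filter the type of Handler" behaviour described above.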
-
-
-
Note:
-
The custom event class `GetPlayerInfoEvent` is shown below; it can be used to identify the same type of I/O event operation. For example, `getPlayerName()` and `getPlayerLevel()` trigger the same event ID, and the `EventLoop` thread listening on that event ID processes the monitored result (as the figure above shows, processor nodes in different channels can be executed by the same event thread):

```java
public class GetPlayerInfoEvent {
    private Long playerId;

    public GetPlayerInfoEvent(Long playerId) {
        super();
        this.playerId = playerId;
    }

    public Long getPlayerId() {
        return playerId;
    }
}
```
-
The core method for sending events to the `channelPipeline` without annotations is as follows:

```java
@Override
public void userEventTriggered(AbstractGameChannelHandlerContext ctx, Object evt, Promise<Object> promise) throws Exception {
    if (evt instanceof IdleStateEvent) {
        logger.debug("received idle event: {}", evt.getClass().getName());
        ctx.close();
    } else if (evt instanceof GetPlayerInfoEvent) {
        GetPlayerByIdMsgResponse response = new GetPlayerByIdMsgResponse();
        response.getBodyObj().setPlayerId(this.player.getPlayerId());
        response.getBodyObj().setNickName(this.player.getNickName());
        Map<String, String> heros = new HashMap<>();
        this.player.getHeros().forEach((k, v) -> {
            // copy defensively to avoid leaking the object to other threads
            heros.put(k, v);
        });
        // do NOT use response.getBodyObj().setHeros(this.player.getHeros()); it would hand this map to other threads
        response.getBodyObj().setHeros(heros);
        promise.setSuccess(response);
    }
    UserEventContext<PlayerManager> utx = new UserEventContext<>(playerManager, ctx);
    dispatchUserEventService.callMethod(utx, evt, promise);
}
```
Where:
- `UserEventContext` is a further encapsulation of `AbstractGameChannelHandlerContext`;
- `AbstractGameChannelHandlerContext` is a custom doubly-linked-list node (it holds `prev` and `next` pointers), implemented by `DefaultGameChannelHandlerContext`; each node encapsulates one event-handling `ChannelHandler`;
- `GameChannelPipeline` links the `DefaultGameChannelHandlerContext` nodes into a doubly linked list; the two traversal directions represent the two kinds of operations (read/write);
- `GameChannelPipeline` allocates executor threads to the processors in turn for event listening and callbacks;
- Among them, `Step1~Step4` cover the encapsulation of the ChannelHandler, and `Step5` assigns the listener thread to the ChannelHandler.
-
Step1: `UserEventContext` is the wrapper class around `AbstractGameChannelHandlerContext`:

```java
public class UserEventContext<T> {
    private T dataManager;
    private AbstractGameChannelHandlerContext ctx;

    public UserEventContext(T dataManager, AbstractGameChannelHandlerContext ctx) {
        super();
        this.dataManager = dataManager;
        this.ctx = ctx;
    }

    public T getDataManager() {
        return dataManager;
    }

    public AbstractGameChannelHandlerContext getCtx() {
        return ctx;
    }
}
```
-
Step2: the constructor of `AbstractGameChannelHandlerContext`, the event-handler node:

```java
public AbstractGameChannelHandlerContext(GameChannelPipeline pipeline, EventExecutor executor, String name,
                                         boolean inbound, boolean outbound) {
    this.name = ObjectUtil.checkNotNull(name, "name");
    this.pipeline = pipeline;
    this.executor = executor;
    this.inbound = inbound;
    this.outbound = outbound;
}
```
-
Step3: `DefaultGameChannelHandlerContext` is the implementation class of `AbstractGameChannelHandlerContext`, which encapsulates the `channelHandler`:

```java
public class DefaultGameChannelHandlerContext extends AbstractGameChannelHandlerContext {
    private final GameChannelHandler handler;

    public DefaultGameChannelHandlerContext(GameChannelPipeline pipeline, EventExecutor executor, String name,
                                            GameChannelHandler channelHandler) {
        // determine whether this channelHandler handles inbound or outbound messages
        super(pipeline, executor, name, isInbound(channelHandler), isOutbound(channelHandler));
        this.handler = channelHandler;
    }

    private static boolean isInbound(GameChannelHandler handler) {
        return handler instanceof GameChannelInboundHandler;
    }

    private static boolean isOutbound(GameChannelHandler handler) {
        return handler instanceof GameChannelOutboundHandler;
    }

    @Override
    public GameChannelHandler handler() {
        return this.handler;
    }
}
```
-
Step4: `GameChannelPipeline`, the doubly linked list of processors:

```java
public class GameChannelPipeline {
    static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultChannelPipeline.class);

    private static final String HEAD_NAME = generateName0(HeadContext.class);
    private static final String TAIL_NAME = generateName0(TailContext.class);

    private final GameChannel channel;
    private Map<EventExecutorGroup, EventExecutor> childExecutors;

    // GameChannelPipeline constructor
    protected GameChannelPipeline(GameChannel channel) {
        this.channel = ObjectUtil.checkNotNull(channel, "channel");
        tail = new TailContext(this);
        head = new HeadContext(this);
        head.next = tail;
        tail.prev = head;
    }

    ...

    // create a processor node
    private AbstractGameChannelHandlerContext newContext(GameEventExecutorGroup group,
            boolean singleEventExecutorPerGroup, String name, GameChannelHandler handler) {
        return new DefaultGameChannelHandlerContext(this, childExecutor(group, singleEventExecutorPerGroup), name, handler);
    }

    ...

    // add the processor node to the channelPipeline
    public final GameChannelPipeline addFirst(GameEventExecutorGroup group, boolean singleEventExecutorPerGroup,
            String name, GameChannelHandler handler) {
        final AbstractGameChannelHandlerContext newCtx;
        synchronized (this) {
            name = filterName(name, handler);
            newCtx = newContext(group, singleEventExecutorPerGroup, name, handler);
            addFirst0(newCtx);
        }
        return this;
    }
}
```
-
Step5: assign an executor to each `channelHandler` for event listening and callbacks; the `childExecutor` method in `GameChannelPipeline` is as follows:

```java
private EventExecutor childExecutor(GameEventExecutorGroup group, boolean singleEventExecutorPerGroup) {
    if (group == null) {
        return null;
    }
    if (!singleEventExecutorPerGroup) {
        return group.next();
    }
    Map<EventExecutorGroup, EventExecutor> childExecutors = this.childExecutors;
    if (childExecutors == null) {
        // Use size of 4 as most people only use one extra EventExecutor.
        childExecutors = this.childExecutors = new IdentityHashMap<EventExecutorGroup, EventExecutor>(4);
    }
    // Pin one of the child executors once and remember it so that the same child executor
    // is used to fire events for the same channel.
    EventExecutor childExecutor = childExecutors.get(group);
    if (childExecutor == null) {
        childExecutor = group.next();
        childExecutors.put(group, childExecutor);
    }
    return childExecutor;
}
```
-
With annotations, you only need to mark the event class on the handling method; the `getPlayerInfoEvent` method below sends the event to the `channelPipeline`, and a dedicated event listener monitors it during processing:

```java
@UserEvent(GetPlayerInfoEvent.class)
public void getPlayerInfoEvent(UserEventContext<PlayerManager> ctx, GetPlayerInfoEvent event, Promise<Object> promise) {
    GetPlayerByIdMsgResponse response = new GetPlayerByIdMsgResponse();
    Player player = ctx.getDataManager().getPlayer();
    response.getBodyObj().setPlayerId(player.getPlayerId());
    response.getBodyObj().setNickName(player.getNickName());
    Map<String, String> heros = new HashMap<>();
    player.getHeros().forEach((k, v) -> {
        // copy defensively to avoid leaking the object to other threads
        heros.put(k, v);
    });
    // do NOT use response.getBodyObj().setHeros(this.player.getHeros()); it would hand this map to other threads
    response.getBodyObj().setHeros(heros);
    promise.setSuccess(response);
}
```

`UserEventContext` works the same way as above: it encapsulates the `ChannelHandler`, and the `ChannelHandler` is inserted into the `GamePipeline`.
-
-
[Q] How to understand the flow of the event system? (event firing - the listener handles the event)
Note:
- When the service starts, each function module registers its event-listening interfaces; what is listened to includes the event and the event source (the object that produces the event). When an event is triggered, these listening interfaces are called with the event and the event source as parameters, and the listening interface then processes the received event.
- Events can only flow from event publishers to event listeners, never backwards.
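A minimal sketch of such an event system, assuming a hypothetical `SimpleEventBus` (not the framework discussed above): modules register listeners per event type, and publishing an event calls each listener with the event object; the flow is strictly one-way, from publisher to listeners:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal event system sketch: register listeners for an event type at startup;
// publishing an event calls every registered listener with the event object.
// Hypothetical helper, not part of the article's framework.
public class SimpleEventBus {
    private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();

    public <E> void register(Class<E> eventType, Consumer<E> listener) {
        listeners.computeIfAbsent(eventType, k -> new ArrayList<>())
                 .add(e -> listener.accept(eventType.cast(e)));
    }

    // events flow one way: from the publisher to the listeners
    public void publish(Object event) {
        List<Consumer<Object>> ls = listeners.get(event.getClass());
        if (ls != null) for (Consumer<Object> l : ls) l.accept(event);
    }
}
```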
17、WebSocket
Refer to WebSocket knowledge point arrangement , polling/long polling (Comet)/long connection (SSE)/WebSocket (full-duplex) , simply build Websocket (java+vue)
-
[Q] What is websocket? What is the principle? (an HTML5 technology: a `tcp`-based full-duplex communication protocol that supports real-time communication), refer to WebSocket Baidu Encyclopedia, HTML5_Baidu Encyclopedia
Note:
-
Before `websocket` appeared, `web` interaction was generally based on short or long connections over the `http` protocol;
`HTML5` is short for HyperText Markup Language 5. `HTML5` combines the relevant `HTML4.01` standards with new features to meet the requirements of modern web development, and was first announced in 2008. HTML5 is composed of different technologies that are now widely used on the Internet, providing more standards for enhancing web applications. `HTML5` reached a stable version in 2012, and on October 28, 2014 the `W3C` released the final version of HTML5.
`HTML5` defined the `WebSocket` protocol in 2011 (the `WebSocket` communication protocol was standardized by the IETF as RFC 6455 in 2011 and supplemented by RFC 7936; the `WebSocket` API is also defined as a standard by the `W3C`). It is essentially a tcp-based protocol: the handshake is completed through the `101` status code of the `HTTP/1.1` protocol, after which full-duplex communication between the browser and the server is achieved, which saves server resources and bandwidth and enables more real-time communication.
-
`websocket` is a brand-new persistent protocol, unlike the stateless `http` protocol; its protocol name is "`ws`";
-
-
Note:
-
`socket` is not a protocol:
- The `Http` protocol is an application-layer protocol for transferring hypertext; `Http` is based on a `TCP` connection and mainly solves how to package data. The `TCP` protocol corresponds to the transport layer and mainly solves how data is transmitted in the network;
- `Socket` is an encapsulation of the `TCP/IP` protocol stack. `Socket` itself is not a protocol but a calling interface (API); it is through `Socket` that applications use the `TCP/IP` protocols.
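To illustrate that `Socket` is just the API through which applications use `TCP/IP`, here is a tiny JDK-only loopback echo exchange (class and method names are illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: the Socket API is how applications actually use TCP/IP —
// a minimal loopback echo exchange using only the JDK.
public class SocketEchoDemo {
    public static String roundTrip(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {          // port 0 = pick any free port
            Thread serverThread = new Thread(() -> {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println("echo:" + in.readLine());          // read one line, echo it back
                } catch (Exception ignored) { }
            });
            serverThread.start();
            try (Socket socket = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println(message);
                String reply = in.readLine();
                serverThread.join();
                return reply;
            }
        }
    }
}
```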
-
-
`socket` is usually a long connection:
- `Http` connection: the `http` connection is the so-called short connection; the client sends a request to the server, and the connection is broken after the server responds.
- `socket` connection: the `socket` connection is the so-called long connection; in theory, once the connection between the client and the server is established, neither side actively disconnects. In practice the connection may still be broken by various environmental factors, for example: the server or client host goes down, the network fails, or no data is transmitted between the two for a long time, in which case a firewall may drop the link to release network resources. Therefore, when a `socket` connection has no data transfer, heartbeat messages need to be sent to keep the connection alive; the specific heartbeat message format is defined by the developer.
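A minimal sketch of the heartbeat idea, assuming a hypothetical `HeartbeatKeeper` helper (a real stack, e.g. Netty's `IdleStateHandler`, tracks read and write idleness separately): if nothing has been written for `idleMillis`, a heartbeat is "sent" (here just counted):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: keep a socket-style long connection alive by sending a heartbeat
// whenever no data has been written for `idleMillis`. Names are illustrative.
public class HeartbeatKeeper {
    private final AtomicLong lastWrite = new AtomicLong(System.currentTimeMillis());
    public final AtomicInteger heartbeatsSent = new AtomicInteger();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public void start(long idleMillis) {
        timer.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() - lastWrite.get() >= idleMillis) {
                heartbeatsSent.incrementAndGet();   // in real code: write a PING frame to the socket
                lastWrite.set(System.currentTimeMillis());
            }
        }, idleMillis, idleMillis, TimeUnit.MILLISECONDS);
    }

    public void onDataWritten() { lastWrite.set(System.currentTimeMillis()); }

    public void stop() { timer.shutdownNow(); }
}
```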
-
-
-
Note:
-
Similarities:
- Both are based on `tcp`, and both are reliable transmission protocols;
- Both are application layer protocols;
-
-
Differences:
- `WebSocket` is a two-way communication protocol that, like `Socket`, can send or receive information in both directions, whereas `HTTP` is one-way;
- `WebSocket` requires the browser and the server to shake hands to establish a connection, whereas with `http` the browser initiates a connection to the server and the server knows nothing about it in advance.
-
-
Connection between the two: when `WebSocket` establishes its handshake, the data is transmitted via `HTTP`; but after the connection is established, the actual transmission no longer requires the `HTTP` protocol;
Summary (overall process):
- First, the client initiates an `http` request, and after the three-way handshake a `TCP` connection is established; the `http` request carries the supported `WebSocket` version number and other information in headers such as `Upgrade`, `Connection` and `Sec-WebSocket-Version`;
- Then, after receiving the client's handshake request, the server also uses the `HTTP` protocol to send back its reply;
- Finally, after the client receives the message that the connection succeeded, the two sides start full-duplex communication over the TCP transmission channel.
-
-
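One concrete detail of the handshake: RFC 6455 defines the server's `Sec-WebSocket-Accept` response header as Base64(SHA-1(client's `Sec-WebSocket-Key` + a fixed GUID)). A minimal sketch (the class name `WsHandshake` is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch of the server side of the WebSocket opening handshake (RFC 6455):
// Sec-WebSocket-Accept is derived from the client's Sec-WebSocket-Key.
public class WsHandshake {
    // fixed GUID defined by RFC 6455
    private static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    public static String acceptKey(String secWebSocketKey) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest((secWebSocketKey + GUID).getBytes(StandardCharsets.US_ASCII));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

Feeding in the example key from RFC 6455, `dGhlIHNhbXBsZSBub25jZQ==`, yields the accept value given in the RFC.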
-
[Q] What are the connection and the differences between websocket and webrtc? (webrtc uses the websocket protocol in video streaming), refer to WebRTC_Baidu Encyclopedia
Note:
-
Similarities:
- Both are based on `socket` programming and are technologies for real-time communication between front end and back end; both are browser-oriented protocols;
- In both, the data stream is transmitted to the server and the server then distributes it; both connections are long connections;
- Both protocols put considerable pressure on the server when used, because a `websocket` connection is only closed when one side closes the browser or the server actively closes it, and the same applies to `webrtc`;
-
-
Differences:
- `websocket` ensures that the two parties can send data to each other in real time, and you decide what to send;
- `webrtc` mainly obtains the camera from the browser (web interview and online-testing systems are generally based on this technology); general `webrtc` technology covers audio/video capture, codecs, network transmission, rendering, audio/video synchronization, and so on. It is a protocol about the camera; for network transmission it needs to cooperate with `websocket` technology, since merely capturing the camera is useless unless the stream is sent to the server.
-
-
-
[Q] What is the problem with http? What methods of connection maintenance are included in instant messaging? (Problems in http: stateless protocol, time-consuming parsing of request headers (such as identity authentication information), one-way message sending) , refer to polling, long polling (comet), long connection (SSE), WebSocket
Note:
-
Problems with `http`:
- `http` is a stateless protocol: every time a session completes, the server does not know who the client will be next time, and it must identify the other party before responding. This is a major obstacle to real-time communication;
- The `http` protocol works as one request, one response, and each request and response carries a large number of `header`s. For real-time communication, parsing the request headers also takes time, so efficiency is lower;
- Most importantly, the `http` protocol requires the client to send actively and the server to respond passively, i.e. one request, one response; the server cannot actively push.
-
-
There are four common ways to implement instant messaging , namely: polling, long polling (comet), long connection (SSE), and WebSocket .
-
Polling (the client initiates a request at a fixed interval and closes the connection after receiving the data):
In order to implement push technology, many websites use polling: at a specific interval (for example, every 1 second) the client browser sends an `HTTP` request to the server, and the server returns the latest data to the client browser.
- Advantages: the back-end coding is relatively simple.
- Disadvantages: this traditional mode has an obvious drawback: the client browser must keep sending requests to the server, yet an HTTP request may contain a long header in which the truly useful data is only a small part; this wastes a lot of bandwidth and other resources.
-
-
Long polling (the client initiates a request, the server holds the connection open, and the client closes the connection after receiving the data):
The client initiates a request to the server, and the server keeps the connection open until it has data to send to the client.
- Advantages: avoids the server being requested frequently when there is no information update, saving traffic.
- Disadvantages: keeping connections open consumes server resources and requires maintaining many threads at the same time; since the number of TCP connections a server can carry is limited, this kind of polling can easily hit the connection ceiling.
-
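The essence of long polling can be sketched without HTTP at all, using a hypothetical `LongPollDemo`: the "server" parks the client's request on a blocking queue and answers only when data arrives (or the timeout elapses):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of long polling: the request blocks server-side until an update
// is published or the timeout elapses. Illustrative only.
public class LongPollDemo {
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    // server side: something happened, publish an update
    public void publish(String data) { updates.offer(data); }

    // client side: one long-poll request — blocks until data arrives or the timeout elapses
    public String poll(long timeoutMillis) throws InterruptedException {
        return updates.poll(timeoutMillis, TimeUnit.MILLISECONDS); // null on timeout
    }
}
```

Contrast with plain polling: there, `poll` would return immediately (usually empty-handed) and the client would retry on a timer, spending a full request/response per retry.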
-
Long connection (SSE; the connection is maintained through a channel: the client may disconnect, but the server does not):
After the connection between the client and the server is established, it is not torn down; when the client accesses content on the server again, it reuses this connection channel.
- Advantages: messages arrive instantly, and no useless requests are sent.
- Disadvantages: like long polling, keeping connections open consumes server resources; a large number of long connections costs the server dearly, and since server capacity has an upper bound it cannot maintain unlimited long connections.
-
-
WebSocket (supports two-way real-time communication; if either the client or the server disconnects, the connection is interrupted):
The client sends a request carrying a special header (`Upgrade: WebSocket`) to the server to establish a connection; after the connection is established, the two parties can communicate freely in real time in both directions.
Advantages:
- Less control overhead: after the connection is established, the packet headers used for protocol control when exchanging data between server and client are relatively small.
- Stronger real-time performance: since the protocol is full-duplex, the server can actively push data to the client at any time. Compared with an `HTTP` request, which must wait for the client to initiate before the server can respond, the delay is significantly smaller; even compared with `Comet`-style long polling it can deliver data more often in a short period.
- Stays connected: unlike `HTTP`, `Websocket` first establishes a connection, which makes it a stateful protocol, so part of the state information can be omitted in subsequent communication, whereas an `HTTP` request may need to carry state information (such as identity authentication) on every request.
Disadvantages: relatively speaking, the development cost and difficulty are higher.
-
Summary comparison of polling, long polling, long connection (SSE) and `WebSocket`:

| | Polling | Long polling (Comet) | WebSocket | Long connection (SSE) |
| --- | --- | --- | --- | --- |
| Protocol | http | http | tcp | http |
| Initiator | client | client | client, server | client, server |
| Advantages | Good compatibility, strong fault tolerance, simple to implement | Saves resources compared with short polling | Full-duplex communication, low performance overhead, high security, strong scalability | Easy to implement, low development cost |
| Disadvantages | Poor security; consumes more memory, resources and requests | Poor security; consumes more memory, resources and requests | The transmitted data needs secondary parsing, which raises development cost and difficulty | Only for modern browsers |
| Latency | Not real-time; depends on the request interval | Not real-time; depends on the request interval | Real-time | Near real-time; 3-second retry delay by default, customizable |
-
-