Netty encoding process and writeAndFlush() implementation

When the encoder comes into play

First, consider how the server sends data to the client: usually we call ctx.writeAndFlush(msg), where the argument may be a primitive value or an object.

Second, an encoder is itself a handler, just one dedicated to encoding. Before our message is actually written into the JDK's underlying ByteBuffer, it has to pass through the encoding step. That does not mean unencoded data cannot be sent at all, but without encoding the client may well receive garbled data.

Next, we know that ctx.writeAndFlush(msg) is an outbound operation, so it is bound to be propagated through the pipeline. Where does the propagation start? From the tail node, travelling toward the head, passing through the custom encoder we added along the way.
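
As a quick, hedged illustration of that ordering (the handler names here are invented for the example), the encoder is typically registered before the business handler with addLast, so that when the business handler calls ctx.writeAndFlush(...) the outbound event passes through MyPersonEncoder on its way toward the head:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

public class ServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // outbound events travel tail -> head, so the encoder must sit closer to the head
        // than the handler that triggers the write
        ch.pipeline()
          .addLast(new MyPersonEncoder())   // encodes PersonProtocol into bytes (shown later)
          .addLast(new MyServerHandler());  // hypothetical business handler calling ctx.writeAndFlush(...)
    }
}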

writeAndFlush() logic

Following the source, writeAndFlush() differs from write() only in that its flush flag is true:

private void write(Object msg, boolean flush, ChannelPromise promise) {
    AbstractChannelHandlerContext next = findContextOutbound();
    final Object m = pipeline.touch(msg, next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        if (flush) {
            // todo: because flush is true
            next.invokeWriteAndFlush(m, promise);
        } else {
            next.invokeWrite(m, promise);
        }
    }
    // ... (the branch for calls made outside the event loop is omitted here)
}

So it will do two things:

  • one pass that calls each handler's write()
  • one pass that calls each handler's flush()

In other words, the event propagation is split into two waves: a write wave and a flush wave. The general flow of each wave is summarized below.

write

  • Convert the ByteBuf into a DirectBuffer
  • Wrap the message (the DirectBuffer) into an entry and insert it into the write queue
  • Set the write state

flush

  • Set the flush flag and update the write state
  • Traverse the buffer queue and filter the ByteBufs
  • Call the underlying JDK API to write each ByteBuf into the native JDK ByteBuffer

A simple custom encoder

/**
 * @Author: Changwu
 * @Date: 2019/7/21 20:49
 */
public class MyPersonEncoder extends MessageToByteEncoder<PersonProtocol> {

    // todo: the write event propagates to MyPersonEncoder's write(); since we did not override it, the parent class MessageToByteEncoder's write() runs, which we look into below
    @Override
    protected void encode(ChannelHandlerContext ctx, PersonProtocol msg, ByteBuf out) throws Exception {
        System.out.println("MyPersonEncoder....");
        // message header: length
        out.writeInt(msg.getLength());
        // message body
        out.writeBytes(msg.getContent());
    }
}

We choose to extend MessageToByteEncoder<T>, the encoder that turns a message into bytes.
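
The PersonProtocol class referenced above is not shown in this article; a minimal sketch of what it might look like, assuming only the getLength()/getContent() accessors the encoder uses, is:

// Hypothetical sketch of the PersonProtocol message type; only the two accessors
// used by MyPersonEncoder are assumed, the real class may carry more fields.
public class PersonProtocol {
    private int length;       // number of bytes in the body
    private byte[] content;   // message body

    public int getLength() { return length; }
    public void setLength(int length) { this.length = length; }

    public byte[] getContent() { return content; }
    public void setContent(byte[] content) { this.content = content; }
}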

Following the call further

OK, now the event reaches our custom encoder MyPersonEncoder.

However, we did not override write() there. That is fine: our encoder extends MessageToByteEncoder, and the parent class implements write(). Its source is below; the walkthrough follows the code.

// todo: look at its write() method
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    ByteBuf buf = null;
    try {
        if (acceptOutboundMessage(msg)) {// todo 1: can this handler process this message type?
            @SuppressWarnings("unchecked")
            I cast = (I) msg;
            // todo 2: allocate memory
            buf = allocateBuffer(ctx, cast, preferDirect);
            try {
                // todo 3: call this class's encode(), the method we implemented ourselves
                encode(ctx, cast, buf);
            } finally {
                // todo 4: release
                ReferenceCountUtil.release(cast);
            }

            if (buf.isReadable()) {
                // todo 5: propagate forward
                ctx.write(buf, promise);
            } else {
                buf.release();
                ctx.write(Unpooled.EMPTY_BUFFER, promise);
            }
            buf = null;
        } else {
            ctx.write(msg, promise);
        }
    } catch (EncoderException e) {
        throw e;
    } catch (Throwable e) {
        throw new EncoderException(e);
    } finally {
        if (buf != null) {
            // todo: release
            buf.release();
        }
    }
}

  • Wrap the message msg we are sending into a ByteBuf
  • Encode: call encode(), an abstract method implemented by our custom encoder
    • Our implementation is very simple: it writes two pieces of data into the buf
      • the message length, as an int
      • the message body
  • Release the msg
  • Continue propagating the write() event forward
  • Finally, release the ByteBuf created in the first step if it was not handed off

summary

At this point the encoder's part of the flow is complete. As we can see, its architecture and logic mirror the decoder's: much like the template method pattern, the framework does the heavy lifting and we only fill in encode().


In fact, before that final step above (releasing the ByteBuf created in the first step), the message has not yet been written into the JDK's underlying ByteBuffer. So how does that happen? Don't forget that the previous step keeps propagating the write() event forward; it eventually reaches the HeadContext, and the class HeadContext works with directly is Unsafe. That is no surprise: in Netty, both the client and the server rely on Unsafe to read and write data on the underlying channel.
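
For reference, the outbound write handling in HeadContext (inside DefaultChannelPipeline) is tiny; in Netty 4.x it looks roughly like this, simply handing the message to Unsafe:

// DefaultChannelPipeline.HeadContext (abridged): the head of the pipeline
// delegates outbound writes straight to the channel's Unsafe
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
    unsafe.write(msg, promise);
}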

The analysis below goes through the two waves of work done by writeAndFlush() in detail.

The first wave: propagating the write() event

We follow HeadContext's write(); what HeadContext depends on is unsafe.write(), so we go straight to AbstractUnsafe inside AbstractChannel. Its source is as follows:

@Override
public final void write(Object msg, ChannelPromise promise) {
    assertEventLoop();
    ChannelOutboundBuffer outboundBuffer = this.outboundBuffer;
    if (outboundBuffer == null) { // todo: outboundBuffer caches the buffers written in; if it is gone, release and return
        ReferenceCountUtil.release(msg);
        return;
    }

    int size;
    try {
        // todo: make the buffer direct (see the AbstractNioByteChannel implementation)
        msg = filterOutboundMessage(msg);

        size = pipeline.estimatorHandle().size(msg);
        if (size < 0) {
            size = 0;
        }
    } catch (Throwable t) {
        safeSetFailure(promise, t);
        ReferenceCountUtil.release(msg);
        return;
    }
    // todo: insert into the write queue, i.e. add msg to outboundBuffer
    // todo: outboundBuffer is a ChannelOutboundBuffer; it acts as a container
    // todo: below we will see how msg is added into the ChannelOutboundBuffer
    outboundBuffer.addMessage(msg, size, promise);
}

The msg parameter here is the message that our custom encoder's superclass already packaged into a ByteBuf.

This method mainly does three things:

  • First: filterOutboundMessage(msg) converts the ByteBuf into a DirectByteBuf

Stepping into its implementation, the IDE shows that subclasses override this method. Which one? AbstractNioByteChannel, a class from the client-channel family, the counterpart of the server-side AbstractNioMessageChannel.

The source is as follows:

protected final Object filterOutboundMessage(Object msg) {
    if (msg instanceof ByteBuf) {
        ByteBuf buf = (ByteBuf) msg;
        if (buf.isDirect()) {
            return msg;
        }

        return newDirectBuffer(buf);
    }

    if (msg instanceof FileRegion) {
        return msg;
    }

    throw new UnsupportedOperationException(
            "unsupported message type: " + StringUtil.simpleClassName(msg) + EXPECTED_TYPES);
}

  • Second: insert the converted DirectBuffer into the write queue

What is the write queue, and what is it for?

It is actually a container defined by Netty, implemented as a singly linked list. Why do we need such a container? Recall that when the server wants to send a message to the client, the message is wrapped into a ByteBuf, and from there it can be written to the client in two ways:

  • write()
  • writeAndFlush()

The difference between the two: the former only writes (it fills a ByteBuf) but does not flush, so the content never makes it into the JDK's native ByteBuffer; writeAndFlush() is more convenient, because it first writes msg into a ByteBuf and then flushes it straight into the socket, all in one go.
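
A small illustration of that difference (msg1/msg2/msg3 are placeholders for already-built messages):

// write() only queues the message as an Entry in the ChannelOutboundBuffer;
// nothing reaches the socket until flush() is called
ctx.write(msg1);          // queued, not yet flushed
ctx.write(msg2);          // queued as well
ctx.flush();              // both entries are now flushed to the underlying channel

// writeAndFlush() does the two steps in a single call
ctx.writeAndFlush(msg3);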

But what if the caller happens not to use writeAndFlush() and uses plain write() instead? The ByteBuf holding the message is still propagated all the way to the handler at the head of the pipeline, and then what? Unsafe cannot write it to the client yet, but surely it should not be discarded either.

The write queue solves exactly this problem. It uses a linked list as its data structure: every newly arriving ByteBuf is wrapped into a node (an Entry) and kept there. To tell which nodes in the list have been flushed and which have not, it maintains three pointers, listed below (a simplified sketch of the structure follows the list):

  • flushedEntry: the first entry that has been marked for flushing
  • tailEntry: the tail node
  • unflushedEntry: the first entry that has not yet been marked for flushing
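
To make the structure concrete, here is a deliberately simplified sketch of the write queue. It is not Netty's real ChannelOutboundBuffer (the real Entry carries more fields such as the promise and pending size), but it shows the three pointers and the tail insertion discussed next:

// Conceptual sketch only, not Netty's actual ChannelOutboundBuffer.
final class WriteQueueSketch {
    static final class Entry {
        Object msg;   // the (direct) ByteBuf being queued
        Entry next;   // next node in the singly linked list
    }

    Entry flushedEntry;    // first entry already marked for flushing
    Entry unflushedEntry;  // first entry not yet marked for flushing
    Entry tailEntry;       // last entry; new writes are appended here

    void addMessage(Object msg) {  // mirrors the tail insertion shown below
        Entry e = new Entry();
        e.msg = msg;
        if (tailEntry == null) {
            flushedEntry = null;
            tailEntry = e;
        } else {
            tailEntry.next = e;
            tailEntry = e;
        }
        if (unflushedEntry == null) {
            unflushedEntry = e;
        }
    }
}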

Now let's look at how a new node is added to the write queue.

addMessage(Object msg, int size, ChannelPromise promise): adding to the write queue

public void addMessage(Object msg, int size, ChannelPromise promise) {
    // todo: wrap the three arguments into an entity
    // todo: call the factory method to create an Entry; every unit in the ChannelOutboundBuffer is an Entry, which further wraps msg
    Entry entry = Entry.newInstance(msg, size, total(msg), promise);

    // todo: adjust the three pointers (their definitions are described above)
    if (tailEntry == null) {
        flushedEntry = null;
        tailEntry = entry;
    } else {
        Entry tail = tailEntry;
        tail.next = entry;
        tailEntry = entry;
    }
    if (unflushedEntry == null) {
        unflushedEntry = entry;
    }

    // increment pending bytes after adding message to the unflushed arrays.
    // See https://github.com/netty/netty/issues/1619
    // todo: follow this method
    incrementPendingOutboundBytes(entry.pendingSize, false);
}

Looking at the source, it performs a simple tail insertion into the linked list: the new entry is always appended at the end, and the pointers mark which stretch of the list has been flushed and which has not.

After each new node is added, incrementPendingOutboundBytes(entry.pendingSize, false) is called. Its job is to maintain the write state. How? Looking at its source below, it keeps a running total of the pending ByteBuf capacity, and once that total exceeds a threshold it propagates a channel-unwritable event.

  • This is the third thing write() does
private void incrementPendingOutboundBytes(long size, boolean invokeLater) {
    if (size == 0) {
        return;
    }
    // todo: TOTAL_PENDING_SIZE_UPDATER tracks the bytes currently buffered and waiting to be written
    // todo: accumulate
    long newWriteBufferSize = TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, size);
    // todo: the new pending size must not exceed getWriteBufferHighWaterMark(), which defaults to 64 * 1024 (64 KB)
    if (newWriteBufferSize > channel.config().getWriteBufferHighWaterMark()) {
        // todo: once 64 KB is exceeded, enter this method
        setUnwritable(invokeLater);
    }
}

summary:

So far the first wave, the write() event, is complete. As we can see, its job is to use the ChannelOutboundBuffer to hold on to every ByteBuf that a write event propagated down, keeping them until the flush event arrives.

The second wave: propagating the flush() event

Back in AbstractChannel's AbstractUnsafe, we look at how the second wave, the flush event, is handled. Its source is below; it mainly does the following three things:

  • Add the flush flag and set the write state
  • Traverse the buffer queue and filter the ByteBufs that can be flushed
  • Call the underlying JDK API to spin-write the data
// todo: ultimately propagated to here
@Override
public final void flush() {
    assertEventLoop();

    ChannelOutboundBuffer outboundBuffer = this.outboundBuffer;
    if (outboundBuffer == null) {
        return;
    }
    // todo: add the flush flag, set the write state
    outboundBuffer.addFlush();

    // todo: traverse the buffer queue, filter the ByteBufs
    flush0();
}

Adding the flush flag and setting the write state

What does adding the flush flag actually mean? It simply moves the cursor positions in the list, so that the three pointers cleanly divide the entries into flushed and not-yet-flushed nodes.

OK, let's continue.

Next, let's see how the state is set. The addFlush() source is as follows:

/**
 * todo: flag the cached entries in the ChannelOutboundBuffer; this means every Entry
 * todo: previously added to the ChannelOutboundBuffer will now be counted as flushed
 */
public void addFlush() {
    // todo: entry starts at unflushedEntry, i.e. the leftmost not-yet-used entry in the list
    Entry entry = unflushedEntry;

    if (entry != null) {
        if (flushedEntry == null) {
            // there is no flushedEntry yet, so start with the entry
            flushedEntry = entry;
        }
        do {
            flushed ++;
            if (!entry.promise.setUncancellable()) {
                // Was cancelled so make sure we free up memory and notify about the freed bytes
                int pending = entry.cancel();
                // todo: follow this method
                decrementPendingOutboundBytes(pending, false, true);
            }
            entry = entry.next;
        } while (entry != null);

        // All flushed so reset unflushedEntry
        unflushedEntry = null;
    }
}

The goal is to move a pointer so as to change the status of the nodes. Which pointer? flushedEntry: it points to the first node that is ready to be flushed, meaning everything to its left has already been dealt with.

The code below picks the starting position: if flushedEntry == null, no node has been flushed yet, so the start is positioned at the leftmost entry.

if (flushedEntry == null) {
    // there is no flushedEntry yet, so start with the entry
    flushedEntry = entry;
}

Then comes a do-while loop that walks from the starting entry to the tail, marking each node as flushed; for entries whose promise was cancelled, the capacity that was accumulated during write is subtracted again, as the source shows:

private void decrementPendingOutboundBytes(long size, boolean invokeLater, boolean notifyWritability) {
    if (size == 0) {
        return;
    }
    // todo: subtract size each time
    long newWriteBufferSize = TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, -size);
    // todo: getWriteBufferLowWaterMark() defaults to 32 KB
    // todo: once newWriteBufferSize drops below 32 KB, flip the channel from unwritable back to writable
    if (notifyWritability && newWriteBufferSize < channel.config().getWriteBufferLowWaterMark()) {
        setWritable(invokeLater);
    }
}

This is again done with an atomic field updater. In addition, once the pending capacity drops below 32 KB after the subtraction, a channel-writable event is propagated.
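
Both thresholds can be tuned, and application code can react to the writable/unwritable transitions they trigger. A hedged sketch using standard Netty 4.x APIs (the option-based configuration is one of several ways to set the water marks):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;

public class BackpressureAwareHandler extends ChannelInboundHandlerAdapter {

    // Example of raising/lowering the water marks on a server bootstrap.
    static void configureWaterMarks(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 64 * 1024)
                 .childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 32 * 1024);
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
        if (ctx.channel().isWritable()) {
            // pending bytes fell below the low water mark: safe to resume writing
        } else {
            // pending bytes exceeded the high water mark: stop producing data for now
        }
        ctx.fireChannelWritabilityChanged();
    }
}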

Traversing the buffer queue and filtering the ByteBufs

This is the highlight of flush: it is where the data actually gets written to the socket.

Following the source, we reach doWrite(ChannelOutboundBuffer in), an abstract method of AbstractChannel: the write logic is deliberately left abstract so that each concrete channel can supply its own implementation. Since we are writing to a client here, the implementation we care about lives in AbstractNioByteChannel. Stepping into it, the source is as follows:

boolean setOpWrite = false;
// todo: the whole thing is an endless loop that filters ByteBufs
for (;;) {
    // todo: get the first flushed entry; this entry holds the ByteBuf we need
    Object msg = in.current();
    if (msg == null) {
        // Wrote all messages.
        clearOpWrite();
        // Directly return here so incompleteWrite(...) is not called.
        return;
    }

    if (msg instanceof ByteBuf) {
        // todo: part three, spin-writing at the JDK level
        ByteBuf buf = (ByteBuf) msg;
        int readableBytes = buf.readableBytes();
        if (readableBytes == 0) {
            // todo: nothing readable in the current ByteBuf, just remove it
            in.remove();
            continue;
        }

        boolean done = false;
        long flushedAmount = 0;
        if (writeSpinCount == -1) {
            // todo: fetch the write spin count that Netty uses for the spin loop
            writeSpinCount = config().getWriteSpinCount();
        }
        // todo: this for loop spins, trying to write data into the JDK's underlying ByteBuffer
        for (int i = writeSpinCount - 1; i >= 0; i --) {

            // todo: write the corresponding buf into the socket
            // todo: localFlushedAmount is how many bytes were written into the JDK ByteBuffer this round
            int localFlushedAmount = doWriteBytes(buf);

            if (localFlushedAmount == 0) {
                setOpWrite = true;
                break;
            }
            // todo: accumulate the total number of bytes written
            flushedAmount += localFlushedAmount;
            // todo: if all the data in buf has been written, mark it done and exit the loop
            if (!buf.isReadable()) {
                done = true;
                break;
            }
        }

        in.progress(flushedAmount);

        // todo: spinning is over; if everything was written, done = true
        if (done) {
            // todo: follow this
            in.remove();
        } else {
            // Break the loop and so incompleteWrite(...) is called.
            break;
        }
    ....

This piece of code is quite long; its main logic is as follows:

An outer infinite loop guarantees that every node's ByteBuf gets handled. Each node is fetched through Object msg = in.current(); looking at its implementation, it simply returns the node that flushedEntry currently marks:

public Object current() {
    Entry entry = flushedEntry;
    if (entry == null) {
        return null;
    }

    return entry.msg;
}

Next it spins, up to 16 times by default (the write spin count), trying to write data into the JDK's underlying ByteBuffer by calling doWriteBytes(buf). That is again an abstract method of this class; the concrete implementation in the client channel wrapper class NioSocketChannel is as follows:

// todo
@Override
protected int doWriteBytes(ByteBuf buf) throws Exception {
    final int expectedWrittenBytes = buf.readableBytes();
    // todo: write the bytes into the native Java channel
    return buf.readBytes(javaChannel(), expectedWrittenBytes);
}
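
The 16 spins are not fixed either; that is just the default returned by getWriteSpinCount(), and it can be changed with a channel option, for example:

// Hedged sketch: tune how many write attempts doWrite() makes per flush pass.
// 16 is Netty's default write spin count; "bootstrap" is a placeholder ServerBootstrap.
bootstrap.childOption(ChannelOption.WRITE_SPIN_COUNT, 16);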

The readBytes() call inside doWriteBytes() is once again an abstract method. Because we turned the ByteBuf into a direct buffer earlier, the implementation class involved is PooledDirectByteBuf. Following it, we finally reach the crucial moment:

// todo
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
    checkReadableBytes(length);
    // todo: the key call is getBytes(), follow it
    int readBytes = getBytes(readerIndex, out, length, true);
    readerIndex += readBytes;
    return readBytes;
}

// following into getBytes() (abridged):
getBytes() {
    index = idx(index);
    // todo: stuff Netty's ByteBuf into the JDK ByteBuffer tmpBuf
    tmpBuf.clear().position(index).limit(index + length);
    // todo: call the JDK's write() method
    return out.write(tmpBuf);
}

Finally, the node that has been fully written is removed with remove(); as the source shows, this too is just a linked-list operation:

private void removeEntry(Entry e) {
    if (-- flushed == 0) { // todo: if this was the last flushed node, clear all the pointers
        // processed everything
        flushedEntry = null;
        if (e == tailEntry) {
            tailEntry = null;
            unflushedEntry = null;
        }
    } else { // todo: otherwise, advance flushedEntry to the next node
        flushedEntry = e.next;
    }
}

summary

At this point the second wave of propagation is complete.

write

  • Convert the ByteBuf into a DirectBuffer
  • Wrap the message into an entry and insert it into the write queue
  • Set the write state

flush

  • Set the flush flag and update the write state
  • Traverse the buffer queue and filter the ByteBufs
  • Call the underlying JDK API to write each ByteBuf into the native JDK ByteBuffer

Origin www.cnblogs.com/ZhuChangwu/p/11228433.html