Muduo network library source code reproduction notes (25): Buffer class

Introduction to Muduo Network Library

Muduo is a modern C++ network library based on the Reactor pattern, written by Chen Shuo. It uses a non-blocking IO model driven by events and callbacks, natively supports multi-core and multi-threaded operation, and is well suited for writing multi-threaded network applications on Linux servers.
The core of the muduo network library is only a few thousand lines of code, which makes it a very worthwhile open-source library to study at the advanced stage of learning network programming. I have just started working through its source code and hope to record the learning process here. The author has published the source code on GitHub; the version there has since been rewritten in C++11, while the version I am studying does not use C++11. The two are nevertheless similar, and the core ideas are unchanged. My own reproduction is also available on GitHub. Notes 17 onward record the implementation of muduo's net library; the earlier notes cover the reproduction of the base library. The network library notes so far are:
Muduo network library source code reproduction notes (17): An EventLoop that does nothing
Muduo network library source code reproduction notes (18): Key structures of the Reactor
Muduo network library source code reproduction notes (19): TimerQueue timer
Muduo network library source code reproduction notes (20): The EventLoop::runInLoop() function and the EventLoopThread class
Muduo network library source code reproduction notes (21): Acceptor class, InetAddress class, Socket class, SocketsOps.cc
Muduo network library source code reproduction notes (22): TcpServer class and a first look at TcpConnection
Muduo network library source code reproduction notes (23): TcpConnection disconnection
Muduo network library source code reproduction notes (24): Implementing a multi-threaded server

Buffer class

1 The role of the Buffer class

The Buffer class described in this section implements an application-layer buffer in user space. Muduo uses a non-blocking IO model, meaning that calls to read and write never block. Consider this scenario: the program wants to send 100KB of data over a TCP connection, but when it writes to the kernel buffer via write, the kernel only accepts 80KB. What happens to the remaining 20KB? This is where the user-space buffer comes in: it stores the remaining 20KB, registers interest in the POLLOUT event, sends the data when the socket becomes writable, and then stops watching POLLOUT. Such a buffer is called the output buffer.
Where there is an output buffer, there is also an input buffer. Because TCP is an unbounded byte-stream protocol, the following situation is common: the sender sends a 2KB message, but the receiver gets it in two reads of 500B and 1500B, or in three reads of 1000B, 500B, and 500B. The network library must accumulate whatever data each read returns and notify the program's business logic only once a complete message has been received.

2 Buffer operation

As mentioned earlier, a TCP connection needs two buffers, an input buffer and an output buffer. As shown in the private members of Buffer below, a vector stores the data; it is initialized to 8 + 1024 bytes, where the 8 bytes before readerIndex_ are reserved prepend space. readerIndex_ and writerIndex_ are two subscripts into the vector, and in the initial state they are equal (that is, there is no data waiting to be sent). Note that "read" and "write" in these names are from the user's point of view; from the kernel's point of view, the operations are the opposite.

private:
	std::vector<char> buffer_; //replaces a raw array
	size_t readerIndex_;  //position of data to be read (will be written to the kernel)
	size_t writerIndex_;  //position at which to write data (read from the kernel)

As shown in the figure (not reproduced here), when the kernel writes 200 bytes into the buffer, writerIndex_ moves back by 200 bytes. The region between readerIndex_ and writerIndex_ holds the readable data: this is what the user retrieves, and what can be written out to the kernel and sent. The region after writerIndex_ is the writable space, waiting to receive data read from the kernel.
Basic operation
If the user then retrieves 50 bytes from the buffer, readerIndex_ moves back by 50 bytes, indicating that those 50 bytes have been consumed.
The Buffer can also grow. If the user wants to append 1000 bytes at this point and the buffer judges that the current space is insufficient, it resizes the underlying vector.

3 readFd function

When receiving data with Buffer, the user effectively only calls the readFd function. It first prepares a 64KB buffer extrabuf on the stack, then reads the data with the readv function into two iovec entries (pointing at the buffer's writable space and at extrabuf, respectively). If the buffer's own capacity was sufficient, it only needs to advance writerIndex_; otherwise it calls the append member function to copy the overflow from extrabuf into the buffer (which performs the expansion).

ssize_t Buffer::readFd(int fd, int* savedErrno)
{
	// large stack buffer to catch whatever does not fit in buffer_
	char extrabuf[65536];
	struct iovec vec[2];
	const size_t writable = writableBytes();
	//first buffer: the writable space of buffer_
	vec[0].iov_base = begin() + writerIndex_;
	vec[0].iov_len = writable;
	//second buffer: the stack buffer
	vec[1].iov_base = extrabuf;
	vec[1].iov_len = sizeof extrabuf;

	const ssize_t n = sockets::readv(fd, vec, 2);
	if(n < 0)
	{
		*savedErrno = errno;
	}
	else if(implicit_cast<size_t>(n) <= writable)
	{
		// everything fit into buffer_: just advance writerIndex_
		writerIndex_ += n;
	}
	else
	{
		// buffer_ was filled completely; append the overflow from extrabuf
		writerIndex_ = buffer_.size();
		append(extrabuf, n - writable);
	}
	return n;
}


Origin blog.csdn.net/MoonWisher_liang/article/details/107700456