Muduo network library source code reproduction notes (24): implementing a multi-threaded server

Introduction to Muduo Network Library

Muduo is a modern C++ network library based on the Reactor pattern, written by Chen Shuo. It uses non-blocking I/O, is event-driven and callback-based, natively supports multi-core and multi-threaded machines, and is well suited to writing multi-threaded network applications for Linux servers.
The core of the muduo network library is only a few thousand lines of code, which makes it an open-source library well worth studying at the advanced stage of learning network programming. I have just started working through its source, and I hope to record the learning process here. The library is published on GitHub. The author has since rewritten it in C++11; the version I am studying predates C++11, but the two are similar and the core ideas are unchanged. My own reproduction is also available on GitHub. Notes from 17 onward record the implementation of muduo's net library; the reproduction of the base library is recorded in the earlier series, The implementation process of muduo's base library. The network-library notes so far:
Muduo network library source code reproduction notes (17): an EventLoop that does nothing
Muduo network library source code reproduction notes (18): key Reactor structures
Muduo network library source code reproduction notes (19): the TimerQueue timer
Muduo network library source code reproduction notes (20): the EventLoop::runInLoop() function and the EventLoopThread class
Muduo network library source code reproduction notes (21): the Acceptor, InetAddress, and Sockets classes, and SocketsOps.cc
Muduo network library source code reproduction notes (22): the TcpServer class and a first look at TcpConnection
Muduo network library source code reproduction notes (23): TcpConnection disconnection

1 Implementing a multi-threaded server

As mentioned in the previous post, the multi-threaded server in the muduo library follows the one loop per thread model. As shown in the figure below, the I/O thread that owns the Acceptor is called the mainReactor. When a client initiates a connection, the Acceptor accepts it and hands the new connection off to a subReactor, which serves that client from its own thread; this is what makes the server multi-threaded. We have also already seen how the EventLoopThread class creates an EventLoop inside a new thread. Combining these pieces, this post discusses how to implement a multi-threaded server with a thread pool.
(Figure: multi-threaded server — mainReactor dispatching connections to subReactors)
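To preview where this is heading, the user-visible change is small: one call to setThreadNum() before start() turns the single-threaded server of the earlier notes into a multi-threaded one. A minimal sketch, assuming the TcpServer interface from note 22 (callback registration is elided and the port number is arbitrary):

EventLoop loop;                      // mainReactor: the thread that owns the Acceptor
InetAddress listenAddr(9981);
TcpServer server(&loop, listenAddr);
server.setThreadNum(3);              // create three subReactor threads
server.start();
loop.loop();                         // the main thread loops here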

2 The EventLoopThreadPool class

EventLoopThreadPool is, as its name implies, a pool of EventLoop threads. Note its two container members: threads_ is a boost::ptr_vector of EventLoopThread objects (which it owns), and loops_ is a vector of EventLoop pointers. baseLoop_ is the EventLoop that owns the Acceptor, and numThreads_ is the number of threads in the pool. The meaning of next_ is discussed below.

class EventLoopThreadPool : boost::noncopyable
{
public:
	typedef boost::function<void(EventLoop*)> ThreadInitCallback;

	EventLoopThreadPool(EventLoop* baseLoop);
	~EventLoopThreadPool();
	void setThreadNum(int numThreads) { numThreads_ = numThreads; }
	void start(const ThreadInitCallback& cb = ThreadInitCallback());
	EventLoop* getNextLoop();

private:
	EventLoop* baseLoop_;  // the same EventLoop as the Acceptor's
	bool started_;
	int numThreads_;
	int next_;  // index of the EventLoop to hand out when a new connection arrives
	boost::ptr_vector<EventLoopThread> threads_;  // owns the EventLoopThread objects
	std::vector<EventLoop*> loops_;               // one EventLoop per thread
};

The start() function is also easy to understand: it creates numThreads_ threads, each of which starts an EventLoop that then waits for work; a sketch of start() follows the getNextLoop() listing below. getNextLoop() works as follows: when a client initiates a connection, the mainReactor calls getNextLoop() to pick the subReactor that will own the new connection. The selection policy is round-robin: loops are handed out from the pool in turn, so the next_ member mentioned above is the index of the next loop to hand out, and it wraps back to zero once it reaches the size of loops_.

EventLoop* EventLoopThreadPool::getNextLoop()
{
	baseLoop_->assertInLoopThread();
	EventLoop* loop = baseLoop_;  // fall back to baseLoop_ if loops_ is empty
	// round-robin
	if (!loops_.empty())
	{
		loop = loops_[next_];
		++next_;
		if (implicit_cast<size_t>(next_) >= loops_.size())
		{
			next_ = 0;
		}
	}
	return loop;
}
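For completeness, here is a sketch of start() consistent with the class declaration above. It assumes the EventLoopThread::startLoop() from note 20, which blocks until the new thread's EventLoop is running and then returns a pointer to it, and an EventLoopThread constructor that accepts the init callback:

void EventLoopThreadPool::start(const ThreadInitCallback& cb)
{
	assert(!started_);
	baseLoop_->assertInLoopThread();  // must be called from the mainReactor thread
	started_ = true;

	for (int i = 0; i < numThreads_; ++i)
	{
		EventLoopThread* t = new EventLoopThread(cb);
		threads_.push_back(t);             // ptr_vector takes ownership
		loops_.push_back(t->startLoop());  // blocks until the loop is running
	}
	if (numThreads_ == 0 && cb)
	{
		cb(baseLoop_);  // single-threaded case: run the init callback on baseLoop_
	}
}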

3 Modifications to TcpServer
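Besides the two member functions below, TcpServer itself gains a thread pool. A sketch of the likely glue, assuming members named threadPool_ (a scoped_ptr<EventLoopThreadPool> created in the constructor), started_, and the Acceptor wrapper from note 21; setThreadNum() must be called before start(), and start() brings the pool up before the Acceptor begins listening:

void TcpServer::setThreadNum(int numThreads)
{
	assert(0 <= numThreads);
	threadPool_->setThreadNum(numThreads);  // call before start()
}

void TcpServer::start()
{
	if (!started_)
	{
		started_ = true;
		threadPool_->start();  // spawn the subReactor threads
	}
	if (!acceptor_->listenning())
	{
		loop_->runInLoop(
			boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
	}
}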

3.1 Establishing a connection

Three changes are needed in newConnection(). The EventLoop bound to the new TcpConnection is no longer the I/O thread that owns the Acceptor but an ioLoop fetched from the thread pool with getNextLoop(); that ioLoop is passed to the TcpConnection constructor; and connectEstablished is executed on the ioLoop via runInLoop(), since the connection's channel must be registered in the thread that will own it.

void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
	loop_->assertInLoopThread();
	// connName and localAddr are built exactly as before (see note 22); elided here
	EventLoop* ioLoop = threadPool_->getNextLoop();  // pick a subReactor round-robin
	TcpConnectionPtr conn(new TcpConnection(ioLoop,  // bind the connection to ioLoop
											connName,
											sockfd,
											localAddr,
											peerAddr));
	// storing conn in connections_ and registering its callbacks is unchanged (elided)
	ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}

3.2 Disconnecting

On disconnection, removeConnection() is split in two. It is installed as the connection's close callback, so it may now be invoked from the connection's own I/O thread; but connections_ belongs to the thread of loop_, so the actual bookkeeping is forwarded to loop_ via runInLoop(), and connectDestroyed is then queued back onto the connection's own I/O thread.

void TcpServer::removeConnection(const TcpConnectionPtr& conn)
{
	// may be invoked from the connection's own I/O thread;
	// forward the work to the thread that owns connections_
	loop_->runInLoop(boost::bind(&TcpServer::removeConnectionInLoop, this, conn));
}

void TcpServer::removeConnectionInLoop(const TcpConnectionPtr& conn)
{
	loop_->assertInLoopThread();

	LOG_INFO << "TcpServer::removeConnectionInLoop [" << name_
		<< "] - connection " << conn->name();

	LOG_TRACE << " [8] usecount = " << conn.use_count();
	size_t n = connections_.erase(conn->name());
	LOG_TRACE << " [9] usecount = " << conn.use_count();

	(void)n;
	assert(n == 1);
	EventLoop* ioLoop = conn->getLoop();
	ioLoop->queueInLoop(boost::bind(&TcpConnection::connectDestroyed, conn));
	// single-threaded version:
	// loop_->queueInLoop(
	//	boost::bind(&TcpConnection::connectDestroyed, conn));
	LOG_TRACE << " [10] usecount = " << conn.use_count();
}
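A small test program makes the effect visible. The sketch below follows the style of the test programs in the earlier notes and assumes their interfaces: the two-argument TcpServer constructor, the early message-callback signature taking const char*/ssize_t (Buffer has not been introduced yet), and CurrentThread::tid() from the base-library notes. With setThreadNum(3), the tid printed by the callbacks differs from main()'s, showing that connections really are served on subReactor threads:

#include "TcpServer.h"
#include "EventLoop.h"
#include "InetAddress.h"
#include <stdio.h>
#include <unistd.h>

void onConnection(const TcpConnectionPtr& conn)
{
	if (conn->connected())
	{
		printf("onConnection(): new connection [%s], tid = %d\n",
			   conn->name().c_str(), CurrentThread::tid());
	}
	else
	{
		printf("onConnection(): connection [%s] is down\n",
			   conn->name().c_str());
	}
}

void onMessage(const TcpConnectionPtr& conn, const char* data, ssize_t len)
{
	printf("onMessage(): received %zd bytes from connection [%s], tid = %d\n",
		   len, conn->name().c_str(), CurrentThread::tid());
}

int main()
{
	printf("main(): pid = %d, tid = %d\n", getpid(), CurrentThread::tid());

	InetAddress listenAddr(9981);
	EventLoop loop;  // the mainReactor
	TcpServer server(&loop, listenAddr);
	server.setConnectionCallback(onConnection);
	server.setMessageCallback(onMessage);
	server.setThreadNum(3);  // three subReactors
	server.start();

	loop.loop();
}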

Origin blog.csdn.net/MoonWisher_liang/article/details/107667948