Design and implementation of the muduo library, study notes 10: multi-threaded TcpServer

Dongyang's study notes

The multi-threaded TcpServer uses the EventLoopThreadPool class.

1. EventLoopThreadPool

The key step in implementing a multi-threaded TcpServer under the one-loop-per-thread model is to pick an EventLoop from the event loop pool for each new TcpConnection and let that connection use it. In other words:

  • The multi-threaded TcpServer's own EventLoop is used only to accept new connections; each new connection performs its IO on another EventLoop taken from the thread pool.
  • In the single-threaded case, TcpServer's EventLoop is shared with its TcpConnections.

The event loop pool of muduo is represented by the EventLoopThreadPool class; its interface is as follows:

class EventLoopThreadPool : boost::noncopyable
{
 public:
  EventLoopThreadPool(EventLoop* baseLoop);
  ~EventLoopThreadPool();
  void setThreadNum(int numThreads) { numThreads_ = numThreads; }
  void start();
  EventLoop* getNextLoop();

 private:
  EventLoop* baseLoop_;
  bool started_;
  int numThreads_;
  int next_;  // always in loop thread
  boost::ptr_vector<EventLoopThread> threads_;
  std::vector<EventLoop*> loops_;
};
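For reference, a minimal sketch of what start() might look like, assuming (as in the earlier EventLoopThread article of this series) that EventLoopThread::startLoop() blocks until the new thread's EventLoop exists and then returns a pointer to it:

// Sketch only: create numThreads_ IO threads and collect their loops;
// getNextLoop() will later hand these out round-robin.
void EventLoopThreadPool::start()
{
  assert(!started_);
  baseLoop_->assertInLoopThread();
  started_ = true;

  for (int i = 0; i < numThreads_; ++i)
  {
    EventLoopThread* t = new EventLoopThread;
    threads_.push_back(t);             // ptr_vector takes ownership
    loops_.push_back(t->startLoop());  // blocks until the loop is running
  }
}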

2. TcpServer creates a new TcpConnection each time

Every time TcpServer creates a new TcpConnection, it calls getNextLoop() to obtain an EventLoop. For a single-threaded service, getNextLoop() returns baseLoop_ every time, i.e. the loop_ that TcpServer itself uses.

  • For the meaning of the parameter of setThreadNum(), see the TcpServer code comment below (a usage sketch follows it).
  /// Set the number of threads for handling input.
  ///
  /// Always accepts new connection in loop's thread.
  /// Must be called before @c start
  /// @param numThreads
  /// - 0 means all I/O in loop's thread, no thread will created.
  ///   this is the default value.
  /// - 1 means all I/O in another thread.
  /// - N means a thread pool with N threads, new connections
  ///   are assigned on a round-robin basis.
  void setThreadNum(int numThreads);
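A typical way to turn this on from user code is sketched below; the port, thread count, and omitted callbacks are placeholders for illustration only:

// Hypothetical setup: 4 IO threads plus the acceptor loop.
EventLoop loop;                        // acceptor loop, becomes baseLoop_
InetAddress listenAddr(9981);          // placeholder port
TcpServer server(&loop, listenAddr);
// server.setConnectionCallback(...);  // callbacks omitted for brevity
server.setThreadNum(4);                // must be called before start()
server.start();
loop.loop();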

TcpServer only needs to add a member function and a member variable.

 private:
  /// Not thread safe, but in loop
  void newConnection(int sockfd, const InetAddress& peerAddr);
+ /// Thread safe.
  void removeConnection(const TcpConnectionPtr& conn);
+ /// Not thread safe, but in loop
+ void removeConnectionInLoop(const TcpConnectionPtr& conn);

  typedef std::map<std::string, TcpConnectionPtr> ConnectionMap;

  EventLoop* loop_;  // the acceptor loop
  const std::string name_;
  boost::scoped_ptr<Acceptor> acceptor_; // avoid revealing Acceptor
+ boost::scoped_ptr<EventLoopThreadPool> threadPool_;
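For completeness, a rough sketch of how the pool is wired in, following the simplified TcpServer used in this series (threadPool_ is assumed to be created in the constructor initializer list as threadPool_(new EventLoopThreadPool(loop)); the exact details may differ):

// Sketch: setThreadNum() forwards to the pool; start() spins up the IO
// threads before the Acceptor begins listening.
void TcpServer::setThreadNum(int numThreads)
{
  assert(0 <= numThreads);
  threadPool_->setThreadNum(numThreads);
}

void TcpServer::start()
{
  if (!started_)
  {
    started_ = true;
    threadPool_->start();  // start the IO threads first
  }
  if (!acceptor_->listenning())
  {
    loop_->runInLoop(
        boost::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}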

2.1 TcpServer::newConnection()

The change for the multi-threaded TcpServer is very small: only three lines of the new-connection code are modified.

  • In the single-threaded case, TcpServer passes its own loop_ to the TcpConnection;
  • In the multi-threaded case, it obtains an ioLoop from the EventLoopThreadPool for every new connection.
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  char buf[32];
  snprintf(buf, sizeof buf, "#%d", nextConnId_);
  ++nextConnId_;
  std::string connName = name_ + buf;

  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toHostPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
+ EventLoop* ioLoop = threadPool_->getNextLoop();   // get an ioLoop from the thread pool for each new connection: loops_[next_]
  TcpConnectionPtr conn(
!      new TcpConnection(ioLoop, connName, sockfd, localAddr, peerAddr));
  connections_[connName] = conn;
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  conn->setCloseCallback(
      boost::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
!  ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}
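For context, connectEstablished() now runs in ioLoop rather than in the acceptor's loop. A rough sketch of the version introduced earlier in this series:

// Runs in the connection's own ioLoop: start watching the socket and
// notify the user that the connection is up.
void TcpConnection::connectEstablished()
{
  loop_->assertInLoopThread();   // loop_ here is the connection's ioLoop
  assert(state_ == kConnecting);
  setState(kConnected);
  channel_->enableReading();     // register readable events on ioLoop
  connectionCallback_(shared_from_this());
}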

2.2 TcpServer::removeConnection()

The destruction of connections in the multi-threaded case is not complicated either. The original removeConnection() is split into two functions: TcpConnection calls removeConnection() in its own ioLoop, so the work has to be moved over to TcpServer's loop_ thread (TcpServer uses no locks).

  • connectDestroyed() is then moved back to TcpConnection's ioLoop thread, which guarantees that TcpConnection's ConnectionCallback is always called in its ioLoop and makes life easier for client code.
void TcpServer::removeConnection(const TcpConnectionPtr& conn)
{
+  // FIXME: unsafe
+  loop_->runInLoop(
+      boost::bind(&TcpServer::removeConnectionInLoop, this, conn));
}

void TcpServer::removeConnectionInLoop(const TcpConnectionPtr& conn)
+{
  loop_->assertInLoopThread();
!  LOG_INFO << "TcpServer::removeConnectionInLoop [" << name_
           << "] - connection " << conn->name();
  size_t n = connections_.erase(conn->name());
  assert(n == 1); (void)n;
+  EventLoop* ioLoop = conn->getLoop();
!  ioLoop->queueInLoop(
      boost::bind(&TcpConnection::connectDestroyed, conn));
}

All in all, the TcpServer and TcpConnection code deals only with the single-threaded case (there is not even a mutex); with the help of EventLoop::runInLoop() and the introduction of EventLoopThreadPool, implementing a multi-threaded TcpServer is a breeze.

  • Note: the thread switches between ioLoop and loop_ happen only when a connection is established or torn down, so they do not affect the performance of normal service.

3. Scheduling method

Muduo currently uses the simplest round-robin algorithm to select EventLoop in the pool.

Round-robin scheduling dispatches requests to the servers in turn: each scheduling step executes i = (i + 1) mod n and selects the i-th server. Its advantage is simplicity; it does not need to record the current state of each connection, so it is a stateless scheduling algorithm. A sketch of getNextLoop() built on this rule follows the note below.

  • A TcpConnection is not allowed to change its EventLoop while it is in use. This policy works for both long-lived and short-lived connection services and is unlikely to cause load imbalance.
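A minimal sketch of getNextLoop(), assuming next_ starts at 0 and loops_ was filled by start(): fall back to baseLoop_ when the pool is empty, otherwise advance the cursor round-robin.

// Sketch: with no IO threads, every connection stays on the acceptor's
// own loop (single-threaded mode); otherwise pick loops_ in turn.
EventLoop* EventLoopThreadPool::getNextLoop()
{
  baseLoop_->assertInLoopThread();
  EventLoop* loop = baseLoop_;
  if (!loops_.empty())
  {
    loop = loops_[next_];          // round-robin: i = (i + 1) mod n
    ++next_;
    if (static_cast<size_t>(next_) >= loops_.size())
    {
      next_ = 0;
    }
  }
  return loop;
}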

3.1 Extension: pool sharing

The current design of muduo is that each TcpServer has its own pool; pools are not shared between different TcpServers.

  • One possible extension is to let multiple TcpServers share a single EventLoopThreadPool.
  • Another possibility is for two TcpServers a and b to share one EventLoop aLoop, where a is a single-threaded server program (see the sketch below).
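The second case needs no new machinery; a minimal sketch, assuming the simplified TcpServer(EventLoop*, const InetAddress&) constructor used throughout this series (ports are placeholders):

// Hypothetical example: two servers driven by one EventLoop in one thread.
EventLoop aLoop;
TcpServer a(&aLoop, InetAddress(1234));  // a stays single-threaded
TcpServer b(&aLoop, InetAddress(5678));  // b reuses the same EventLoop
a.start();
b.start();
aLoop.loop();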

Origin blog.csdn.net/qq_22473333/article/details/113755556