Analysis of muduo framework kernel

First of all, we need to analyze the core skeleton that supports the muduo framework. The skeleton of the muduo reactor consists of three classes: Channel, EventLoop, and Poller.

First look at Channel. A Channel is the conduit through which events travel inside the reactor: it binds one file descriptor to the callbacks that should run when that fd becomes ready. Since the threads of a process share memory, inter-thread communication is generally done through that shared memory. For threading, muduo adopts the "one loop per thread" model, which means every I/O thread owns exactly one EventLoop as its main loop.
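
As a structural sketch (simplified from the muduo headers from memory, declarations only; details vary by version), the essentials of Channel look like this:

#include <functional>
#include <utility>

class EventLoop;  // each Channel belongs to exactly one loop

// Simplified sketch of muduo's Channel: it does not own the fd, it only
// maps that fd's readiness events to user callbacks.
class Channel
{
 public:
  typedef std::function<void()> EventCallback;

  Channel(EventLoop* loop, int fd)
    : loop_(loop), fd_(fd), events_(0), revents_(0) {}

  void setReadCallback(EventCallback cb) { readCallback_ = std::move(cb); }
  void setWriteCallback(EventCallback cb) { writeCallback_ = std::move(cb); }

  void enableReading();   // adds read interest and registers with the loop's Poller
  void handleEvent();     // called by the loop: dispatches revents_ to the callbacks

 private:
  EventLoop* loop_;       // the owning loop (one loop per thread)
  const int fd_;          // the watched fd: socket, eventfd, timerfd, ...
  int events_;            // events we are interested in
  int revents_;           // events the poller reported as ready
  EventCallback readCallback_;
  EventCallback writeCallback_;
};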

The muduo source code is concise and easy to understand, and the design is compact. Let's trace the running flow of the whole framework starting from how it is used. A typical muduo server looks like this:

int main()
{
    // Initialize the MQTT global container
    ::signal(SIGPIPE, SIG_IGN);
    MQTTContainer.globalInit();
    //LOG_INFO << "pid = " << getpid() << ", tid = " << CurrentThread::tid();
    muduo::net::EventLoop loop;
    muduo::net::InetAddress listenAddr(9500);
    DeviceServer::MQTTServer server(&loop, listenAddr);
    server.start();
    loop.loop();
    return 0;
}

There are two very core classes here: EventLoop and muduo::net::TcpServer (DeviceServer::MQTTServer is presumably an application-level wrapper around TcpServer). Following this clue into TcpServer, two questions naturally come up: what does the TcpServer constructor do, and what does start() do?

Straight to the code:

TcpServer::TcpServer(EventLoop* loop,
                     const InetAddress& listenAddr,
                     const string& nameArg,
                     Option option)
  : loop_(CHECK_NOTNULL(loop)),
    ipPort_(listenAddr.toIpPort()),
    name_(nameArg),
    acceptor_(new Acceptor(loop, listenAddr, option == kReusePort)),
    threadPool_(new EventLoopThreadPool(loop, name_)),
    connectionCallback_(defaultConnectionCallback),
    messageCallback_(defaultMessageCallback),
    nextConnId_(1)
{
  acceptor_->setNewConnectionCallback(
      std::bind(&TcpServer::newConnection, this, _1, _2));
}

The core steps:

Initialize the EventLoop:

loop_(CHECK_NOTNULL(loop))

Initialize the thread pool:

threadPool_(new EventLoopThreadPool(loop, name_))

Initialize the TCP acceptor:

acceptor_(new Acceptor(loop, listenAddr, option == kReusePort))

OK, let's continue and see what start() does next:

void TcpServer::start()
{
  if (started_.getAndSet(1) == 0)
  {
    threadPool_->start(threadInitCallback_);

    assert(!acceptor_->listenning());
    loop_->runInLoop(
        std::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}

start() does one important thing: it starts the thread pool. Continuing along the same line of questioning: what does the thread pool constructor do, and what does its start() do?

First look at the thread pool constructor

EventLoopThreadPool::EventLoopThreadPool(EventLoop* baseLoop, const string& nameArg)
  : baseLoop_(baseLoop),
    name_(nameArg),
    started_(false),
    numThreads_(0),
    next_(0)
{
}

Its main job is to store baseLoop_; the other important attribute is numThreads_, the number of I/O threads.
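
numThreads_ defaults to 0, which means all I/O runs on the base loop. It is normally configured through TcpServer::setThreadNum before start(); here is a minimal hypothetical example (server name and thread count invented):

#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>
#include <muduo/net/TcpServer.h>

int main()
{
  muduo::net::EventLoop loop;
  muduo::net::InetAddress listenAddr(9500);
  muduo::net::TcpServer server(&loop, listenAddr, "demo");
  server.setThreadNum(4);  // the pool will spawn 4 EventLoopThreads in start()
  server.start();
  loop.loop();
  return 0;
}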

Good. Keep following start():

void EventLoopThreadPool::start(const ThreadInitCallback& cb)
{
  assert(!started_);
  baseLoop_->assertInLoopThread();

  started_ = true;

  for (int i = 0; i < numThreads_; ++i)
  {
    char buf[name_.size() + 32];
    snprintf(buf, sizeof buf, "%s%d", name_.c_str(), i);
    EventLoopThread* t = new EventLoopThread(cb, buf);
    threads_.push_back(std::unique_ptr<EventLoopThread>(t));
    loops_.push_back(t->startLoop());
  }
  if (numThreads_ == 0 && cb)
  {
    cb(baseLoop_);
  }
}

The code is easy to follow: muduo launches numThreads_ threads, saves the EventLoopThread objects in the pool's threads_ vector, and collects each thread's EventLoop pointer into loops_.

Pay attention to what cb is:

const ThreadInitCallback& cb

It is the thread initialization callback, i.e. whatever was registered through setThreadInitCallback on the server.
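
Continuing the hypothetical example from above, registering that callback looks like this (ThreadInitCallback is a std::function<void(EventLoop*)>):

  // hypothetical usage: runs once in every I/O thread, right after
  // that thread's EventLoop is created and before it starts looping
  server.setThreadInitCallback([](muduo::net::EventLoop* loop) {
    LOG_INFO << "I/O thread up, loop = " << loop;
  });
  server.setThreadNum(4);
  server.start();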

There is another very core piece in EventLoopThreadPool::start: the EventLoopThread. We still need to look at what it initializes and what startLoop() does. Note also that if the number of threads is 0,

  if (numThreads_ == 0 && cb)
  {
    cb(baseLoop_);
  }

the thread initialization callback runs directly on the base loop; otherwise it is carried into each EventLoopThread at construction time. Let's look at the EventLoopThread constructor:

EventLoopThread::EventLoopThread(const ThreadInitCallback& cb,
                                 const string& name)
  : loop_(NULL),
    exiting_(false),
    thread_(std::bind(&EventLoopThread::threadFunc, this), name),
    mutex_(),
    cond_(mutex_),
    callback_(cb)
{
}

It initializes thread_, mutex_, cond_, and callback_ (the thread initialization callback): the important threading components and the thread instance itself. Now let's look straight at startLoop:

EventLoop* EventLoopThread::startLoop()
{
  assert(!thread_.started());
  thread_.start();

  EventLoop* loop = NULL;
  {
    MutexLockGuard lock(mutex_);
    while (loop_ == NULL)
    {
      cond_.wait();
    }
    loop = loop_;
  }

  return loop;
}

This is a genuinely interesting piece of code: startLoop() starts the thread, then waits on a condition variable until the new thread has created its EventLoop and published the pointer into loop_. It is a very clever use of pthread condition variables, and it shows what a powerful tool they are in multithreaded programming.
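
To see the handshake in isolation, here is a minimal re-creation in standard C++ (using std::condition_variable instead of muduo's MutexLock/Condition wrappers; all names invented):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

int main()
{
  std::mutex mutex;
  std::condition_variable cond;
  int* resource = nullptr;  // stands in for loop_

  std::thread worker([&] {
    static int value = 42;  // stands in for the EventLoop; static only so the
                            // pointer stays valid in this demo after the thread exits
    {
      std::lock_guard<std::mutex> lock(mutex);
      resource = &value;    // publish the address under the lock
    }
    cond.notify_one();      // tell the waiting thread the resource is ready
  });

  {
    std::unique_lock<std::mutex> lock(mutex);
    // the predicate form guards against spurious wakeups,
    // just like muduo's while (loop_ == NULL) loop
    cond.wait(lock, [&] { return resource != nullptr; });
  }
  std::printf("worker published %d\n", *resource);

  worker.join();
  return 0;
}

Now, back to muduo: pthread_create really is called inside start(). Don't believe it? Look at the code in Thread::start: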

void Thread::start()
{
  assert(!started_);
  started_ = true;
  // FIXME: move(func_)
  detail::ThreadData* data = new detail::ThreadData(func_, name_, &tid_, &latch_);
  if (pthread_create(&pthreadId_, NULL, &detail::startThread, data))
  {
    started_ = false;
    delete data; // or no delete?
    LOG_SYSFATAL << "Failed in pthread_create";
  }
  else
  {
    latch_.wait();
    assert(tid_ > 0);
  }
}

After the thread is created, its entry point is startThread, so the next question is: what happens in startThread? Before that, I have to say the Thread class shows real ingenuity in its design:

int Thread::join()
{
  assert(started_);
  assert(!joined_);
  joined_ = true;
  return pthread_join(pthreadId_, NULL);
}

Thread::~Thread()
{
  if (started_ && !joined_)
  {
    pthread_detach(pthreadId_);
  }
}

You can call join() for explicit synchronization, but nothing breaks if you don't: when the Thread object is destroyed while the underlying thread is still running un-joined, the destructor calls pthread_detach so the thread's resources are not leaked. Isn't that a clever design?
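
A small usage sketch of both paths (assuming muduo::Thread accepts any std::function<void()>, which it does in recent versions):

#include <muduo/base/Thread.h>

#include <cstdio>

int main()
{
  // path 1: explicit synchronization with join()
  muduo::Thread t1([] { std::printf("worker 1 runs\n"); }, "worker1");
  t1.start();
  t1.join();

  {
    // path 2: no join; ~Thread() calls pthread_detach, so nothing leaks
    muduo::Thread t2([] { std::printf("worker 2 runs\n"); }, "worker2");
    t2.start();
  }  // t2 destroyed here, possibly while still running: safely detached
     // (a detached thread may not get to run before main exits)

  return 0;
}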

Now let's see what startThread actually does:

void* startThread(void* obj)
{
  ThreadData* data = static_cast<ThreadData*>(obj);
  data->runInThread();
  delete data;
  return NULL;
}

runInThread ends up invoking the functor that was bound into the Thread back in the EventLoopThread constructor:

 thread_(std::bind(&EventLoopThread::threadFunc, this), name),

which is EventLoopThread::threadFunc:

void EventLoopThread::threadFunc()
{
  EventLoop loop;

  if (callback_)
  {
    callback_(&loop);
  }

  {
    MutexLockGuard lock(mutex_);
    loop_ = &loop;
    cond_.notify();
  }

  loop.loop();
  //assert(exiting_);
  MutexLockGuard lock(mutex_);
  loop_ = NULL;
}

It runs the thread initialization callback, creates the EventLoop on the stack, publishes its address into loop_, and notifies the waiting thread via cond_.notify() that initialization is complete.

A deliberately narrow scope keeps the lock held only while loop_ is published; then the event loop starts. Let's step back and look at the whole process. The key point is that after each thread finishes initializing, its EventLoop pointer is stored in the pool's std::vector loops_, which is exposed to TcpServer precisely so that TcpServer can communicate with the pool's I/O threads.

So what runs in each pool thread, and what runs in the TcpServer's thread? How events are communicated between these threads is the question to consider next.

Acceptor has two important members: the listening socket (acceptSocket_) and a Channel (acceptChannel_). When the Acceptor is constructed, the listening fd is handed to acceptChannel_:

Acceptor::Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport)
  : loop_(loop),
    acceptSocket_(sockets::createNonblockingOrDie(listenAddr.family())),
    acceptChannel_(loop, acceptSocket_.fd()),
    listenning_(false),
    idleFd_(::open("/dev/null", O_RDONLY | O_CLOEXEC))
{
  assert(idleFd_ >= 0);
  acceptSocket_.setReuseAddr(true);
  acceptSocket_.setReusePort(reuseport);
  acceptSocket_.bindAddress(listenAddr);
  acceptChannel_.setReadCallback(
      std::bind(&Acceptor::handleRead, this));
}

We can see that handleRead is invoked when the listening fd becomes readable. Let's look at handleRead:

void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr;
  //FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toIpPort();
    // LOG_TRACE << "Accepts of " << hostport;
    if (newConnectionCallback_)
    {
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
  else
  {
    LOG_SYSERR << "in Acceptor::handleRead";
    // Read the section named "The special problem of
    // accept()ing when you can't" in libev's doc.
    // By Marc Lehmann, author of libev.
    if (errno == EMFILE)
    {
      ::close(idleFd_);
      idleFd_ = ::accept(acceptSocket_.fd(), NULL, NULL);
      ::close(idleFd_);
      idleFd_ = ::open("/dev/null", O_RDONLY | O_CLOEXEC);
    }
  }
}

Note the EMFILE branch: when the process runs out of file descriptors, the reserved idleFd_ on /dev/null is closed to free a slot, the pending connection is accepted and immediately closed, and the slot is re-reserved; otherwise the listening fd would stay readable and spin the loop. If a new connection does come in, newConnectionCallback_ is invoked. What is newConnectionCallback_? It turns out to be the callback bound in TcpServer:

acceptor_->setNewConnectionCallback(
      std::bind(&TcpServer::newConnection, this, _1, _2));

Take a closer look at what newConnection does

void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  EventLoop* ioLoop = threadPool_->getNextLoop();
  char buf[64];
  snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;

  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toIpPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  TcpConnectionPtr conn(new TcpConnection(ioLoop,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  connections_[connName] = conn;
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  conn->setCloseCallback(
      std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
  ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
}
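
The functor handed over on the last line is TcpConnection::connectEstablished. Paraphrasing the muduo source from memory (a sketch, not guaranteed verbatim), it runs on the I/O thread, ties the connection to its channel, starts reading, and fires the user's connection callback:

void TcpConnection::connectEstablished()
{
  loop_->assertInLoopThread();
  assert(state_ == kConnecting);
  setState(kConnected);
  channel_->tie(shared_from_this());  // keep the TcpConnection alive during callbacks
  channel_->enableReading();          // register the socket with this I/O loop's poller

  connectionCallback_(shared_from_this());
}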

newConnection creates a TcpConnectionPtr, registers the user-level callbacks on it, and hands connectEstablished to the chosen I/O loop via runInLoop. Keep following the code into runInLoop:

void EventLoop::runInLoop(Functor cb)
{
  if (isInLoopThread())
  {
    cb();
  }
  else
  {
    queueInLoop(std::move(cb));
  }
}

Continue into queueInLoop:

void EventLoop::queueInLoop(Functor cb)
{
  {
  MutexLockGuard lock(mutex_);
  pendingFunctors_.push_back(std::move(cb));
  }

  if (!isInLoopThread() || callingPendingFunctors_)
  {
    wakeup();
  }
}

The functor is queued onto pendingFunctors_, and then wakeup() is called to wake the loop's owning thread whenever the caller is not that thread (or the loop is currently busy draining pending functors):

void EventLoop::wakeup()
{
  uint64_t one = 1;
  ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
  }
}
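
The write makes the target loop's poller return immediately. After handling events, the loop drains pendingFunctors_ in doPendingFunctors; from memory it swaps the vector out under the lock, so the critical section stays tiny and new functors can be queued while old ones run (paraphrased sketch):

void EventLoop::doPendingFunctors()
{
  std::vector<Functor> functors;
  callingPendingFunctors_ = true;

  {
    MutexLockGuard lock(mutex_);
    functors.swap(pendingFunctors_);  // take the whole batch in O(1)
  }

  for (const Functor& functor : functors)
  {
    functor();
  }
  callingPendingFunctors_ = false;
}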

Now think about how the right thread is found when a new connection arrives. It comes from this line in newConnection:

EventLoop* ioLoop = threadPool_->getNextLoop();
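
getNextLoop hands out the pool's loops round-robin, falling back to the base loop when the pool is empty. Paraphrasing the muduo source from memory (simplified; the real code uses muduo's implicit_cast):

EventLoop* EventLoopThreadPool::getNextLoop()
{
  baseLoop_->assertInLoopThread();
  assert(started_);
  EventLoop* loop = baseLoop_;  // with 0 threads, everything runs on the base loop

  if (!loops_.empty())
  {
    // round-robin over the I/O loops
    loop = loops_[next_];
    ++next_;
    if (static_cast<size_t>(next_) >= loops_.size())
    {
      next_ = 0;
    }
  }
  return loop;
}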

Combine that with the initialization of EventLoop:

EventLoop::EventLoop()
  : looping_(false),
    quit_(false),
    eventHandling_(false),
    callingPendingFunctors_(false),
    iteration_(0),
    threadId_(CurrentThread::tid()),
    poller_(Poller::newDefaultPoller(this)),
    timerQueue_(new TimerQueue(this)),
    wakeupFd_(createEventfd()),
    wakeupChannel_(new Channel(this, wakeupFd_)),
    currentActiveChannel_(NULL)
{
  LOG_DEBUG << "EventLoop created " << this << " in thread " << threadId_;
  if (t_loopInThisThread)
  {
    LOG_FATAL << "Another EventLoop " << t_loopInThisThread
              << " exists in this thread " << threadId_;
  }
  else
  {
    t_loopInThisThread = this;
  }
  wakeupChannel_->setReadCallback(
      std::bind(&EventLoop::handleRead, this));
  // we are always reading the wakeupfd
  wakeupChannel_->enableReading();
}

Well, the flow is basically clear now. One last detail to wrap up: the mutual wake-up between loops goes through eventfd. Each EventLoop owns a wakeupFd_ (an eventfd), and writing to it is how one thread nudges another thread's loop out of its poll.
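
For completeness, paraphrasing the muduo source from memory (a sketch): the wakeup fd is created with eventfd(2), and the woken loop drains it in a read callback so it does not stay readable:

int createEventfd()
{
  int evtfd = ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
  if (evtfd < 0)
  {
    LOG_SYSERR << "Failed in eventfd";
    abort();
  }
  return evtfd;
}

void EventLoop::handleRead()  // bound to wakeupChannel_'s read callback
{
  uint64_t one = 1;
  ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
  if (n != sizeof one)
  {
    LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
  }
}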

Origin blog.csdn.net/qq_32783703/article/details/108014372