WebRTC thread management learning


Why approach the entire WebRTC source code starting from its threads? Anyone with some familiarity with WebRTC knows that it has its own thread management mechanism. Through this mechanism, WebRTC achieves thread-safe code with little effort and gives each thread its own clearly divided responsibilities, which makes the code easier to maintain and read (WebRTC's thread management is, incidentally, very similar to Chromium's and Flutter's). If you do not understand this mechanism, reading WebRTC code is very confusing; and because thread management requires no specialized audio/video knowledge, it is an ideal entry point into the WebRTC sources.

WebRTC's code logic is mainly managed by three threads (codec-related threads are not covered here):

  • network_thread: the network thread; all time-consuming network operations are handled on this thread
  • worker_thread: the worker thread, mainly responsible for logic processing, such as initialization code, or taking data received on the network thread, performing some processing, and handing it on to a decoder thread
  • signaling_thread: the signaling thread, which usually works at the PeerConnection layer; most of the APIs we call must run on the signaling thread, such as AddCandidate and CreateOffer. To force most API calls onto the signaling thread, WebRTC adds a dedicated Proxy layer that marshals each call there (the Proxy layer's implementation is analyzed later in this article). A sketch of creating these three threads by hand follows this list.
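
As a minimal sketch (using the rtc::Thread creation APIs that appear later in this article; the thread names are illustrative), an application that wants to supply its own threads can create them like this:

// Sketch: creating the three WebRTC threads by hand, mirroring what
// MaybeStartThread (shown later) does internally. Names are arbitrary.
std::unique_ptr<rtc::Thread> network_thread =
    rtc::Thread::CreateWithSocketServer();  // Network thread needs a socket server.
network_thread->SetName("network_thread", nullptr);
network_thread->Start();

std::unique_ptr<rtc::Thread> worker_thread = rtc::Thread::Create();
worker_thread->SetName("worker_thread", nullptr);
worker_thread->Start();

std::unique_ptr<rtc::Thread> signaling_thread = rtc::Thread::Create();
signaling_thread->SetName("signaling_thread", nullptr);
signaling_thread->Start();

These can then be handed to WebRTC through PeerConnectionFactoryDependencies, as the ConnectionContext constructor below shows.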

Task delivery between WebRTC threads

There are two main ways to deliver tasks between WebRTC threads (the tasks here mainly refer to functions)

  • The synchronous Invoke mechanism: a task can be dispatched to a given thread to run, and the thread that calls the Invoke API waits synchronously for the task to finish executing
  • The asynchronous Post mechanism: a task can likewise be dispatched to a given thread to run, but the thread that calls the PostTask API does not wait


Invoke mechanism, the code is as follows:

// For example, if NeedsIceRestart is called on the worker thread, then
// network_thread()->Invoke dispatches the lambda from the worker thread to
// the network thread and waits for it to finish executing.
bool PeerConnection::NeedsIceRestart(const std::string& content_name) const {
  return network_thread()->Invoke<bool>(RTC_FROM_HERE, [this, &content_name] {
    RTC_DCHECK_RUN_ON(network_thread());
    return transport_controller_->NeedsIceRestart(content_name);
  });
}

PostTask mechanism, the code is as follows:

// Unlike the Invoke mechanism, the caller does not wait for the task to
// finish after calling PostTask.
void EmulatedNetworkManager::EnableEndpoint(EmulatedEndpointImpl* endpoint) {
  network_thread_->PostTask(RTC_FROM_HERE, [this, endpoint]() {
    endpoint->Enable();
    UpdateNetworksOnce();
  });
}

Analysis of WebRTC thread implementation details: Thread

Note: source version M92

Thread startup process

Start with the creation of the WebRTC signaling, worker, and network threads:

file: src/pc/connection_context.cc:81
ConnectionContext::ConnectionContext(
    PeerConnectionFactoryDependencies* dependencies)
    : network_thread_(MaybeStartThread(dependencies->network_thread,
                                       "pc_network_thread",
                                       true,
                                       owned_network_thread_)),
      worker_thread_(MaybeStartThread(dependencies->worker_thread,
                                      "pc_worker_thread",
                                      false,
                                      owned_worker_thread_)),
      signaling_thread_(MaybeWrapThread(dependencies->signaling_thread,
                                        wraps_current_thread_)) {

}

The worker and network threads are initialized through the MaybeStartThread function. The signaling thread is special because it can directly wrap an existing thread in the process (more precisely, the current calling thread), so the function called for it is MaybeWrapThread.

MaybeStartThread

file: src/pc/connection_context.cc:27

rtc::Thread* MaybeStartThread(rtc::Thread* old_thread,
                              const std::string& thread_name,
                              bool with_socket_server,
                              std::unique_ptr<rtc::Thread>& thread_holder) {
  if (old_thread) {
    return old_thread;
  }
  if (with_socket_server) {
    thread_holder = rtc::Thread::CreateWithSocketServer();
  } else {
    thread_holder = rtc::Thread::Create();
  }
  thread_holder->SetName(thread_name, nullptr);
  thread_holder->Start();
  return thread_holder.get();
}

Ignore with_socket_server for now; CreateWithSocketServer will be explained later. The overall flow of MaybeStartThread is:

  1. If old_thread is not null, return it directly. All three WebRTC threads can be supplied from outside, so when a custom thread has been passed in, no new thread is created.
  2. Call rtc::Thread::Create (or rtc::Thread::CreateWithSocketServer).
  3. Call rtc::Thread::SetName.
  4. Call rtc::Thread::Start.
  5. Thread startup is complete.

MaybeWrapThread

file: src/pc/connection_context.cc:44
rtc::Thread* MaybeWrapThread(rtc::Thread* signaling_thread,
                             bool& wraps_current_thread) {
  wraps_current_thread = false;
  if (signaling_thread) {
    return signaling_thread;
  }
  auto this_thread = rtc::Thread::Current();
  if (!this_thread) {
    // If this thread isn't already wrapped by an rtc::Thread, create a
    // wrapper and own it in this class.
    this_thread = rtc::ThreadManager::Instance()->WrapCurrentThread();
    wraps_current_thread = true;
  }
  return this_thread;
}

If signaling_thread is not passed in from outside, the current thread is obtained internally and used as the signaling thread.

rtc::Thread::Start process

  1. Call ThreadManager::Instance() to initialize the ThreadManager object
  2. Call CreateThread on Windows, or pthread_create on Linux, to create the OS thread
  3. Enter the thread entry function Thread::PreRun
  4. PreRun calls the Thread::Run function
  5. Thread::Run calls the ProcessMessages function (a simplified sketch of this flow follows)
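
A simplified sketch of this startup path, paraphrasing the M92 sources (not verbatim; error handling and platform details such as the exact Windows entry-point signature are omitted):

// Paraphrased sketch of rtc::Thread::Start / PreRun / Run (M92).
bool Thread::Start() {
  ThreadManager::Instance();  // Ensure the manager (and its TLS key) exists.
#if defined(WEBRTC_WIN)
  thread_ = CreateThread(nullptr, 0, PreRun, this, 0, &thread_id_);
  return thread_ != nullptr;
#else
  return pthread_create(&thread_, nullptr, PreRun, this) == 0;
#endif
}

void* Thread::PreRun(void* pv) {
  // Entry point of the newly created OS thread.
  Thread* thread = static_cast<Thread*>(pv);
  ThreadManager::Instance()->SetCurrentThread(thread);  // TLS binding, see the Current() section below.
  thread->Run();
  return nullptr;
}

void Thread::Run() {
  ProcessMessages(kForever);  // Enter the message loop analyzed next.
}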


ProcessMessages

file: src/rtc_base/thread.cc:1132
bool Thread::ProcessMessages(int cmsLoop) {
  //...
  int64_t msEnd = (kForever == cmsLoop) ? 0 : TimeAfter(cmsLoop);
  int cmsNext = cmsLoop;

  while (true) {
#if defined(WEBRTC_MAC)
    ScopedAutoReleasePool pool;
#endif
    Message msg;
    if (!Get(&msg, cmsNext))
      return !IsQuitting();
    Dispatch(&msg);

    if (cmsLoop != kForever) {
      cmsNext = static_cast<int>(TimeUntil(msEnd));
      if (cmsNext < 0)
        return true;
    }
  }
}

The main logic is as follows: the function processes messages in a while loop. Each iteration obtains an available Message via Get, then calls Dispatch to dispatch the obtained Message. The two key functions are Get and Dispatch. At this point, the entire WebRTC thread initialization and startup process has been covered.

Analysis of message acquisition, dispatch, and delivery

The ProcessMessages function above can be regarded as a message loop; on every iteration the Get function is used to fetch a message.

Get (message acquisition)

file: src/rtc_base/thread.cc:472

bool Thread::Get(Message* pmsg, int cmsWait, bool process_io) {
   // ......

  // Get w/wait + timer scan / dispatch + socket / event multiplexer dispatch

  int64_t cmsTotal = cmsWait;
  int64_t cmsElapsed = 0;
  int64_t msStart = TimeMillis();
  int64_t msCurrent = msStart;
  while (true) {
    // Check for posted events
    int64_t cmsDelayNext = kForever;
    bool first_pass = true;
    while (true) {
      // All queue operations need to be locked, but nothing else in this loop
      // (specifically handling disposed message) can happen inside the crit.
      // Otherwise, disposed MessageHandlers will cause deadlocks.
      {
        CritScope cs(&crit_);
        // On the first pass, check for delayed messages that have been
        // triggered and calculate the next trigger time.
        if (first_pass) {
          first_pass = false;
          while (!delayed_messages_.empty()) {
            if (msCurrent < delayed_messages_.top().run_time_ms_) {
              cmsDelayNext =
                  TimeDiff(delayed_messages_.top().run_time_ms_, msCurrent);
              break;
            }
            messages_.push_back(delayed_messages_.top().msg_);
            delayed_messages_.pop();
          }
        }
        // Pull a message off the message queue, if available.
        if (messages_.empty()) {
          break;
        } else {
          *pmsg = messages_.front();
          messages_.pop_front();
        }
      }  // crit_ is released here.

      // If this was a dispose message, delete it and skip it.
      if (MQID_DISPOSE == pmsg->message_id) {
        RTC_DCHECK(nullptr == pmsg->phandler);
        delete pmsg->pdata;
        *pmsg = Message();
        continue;
      }
      return true;
    }

    if (IsQuitting())
      break;

    // Which is shorter, the delay wait or the asked wait?

    int64_t cmsNext;
    if (cmsWait == kForever) {
      cmsNext = cmsDelayNext;
    } else {
      cmsNext = std::max<int64_t>(0, cmsTotal - cmsElapsed);
      if ((cmsDelayNext != kForever) && (cmsDelayNext < cmsNext))
        cmsNext = cmsDelayNext;
    }

    {
      // Wait and multiplex in the meantime
      if (!ss_->Wait(static_cast<int>(cmsNext), process_io))
        return false;
    }

    // If the specified timeout expired, return

    msCurrent = TimeMillis();
    cmsElapsed = TimeDiff(msCurrent, msStart);
    if (cmsWait != kForever) {
      if (cmsElapsed >= cmsWait)
        return false;
    }
  }
  return false;
}

The core is to obtain a valid message through a loop; the loop ends when Get succeeds, fails, or an external call to Stop stops the thread.

Message acquisition mechanism

  • First, try to collect due delayed messages. Delayed messages are stored in a priority queue; every delayed message whose run time has arrived is popped from the priority queue and appended to the runnable message queue.
  • Next, check whether the runnable message queue contains a message; if so, take one from the head of the queue and return it to the caller.
  • If the runnable message queue is empty, perform a Wait and wait for a message delivery to trigger WakeUp. Wait and WakeUp here come from the SocketServer object; their principles are analyzed later.


When first reading this code, you may have doubts about how the due delayed messages are collected: why does the loop only check whether the run time of the first (top) element of the delayed-message queue has arrived? Couldn't a message further back in the queue have an earlier run time than the top one?

while (!delayed_messages_.empty()) {
    if (msCurrent < delayed_messages_.top().run_time_ms_) {
        cmsDelayNext =
            TimeDiff(delayed_messages_.top().run_time_ms_, msCurrent);
        break;
    }
    messages_.push_back(delayed_messages_.top().msg_);
    delayed_messages_.pop();
}

Look further at the definition of delayed_messages_: PriorityQueue delayed_messages_ RTC_GUARDED_BY(crit_);

  // DelayedMessage goes into a priority queue, sorted by trigger time. Messages
  // with the same trigger time are processed in num_ (FIFO) order.
  class DelayedMessage {
   public:
    DelayedMessage(int64_t delay,
                   int64_t run_time_ms,
                   uint32_t num,
                   const Message& msg)
        : delay_ms_(delay),
          run_time_ms_(run_time_ms),
          message_number_(num),
          msg_(msg) {}

    bool operator<(const DelayedMessage& dmsg) const {
      return (dmsg.run_time_ms_ < run_time_ms_) ||
             ((dmsg.run_time_ms_ == run_time_ms_) &&
              (dmsg.message_number_ < message_number_));
    }

    int64_t delay_ms_;  // for debugging
    int64_t run_time_ms_;
    // Monotonicaly incrementing number used for ordering of messages
    // targeted to execute at the same time.
    uint32_t message_number_;
    Message msg_;
  };

  class PriorityQueue : public std::priority_queue<DelayedMessage> {
   public:
    container_type& container() { return c; }
    void reheap() { make_heap(c.begin(), c.end(), comp); }
  };

The delayed message queue is a priority queue backed by a max-heap, i.e. the "largest" element sits on top. DelayedMessage ordering is compared through run_time_ms_, but the comparison is inverted: a smaller run_time_ms_ makes the DelayedMessage compare as larger, and when run_time_ms_ is equal, message_number_ breaks the tie. In plain terms, the shorter the delay, the closer to the top of the queue — which is why checking only the top element is sufficient.
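
A tiny standalone demo of this inverted-comparison trick (illustrative values only, not WebRTC code):

#include <cstdint>
#include <iostream>
#include <queue>

// std::priority_queue is a max-heap, but because operator< is reversed,
// the item with the SMALLEST run_time_ms_ compares as "largest" and
// therefore sits at top().
struct Delayed {
  std::int64_t run_time_ms_;
  std::uint32_t message_number_;
  bool operator<(const Delayed& o) const {
    return (o.run_time_ms_ < run_time_ms_) ||
           ((o.run_time_ms_ == run_time_ms_) &&
            (o.message_number_ < message_number_));
  }
};

int main() {
  std::priority_queue<Delayed> q;
  q.push({300, 0});
  q.push({100, 1});
  q.push({200, 2});
  std::cout << q.top().run_time_ms_ << std::endl;  // Prints 100: earliest on top.
}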

Message introduction

Before looking at message dispatching, we need to understand the Message structure.

file: src/rtc_base/thread_message.h
struct Message {
  Message() : phandler(nullptr), message_id(0), pdata(nullptr) {}
  inline bool Match(MessageHandler* handler, uint32_t id) const {
    return (handler == nullptr || handler == phandler) &&
           (id == MQID_ANY || id == message_id);
  }
  Location posted_from;
  MessageHandler* phandler;
  uint32_t message_id;
  MessageData* pdata;
};

Look mainly at the two members phandler and pdata; the corresponding classes are as follows.

class RTC_EXPORT MessageHandler {
 public:
  virtual ~MessageHandler() {}
  virtual void OnMessage(Message* msg) = 0;
};

class MessageData {
 public:
  MessageData() {}
  virtual ~MessageData() {}
};

These are two virtual base classes: MessageData stores the message content, and MessageHandler processes messages. Users can define their own MessageHandler and MessageData. For example, we can define our own MessageData as follows:

// Define our own MyMessageTask, which stores a functor and exposes a Run
// method. A small base class with a virtual Run is added so a handler can
// run the task without knowing the functor's concrete type (WebRTC's
// internal MessageLikeTask / MessageWithFunctor pair follows the same idea).
class MyMessageLikeTask : public MessageData {
 public:
  virtual void Run() = 0;
};

template <class FunctorT>
class MyMessageTask final : public MyMessageLikeTask {
 public:
  explicit MyMessageTask(FunctorT&& functor)
      : functor_(std::forward<FunctorT>(functor)) {}
  void Run() override { functor_(); }

 private:
  typename std::remove_reference<FunctorT>::type functor_;
};

Then define a MessageHandler of your own to process the messages:

// OnMessage is invoked when the message is dispatched. msg->pdata holds the
// MessageData object we posted -- our MyMessageTask -- so we cast it back
// (via the MyMessageLikeTask base) and call the Run method we just wrote.
class MyMessageHandlerWithTask : public MessageHandler {
 public:
  void OnMessage(Message* msg) override {
    static_cast<MyMessageLikeTask*>(msg->pdata)->Run();
    delete msg->pdata;
  }
};

Above we defined a handler and a data type; the handler's job is to process the message when the dispatched message arrives. Now let's see how to use our custom handler and data.

// Prototype of Thread::Post
virtual void Post(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id = 0,
                  MessageData* pdata = nullptr,
                  bool time_sensitive = false);
// Note that Post needs a MessageHandler and a MessageData; we simply pass in
// our custom ones. ('thread' below is some target rtc::Thread*; before C++17
// the template argument of MyMessageTask must be spelled out explicitly.)
static MyMessageHandlerWithTask* myhandler = new MyMessageHandlerWithTask;
auto functor = [] { int a = 1, b = 2; (void)(a + b); };
auto* mytask = new MyMessageTask<decltype(functor)>(std::move(functor));
thread->Post(RTC_FROM_HERE, myhandler, 0, mytask);

After the Post above executes, the lambda stored in MyMessageTask will be run on the target thread.

Dispatch (message dispatch)

Having introduced Message, we can now see how Dispatch hands a message to its MessageHandler for processing.

file: src/rtc_base/thread.cc

void Thread::Dispatch(Message* pmsg) {
  TRACE_EVENT2("webrtc", "Thread::Dispatch", "src_file",
               pmsg->posted_from.file_name(), "src_func",
               pmsg->posted_from.function_name());
  RTC_DCHECK_RUN_ON(this);
  int64_t start_time = TimeMillis();
  pmsg->phandler->OnMessage(pmsg);
  int64_t end_time = TimeMillis();
  int64_t diff = TimeDiff(end_time, start_time);
  if (diff >= dispatch_warning_ms_) {
    RTC_LOG(LS_INFO) << "Message to " << name() << " took " << diff
                     << "ms to dispatch. Posted from: "
                     << pmsg->posted_from.ToString();
    // To avoid log spew, move the warning limit to only give warning
    // for delays that are larger than the one observed.
    dispatch_warning_ms_ = diff + 1;
  }
}

The Dispatch function is very simple. The key point is calling OnMessage on the handler stored in the incoming Message, handing the message to the MessageHandler for processing. It also logs a warning when a dispatch takes unusually long.

Message delivery

Earlier, in the message-acquisition code, we saw that Wait is called when there is no message available. Where there is a Wait, something must trigger a WakeUp — and indeed, WakeUp is triggered when a message is delivered from outside. As introduced at the beginning, there are two delivery methods between threads: the synchronous Invoke and the asynchronous Post.

file: src/rtc_base/thread.h:449

  template <class FunctorT>
  void PostTask(const Location& posted_from, FunctorT&& functor) {
    Post(posted_from, GetPostTaskMessageHandler(), /*id=*/0,
         new rtc_thread_internal::MessageWithFunctor<FunctorT>(
             std::forward<FunctorT>(functor)));
  }

The core of PostTask is still the Post function; it passes in WebRTC's own MessageData (MessageWithFunctor) and MessageHandler.

file: src/rtc_base/thread.cc:563
void Thread::Post(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id,
                  MessageData* pdata,
                  bool time_sensitive) {
  RTC_DCHECK(!time_sensitive);
  if (IsQuitting()) {
    delete pdata;
    return;
  }

  // Keep thread safe
  // Add the message to the end of the queue
  // Signal for the multiplexer to return

  {
    CritScope cs(&crit_);
    Message msg;
    msg.posted_from = posted_from;
    msg.phandler = phandler;
    msg.message_id = id;
    msg.pdata = pdata;
    messages_.push_back(msg);
  }
  WakeUpSocketServer();
}

void Thread::WakeUpSocketServer() {
  ss_->WakeUp();
}

The implementation of the Post function is simple and clear: construct a Message, append it to the queue, then call ss_->WakeUp() to wake up Wait. ss_ is a SocketServer object, which we will analyze another time. Now for the synchronous Invoke:

file: src/rtc_base/thread.h:388

template <
      class ReturnT,
      typename = typename std::enable_if<!std::is_void<ReturnT>::value>::type>
  ReturnT Invoke(const Location& posted_from, FunctionView<ReturnT()> functor) {
    ReturnT result;
    InvokeInternal(posted_from, [functor, &result] { result = functor(); });
    return result;
  }

  template <
      class ReturnT,
      typename = typename std::enable_if<std::is_void<ReturnT>::value>::type>
  void Invoke(const Location& posted_from, FunctionView<void()> functor) {
    InvokeInternal(posted_from, functor);
  }

Of these two overloads, one returns a result and the other does not. Both call InvokeInternal internally, and InvokeInternal immediately calls the Send function.

file: src/rtc_base/thread.cc:914
void Thread::Send(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id,
                  MessageData* pdata) {
  RTC_DCHECK(!IsQuitting());
  if (IsQuitting())
    return;

  // Sent messages are sent to the MessageHandler directly, in the context
  // of "thread", like Win32 SendMessage. If in the right context,
  // call the handler directly.
  Message msg;
  msg.posted_from = posted_from;
  msg.phandler = phandler;
  msg.message_id = id;
  msg.pdata = pdata;
  if (IsCurrent()) {
#if RTC_DCHECK_IS_ON
    RTC_DCHECK_RUN_ON(this);
    could_be_blocking_call_count_++;
#endif
    msg.phandler->OnMessage(&msg);
    return;
  }

  AssertBlockingIsAllowedOnCurrentThread();

  Thread* current_thread = Thread::Current();

#if RTC_DCHECK_IS_ON
  if (current_thread) {
    RTC_DCHECK_RUN_ON(current_thread);
    current_thread->blocking_call_count_++;
    RTC_DCHECK(current_thread->IsInvokeToThreadAllowed(this));
    ThreadManager::Instance()->RegisterSendAndCheckForCycles(current_thread,
                                                             this);
  }
#endif

  // Perhaps down the line we can get rid of this workaround and always require
  // current_thread to be valid when Send() is called.
  std::unique_ptr<rtc::Event> done_event;
  if (!current_thread)
    done_event.reset(new rtc::Event());

  bool ready = false;
  PostTask(webrtc::ToQueuedTask(
      [&msg]() mutable { msg.phandler->OnMessage(&msg); },
      [this, &ready, current_thread, done = done_event.get()] {
        if (current_thread) {
          CritScope cs(&crit_);
          ready = true;
          current_thread->socketserver()->WakeUp();
        } else {
          done->Set();
        }
      }));

  if (current_thread) {
    bool waited = false;
    crit_.Enter();
    while (!ready) {
      crit_.Leave();
      current_thread->socketserver()->Wait(kForever, false);
      waited = true;
      crit_.Enter();
    }
    crit_.Leave();

    // Our Wait loop above may have consumed some WakeUp events for this
    // Thread, that weren't relevant to this Send.  Losing these WakeUps can
    // cause problems for some SocketServers.
    //
    // Concrete example:
    // Win32SocketServer on thread A calls Send on thread B.  While processing
    // the message, thread B Posts a message to A.  We consume the wakeup for
    // that Post while waiting for the Send to complete, which means that when
    // we exit this loop, we need to issue another WakeUp, or else the Posted
    // message won't be processed in a timely manner.

    if (waited) {
      current_thread->socketserver()->WakeUp();
    }
  } else {
    done_event->Wait(rtc::Event::kForever);
  }
}

The Send function has a lot of code, but the overall idea is clear:

  • If the thread calling Send is the thread that owns the Send target (IsCurrent()), run the Message's OnMessage directly, with no task dispatching.
  • If not on the same thread, call PostTask to deliver the message to the target thread. Readers may wonder which thread a PostTask task goes to: if you have a Thread object workerThread and call workerThread.PostTask from the main thread, the task is delivered to the thread managed by that Thread object, i.e. workerThread.
  • After the task is posted to the target thread, there are two cases, depending on whether the calling thread is wrapped by an rtc::Thread (current_thread) or not.
  • If there is no current_thread, simply wait on an Event that is signaled when the function has finished executing.
  • If current_thread exists, wait for the message to be executed, and call WakeUp again after execution completes. The comment in the code explains in detail why this extra WakeUp is needed: the while (!ready) { ... current_thread->socketserver()->Wait(...) } loop may consume WakeUp events triggered by others; if WakeUp is not called once more after execution completes, a newly posted external message might not be consumed promptly.

Message delivery, dispatch, and acquisition: state transitions

To understand WebRTC's message delivery, dispatch, and acquisition mechanism more clearly, I define four states:

  • Idle state: after Start is called and before the Get function is called
  • Wait state: calling the Get function moves the thread from Idle to Wait
  • Ready state: a Post triggers WakeUp, moving the thread from Wait to Ready
  • Running state: calling Dispatch to process the message moves the thread to Running (see the sketch below)
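
One way to picture these transitions (my own sketch, not a diagram from the WebRTC sources):

[Idle] --Get()--> [Wait] --Post()/WakeUp()--> [Ready] --Dispatch()--> [Running]
                    ^                                                     |
                    +-------------------- next Get() ---------------------+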

Thread::Current implementation mechanism

Question: if I want to obtain the Thread object of the current thread anywhere in the code, how should I do it? A singleton?

Take a look at the prototype of the Current function of WebRTC Thread:

class Thread {
 public:
  // ......
  static Thread* Current();
};

Calling Thread::Current on thread A returns thread A's Thread object; calling Thread::Current on thread B returns thread B's. Let's look at the internal implementation.

// static
Thread* Thread::Current() {
  ThreadManager* manager = ThreadManager::Instance();
  Thread* thread = manager->CurrentThread();

#ifndef NO_MAIN_THREAD_WRAPPING
  // Only autowrap the thread which instantiated the ThreadManager.
  if (!thread && manager->IsMainThread()) {
    thread = new Thread(CreateDefaultSocketServer());
    thread->WrapCurrentWithThreadManager(manager, true);
  }
#endif

  return thread;
}

The core implementation is in ThreadManager, a management class for WebRTC Threads that keeps track of all created Thread objects.

Thread* ThreadManager::CurrentThread() {
  return static_cast<Thread*>(TlsGetValue(key_));
}

The implementation of ThreadManager::CurrentThread is very simple: it reads the value stored under the private key_ via TlsGetValue. If key_ is read somewhere, it must be set somewhere — and indeed, the Set happens in the Thread constructor: Thread() -> DoInit() -> ThreadManager::SetCurrentThread -> ThreadManager::SetCurrentThreadInternal.

void ThreadManager::SetCurrentThreadInternal(Thread* thread) {
  TlsSetValue(key_, thread);
}

What do TlsSetValue and TlsGetValue mean? This brings in one piece of background knowledge: TLS.

Introduction to TLS

TLS stands for Thread Local Storage: thread-local, or thread-private, variables. "Private" means each thread independently owns its own copy of the variable.

  • On Windows, TlsAlloc obtains an unused TLS slot index for the process, TlsSetValue sets the value, and TlsGetValue reads it
  • On Linux, TLS is operated through pthread_key_create, pthread_getspecific, and pthread_setspecific
  • C++11 provides the thread_local keyword (a minimal demo follows this list)
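
A minimal standalone illustration of the same property using C++11 thread_local (not WebRTC code):

#include <iostream>
#include <thread>

// Each thread sees its own copy of this variable -- exactly the property
// ThreadManager relies on to map "current OS thread" -> rtc::Thread*.
thread_local int tls_value = 0;

void Worker(int id) {
  tls_value = id;  // Does not affect other threads' copies.
  std::cout << "thread " << id << " sees " << tls_value << std::endl;
}

int main() {
  std::thread a(Worker, 1);
  std::thread b(Worker, 2);
  a.join();
  b.join();
  std::cout << "main sees " << tls_value << std::endl;  // Still 0.
}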


Detailed link:
www.notion.so/TLS-78870a0…

Returning to the implementation of the Current function: it uses TLS to store a per-thread private variable (a Thread*) in each thread, so the Thread* obtained by calling Current on a given thread is that thread's own rtc::Thread object.

WebRTC thread Proxy mechanism

As mentioned earlier, the APIs WebRTC exposes, such as PeerConnectionInterface, have a proxy mechanism inside to ensure each API call runs on the correct thread. First look at PeerConnectionProxy.

file: src/api/peer_connection_proxy.h

BEGIN_PROXY_MAP(PeerConnection)
PROXY_PRIMARY_THREAD_DESTRUCTOR()
PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>, local_streams)
PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>, remote_streams)
PROXY_METHOD1(bool, AddStream, MediaStreamInterface*)
PROXY_METHOD1(void, RemoveStream, MediaStreamInterface*)
PROXY_METHOD2(RTCErrorOr<rtc::scoped_refptr<RtpSenderInterface>>,
              AddTrack,
              rtc::scoped_refptr<MediaStreamTrackInterface>,
              const std::vector<std::string>&)
// ......

// This method will be invoked on the network thread. See
// PeerConnectionFactory::CreatePeerConnectionOrError for more details.
PROXY_SECONDARY_METHOD1(rtc::scoped_refptr<DtlsTransportInterface>,
                        LookupDtlsTransportByMid,
                        const std::string&)
// This method will be invoked on the network thread. See
// PeerConnectionFactory::CreatePeerConnectionOrError for more details.
PROXY_SECONDARY_CONSTMETHOD0(rtc::scoped_refptr<SctpTransportInterface>,
                             GetSctpTransport)   

The pile of macros above generates a PeerConnectionProxyWithInternal class. We mainly look at three macros: BEGIN_PROXY_MAP, PROXY_METHOD0, and PROXY_SECONDARY_METHOD1.

BEGIN_PROXY_MAP

#define BEGIN_PROXY_MAP(c)                                                   \
  PROXY_MAP_BOILERPLATE(c)                                                   \
  SECONDARY_PROXY_MAP_BOILERPLATE(c)                                         \
  REFCOUNTED_PROXY_MAP_BOILERPLATE(c)                                        \
 public:                                                                     \
  static rtc::scoped_refptr<c##ProxyWithInternal> Create(                    \
      rtc::Thread* primary_thread, rtc::Thread* secondary_thread,            \
      INTERNAL_CLASS* c) {                                                   \
    return rtc::make_ref_counted<c##ProxyWithInternal>(primary_thread,       \
                                                       secondary_thread, c); \
  }
  
// Helper macros to reduce code duplication.
#define PROXY_MAP_BOILERPLATE(c)                          \
  template <class INTERNAL_CLASS>                         \
  class c##ProxyWithInternal;                             \
  typedef c##ProxyWithInternal<c##Interface> c##Proxy;    \
  template <class INTERNAL_CLASS>                         \
  class c##ProxyWithInternal : public c##Interface {      \
   protected:                                             \
    typedef c##Interface C;                               \
                                                          \
   public:                                                \
    const INTERNAL_CLASS* internal() const { return c_; } \
    INTERNAL_CLASS* internal() { return c_; }

Look at the key points. First, typedef c##ProxyWithInternal<c##Interface> c##Proxy; means the class name used externally is PeerConnectionProxy, and class c##ProxyWithInternal : public c##Interface means it inherits from PeerConnectionInterface — in other words, the PeerConnection pointer we obtain externally is actually a PeerConnectionProxyWithInternal object. Second, the Create function: when is it called, and which threads do primary_thread and secondary_thread correspond to? See the following code.

 

RTCErrorOr<rtc::scoped_refptr<PeerConnectionInterface>>
PeerConnectionFactory::CreatePeerConnectionOrError(
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {

  rtc::scoped_refptr<PeerConnectionInterface> result_proxy =
      PeerConnectionProxy::Create(signaling_thread(), network_thread(),
                                  result.MoveValue());
  return result_proxy;
}

From this code we can determine that primary_thread in the PeerConnectionProxy class corresponds to the signaling_thread, and secondary_thread is the network_thread.

PROXY_METHOD0

#define PROXY_METHOD0(r, method)                         \
  r method() override {                                  \
    MethodCall<C, r> call(c_, &C::method);               \
    return call.Marshal(RTC_FROM_HERE, primary_thread_); \
  }

This creates a MethodCall object and calls Marshal. Note the argument passed to Marshal: primary_thread_, which in PeerConnectionProxy is the signaling_thread.
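
For intuition, here is roughly what PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>, local_streams) expands to inside PeerConnectionProxyWithInternal (hand-expanded and simplified, not the literal preprocessor output):

// Hand-expanded (simplified) form of the generated proxy method. C is the
// typedef for PeerConnectionInterface introduced by PROXY_MAP_BOILERPLATE.
rtc::scoped_refptr<StreamCollectionInterface> local_streams() override {
  MethodCall<C, rtc::scoped_refptr<StreamCollectionInterface>> call(
      c_, &C::local_streams);
  // Marshal runs the call on the signaling thread and blocks until it returns.
  return call.Marshal(RTC_FROM_HERE, primary_thread_);
}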

PROXY_SECONDARY_METHOD1

#define PROXY_SECONDARY_METHOD1(r, method, t1)                \
  r method(t1 a1) override {                                  \
    MethodCall<C, r, t1> call(c_, &C::method, std::move(a1)); \
    return call.Marshal(RTC_FROM_HERE, secondary_thread_);    \
  }

The difference from PROXY_METHOD0 is that secondary_thread_ is passed to Marshal, which in PeerConnectionProxy is the network_thread.

MethodCall

template <typename C, typename R, typename... Args>
class MethodCall : public QueuedTask {
 public:
  typedef R (C::*Method)(Args...);
  MethodCall(C* c, Method m, Args&&... args)
      : c_(c),
        m_(m),
        args_(std::forward_as_tuple(std::forward<Args>(args)...)) {}

  R Marshal(const rtc::Location& posted_from, rtc::Thread* t) {
    if (t->IsCurrent()) {
      Invoke(std::index_sequence_for<Args...>());
    } else {
      t->PostTask(std::unique_ptr<QueuedTask>(this));
      event_.Wait(rtc::Event::kForever);
    }
    return r_.moved_result();
  }

 private:
  bool Run() override {
    Invoke(std::index_sequence_for<Args...>());
    event_.Set();
    return false;
  }

  template <size_t... Is>
  void Invoke(std::index_sequence<Is...>) {
    r_.Invoke(c_, m_, std::move(std::get<Is>(args_))...);
  }

  C* c_;
  Method m_;
  ReturnType<R> r_;
  std::tuple<Args&&...> args_;
  rtc::Event event_;
};

The key is the Marshal function: if already on the target thread, it calls Invoke directly; otherwise it calls PostTask to deliver the task (the MethodCall itself, which is a QueuedTask) to the specified thread and waits on an event until the call completes. For std::tuple usage, consult the standard documentation; the code above uses std::index_sequence_for (a C++14 feature) together with std::get to unpack the stored argument tuple.
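
A minimal standalone illustration of this tuple-unpacking idiom (not WebRTC code):

#include <iostream>
#include <tuple>
#include <utility>

// Mirrors how MethodCall stores its arguments in a tuple and later expands
// them into a call via std::index_sequence_for / std::get.
template <typename F, typename... Args>
struct Call {
  F f_;
  std::tuple<Args...> args_;

  void Run() { RunImpl(std::index_sequence_for<Args...>{}); }

  template <size_t... Is>
  void RunImpl(std::index_sequence<Is...>) {
    f_(std::get<Is>(args_)...);  // Expands to f_(arg0, arg1, ...).
  }
};

int main() {
  Call<void (*)(int, int), int, int> c{
      [](int a, int b) { std::cout << a + b << std::endl; },
      std::make_tuple(2, 3)};
  c.Run();  // Prints 5.
}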

Author: spider centralized control team
Original article: "WebRTC thread management learning", published on Juejin (Nuggets)

 


Origin: blog.csdn.net/yinshipin007/article/details/132381671