Learning the WebRTC Codebase: Thread Management

Goals

  • Become familiar with the overall WebRTC flow through its source code
  • Learn the networking and audio/video techniques WebRTC uses, straight from the source
  • Read and study an excellent C++ codebase (WebRTC) and distill practical techniques, ideas, and code from it

Thread Management in WebRTC

Why use threads as the entry point into the WebRTC source? Anyone with some exposure to WebRTC knows it ships its own thread-management mechanism. With it, WebRTC achieves thread-safe code with very little effort and gives each thread a clearly scoped responsibility, which makes the code easier to maintain and to read (Chromium and Flutter use very similar schemes). Without understanding this mechanism, reading WebRTC code is disorienting; and since thread management requires no specialized audio/video knowledge, it makes an ideal starting point for studying the source.

WebRTC's main code logic is managed by three threads (codec threads are not covered here):

  • network_thread: the network thread; every potentially time-consuming network operation is handled here
  • worker_thread: the worker thread, mainly responsible for logic such as initialization; for example, data received on the network thread is handed to the worker thread for processing and then passed on to the decoder thread
  • signaling_thread: the signaling thread, which works at the PeerConnection layer; almost every API we call, such as AddIceCandidate and CreateOffer, must run on the signaling thread. To guarantee this, WebRTC adds a dedicated Proxy layer that forces API calls onto the signaling thread (its implementation is analyzed later in this article)

Task Posting Between WebRTC Threads

Tasks (here, a task essentially means a function) are passed between WebRTC threads in two ways:

  • Synchronous Invoke: dispatches a task to run on a designated thread; the thread calling the Invoke API blocks until the task has finished executing
  • Asynchronous Post: also dispatches a task to run on a designated thread, but the thread calling the PostTask API does not wait

The Invoke mechanism in code:

// Suppose NeedsIceRestart is called on the worker thread: network_thread()->Invoke
// dispatches the lambda from the worker thread to the network thread and waits
// for it to complete
bool PeerConnection::NeedsIceRestart(const std::string& content_name) const {
  return network_thread()->Invoke<bool>(RTC_FROM_HERE, [this, &content_name] {
    RTC_DCHECK_RUN_ON(network_thread());
    return transport_controller_->NeedsIceRestart(content_name);
  });
}

The PostTask mechanism in code:

// Unlike Invoke, the caller does not wait for the task to finish after calling PostTask
void EmulatedNetworkManager::EnableEndpoint(EmulatedEndpointImpl* endpoint) {
  network_thread_->PostTask(RTC_FROM_HERE, [this, endpoint]() {
    endpoint->Enable();
    UpdateNetworksOnce();
  });
}
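
Both mechanisms are plain member functions on rtc::Thread, so they are easy to exercise in isolation. A minimal standalone sketch (my own example, not from the WebRTC tree, assuming the M92 rtc::Thread API shown throughout this article):

#include <cstdio>
#include <memory>

#include "rtc_base/location.h"
#include "rtc_base/thread.h"

int main() {
  std::unique_ptr<rtc::Thread> worker = rtc::Thread::Create();
  worker->SetName("demo_worker", nullptr);
  worker->Start();

  // Asynchronous: PostTask returns immediately; the lambda runs later on
  // the worker thread.
  worker->PostTask(RTC_FROM_HERE, [] { std::printf("posted task ran\n"); });

  // Synchronous: Invoke blocks the calling thread until the lambda has
  // returned on the worker thread.
  int answer = worker->Invoke<int>(RTC_FROM_HERE, [] { return 42; });
  std::printf("Invoke returned %d\n", answer);

  worker->Stop();  // quits the message loop and joins the thread
  return 0;
}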

WebRTC Thread Implementation Details - Thread

Note: the source version is M92.


Thread startup flow

Start with the creation of WebRTC's signaling, worker, and network threads:

file://src/pc/connection_context.cc:81
ConnectionContext::ConnectionContext(
    PeerConnectionFactoryDependencies* dependencies)
    : network_thread_(MaybeStartThread(dependencies->network_thread,
                                       "pc_network_thread",
                                       true,
                                       owned_network_thread_)),
      worker_thread_(MaybeStartThread(dependencies->worker_thread,
                                      "pc_worker_thread",
                                      false,
                                      owned_worker_thread_)),
      signaling_thread_(MaybeWrapThread(dependencies->signaling_thread,
                                        wraps_current_thread_)) {

}

The worker and network threads are initialized through MaybeStartThread. The signaling thread is a bit special: it can directly wrap the process's main thread (more precisely, the current calling thread), so the function used for it is MaybeWrapThread.

MaybeStartThread

file://src/pc/connection_context.cc:27
 

rtc::Thread* MaybeStartThread(rtc::Thread* old_thread,
                              const std::string& thread_name,
                              bool with_socket_server,
                              std::unique_ptr<rtc::Thread>& thread_holder) {
  if (old_thread) {
    return old_thread;
  }
  if (with_socket_server) {
    thread_holder = rtc::Thread::CreateWithSocketServer();
  } else {
    thread_holder = rtc::Thread::Create();
  }
  thread_holder->SetName(thread_name, nullptr);
  thread_holder->Start();
  return thread_holder.get();
}

Ignore with_socket_server for now; CreateWithSocketServer will be explained later. The overall MaybeStartThread flow:

  1. If old_thread is non-null, return it directly. All three WebRTC threads can be supplied externally, so when a caller passes in its own thread, no further thread creation happens (see the sketch after this list)
  2. Call rtc::Thread::Create
  3. Call rtc::Thread::SetName
  4. Call rtc::Thread::Start
  5. The thread is up and running
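
For illustration, a sketch of the early-return path (my own example, assuming the M92 PeerConnectionFactoryDependencies fields):

// Supply your own network thread so MaybeStartThread() takes the early
// return; the worker and signaling threads are left for WebRTC to handle.
auto network = rtc::Thread::CreateWithSocketServer();
network->SetName("my_network_thread", nullptr);
network->Start();

webrtc::PeerConnectionFactoryDependencies deps;
deps.network_thread = network.get();  // returned as-is by MaybeStartThread()
// deps.worker_thread / deps.signaling_thread stay null: WebRTC creates its
// own worker thread and wraps the current thread for signaling.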


MaybeWrapThread

file://src/pc/connection_context.cc:44
rtc::Thread* MaybeWrapThread(rtc::Thread* signaling_thread,
                             bool& wraps_current_thread) {
  wraps_current_thread = false;
  if (signaling_thread) {
    return signaling_thread;
  }
  auto this_thread = rtc::Thread::Current();
  if (!this_thread) {
    // If this thread isn't already wrapped by an rtc::Thread, create a
    // wrapper and own it in this class.
    this_thread = rtc::ThreadManager::Instance()->WrapCurrentThread();
    wraps_current_thread = true;
  }
  return this_thread;
}

If no external signaling_thread is supplied, the current thread is wrapped and used as the signaling thread.

rtc::Thread::Start flow

  1. Call ThreadManager::Instance() to initialize the ThreadManager object
  2. Create the OS thread: CreateThread on Windows, pthread_create on Linux
  3. Enter the thread entry point Thread::PreRun
  4. PreRun calls Thread::Run
  5. Thread::Run calls ProcessMessages
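
The hand-off from step 3 to step 5 is short. Abridged from the M92 source (slightly simplified, so treat it as a sketch):

// rtc_base/thread.cc (abridged sketch)
void* Thread::PreRun(void* pv) {
  Thread* thread = static_cast<Thread*>(pv);
  ThreadManager::Instance()->SetCurrentThread(thread);  // TLS; see Current()
  thread->Run();
  ThreadManager::Instance()->SetCurrentThread(nullptr);
  return nullptr;
}

void Thread::Run() {
  ProcessMessages(kForever);  // step 5: enter the message loop
}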

ProcessMessages

file://src/rtc_base/thread.cc:1132
bool Thread::ProcessMessages(int cmsLoop) {
  //...
  int64_t msEnd = (kForever == cmsLoop) ? 0 : TimeAfter(cmsLoop);
  int cmsNext = cmsLoop;

  while (true) {
#if defined(WEBRTC_MAC)
    ScopedAutoReleasePool pool;
#endif
    Message msg;
    if (!Get(&msg, cmsNext))
      return !IsQuitting();
    Dispatch(&msg);

    if (cmsLoop != kForever) {
      cmsNext = static_cast<int>(TimeUntil(msEnd));
      if (cmsNext < 0)
        return true;
    }
  }
}

The main logic: a while loop drives message processing; each iteration fetches an available Message via Get, then hands it to Dispatch. Get and Dispatch are the two functions to study next. That completes WebRTC's thread initialization and startup flow.

Message Retrieval, Dispatch, and Posting

ProcessMessages above can be viewed as a message loop; every iteration pulls a message through Get.

Get (message retrieval)

file://src/rtc_base/thread.cc:472
bool Thread::Get(Message* pmsg, int cmsWait, bool process_io) {
   // ......

  // Get w/wait + timer scan / dispatch + socket / event multiplexer dispatch

  int64_t cmsTotal = cmsWait;
  int64_t cmsElapsed = 0;
  int64_t msStart = TimeMillis();
  int64_t msCurrent = msStart;
  while (true) {
    // Check for posted events
    int64_t cmsDelayNext = kForever;
    bool first_pass = true;
    while (true) {
      // All queue operations need to be locked, but nothing else in this loop
      // (specifically handling disposed message) can happen inside the crit.
      // Otherwise, disposed MessageHandlers will cause deadlocks.
      {
        CritScope cs(&crit_);
        // On the first pass, check for delayed messages that have been
        // triggered and calculate the next trigger time.
        if (first_pass) {
          first_pass = false;
          while (!delayed_messages_.empty()) {
            if (msCurrent < delayed_messages_.top().run_time_ms_) {
              cmsDelayNext =
                  TimeDiff(delayed_messages_.top().run_time_ms_, msCurrent);
              break;
            }
            messages_.push_back(delayed_messages_.top().msg_);
            delayed_messages_.pop();
          }
        }
        // Pull a message off the message queue, if available.
        if (messages_.empty()) {
          break;
        } else {
          *pmsg = messages_.front();
          messages_.pop_front();
        }
      }  // crit_ is released here.

      // If this was a dispose message, delete it and skip it.
      if (MQID_DISPOSE == pmsg->message_id) {
        RTC_DCHECK(nullptr == pmsg->phandler);
        delete pmsg->pdata;
        *pmsg = Message();
        continue;
      }
      return true;
    }

    if (IsQuitting())
      break;

    // Which is shorter, the delay wait or the asked wait?

    int64_t cmsNext;
    if (cmsWait == kForever) {
      cmsNext = cmsDelayNext;
    } else {
      cmsNext = std::max<int64_t>(0, cmsTotal - cmsElapsed);
      if ((cmsDelayNext != kForever) && (cmsDelayNext < cmsNext))
        cmsNext = cmsDelayNext;
    }

    {
      // Wait and multiplex in the meantime
      if (!ss_->Wait(static_cast<int>(cmsNext), process_io))
        return false;
    }

    // If the specified timeout expired, return

    msCurrent = TimeMillis();
    cmsElapsed = TimeDiff(msCurrent, msStart);
    if (cmsWait != kForever) {
      if (cmsElapsed >= cmsWait)
        return false;
    }
  }
  return false;
}

The core is a loop that runs until it fetches a valid message, Get fails, or the thread is stopped externally via Stop.

How messages are retrieved

  • First check the delayed messages, which are stored in a priority queue; every delayed message whose run time has arrived is popped from the priority queue and appended to the ready-message queue
  • If the ready-message queue is non-empty, pop one message from its front and return it to the caller
  • If the ready-message queue is empty, Wait until an incoming message triggers WakeUp. Wait and WakeUp come from the SocketServer object, whose internals deserve a separate analysis

On a first read, the delayed-message handling may raise a question: why check only the run time of the queue's top element? Couldn't some message further back in the queue have an earlier run time than the top one?

while (!delayed_messages_.empty()) {
    if (msCurrent < delayed_messages_.top().run_time_ms_) {
        cmsDelayNext =
            TimeDiff(delayed_messages_.top().run_time_ms_, msCurrent);
        break;
    }
    messages_.push_back(delayed_messages_.top().msg_);
    delayed_messages_.pop();
}

Look further at the declaration of delayed_messages_: PriorityQueue delayed_messages_ RTC_GUARDED_BY(crit_);

// DelayedMessage goes into a priority queue, sorted by trigger time. Messages
  // with the same trigger time are processed in num_ (FIFO) order.
  class DelayedMessage {
   public:
    DelayedMessage(int64_t delay,
                   int64_t run_time_ms,
                   uint32_t num,
                   const Message& msg)
        : delay_ms_(delay),
          run_time_ms_(run_time_ms),
          message_number_(num),
          msg_(msg) {}

    bool operator<(const DelayedMessage& dmsg) const {
      return (dmsg.run_time_ms_ < run_time_ms_) ||
             ((dmsg.run_time_ms_ == run_time_ms_) &&
              (dmsg.message_number_ < message_number_));
    }

    int64_t delay_ms_;  // for debugging
    int64_t run_time_ms_;
    // Monotonicaly incrementing number used for ordering of messages
    // targeted to execute at the same time.
    uint32_t message_number_;
    Message msg_;
  };

  class PriorityQueue : public std::priority_queue<DelayedMessage> {
   public:
    container_type& container() { return c; }
    void reheap() { make_heap(c.begin(), c.end(), comp); }
  };

So the delayed-message queue is a std::priority_queue, i.e. a max-heap, but DelayedMessage inverts operator<: a message compares as "greater" when its run_time_ms_ is smaller, with message_number_ breaking ties (FIFO). The practical effect is that the message with the earliest run time always sits at the top of the heap, which is why checking only top() is sufficient.
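
A tiny standalone analogue of the inverted comparator (my own illustration, not WebRTC code), showing that the earliest deadline ends up on top:

#include <cstdint>
#include <cstdio>
#include <queue>

struct Delayed {
  int64_t run_time_ms;
  uint32_t num;  // FIFO tie-breaker, like message_number_
  // Inverted like DelayedMessage: "less than" means "runs later", so the
  // max-heap std::priority_queue keeps the earliest deadline at the top.
  bool operator<(const Delayed& other) const {
    return (other.run_time_ms < run_time_ms) ||
           (other.run_time_ms == run_time_ms && other.num < num);
  }
};

int main() {
  std::priority_queue<Delayed> q;
  q.push({300, 0});
  q.push({100, 1});
  q.push({200, 2});
  std::printf("top runs at %d ms\n", (int)q.top().run_time_ms);  // prints 100
  return 0;
}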

The Message struct

Before covering message dispatch, we need to understand Message:

file://src/rtc_base/thread_message.h
struct Message {
  Message() : phandler(nullptr), message_id(0), pdata(nullptr) {}
  inline bool Match(MessageHandler* handler, uint32_t id) const {
    return (handler == nullptr || handler == phandler) &&
           (id == MQID_ANY || id == message_id);
  }
  Location posted_from;
  MessageHandler* phandler;
  uint32_t message_id;
  MessageData* pdata;
};

The two members that matter are phandler and pdata; their classes are:

class RTC_EXPORT MessageHandler {
 public:
  virtual ~MessageHandler() {}
  virtual void OnMessage(Message* msg) = 0;
};

class MessageData {
 public:
  MessageData() {}
  virtual ~MessageData() {}
};

Both are abstract base classes: MessageData stores a message's payload, and MessageHandler processes messages. Users can define their own MessageHandler and MessageData (WebRTC's own rtc_thread_internal::MessageWithFunctor, used by PostTask, follows exactly this pattern). For example, a custom MessageData might look like this:

// A custom MyMessageTask that stores a functor and exposes a Run() method.
// A small non-template base class lets the handler call Run() without
// knowing the concrete functor type.
class MyMessageTaskBase : public MessageData {
 public:
  virtual void Run() = 0;
};

template <class FunctorT>
class MyMessageTask final : public MyMessageTaskBase {
 public:
  explicit MyMessageTask(FunctorT&& functor)
      : functor_(std::forward<FunctorT>(functor)) {}
  void Run() override { functor_(); }

 private:
  typename std::remove_reference<FunctorT>::type functor_;
};

Then define a MessageHandler to process the message:

// OnMessage is invoked when the message is dispatched. msg->pdata holds the
// MessageData we posted, i.e. our MyMessageTask, so we downcast it and call
// the Run() method we just wrote.
class MyMessageHandlerWithTask : public MessageHandler {
 public:
  void OnMessage(Message* msg) override {
    static_cast<MyMessageTaskBase*>(msg->pdata)->Run();
    delete msg->pdata;
  }
};

With a handler and a data class defined for processing dispatched messages, here is how to use them:

// Thread::Post prototype
virtual void Post(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id = 0,
                  MessageData* pdata = nullptr,
                  bool time_sensitive = false);

// Post takes a MessageHandler and a MessageData, so we simply pass in our
// custom ones (thread is the target rtc::Thread).
static MyMessageHandlerWithTask* myhandler = new MyMessageHandlerWithTask;
auto task = [] { std::printf("running on the target thread\n"); };
auto* mytask = new MyMessageTask<decltype(task)>(std::move(task));
thread->Post(RTC_FROM_HERE, myhandler, 0, mytask);

After this Post executes, the lambda stored in MyMessageTask will be run on the target thread.

Dispatch (message dispatch)

With Message covered, we can see how Dispatch hands a message over to its MessageHandler for processing:

file://src/rtc_base/thread.cc
 

void Thread::Dispatch(Message* pmsg) {
  TRACE_EVENT2("webrtc", "Thread::Dispatch", "src_file",
               pmsg->posted_from.file_name(), "src_func",
               pmsg->posted_from.function_name());
  RTC_DCHECK_RUN_ON(this);
  int64_t start_time = TimeMillis();
  pmsg->phandler->OnMessage(pmsg);
  int64_t end_time = TimeMillis();
  int64_t diff = TimeDiff(end_time, start_time);
  if (diff >= dispatch_warning_ms_) {
    RTC_LOG(LS_INFO) << "Message to " << name() << " took " << diff
                     << "ms to dispatch. Posted from: "
                     << pmsg->posted_from.ToString();
    // To avoid log spew, move the warning limit to only give warning
    // for delays that are larger than the one observed.
    dispatch_warning_ms_ = diff + 1;
  }
}

Dispatch is straightforward: the key line calls OnMessage on the message's phandler, handing the message to its MessageHandler for processing. It also times the handler and logs a warning when dispatch takes longer than the current threshold.

Message posting

The retrieval path showed that Get calls Wait when no message is available. Where there is a Wait there must be something that triggers a WakeUp, and indeed that happens when a message is posted from outside. As introduced in "Task Posting Between WebRTC Threads", there are two ways: synchronous Invoke and asynchronous Post.

file://src/rtc_base/thread.h:449
 

 template <class FunctorT>
  void PostTask(const Location& posted_from, FunctorT&& functor) {
    Post(posted_from, GetPostTaskMessageHandler(), /*id=*/0,
         new rtc_thread_internal::MessageWithFunctor<FunctorT>(
             std::forward<FunctorT>(functor)));
  }

At its core, PostTask still calls Post, passing in its own MessageData and MessageHandler, the same pattern we hand-rolled above with MyMessageTask and MyMessageHandlerWithTask.

file://src/rtc_base/thread.cc:563
void Thread::Post(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id,
                  MessageData* pdata,
                  bool time_sensitive) {
  RTC_DCHECK(!time_sensitive);
  if (IsQuitting()) {
    delete pdata;
    return;
  }

  // Keep thread safe
  // Add the message to the end of the queue
  // Signal for the multiplexer to return

  {
    CritScope cs(&crit_);
    Message msg;
    msg.posted_from = posted_from;
    msg.phandler = phandler;
    msg.message_id = id;
    msg.pdata = pdata;
    messages_.push_back(msg);
  }
  WakeUpSocketServer();
}

void Thread::WakeUpSocketServer() {
  ss_->WakeUp();
}

Post is simple and clear: construct a Message, append it to the queue, then call ss_->WakeUp() to wake up any Wait. ss_ is a SocketServer object, to be analyzed later. Now for the synchronous Invoke:

file://src/rtc_base/thread.h:388
 

template <
      class ReturnT,
      typename = typename std::enable_if<!std::is_void<ReturnT>::value>::type>
  ReturnT Invoke(const Location& posted_from, FunctionView<ReturnT()> functor) {
    ReturnT result;
    InvokeInternal(posted_from, [functor, &result] { result = functor(); });
    return result;
  }

  template <
      class ReturnT,
      typename = typename std::enable_if<std::is_void<ReturnT>::value>::type>
  void Invoke(const Location& posted_from, FunctionView<void()> functor) {
    InvokeInternal(posted_from, functor);
  }

Two overloads, one returning a result and one returning void; both are implemented via InvokeInternal, which in turn calls Send.

file://src/rtc_base/thread.cc:914
void Thread::Send(const Location& posted_from,
                  MessageHandler* phandler,
                  uint32_t id,
                  MessageData* pdata) {
  RTC_DCHECK(!IsQuitting());
  if (IsQuitting())
    return;

  // Sent messages are sent to the MessageHandler directly, in the context
  // of "thread", like Win32 SendMessage. If in the right context,
  // call the handler directly.
  Message msg;
  msg.posted_from = posted_from;
  msg.phandler = phandler;
  msg.message_id = id;
  msg.pdata = pdata;
  if (IsCurrent()) {
#if RTC_DCHECK_IS_ON
    RTC_DCHECK_RUN_ON(this);
    could_be_blocking_call_count_++;
#endif
    msg.phandler->OnMessage(&msg);
    return;
  }

  AssertBlockingIsAllowedOnCurrentThread();

  Thread* current_thread = Thread::Current();

#if RTC_DCHECK_IS_ON
  if (current_thread) {
    RTC_DCHECK_RUN_ON(current_thread);
    current_thread->blocking_call_count_++;
    RTC_DCHECK(current_thread->IsInvokeToThreadAllowed(this));
    ThreadManager::Instance()->RegisterSendAndCheckForCycles(current_thread,
                                                             this);
  }
#endif

  // Perhaps down the line we can get rid of this workaround and always require
  // current_thread to be valid when Send() is called.
  std::unique_ptr<rtc::Event> done_event;
  if (!current_thread)
    done_event.reset(new rtc::Event());

  bool ready = false;
  PostTask(webrtc::ToQueuedTask(
      [&msg]() mutable { msg.phandler->OnMessage(&msg); },
      [this, &ready, current_thread, done = done_event.get()] {
        if (current_thread) {
          CritScope cs(&crit_);
          ready = true;
          current_thread->socketserver()->WakeUp();
        } else {
          done->Set();
        }
      }));

  if (current_thread) {
    bool waited = false;
    crit_.Enter();
    while (!ready) {
      crit_.Leave();
      current_thread->socketserver()->Wait(kForever, false);
      waited = true;
      crit_.Enter();
    }
    crit_.Leave();

    // Our Wait loop above may have consumed some WakeUp events for this
    // Thread, that weren't relevant to this Send.  Losing these WakeUps can
    // cause problems for some SocketServers.
    //
    // Concrete example:
    // Win32SocketServer on thread A calls Send on thread B.  While processing
    // the message, thread B Posts a message to A.  We consume the wakeup for
    // that Post while waiting for the Send to complete, which means that when
    // we exit this loop, we need to issue another WakeUp, or else the Posted
    // message won't be processed in a timely manner.

    if (waited) {
      current_thread->socketserver()->WakeUp();
    }
  } else {
    done_event->Wait(rtc::Event::kForever);
  }
}

Send is longer, but the overall idea is clear:

  • If the thread calling Send is the thread Send targets, run the message's OnMessage directly; no dispatch is needed
  • Otherwise, PostTask the message to the target thread. Which thread is that? If you have a Thread object workerThread and you call workerThread.PostTask from the main thread, the task is delivered to the thread that workerThread manages, i.e. workerThread itself
  • After the task is posted, there are two cases: the calling thread may or may not still have a Thread object by the time the functor runs
  • If there is no current Thread object, Send simply waits on an Event that is set when the functor finishes
  • Otherwise, it waits until the task completes and then issues one extra WakeUp. The comment in the code explains why: the while (!ready) { ... current_thread->socketserver()->Wait(...) } loop may have consumed WakeUp events meant for other Posts, and without the extra WakeUp those freshly posted messages might not be processed promptly

State transitions for message posting, dispatch, and retrieval

To make WebRTC's posting, dispatch, and retrieval machinery easier to grasp, I define four states:

  • Idle: Start has been called but Get has not yet been invoked
  • Wait: calling Get moves the thread from Idle to Wait
  • Ready: a Post triggers WakeUp, moving the thread from Wait to Ready
  • Running: Dispatch processes the message, moving the thread into Running
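
In text form, the cycle looks like this:

Idle --Get()--> Wait --Post()/WakeUp()--> Ready --Dispatch()--> Running --(back to Get())--> Wait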

How Thread::Current works

A question first: if you want to obtain the current thread's Thread object from any point in the code, how would you do it? A singleton?

Look at the prototype of WebRTC's Thread::Current:

class Thread {
 public:
  // ......
  static Thread* Current();
};

Calling Thread::Current on thread A yields thread A's Thread object; calling it on thread B yields thread B's. The internal implementation:

// static
Thread* Thread::Current() {
  ThreadManager* manager = ThreadManager::Instance();
  Thread* thread = manager->CurrentThread();

#ifndef NO_MAIN_THREAD_WRAPPING
  // Only autowrap the thread which instantiated the ThreadManager.
  if (!thread && manager->IsMainThread()) {
    thread = new Thread(CreateDefaultSocketServer());
    thread->WrapCurrentWithThreadManager(manager, true);
  }
#endif

  return thread;
}

The core lives in ThreadManager, a management class for WebRTC Threads that keeps track of every externally created Thread.

Thread* ThreadManager::CurrentThread() {
  return static_cast<Thread*>(TlsGetValue(key_));
}

ThreadManager::CurrentThread is trivial: it reads the private variable via TlsGetValue(key_). Naturally key_ must be set somewhere, and indeed it is, in Thread's constructor: Thread() -> DoInit() -> ThreadManager::SetCurrentThread -> ThreadManager::SetCurrentThreadInternal

 

void ThreadManager::SetCurrentThreadInternal(Thread* thread) {
  TlsSetValue(key_, thread);
}

What exactly are TlsSetValue and TlsGetValue? This brings in a concept: TLS.

About TLS

TLS stands for Thread Local Storage: thread-local, or thread-private, variables. Private means each thread owns its own independent copy of the variable:

  • On Windows, TlsAlloc obtains an unused TLS slot index for the process; TlsSetValue sets a value and TlsGetValue reads it
  • On Linux, pthread_key_create, pthread_getspecific, and pthread_setspecific operate on TLS
  • C++11 provides the thread_local keyword (see the sketch after this list)
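
To get a feel for the mechanism, here is a minimal sketch of ThreadManager's idea using the C++11 keyword instead of the platform APIs (my own example, not WebRTC code):

#include <cstdio>
#include <thread>

class MyThread;

// Every OS thread sees its own copy of this pointer, which is the property
// TlsGetValue/TlsSetValue and pthread_getspecific/pthread_setspecific provide.
thread_local MyThread* g_current = nullptr;

class MyThread {
 public:
  MyThread() { g_current = this; }  // analogous to DoInit() registering itself
  static MyThread* Current() { return g_current; }
};

int main() {
  MyThread main_thread;
  std::thread t([] {
    // A fresh thread has its own TLS slot, still unset here: prints null.
    std::printf("worker Current() = %p\n", static_cast<void*>(MyThread::Current()));
  });
  t.join();
  std::printf("main Current() = %p\n", static_cast<void*>(MyThread::Current()));
  return 0;
}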


More details: www.notion.so/TLS-78870a0…

Back to the Current implementation: it relies on TLS to store a per-thread private variable (that variable being a Thread*), so Current called on any given thread returns the Thread* belonging to that very thread.

WebRTC's Thread Proxy Mechanism

As mentioned earlier, the APIs WebRTC exposes, such as PeerConnectionInterface, sit behind an internal proxy layer that ensures every API call runs on the correct thread. Start with PeerConnectionProxy:

file://src/api/peer_connection_proxy.h
 

BEGIN_PROXY_MAP(PeerConnection)
PROXY_PRIMARY_THREAD_DESTRUCTOR()
PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>, local_streams)
PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>, remote_streams)
PROXY_METHOD1(bool, AddStream, MediaStreamInterface*)
PROXY_METHOD1(void, RemoveStream, MediaStreamInterface*)
PROXY_METHOD2(RTCErrorOr<rtc::scoped_refptr<RtpSenderInterface>>,
              AddTrack,
              rtc::scoped_refptr<MediaStreamTrackInterface>,
              const std::vector<std::string>&)
// ......

// This method will be invoked on the network thread. See
// PeerConnectionFactory::CreatePeerConnectionOrError for more details.
PROXY_SECONDARY_METHOD1(rtc::scoped_refptr<DtlsTransportInterface>,
                        LookupDtlsTransportByMid,
                        const std::string&)
// This method will be invoked on the network thread. See
// PeerConnectionFactory::CreatePeerConnectionOrError for more details.
PROXY_SECONDARY_CONSTMETHOD0(rtc::scoped_refptr<SctpTransportInterface>,
                             GetSctpTransport)   

The pile of macros above generates a PeerConnectionProxyWithInternal class. We will focus on three macros: BEGIN_PROXY_MAP, PROXY_METHOD0, and PROXY_SECONDARY_METHOD1.

BEGIN_PROXY_MAP

#define BEGIN_PROXY_MAP(c)                                                   \
  PROXY_MAP_BOILERPLATE(c)                                                   \
  SECONDARY_PROXY_MAP_BOILERPLATE(c)                                         \
  REFCOUNTED_PROXY_MAP_BOILERPLATE(c)                                        \
 public:                                                                     \
  static rtc::scoped_refptr<c##ProxyWithInternal> Create(                    \
      rtc::Thread* primary_thread, rtc::Thread* secondary_thread,            \
      INTERNAL_CLASS* c) {                                                   \
    return rtc::make_ref_counted<c##ProxyWithInternal>(primary_thread,       \
                                                       secondary_thread, c); \
  }
  
// Helper macros to reduce code duplication.
#define PROXY_MAP_BOILERPLATE(c)                          \
  template <class INTERNAL_CLASS>                         \
  class c##ProxyWithInternal;                             \
  typedef c##ProxyWithInternal<c##Interface> c##Proxy;    \
  template <class INTERNAL_CLASS>                         \
  class c##ProxyWithInternal : public c##Interface {      \
   protected:                                             \
    typedef c##Interface C;                               \
                                                          \
   public:                                                \
    const INTERNAL_CLASS* internal() const { return c_; } \
    INTERNAL_CLASS* internal() { return c_; }

Two key points. First, typedef c##ProxyWithInternal<c##Interface> c##Proxy; means the externally used class name is PeerConnectionProxy, and c##ProxyWithInternal : public c##Interface means it inherits from PeerConnectionInterface; in other words, the "PeerConnection" pointer you hold outside is really a PeerConnectionProxyWithInternal object. Second, the Create function: when is it called, and which threads do primary_thread and secondary_thread correspond to? See the code below.

 

RTCErrorOr<rtc::scoped_refptr<PeerConnectionInterface>>
PeerConnectionFactory::CreatePeerConnectionOrError(
    const PeerConnectionInterface::RTCConfiguration& configuration,
    PeerConnectionDependencies dependencies) {
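  // ... construction of the PeerConnection itself elided; |result| below
  //     holds the newly created PeerConnection ...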

  rtc::scoped_refptr<PeerConnectionInterface> result_proxy =
      PeerConnectionProxy::Create(signaling_thread(), network_thread(),
                                  result.MoveValue());
  return result_proxy;
}

The code above confirms that in PeerConnectionProxy, primary_thread is the signaling_thread and secondary_thread is the network_thread.

PROXY_METHOD0

#define PROXY_METHOD0(r, method)                         \
  r method() override {                                  \
    MethodCall<C, r> call(c_, &C::method);               \
    return call.Marshal(RTC_FROM_HERE, primary_thread_); \
  }

It creates a MethodCall object and calls Marshal. Note the thread argument passed to Marshal: primary_thread_, which in PeerConnectionProxy is the signaling thread.
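
Hand-expanding the macro for one method makes the generated code concrete (my own expansion, so treat the details as approximate):

// Roughly what PROXY_METHOD0(rtc::scoped_refptr<StreamCollectionInterface>,
// local_streams) generates inside PeerConnectionProxyWithInternal:
rtc::scoped_refptr<StreamCollectionInterface> local_streams() override {
  MethodCall<PeerConnectionInterface,
             rtc::scoped_refptr<StreamCollectionInterface>>
      call(c_, &PeerConnectionInterface::local_streams);
  // Marshal hops to the signaling thread (primary_thread_) if necessary and
  // blocks until the real PeerConnection::local_streams() has returned.
  return call.Marshal(RTC_FROM_HERE, primary_thread_);
}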

PROXY_SECONDARY_METHOD1

#define PROXY_SECONDARY_METHOD1(r, method, t1)                \
  r method(t1 a1) override {                                  \
    MethodCall<C, r, t1> call(c_, &C::method, std::move(a1)); \
    return call.Marshal(RTC_FROM_HERE, secondary_thread_);    \
  }

The difference from PROXY_METHOD0 is that Marshal is passed secondary_thread_, which in PeerConnectionProxy is the network thread.

MethodCall

template <typename C, typename R, typename... Args>
class MethodCall : public QueuedTask {
 public:
  typedef R (C::*Method)(Args...);
  MethodCall(C* c, Method m, Args&&... args)
      : c_(c),
        m_(m),
        args_(std::forward_as_tuple(std::forward<Args>(args)...)) {}

  R Marshal(const rtc::Location& posted_from, rtc::Thread* t) {
    if (t->IsCurrent()) {
      Invoke(std::index_sequence_for<Args...>());
    } else {
      t->PostTask(std::unique_ptr<QueuedTask>(this));
      event_.Wait(rtc::Event::kForever);
    }
    return r_.moved_result();
  }

 private:
  bool Run() override {
    Invoke(std::index_sequence_for<Args...>());
    event_.Set();
    return false;
  }

  template <size_t... Is>
  void Invoke(std::index_sequence<Is...>) {
    r_.Invoke(c_, m_, std::move(std::get<Is>(args_))...);
  }

  C* c_;
  Method m_;
  ReturnType<R> r_;
  std::tuple<Args&&...> args_;
  rtc::Event event_;
};

The function to study is Marshal: if the caller is already on the target thread it calls Invoke directly; otherwise it PostTasks the call to that thread and blocks on event_ until it completes. Note that Run() returns false, which tells the task queue not to delete the task, since the MethodCall lives on the caller's stack. For the std::tuple usage, see the standard library documentation; the code uses std::index_sequence_for (a C++14 facility) together with std::get to unpack the stored arguments.
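
To isolate the tuple-unpacking trick, here is a self-contained sketch of the same pattern (my own example, not WebRTC code):

#include <cstdio>
#include <tuple>
#include <utility>

struct Calc {
  int Add(int a, int b) { return a + b; }
};

// Stores a member-function pointer plus its arguments, like MethodCall, and
// unpacks them with std::index_sequence_for + std::get.
template <typename C, typename R, typename... Args>
class Call {
 public:
  typedef R (C::*Method)(Args...);
  Call(C* c, Method m, Args&&... args)
      : c_(c), m_(m), args_(std::forward_as_tuple(std::forward<Args>(args)...)) {}

  R Run() { return Invoke(std::index_sequence_for<Args...>()); }

 private:
  template <size_t... Is>
  R Invoke(std::index_sequence<Is...>) {
    return (c_->*m_)(std::move(std::get<Is>(args_))...);
  }

  C* c_;
  Method m_;
  std::tuple<Args&&...> args_;  // rvalue refs: the caller's arguments must
                                // outlive Run(), as in Marshal's blocking wait
};

int main() {
  Calc calc;
  int a = 1, b = 2;
  Call<Calc, int, int, int> call(&calc, &Calc::Add, std::move(a), std::move(b));
  std::printf("%d\n", call.Run());  // prints 3
  return 0;
}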

Original article: WebRTC代码学习之线程管理 - 掘金



Reprinted from blog.csdn.net/yinshipin007/article/details/132176402