WebRTC AudioRtpSender

https://blog.csdn.net/volvet/article/details/52905765

AudioRtpSender is the link between the WebRTC session and the AudioTrack. As mentioned in the previous post, AudioTrack wraps an AudioSource, but the WebRTC session still needs an AudioRtpSender in order to pull data from it.
Before looking at AudioRtpSender itself, first consider the class LocalAudioSinkAdapter:

// LocalAudioSinkAdapter receives data callback as a sink to the local
// AudioTrack, and passes the data to the sink of AudioSource.
class LocalAudioSinkAdapter : public AudioTrackSinkInterface,
                              public cricket::AudioSource {
 public:
  LocalAudioSinkAdapter();
  virtual ~LocalAudioSinkAdapter();

 private:
  // AudioSinkInterface implementation.
  void OnData(const void* audio_data,
              int bits_per_sample,
              int sample_rate,
              size_t number_of_channels,
              size_t number_of_frames) override;

  // cricket::AudioSource implementation.
  void SetSink(cricket::AudioSource::Sink* sink) override;

  cricket::AudioSource::Sink* sink_;
  // Critical section protecting |sink_|.
  rtc::CriticalSection lock_;
};

The class LocalAudioSinkAdapter is essentially a sink for the AudioTrack; since the AudioTrack's sink is in effect the AudioSource's sink, the adapter can receive data coming from the real AudioSource. At the same time, LocalAudioSinkAdapter presents itself as an AudioSource: to the WebRTC session it looks like an AudioSource, and it passes the data from the real AudioSource on to the WebRTC session.
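The forwarding behaviour of the adapter can be sketched with stand-in interfaces (a minimal sketch, not the real WebRTC headers: std::mutex stands in for rtc::CriticalSection, and the interface names mirror but simplify AudioTrackSinkInterface and cricket::AudioSource):

```cpp
#include <cstddef>
#include <mutex>

// Stand-in for webrtc::AudioTrackSinkInterface (sketch).
struct AudioTrackSink {
  virtual ~AudioTrackSink() = default;
  virtual void OnData(const void* audio_data, int bits_per_sample,
                      int sample_rate, size_t number_of_channels,
                      size_t number_of_frames) = 0;
};

// Stand-in for cricket::AudioSource and its nested Sink (sketch).
struct AudioSource {
  struct Sink {
    virtual ~Sink() = default;
    virtual void OnData(const void* audio_data, int bits_per_sample,
                        int sample_rate, size_t number_of_channels,
                        size_t number_of_frames) = 0;
  };
  virtual ~AudioSource() = default;
  virtual void SetSink(Sink* sink) = 0;
};

// The adapter: receives PCM frames as the track's sink and forwards
// them to whichever Sink the session attached via SetSink().
class LocalAudioSinkAdapter : public AudioTrackSink, public AudioSource {
 public:
  void OnData(const void* audio_data, int bits_per_sample, int sample_rate,
              size_t number_of_channels, size_t number_of_frames) override {
    std::lock_guard<std::mutex> lock(lock_);  // protects sink_
    if (sink_) {
      sink_->OnData(audio_data, bits_per_sample, sample_rate,
                    number_of_channels, number_of_frames);
    }
  }

  void SetSink(AudioSource::Sink* sink) override {
    std::lock_guard<std::mutex> lock(lock_);
    sink_ = sink;
  }

 private:
  AudioSource::Sink* sink_ = nullptr;  // attached by the session, may be null
  std::mutex lock_;
};
```

Until the session attaches a sink, incoming frames are simply dropped; the lock is needed because the capture thread delivers OnData while SetSink may come from another thread.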
Now look at the constructor of AudioRtpSender:

AudioRtpSender::AudioRtpSender(AudioTrackInterface* track,
                               const std::string& stream_id,
                               AudioProviderInterface* provider,
                               StatsCollector* stats)
    : id_(track->id()),
      stream_id_(stream_id),
      provider_(provider),
      stats_(stats),
      track_(track),
      cached_track_enabled_(track->enabled()),
      sink_adapter_(new LocalAudioSinkAdapter()) {
  RTC_DCHECK(provider != nullptr);
  track_->RegisterObserver(this);
  track_->AddSink(sink_adapter_.get());
} 
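The RegisterObserver call in the constructor is what lets the sender react when the track's enabled state changes: AudioRtpSender caches the state in cached_track_enabled_ and reconfigures the send side only when it actually flips. A minimal sketch of that observer wiring (FakeAudioTrack and SenderSketch are illustrative stand-ins, not WebRTC classes):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Stand-in observer interface, mirroring webrtc::ObserverInterface (sketch).
struct Observer {
  virtual ~Observer() = default;
  virtual void OnChanged() = 0;
};

// Minimal stand-in for AudioTrackInterface: an id, an enabled flag,
// and an observer list (names are illustrative, not the real API).
class FakeAudioTrack {
 public:
  explicit FakeAudioTrack(std::string id) : id_(std::move(id)) {}
  const std::string& id() const { return id_; }
  bool enabled() const { return enabled_; }
  void set_enabled(bool enabled) {
    enabled_ = enabled;
    for (Observer* o : observers_) o->OnChanged();  // notify senders
  }
  void RegisterObserver(Observer* o) { observers_.push_back(o); }
  void UnregisterObserver(Observer* o) {
    observers_.erase(std::remove(observers_.begin(), observers_.end(), o),
                     observers_.end());
  }

 private:
  std::string id_;
  bool enabled_ = true;
  std::vector<Observer*> observers_;
};

// Sketch of the sender's reaction: cache the enabled state and re-apply
// the send parameters only when it actually changes, the way
// AudioRtpSender::OnChanged() uses cached_track_enabled_.
class SenderSketch : public Observer {
 public:
  explicit SenderSketch(FakeAudioTrack* track)
      : track_(track), cached_track_enabled_(track->enabled()) {
    track_->RegisterObserver(this);
  }
  ~SenderSketch() override { track_->UnregisterObserver(this); }

  void OnChanged() override {
    if (cached_track_enabled_ != track_->enabled()) {
      cached_track_enabled_ = track_->enabled();
      ++reconfigure_count_;  // stands in for calling SetAudioSend()
    }
  }
  int reconfigure_count() const { return reconfigure_count_; }

 private:
  FakeAudioTrack* track_;
  bool cached_track_enabled_;
  int reconfigure_count_ = 0;
};
```

Caching the flag means a notification that does not change the enabled state costs nothing, which matters because OnChanged fires for any track mutation.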

When the AudioRtpSender is constructed, it hands the LocalAudioSinkAdapter to the AudioTrack as its sink. Then look at this code:

void AudioRtpSender::SetAudioSend() {
  RTC_DCHECK(!stopped_ && can_send_track());
  cricket::AudioOptions options;
#if !defined(WEBRTC_CHROMIUM_BUILD)
  // TODO(tommi): Remove this hack when we move CreateAudioSource out of
  // PeerConnection.  This is a bit of a strange way to apply local audio
  // options since it is also applied to all streams/channels, local or remote.
  if (track_->enabled() && track_->GetSource() &&
      !track_->GetSource()->remote()) {
    // TODO(xians): Remove this static_cast since we should be able to connect
    // a remote audio track to a peer connection.
    options = static_cast<LocalAudioSource*>(track_->GetSource())->options();
  }
#endif

  cricket::AudioSource* source = sink_adapter_.get();
  ASSERT(source != nullptr);
  provider_->SetAudioSend(ssrc_, track_->enabled(), options, source);
}

RtpParameters AudioRtpSender::GetParameters() const {
  return provider_->GetAudioRtpSendParameters(ssrc_);
}

In SetAudioSend, the LocalAudioSinkAdapter is handed to provider_; note that provider_ here is the WebRTC session. In this way the data path from the source to the session is established.
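The whole chain can be sketched end to end with toy stand-ins (Adapter plays LocalAudioSinkAdapter and Session plays the provider; the class names and the frame counter are illustrative assumptions, not the real WebRTC API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal stand-ins for the two sides of the adapter (sketch).
struct SourceSink {  // plays cricket::AudioSource::Sink
  virtual ~SourceSink() = default;
  virtual void OnData(const void* data, int bits, int rate,
                      size_t channels, size_t frames) = 0;
};
struct Source {  // plays cricket::AudioSource
  virtual ~Source() = default;
  virtual void SetSink(SourceSink* sink) = 0;
};

// Adapter: track sink on one side, session-facing source on the other.
class Adapter : public Source {
 public:
  // Called by the "track" with captured PCM.
  void OnData(const void* data, int bits, int rate, size_t ch, size_t n) {
    if (sink_) sink_->OnData(data, bits, rate, ch, n);
  }
  void SetSink(SourceSink* sink) override { sink_ = sink; }

 private:
  SourceSink* sink_ = nullptr;
};

// "Session": in SetAudioSend it attaches its own sink to the source it
// is handed, mirroring what happens behind provider_->SetAudioSend(...).
class Session : public SourceSink {
 public:
  void SetAudioSend(Source* source) { source->SetSink(this); }
  void OnData(const void*, int, int, size_t, size_t frames) override {
    frames_received_ += frames;  // stand-in for encode-and-send
  }
  size_t frames_received() const { return frames_received_; }

 private:
  size_t frames_received_ = 0;
};
```

Once SetAudioSend has run, every frame the track pushes into the adapter flows straight through to the session; the sender itself never touches the audio data, it only wires the two ends together.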



Reposted from blog.csdn.net/fanhenghui/article/details/80571459