Analysis of HLS protocol in AOSP

[Time: 2018-04] [Status: Open]
[Keywords: streaming, stream, HLS, AOSP, source code analysis, HttpLiveSource, LiveSession, PlaylistFetcher]

1 Introduction

This article is a follow-up to the HLS overview and a companion to my earlier analysis of GenericSource in the NuPlayer source code series. It focuses on the related implementation logic inside NuPlayer, viewed from the perspective of the HLS protocol. If you are not yet familiar with NuPlayer, it is recommended to get a brief overview of it first.

This article focuses on HttpLiveSource in NuPlayer. Conceptually, HttpLiveSource is responsible for parsing the m3u8 playlists, handling data according to the HLS protocol, downloading the audio and video data over HTTP, and handing it to a concrete demuxer for further processing.

2 HttpLiveSource interface

HttpLiveSource inherits from NuPlayer::Source, and its main interface is defined as follows:

// code from ~/frameworks/av/media/libmediaplayerservice/nuplayer/HTTPLiveSource.h
struct NuPlayer::HTTPLiveSource : public NuPlayer::Source {
    HTTPLiveSource(const sp<AMessage> &notify, const sp<IMediaHTTPService> &httpService,
            const char *url, const KeyedVector<String8, String8> *headers);

    virtual void prepareAsync();
    virtual void start();

    virtual status_t dequeueAccessUnit(bool audio, sp<ABuffer> *accessUnit);
    virtual sp<AMessage> getFormat(bool audio);

    virtual status_t feedMoreTSData();
    // the following functions obtain program/track information and select tracks
    virtual status_t getDuration(int64_t *durationUs);
    virtual size_t getTrackCount() const;
    virtual sp<AMessage> getTrackInfo(size_t trackIndex) const;
    virtual ssize_t getSelectedTrack(media_track_type /* type */) const;
    virtual status_t selectTrack(size_t trackIndex, bool select, int64_t timeUs);
    virtual status_t seekTo(int64_t seekTimeUs);

protected:
    virtual ~HTTPLiveSource();
    virtual void onMessageReceived(const sp<AMessage> &msg);

private:
    sp<IMediaHTTPService> mHTTPService;
    AString mURL;
    KeyedVector<String8, String8> mExtraHeaders;
    uint32_t mFlags;
    status_t mFinalResult;
    off64_t mOffset;
    sp<ALooper> mLiveLooper;
    sp<LiveSession> mLiveSession;
    int32_t mFetchSubtitleDataGeneration;
    int32_t mFetchMetaDataGeneration;
    bool mHasMetadata;
    bool mMetadataSelected;

    void onSessionNotify(const sp<AMessage> &msg);
    void pollForRawData(const sp<AMessage> &msg, int32_t currentGeneration,
            LiveSession::StreamType fetchType, int32_t pushWhat);
};

I won't explain too much here; it is essentially the NuPlayer::Source interface with several of its virtual functions overridden.
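
For context, NuPlayer drives any Source, including HTTPLiveSource, by polling these interfaces from its decoder-feeding path. The following is only a simplified sketch of how such a caller pulls compressed frames out of the source; it is not the actual NuPlayer code, and the function name pollSourceOnce is made up for illustration:

// Simplified, hypothetical sketch of how a NuPlayer-side caller consumes a Source.
static void pollSourceOnce(const sp<NuPlayer::Source> &source, bool audio) {
    sp<ABuffer> accessUnit;
    status_t err = source->dequeueAccessUnit(audio, &accessUnit);

    if (err == -EWOULDBLOCK) {
        // nothing buffered yet -- the real player retries later via a message
        return;
    } else if (err != OK) {
        // end of stream or a real error
        return;
    }

    // accessUnit now holds one compressed frame plus its metadata (e.g. "timeUs"),
    // ready to be queued into the corresponding decoder.
    int64_t timeUs;
    CHECK(accessUnit->meta()->findInt64("timeUs", &timeUs));
}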

3 HttpLiveSource in NuPlayer

NuPlayer itself references HttpLiveSource in only one place; the code is as follows:

static bool IsHTTPLiveURL(const char *url) {
    if (!strncasecmp("http://", url, 7)
            || !strncasecmp("https://", url, 8)
            || !strncasecmp("file://", url, 7)) {
        size_t len = strlen(url);
        if (len >= 5 && !strcasecmp(".m3u8", &url[len - 5])) {
            return true;
        }

        if (strstr(url,"m3u8")) {
            return true;
        }
    }

    return false;
}

void NuPlayer::setDataSourceAsync(
        const sp<IMediaHTTPService> &httpService,
        const char *url,
        const KeyedVector<String8, String8> *headers) {

    sp<AMessage> msg = new AMessage(kWhatSetDataSource, this);
    size_t len = strlen(url);

    sp<AMessage> notify = new AMessage(kWhatSourceNotify, this);

    sp<Source> source;
    if (IsHTTPLiveURL(url)) { // determine the protocol type from the URL
        source = new HTTPLiveSource(notify, httpService, url, headers);
    } else if (!strncasecmp(url, "rtsp://", 7)) {
        source = new RTSPSource(
                notify, httpService, url, headers, mUIDValid, mUID);
    } else if ((!strncasecmp(url, "http://", 7)
                || !strncasecmp(url, "https://", 8))
                    && ((len >= 4 && !strcasecmp(".sdp", &url[len - 4]))
                    || strstr(url, ".sdp?"))) {
        source = new RTSPSource(
                notify, httpService, url, headers, mUIDValid, mUID, true);
    } else {
        sp<GenericSource> genericSource =
                new GenericSource(notify, mUIDValid, mUID);
        // Don't set FLAG_SECURE on mSourceFlags here for widevine.
        // The correct flags will be updated in Source::kWhatFlagsChanged
        // handler when  GenericSource is prepared.

        status_t err = genericSource->setDataSource(httpService, url, headers);

        if (err == OK) {
            source = genericSource;
        } else {
            ALOGE("Failed to set data source!");
        }
    }
    msg->setObject("source", source);
    msg->post();
}

The logic is straightforward: the protocol type is determined directly from the URL and the corresponding HttpLiveSource is created; from then on, NuPlayer talks to it only through the generic NuPlayer::Source interface.
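
Based purely on the dispatch code above, here are a few hypothetical URLs and the Source they would map to:

// Illustrative only -- the URLs below are made up.
//   "http://example.com/live/index.m3u8"      -> HTTPLiveSource (ends with .m3u8)
//   "https://example.com/master.m3u8?token=1" -> HTTPLiveSource ("m3u8" appears in the URL)
//   "rtsp://example.com/stream"               -> RTSPSource
//   "http://example.com/session.sdp"          -> RTSPSource (SDP description over HTTP)
//   "http://example.com/clip.mp4"             -> GenericSource (default branch)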

4 Implementation of the main HttpLiveSource interfaces

This section walks through several core interfaces of HttpLiveSource: the constructor and destructor, prepareAsync, start, seekTo, and so on.

4.1 Constructor and Destructor

The code is as follows:

NuPlayer::HTTPLiveSource::HTTPLiveSource(
        const sp<AMessage> &notify,
        const sp<IMediaHTTPService> &httpService,
        const char *url,
        const KeyedVector<String8, String8> *headers)
    : Source(notify),
      mHTTPService(httpService), // HTTP communication module
      mURL(url), mFlags(0), mFinalResult(OK),
      mOffset(0), mFetchSubtitleDataGeneration(0),
      mFetchMetaDataGeneration(0),
      mHasMetadata(false), mMetadataSelected(false) {
    // ... additional handling of the headers argument
}

NuPlayer::HTTPLiveSource::~HTTPLiveSource() {
    if (mLiveSession != NULL) {
        mLiveSession->disconnect();

        mLiveLooper->unregisterHandler(mLiveSession->id());
        mLiveLooper->unregisterHandler(id());
        mLiveLooper->stop();

        mLiveSession.clear();
        mLiveLooper.clear();
    }
}

4.2 prepareAsync / start function

The code for these two functions is not complicated:

void NuPlayer::HTTPLiveSource::prepareAsync() {
    if (mLiveLooper == NULL) {
        mLiveLooper = new ALooper;
        mLiveLooper->setName("http live");
        mLiveLooper->start();

        mLiveLooper->registerHandler(this);
    }

    sp<AMessage> notify = new AMessage(kWhatSessionNotify, this);
    // create the LiveSession and start the connection
    mLiveSession = new LiveSession(
            notify,
            (mFlags & kFlagIncognito) ? LiveSession::kFlagIncognito : 0,
            mHTTPService);

    mLiveLooper->registerHandler(mLiveSession);

    mLiveSession->connectAsync(
            mURL.c_str(), mExtraHeaders.isEmpty() ? NULL : &mExtraHeaders);
}
void NuPlayer::HTTPLiveSource::start() {}
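
Note that prepareAsync already shows the pattern everything below relies on: an ALooper runs a message loop, AHandler subclasses register on it, and work is requested by posting an AMessage that later arrives in onMessageReceived. Here is a minimal sketch of that pattern (illustrative only; MyHandler and kWhatDoWork are made-up names, while the classes themselves come from the stagefright foundation library):

// Minimal sketch of the ALooper/AHandler/AMessage pattern used throughout this code.
struct MyHandler : public AHandler {
    enum { kWhatDoWork = 'work' };

    void startWork() {
        sp<AMessage> msg = new AMessage(kWhatDoWork, this);
        msg->setInt32("param", 42);
        msg->post(); // delivered asynchronously on the looper thread
    }

protected:
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatDoWork:
            {
                int32_t param;
                CHECK(msg->findInt32("param", &param));
                // ... do the actual work here ...
                break;
            }
            default:
                TRESPASS();
        }
    }
};

// Typical usage, mirroring what prepareAsync does above:
//   sp<ALooper> looper = new ALooper;
//   looper->setName("example");
//   looper->start();
//   sp<MyHandler> handler = new MyHandler;
//   looper->registerHandler(handler);
//   handler->startWork();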

4.3 seekTo function

The seek implementation is also very simple; it delegates directly to LiveSession:

status_t NuPlayer::HTTPLiveSource::seekTo(int64_t seekTimeUs) {
    return mLiveSession->seekTo(seekTimeUs);
}

Having read these three main functions, it is clear that HttpLiveSource itself contains neither the HLS protocol parsing nor the demuxing of the media data. So let's analyze the LiveSession code.

5 Implementation Analysis of LiveSession Class

This part explains how LiveSession parses the HLS protocol, how the audio and video formats are obtained, and how the demuxer ends up reading audio and video packets.
LiveSession also contains a bandwidth estimation function and automatically switches HLS variants based on the estimate.
As Section 4 showed, HttpLiveSource only calls LiveSession::connectAsync, so where does the rest of the processing happen?
First, let's look at the implementation of LiveSession::connectAsync:

void LiveSession::connectAsync(
        const char *url, const KeyedVector<String8, String8> *headers) {
    sp<AMessage> msg = new AMessage(kWhatConnect, this);
    msg->setString("url", url);

    if (headers != NULL) {
        msg->setPointer(
                "headers",
                new KeyedVector<String8, String8>(*headers));
    }
    msg->post();
}

The main logic here is simply to post a kWhatConnect message; its handler is as follows:

//void LiveSession::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatConnect:
        {
            onConnect(msg);
            break;
        } // ...

void LiveSession::onConnect(const sp<AMessage> &msg) {
    CHECK(msg->findString("url", &mMasterURL));
    // ...

    // create looper for fetchers
    if (mFetcherLooper == NULL) {
        mFetcherLooper = new ALooper();

        mFetcherLooper->setName("Fetcher");
        mFetcherLooper->start(false, false);
    }

    // create the master playlist fetcher and start requesting data
    addFetcher(mMasterURL.c_str())->fetchPlaylistAsync();
}

Here we find where the HLS master playlist is actually fetched and parsed: PlaylistFetcher. Let's look at the relevant code:

void PlaylistFetcher::fetchPlaylistAsync() { // just posts a message
    (new AMessage(kWhatFetchPlaylist, this))->post();
}
void PlaylistFetcher::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
    case kWhatFetchPlaylist:
    {
        bool unchanged;
        // the core logic: fetch the m3u8 contents via HTTPDownloader::fetchPlaylist
        sp<M3UParser> playlist = mHTTPDownloader->fetchPlaylist(
                mURI.c_str(), NULL /* curPlaylistHash */, &unchanged);

        sp<AMessage> notify = mNotify->dup();
        notify->setInt32("what", kWhatPlaylistFetched);
        notify->setObject("playlist", playlist);
        notify->post();
        break;
    }
    } // ... other cases elided
}

At this point the last hidden piece is HTTPDownloader::fetchPlaylist. As you might guess, it downloads the m3u8 file at the given URL through the HTTP module, parses it with M3UParser, and returns the result. Here is the implementation:

ssize_t HTTPDownloader::fetchFile(
        const char *url, sp<ABuffer> *out, String8 *actualUrl) {
    ssize_t err = fetchBlock(url, out, 0, -1, 0, actualUrl, true /* reconnect */);

    // close off the connection after use
    mHTTPDataSource->disconnect();

    return err;
}

sp<M3UParser> HTTPDownloader::fetchPlaylist(
        const char *url, uint8_t *curPlaylistHash, bool *unchanged) {
    *unchanged = false;

    sp<ABuffer> buffer;
    String8 actualUrl;
    // HTTP transfer; the response data is placed into buffer
    ssize_t err = fetchFile(url, &buffer, &actualUrl);

    // close off the connection after use
    mHTTPDataSource->disconnect();

    if (err <= 0) {return NULL;}
    
    // string parsing / HLS (m3u8) protocol parsing
    sp<M3UParser> playlist =
        new M3UParser(actualUrl.string(), buffer->data(), buffer->size());

    if (playlist->initCheck() != OK) return NULL;

    return playlist;
}
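
To make this concrete, a variant (master) playlist handed to M3UParser looks roughly like the following. This is an illustrative example, not taken from AOSP; the BANDWIDTH and RESOLUTION attributes are exactly what onMasterPlaylistFetched below reads back as "bandwidth", "width" and "height":

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.42e01e,mp4a.40.2"
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
high/index.m3u8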

Through HTTPDownloader::fetchPlaylist we now have the contents of the m3u8. The kWhatFetchPlaylist handler then posts a kWhatPlaylistFetched notification back to LiveSession, whose handler is as follows:

void LiveSession::onMessageReceived(const sp<AMessage> &msg) {
    // ... (simplified: the fetcher's notification is dispatched here on its "what" field)
    case PlaylistFetcher::kWhatPlaylistFetched: {
        onMasterPlaylistFetched(msg);
        break;
    }
}

void LiveSession::onMasterPlaylistFetched(const sp<AMessage> &msg) {
    // check that the fetcher is still valid, then remove it since it is done
    AString uri;
    CHECK(msg->findString("uri", &uri));
    ssize_t index = mFetcherInfos.indexOfKey(uri);
    if (index < 0) {
        ALOGW("fetcher for master playlist is gone.");
        return;
    }

    // no longer useful, remove
    mFetcherLooper->unregisterHandler(mFetcherInfos[index].mFetcher->id());
    mFetcherInfos.removeItemsAt(index);

    // sp<M3UParser> mPlaylist;
    CHECK(msg->findObject("playlist", (sp<RefBase> *)&mPlaylist));
    if (mPlaylist == NULL) { // m3u8 parsing failed; report the error
        postPrepared(ERROR_IO);
        return;
    }

    // collect the variant attributes into the bandwidth-estimation parameters
    size_t initialBandwidth = 0;
    size_t initialBandwidthIndex = 0;
    int32_t maxWidth = 0;
    int32_t maxHeight = 0;

    if (mPlaylist->isVariantPlaylist()) {
        Vector<BandwidthItem> itemsWithVideo;
        for (size_t i = 0; i < mPlaylist->size(); ++i) {
            BandwidthItem item;
            item.mPlaylistIndex = i;
            item.mLastFailureUs = -1ll;

            sp<AMessage> meta;
            AString uri;
            mPlaylist->itemAt(i, &uri, &meta);

            CHECK(meta->findInt32("bandwidth", (int32_t *)&item.mBandwidth));

            int32_t width, height;
            if (meta->findInt32("width", &width)) {
                maxWidth = max(maxWidth, width);
            }
            if (meta->findInt32("height", &height)) {
                maxHeight = max(maxHeight, height);
            }

            mBandwidthItems.push(item);
            if (mPlaylist->hasType(i, "video")) {
                itemsWithVideo.push(item);
            }
        }

        CHECK_GT(mBandwidthItems.size(), 0u);
        initialBandwidth = mBandwidthItems[0].mBandwidth;

        mBandwidthItems.sort(SortByBandwidth);

        for (size_t i = 0; i < mBandwidthItems.size(); ++i) {
            if (mBandwidthItems.itemAt(i).mBandwidth == initialBandwidth) {
                initialBandwidthIndex = i;
                break;
            }
        }
    }

    mMaxWidth = maxWidth > 0 ? maxWidth : mMaxWidth;
    mMaxHeight = maxHeight > 0 ? maxHeight : mMaxHeight;
    // pick the media items to use (by default simply the first one, the simplest approach)
    mPlaylist->pickRandomMediaItems();
    changeConfiguration(
            0ll /* timeUs */, initialBandwidthIndex, false /* pickTrack */);
}
void LiveSession::changeConfiguration(
        int64_t timeUs, ssize_t bandwidthIndex, bool pickTrack) {
    cancelBandwidthSwitch();

    CHECK(!mReconfigurationInProgress);
    mReconfigurationInProgress = true;
    if (bandwidthIndex >= 0) {
        mOrigBandwidthIndex = mCurBandwidthIndex;
        mCurBandwidthIndex = bandwidthIndex;
    }
    CHECK_LT(mCurBandwidthIndex, mBandwidthItems.size());
    const BandwidthItem &item = mBandwidthItems.itemAt(mCurBandwidthIndex);

    uint32_t streamMask = 0; // streams that should be fetched by the new fetcher
    uint32_t resumeMask = 0; // streams that should be fetched by the original fetcher

    AString URIs[kMaxStreams];
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if (mPlaylist->getTypeURI(item.mPlaylistIndex, mStreams[i].mType, &URIs[i])) {
            streamMask |= indexToType(i);
        }
    }

    // Step 1: stop and discard fetchers that are no longer needed; pause those to be reused
    for (size_t i = 0; i < mFetcherInfos.size(); ++i) {
        // once mToBeRemoved is set, the fetcher can no longer be reused
        if (mFetcherInfos[i].mToBeRemoved)
            continue;

        const AString &uri = mFetcherInfos.keyAt(i);
        sp<PlaylistFetcher> &fetcher = mFetcherInfos.editValueAt(i).mFetcher;

        bool discardFetcher = true, delayRemoval = false;
        for (size_t j = 0; j < kMaxStreams; ++j) {
            StreamType type = indexToType(j);
            if ((streamMask & type) && uri == URIs[j]) {
                resumeMask |= type;
                streamMask &= ~type;
                discardFetcher = false;
            }
        }
        // Delay fetcher removal
        if (discardFetcher && timeUs < 0ll && !pickTrack
                && (fetcher->getStreamTypeMask() & streamMask)) {
            discardFetcher = false;
            delayRemoval = true;
        }

        if (discardFetcher) {
            ALOGV("discarding fetcher-%d", fetcher->getFetcherID());
            fetcher->stopAsync();
        } else {
            float threshold = 0.0f; // default to pause after current block (47Kbytes)
            bool disconnect = false;
            if (timeUs >= 0ll) {
                // seeking, no need to finish fetching
                disconnect = true;
            } else if (delayRemoval) {
                // adapting, abort if remaining of current segment is over threshold
                threshold = getAbortThreshold(
                        mOrigBandwidthIndex, mCurBandwidthIndex);
            }
            fetcher->pauseAsync(threshold, disconnect);
        }
    }

    sp<AMessage> msg;
    if (timeUs < 0ll) {
        // skip onChangeConfiguration2 (decoder destruction) if not seeking.
        msg = new AMessage(kWhatChangeConfiguration3, this);
    } else {
        msg = new AMessage(kWhatChangeConfiguration2, this);
    }
    msg->setInt32("streamMask", streamMask);
    msg->setInt32("resumeMask", resumeMask);
    msg->setInt32("pickTrack", pickTrack);
    msg->setInt64("timeUs", timeUs);
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if ((streamMask | resumeMask) & indexToType(i)) {
            msg->setString(mStreams[i].uriKey().c_str(), URIs[i].c_str());
        }
    }

    mContinuationCounter = mFetcherInfos.size();
    mContinuation = msg;

    if (mContinuationCounter == 0) {
        msg->post();
    }
}

changeConfiguration above only does some parameter checking and sorts out the fetcher resources; the actual work continues through the two messages kWhatChangeConfiguration2 and kWhatChangeConfiguration3, where the former additionally handles shutting down decoders that are no longer needed (via a notification to NuPlayer) and the latter skips that step. Let's look at the two handlers in turn.
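
Before doing so, note that streamMask and resumeMask are per-stream bit masks. Simplified, the stream indices and types defined in LiveSession.h look roughly like this (paraphrased; check the header for the exact definitions):

// Simplified view of LiveSession's stream indices and bit masks.
enum StreamIndex {
    kAudioIndex    = 0,
    kVideoIndex    = 1,
    kSubtitleIndex = 2,
    kMaxStreams    = 3,
};

enum StreamType {
    STREAMTYPE_AUDIO     = 1 << kAudioIndex,     // 0x1
    STREAMTYPE_VIDEO     = 1 << kVideoIndex,     // 0x2
    STREAMTYPE_SUBTITLES = 1 << kSubtitleIndex,  // 0x4
};

// indexToType(i) simply turns a stream index into its bit, so for example
// streamMask == (STREAMTYPE_AUDIO | STREAMTYPE_VIDEO) means that new fetchers
// are still needed for both the audio and the video stream.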

void LiveSession::onChangeConfiguration2(const sp<AMessage> &msg) {
    mContinuation.clear();

    // All fetchers are either suspended or have been removed now.

    // for a seek, clear the previously buffered data first
    int64_t timeUs;
    CHECK(msg->findInt64("timeUs", &timeUs));

    if (timeUs >= 0) {
        mLastSeekTimeUs = timeUs;
        mLastDequeuedTimeUs = timeUs;

        for (size_t i = 0; i < mPacketSources.size(); i++) {
            sp<AnotherPacketSource> packetSource = mPacketSources.editValueAt(i);
            sp<MetaData> format = packetSource->getFormat();
            packetSource->clear(); // drop the buffered data but keep the format
            packetSource->setFormat(format);
        }

        for (size_t i = 0; i < kMaxStreams; ++i) {
            mStreams[i].reset();
        }

        mDiscontinuityOffsetTimesUs.clear();
        mDiscontinuityAbsStartTimesUs.clear();

        if (mSeekReplyID != NULL) {
            CHECK(mSeekReply != NULL);
            mSeekReply->setInt32("err", OK);
            mSeekReply->postReply(mSeekReplyID);
            mSeekReplyID.clear();
            mSeekReply.clear();
        }

        // restart the buffer polling after a seek; this polling drives the whole HLS playback
        restartPollBuffering();
    }

    uint32_t streamMask, resumeMask;
    CHECK(msg->findInt32("streamMask", (int32_t *)&streamMask));
    CHECK(msg->findInt32("resumeMask", (int32_t *)&resumeMask));
    streamMask |= resumeMask;

    AString URIs[kMaxStreams];
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if (streamMask & indexToType(i)) {
            const AString &uriKey = mStreams[i].uriKey();
            CHECK(msg->findString(uriKey.c_str(), &URIs[i]));
            ALOGV("%s = '%s'", uriKey.c_str(), URIs[i].c_str());
        }
    }

    uint32_t changedMask = 0;
    for (size_t i = 0; i < kMaxStreams && i != kSubtitleIndex; ++i) {
        // if a bandwidth switch is in progress, the stream URL may change after a seek;
        // in that case, cancel the switch and apply the seek position to the new stream
        if ((mStreamMask & streamMask & indexToType(i))
                && !mStreams[i].mUri.empty()
                && !(URIs[i] == mStreams[i].mUri)) {
            sp<AnotherPacketSource> source = mPacketSources.valueFor(indexToType(i));
            if (source->getLatestDequeuedMeta() != NULL) {
                source->queueDiscontinuity(
                        ATSParser::DISCONTINUITY_FORMATCHANGE, NULL, true);
            }
        }
        // determine whether the corresponding decoder needs to be shut down
        if ((mStreamMask & ~streamMask & indexToType(i))) {
            changedMask |= indexToType(i);
        }
    }

    if (changedMask == 0) {
        // if neither the audio nor the video decoder changed, proceed directly
        onChangeConfiguration3(msg);
        return;
    }
    // notify NuPlayer to shut down the affected decoders; when done it sends kWhatChangeConfiguration3 back
    sp<AMessage> notify = mNotify->dup();
    notify->setInt32("what", kWhatStreamsChanged);
    notify->setInt32("changedMask", changedMask);

    msg->setWhat(kWhatChangeConfiguration3);
    msg->setTarget(this);

    notify->setMessage("reply", msg);
    notify->post();
}

void LiveSession::onChangeConfiguration3(const sp<AMessage> &msg) {
    mContinuation.clear();
    // All remaining fetchers are still suspended, the player has shutdown
    // any decoders that needed it.

    uint32_t streamMask, resumeMask;
    CHECK(msg->findInt32("streamMask", (int32_t *)&streamMask));
    CHECK(msg->findInt32("resumeMask", (int32_t *)&resumeMask));

    mNewStreamMask = streamMask | resumeMask;

    int64_t timeUs;
    int32_t pickTrack;
    bool switching = false;
    CHECK(msg->findInt64("timeUs", &timeUs));
    CHECK(msg->findInt32("pickTrack", &pickTrack));
    // a negative timeUs means we are just starting (not seeking)
    if (timeUs < 0ll) {
        if (!pickTrack) {
            // determine whether a stream swap is needed
            mSwapMask = mNewStreamMask & mStreamMask & ~resumeMask;
            switching = (mSwapMask != 0);
        }
        mRealTimeBaseUs = ALooper::GetNowUs() - mLastDequeuedTimeUs;
    } else {
        mRealTimeBaseUs = ALooper::GetNowUs() - timeUs;
    }

    for (size_t i = 0; i < kMaxStreams; ++i) {
        if (streamMask & indexToType(i)) {
            if (switching) {
                CHECK(msg->findString(mStreams[i].uriKey().c_str(), &mStreams[i].mNewUri));
            } else {
                CHECK(msg->findString(mStreams[i].uriKey().c_str(), &mStreams[i].mUri));
            }
        }
    }

    // Of all existing fetchers:
    // * Resume fetchers that are still needed and assign them original packet sources.
    // * Mark otherwise unneeded fetchers for removal.
    for (size_t i = 0; i < mFetcherInfos.size(); ++i) {
        const AString &uri = mFetcherInfos.keyAt(i);
        if (!resumeFetcher(uri, resumeMask, timeUs))
            mFetcherInfos.editValueAt(i).mToBeRemoved = true;
    }

    // at this point streamMask only contains streams that need a new fetcher
    if (streamMask != 0) {
        ALOGV("creating new fetchers for mask 0x%08x", streamMask);
    }

    // Find out when the original fetchers have buffered up to and start the new fetchers
    // at a later timestamp.
    for (size_t i = 0; i < kMaxStreams; i++) {
        if (!(indexToType(i) & streamMask)) {
            continue;
        }

        AString uri;
        uri = switching ? mStreams[i].mNewUri : mStreams[i].mUri;

        sp<PlaylistFetcher> fetcher = addFetcher(uri.c_str());
        CHECK(fetcher != NULL);

        HLSTime startTime;
        SeekMode seekMode = kSeekModeExactPosition;
        sp<AnotherPacketSource> sources[kNumSources];

        if (i == kSubtitleIndex || (!pickTrack && !switching)) {
            startTime = latestMediaSegmentStartTime();
        }

        for (size_t j = i; j < kMaxStreams; ++j) {
            const AString &streamUri = switching ? mStreams[j].mNewUri : mStreams[j].mUri;
            if ((streamMask & indexToType(j)) && uri == streamUri) {
                sources[j] = mPacketSources.valueFor(indexToType(j));

                if (timeUs >= 0) {
                    startTime.mTimeUs = timeUs;
                } else {
                    int32_t type;
                    sp<AMessage> meta;
                    if (!switching) {
                        // selecting, or adapting but no swap required
                        meta = sources[j]->getLatestDequeuedMeta();
                    } else {
                        // adapting and swap required
                        meta = sources[j]->getLatestEnqueuedMeta();
                        if (meta != NULL && mCurBandwidthIndex > mOrigBandwidthIndex) {
                            // switching up
                            meta = sources[j]->getMetaAfterLastDequeued(mUpSwitchMargin);
                        }
                    }

                    if ((j == kAudioIndex || j == kVideoIndex)
                            && meta != NULL && !meta->findInt32("discontinuity", &type)) {
                        HLSTime tmpTime(meta);
                        if (startTime < tmpTime) {
                            startTime = tmpTime;
                        }
                    }

                    if (!switching) {
                        // selecting, or adapting but no swap required
                        sources[j]->clear();
                        if (j == kSubtitleIndex) {
                            break;
                        }
                        sources[j]->queueDiscontinuity(
                                ATSParser::DISCONTINUITY_FORMAT_ONLY, NULL, true);
                    } else {
                        // switching, queue discontinuities after resume
                        sources[j] = mPacketSources2.valueFor(indexToType(j));
                        sources[j]->clear();
                        // the new fetcher might be providing streams that used to be
                        // provided by two different fetchers,  if one of the fetcher
                        // paused in the middle while the other somehow paused in next
                        // seg, we have to start from next seg.
                        if (seekMode < mStreams[j].mSeekMode) {
                            seekMode = mStreams[j].mSeekMode;
                        }
                    }
                }

                streamMask &= ~indexToType(j);
            }
        }

        fetcher->startAsync(
                sources[kAudioIndex],
                sources[kVideoIndex],
                sources[kSubtitleIndex],
                getMetadataSource(sources, mNewStreamMask, switching),
                startTime.mTimeUs < 0 ? mLastSeekTimeUs : startTime.mTimeUs,
                startTime.getSegmentTimeUs(),
                startTime.mSeq,
                seekMode);
    }

    // All fetchers have now been started, the configuration change
    // has completed.

    mReconfigurationInProgress = false;
    if (switching) {
        mSwitchInProgress = true;
    } else {
        mStreamMask = mNewStreamMask;
        if (mOrigBandwidthIndex != mCurBandwidthIndex)
            mOrigBandwidthIndex = mCurBandwidthIndex;
    }

    if (mDisconnectReplyID != NULL) {
        finishDisconnect(); // this is part of the disconnect/shutdown path
    }
}

To sum up, the above is essentially the complete processing of connectAsync: starting from the URL, download the HLS master playlist, parse it, initialize the fetchers and decoders, and finally start downloading and parsing segment data. One function is still missing: restartPollBuffering, which drives the periodic polling of the buffer state. Here is its implementation:

void LiveSession::schedulePollBuffering() {
    sp<AMessage> msg = new AMessage(kWhatPollBuffering, this);
    msg->setInt32("generation", mPollBufferingGeneration);
    msg->post(1000000ll); // poll once per second (1,000,000 us)
}
void LiveSession::cancelPollBuffering() {
    ++mPollBufferingGeneration;
    mPrevBufferPercentage = -1;
}

void LiveSession::restartPollBuffering() {
    cancelPollBuffering(); // invalidate any pending poll messages
    onPollBuffering(); // poll again immediately
}

void LiveSession::onPollBuffering() {
    bool underflow, ready, down, up;
    if (checkBuffering(underflow, ready, down, up)) {
        if (mInPreparationPhase) {
            // allow switching down while still preparing
            if (!switchBandwidthIfNeeded(false /* up */, down) && ready) {
                postPrepared(OK);
            }
        }

        if (!mInPreparationPhase) {
            if (ready) {
                stopBufferingIfNecessary();
            } else if (underflow) {
                startBufferingIfNecessary();
            }
            switchBandwidthIfNeeded(up, down);
        }
    }
    // schedule the next poll; this keeps the polling loop running
    schedulePollBuffering();
}

// excerpt from the LiveSession message handler
case kWhatPollBuffering:
{
    int32_t generation;
    CHECK(msg->findInt32("generation", &generation));
    if (generation == mPollBufferingGeneration) {
        onPollBuffering();
    }
    break;
}

From this we can see that onPollBuffering only inspects the data that has already been buffered; there is no download control here, so the actual download code lives in PlaylistFetcher, which interested readers can explore on their own. LiveSession also contains the bandwidth estimation and switching logic (switchBandwidthIfNeeded and friends); refer to it if you need it.
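
To give a feel for that switching decision, here is a self-contained conceptual sketch of buffer-driven bandwidth selection. It is not the AOSP implementation; every name and threshold below is invented purely to illustrate the idea behind switchBandwidthIfNeeded:

#include <cstdint>
#include <vector>

struct Variant {
    int64_t bandwidthBps; // as advertised by #EXT-X-STREAM-INF:BANDWIDTH
};

// Pick the best variant for the measured throughput, keeping a safety margin,
// and refuse to switch up while the buffer is still thin.
size_t pickVariant(const std::vector<Variant> &sortedAscending,
                   int64_t measuredBps,
                   int64_t bufferedUs,
                   size_t currentIndex) {
    const int64_t kMinBufferForUpSwitchUs = 10 * 1000000ll; // e.g. 10 seconds
    size_t best = 0;
    for (size_t i = 0; i < sortedAscending.size(); ++i) {
        // keep ~20% headroom so small throughput dips don't cause underflow
        if (sortedAscending[i].bandwidthBps * 10 <= measuredBps * 8) {
            best = i;
        }
    }
    if (best > currentIndex && bufferedUs < kMinBufferForUpSwitchUs) {
        return currentIndex; // not enough buffer to risk an up-switch yet
    }
    return best;
}
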
The implementation of seekTo is simpler; it is ultimately handled by onSeek, whose code is as follows:

void LiveSession::onSeek(const sp<AMessage> &msg) {
    int64_t timeUs;
    CHECK(msg->findInt64("timeUs", &timeUs));
    changeConfiguration(timeUs); // much the same as the startup path
}

6 Summary

This article is based on the AOSP 7 source code and briefly walks through how HttpLiveSource handles HLS. Its purpose is simply to deepen understanding of this area. It does not go into detailed protocol analysis or the full logic of HLS variant switching, so take it as a reference only.
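
For quick reference, the call chain covered in this article can be condensed as follows:

// NuPlayer::setDataSourceAsync
//   -> new HTTPLiveSource
// HTTPLiveSource::prepareAsync
//   -> LiveSession::connectAsync -> onConnect
//     -> PlaylistFetcher::fetchPlaylistAsync
//       -> HTTPDownloader::fetchPlaylist -> M3UParser (m3u8 parsing)
//     -> LiveSession::onMasterPlaylistFetched
//       -> changeConfiguration -> onChangeConfiguration2 / onChangeConfiguration3
//         -> PlaylistFetcher::startAsync (segment download begins)
// LiveSession::restartPollBuffering -> onPollBuffering (buffering state / bandwidth switching)
// HTTPLiveSource::seekTo -> LiveSession::seekTo -> onSeek -> changeConfiguration(timeUs)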
