Advanced Android Video Development: the Android Media API

Author: qing world

Today we can finally start learning about video playback on the Android platform! I'm sure you couldn't wait any longer.


This chapter is organized as follows:

1. The history of the Android platform video playback API

2. Using the new Android Media API

3. An example of playing video using the new Media API

1. The history of the Android platform video playback API

A long, long time ago....

Ahem, okay, that opening is laying it on a bit thick....

Before 2012, video playback on the Android platform was a very simple matter (for most developers, at least, since very few ever needed to dig into the underlying MediaPlayer service).

It really was simple: create a player object, set the URL, play, and release when playback is done.....
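In code, that whole flow is only a handful of lines. Here is a minimal sketch of the classic MediaPlayer usage (the url and holder parameters are placeholders for whatever your app provides):

    void playClassic(String url, SurfaceHolder holder) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDataSource(url);                       // inject the URL
        player.setDisplay(holder);                       // where the video is rendered
        player.setOnPreparedListener(mp -> mp.start());  // play once prepared
        player.prepareAsync();                           // prepare without blocking the UI
        // ...and call player.release() when playback is finished.
    }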

This brought great convenience to Android developers, and the application code involved was tiny. It is fair to say that before 2011 (in particular, before the live-streaming business exploded), this native player served most needs well.

But this player's shortcomings are just as obvious:

1. Many container formats are not supported, and neither is adaptive streaming (Adaptive Streaming).

2. It is hard for application developers to debug the player: most of MediaPlayer is implemented as native methods, not at the Java level.

3. It is hard to customize or extend its behavior, such as the buffer size, download progress reporting, and so on.

Precisely because MediaPlayer's implementation is completely opaque to developers, it grew ever more mysterious and gradually failed to keep up with what modern playback businesses demand of a player.

Google recognized this as well. At the Google I/O conference in 2012 it announced Android Jelly Bean (4.1), and starting with that release (API level 16) the platform shipped a new group of Media APIs. These APIs are no longer a black box like the old MediaPlayer; instead they are lower-level components designed around the building blocks of video playback, such as the codec API (MediaCodec) and the Extractor API (MediaExtractor) for reading container files.

The structure diagrams shown in that Google I/O session make the change clear: in the old design, MediaPlayer sealed the Extractor and Codec inside the Framework layer, completely out of reach of the application layer. In the new API design these components are exposed to applications (strictly speaking the MediaCodec API still lives in the Framework, but the application layer can now call it directly).


2. Using the new Android Media API

The two most important classes in the new Media API are MediaExtractor and MediaCodec. The first reads and navigates the container file; the second encodes and decodes the data.

MediaExtractor

Given a URL, MediaExtractor can report how many tracks the container file has and the information (format) of each track. Once you have identified the tracks, you select the one you want to decode. In this usage pattern each extractor feeds a single decoder, so the audio track and the video track are handled by two different MediaExtractors driving two different MediaCodecs. You then read data from the selected track continuously and feed it into the MediaCodec API for decoding.
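To make the two-extractor point concrete, here is a minimal sketch of the audio side. The method name and parameters are illustrative (not from grafika) and assume the caller supplies a Context and a URL:

    private MediaCodec prepareAudioDecoder(Context context, String url) throws IOException {
        // A second extractor dedicated to the audio track.
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(context, Uri.parse(url), null);
        for (int i = 0; i < audioExtractor.getTrackCount(); i++) {
            MediaFormat format = audioExtractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("audio/")) {
                audioExtractor.selectTrack(i);
                MediaCodec audioDecoder = MediaCodec.createDecoderByType(mime);
                audioDecoder.configure(format, null, null, 0);  // audio needs no Surface
                audioDecoder.start();
                return audioDecoder;
            }
        }
        return null;  // no audio track found
    }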

MediaCodec

A MediaCodec must be created for a specific codec type. When decoding video, you also configure it with the Surface on which the Android platform will display the video, and with a MediaCrypto object if the video is encrypted (a detail I will cover in the DRM chapter).

Once created, a MediaCodec maintains two queues internally: an input queue (InputQueue) and an output queue (OutputQueue). Much like a producer-consumer model, MediaCodec continuously takes data from the input queue (which is filled with samples read by MediaExtractor), decodes it, places the decoded data on the output queue, and from there hands it to the Surface to be rendered as video.

The way these two classes collaborate is sketched below.
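The grafika example in the next section drives this handshake synchronously, but the same producer-consumer shape is easy to see in MediaCodec's asynchronous callback mode (available since API 21). This is only a sketch under assumed names: decoder, extractor, videoFormat and videoSurface are supplied by the caller.

    decoder.setCallback(new MediaCodec.Callback() {
        @Override
        public void onInputBufferAvailable(MediaCodec codec, int index) {
            // Producer side: fill the free input buffer with one sample.
            ByteBuffer buf = codec.getInputBuffer(index);
            int size = extractor.readSampleData(buf, 0);
            if (size < 0) {
                codec.queueInputBuffer(index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            } else {
                codec.queueInputBuffer(index, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
        @Override
        public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
            // Consumer side: releasing with render == true sends the frame to the Surface.
            codec.releaseOutputBuffer(index, info.size != 0);
        }
        @Override
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) { }
        @Override
        public void onError(MediaCodec codec, MediaCodec.CodecException e) { }
    });
    decoder.configure(videoFormat, videoSurface, null, 0);  // setCallback must precede configure
    decoder.start();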


3. An example of playing video using the new Media API

Now it's time to look at some source code! This time we are using grafika, an open-source demo project from Google (maintained unofficially, not as a supported product). It uses the new Media APIs to build many interesting small examples, including the one we will look at now: playing video with the Media API. Only three methods are involved, and they are called one after the other.

public void playWithUrl() throws IOException {
        MediaExtractor extractor = null;
        MediaCodec decoder = null;
        try {
            /** Create a MediaExtractor object. */
            extractor = new MediaExtractor();
            /** Set the extractor's source; an mp4 URL can be passed in here. */
            extractor.setDataSource(context, Uri.parse(url), new HashMap<String, String>());
            // The remainder is reconstructed from grafika's MoviePlayer (surface and
            // frameCallback are fields supplied by the surrounding class): select the
            // video track, configure the decoder, and run the extract/decode loop.
            int trackIndex = selectTrack(extractor);
            if (trackIndex < 0) throw new RuntimeException("No video track found in " + url);
            extractor.selectTrack(trackIndex);
            MediaFormat format = extractor.getTrackFormat(trackIndex);
            decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
            decoder.configure(format, surface, null, 0);  // render to the app's Surface
            decoder.start();
            doExtract(extractor, trackIndex, decoder, frameCallback);
        } finally {
            // Always release the codec and the extractor.
            if (decoder != null) { decoder.stop(); decoder.release(); }
            if (extractor != null) extractor.release();
        }
    }
 /**
     * Use the extractor to get the track count, iterate over the tracks,
     * and return the index of the first video track we find.
     */
    private static int selectTrack(MediaExtractor extractor) {
        // Select the first video track we find, ignore the rest.
        int numTracks = extractor.getTrackCount();
        for (int i = 0; i < numTracks; i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                if (VERBOSE) {
                    Log.d(TAG, "Extractor selected track " + i + " (" + mime + "): " + format);
                }
                return i;
            }
        }

        return -1;
    }
 private void doExtract(MediaExtractor extractor, int trackIndex, MediaCodec decoder,
                           FrameCallback frameCallback) {
        final int TIMEOUT_USEC = 10000;
        /**
         * Get MediaCodec's input queue; it is an array of ByteBuffers.
         */
        ByteBuffer[] decoderInputBuffers = decoder.getInputBuffers();
        int inputChunk = 0;
        long firstInputTimeNsec = -1;

        boolean outputDone = false;
        boolean inputDone = false;
        /**
         * Drive everything with a while loop.
         */
        while (!outputDone) {
            if (VERBOSE) Log.d(TAG, "loop");
            if (mIsStopRequested) {
                Log.d(TAG, "Stop requested");
                return;
            }

            // Feed more data to the decoder.
            /**
             * Keep feeding input until the input queue has no free slot.
             */
            if (!inputDone) {
                /**
                 * This call returns a position in the input buffer array
                 * that can accept data, i.e. an index.
                 */
                int inputBufIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
                /**
                 * If the input queue still has a free slot
                 */
                if (inputBufIndex >= 0) {
                    if (firstInputTimeNsec == -1) {
                        firstInputTimeNsec = System.nanoTime();
                    }
                    ByteBuffer inputBuf = decoderInputBuffers[inputBufIndex];
                    // Read the sample data into the ByteBuffer.  This neither respects nor
                    // updates inputBuf's position, limit, etc.
                    /**
                     * Read one sample's data with the extractor and put it
                     * into the input queue.
                     */
                    int chunkSize = extractor.readSampleData(inputBuf, 0);
                    /**
                     * If the chunk size is less than zero, we have finished
                     * reading all the data in this track.
                     */
                    if (chunkSize < 0) {
                        // End of stream -- send empty frame with EOS flag set.
                        decoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                        if (VERBOSE) Log.d(TAG, "sent input EOS");
                    }
                    else {
                        if (extractor.getSampleTrackIndex() != trackIndex) {
                            Log.w(TAG, "WEIRD: got sample from track " +
                                    extractor.getSampleTrackIndex() + ", expected " + trackIndex);
                        }
                        long presentationTimeUs = extractor.getSampleTime();
                        decoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                                presentationTimeUs, 0 /*flags*/);
                        if (VERBOSE) {
                            Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" +
                                    chunkSize);
                        }
                        inputChunk++;
                        /**
                         * Advance the extractor by one sample; the next call to
                         * extractor.readSampleData() will read the next sample.
                         */
                        extractor.advance();
                    }
                } else {
                    if (VERBOSE) Log.d(TAG, "input buffer not available");
                }
            }

            if (!outputDone) {
                /**
                 * Start draining the output queue. Any negative decoderStatus is a
                 * status code that must be handled; a value >= 0 is an index into
                 * the output buffer array.
                 */
                int decoderStatus = decoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
                if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                    // no output available yet
                    if (VERBOSE) Log.d(TAG, "no output from decoder available");
                } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                    // not important for us, since we're using Surface
                    if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
                } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    MediaFormat newFormat = decoder.getOutputFormat();
                    if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
                } else if (decoderStatus < 0) {
                    throw new RuntimeException(
                            "unexpected result from decoder.dequeueOutputBuffer: " +
                                    decoderStatus);
                } else { // decoderStatus >= 0
                    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        if (VERBOSE) Log.d(TAG, "output EOS");
                        outputDone = true;
                    }
                    boolean doRender = (mBufferInfo.size != 0);
                    if (doRender && frameCallback != null) {
                        frameCallback.preRender(mBufferInfo.presentationTimeUs);
                    }
                    /**
                     * Calling decoder.releaseOutputBuffer() with render == true
                     * sends this output buffer to the Surface for display and
                     * returns the buffer to the codec.
                     */
                    decoder.releaseOutputBuffer(decoderStatus, doRender);
                }
            }
        }
    }
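One practical note on wiring these three methods together: doExtract() blocks until end of stream, and setting a network data source must not happen on the main thread, so playback is typically driven from a worker thread once the Surface exists. A hypothetical driver sketch (surfaceView and TAG are assumed; playWithUrl() and mIsStopRequested are the ones shown above):

    surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            // Decode on a worker thread; the loop blocks until EOS or stop.
            new Thread(() -> {
                try {
                    playWithUrl();
                } catch (IOException e) {
                    Log.e(TAG, "playback failed", e);
                }
            }, "video-decode").start();
        }
        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }
        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            mIsStopRequested = true;  // makes doExtract() exit its loop
        }
    });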

Of course, you may still have plenty of questions. What about the extensibility mentioned earlier? Does the Extractor still only read a fixed set of formats? And so on. I will answer these over the next few chapters: through Google's open-source player, ExoPlayer, we can dig into how these APIs are used and extended. In the next chapter I will first explain the concept of adaptive streaming, then use ExoPlayer examples to show how to play adaptive video with the Media API.


Origin blog.csdn.net/ajsliu1233/article/details/108411823