
Using MediaCodec for Hardware Video Decoding in Android Applications


2018-09-04

Original link: click.aliyun.com

Background

With the development of the multimedia industry, the demands on video decoding performance on phones keep growing. Decoding on the CPU consumes a great deal of CPU resources, so the mainstream approach today is to use the phone's dedicated hardware for video decoding. Android 4.1 (API 16) added MediaCodec, an interface apps can call from Java to use the underlying hardware audio and video encoding and decoding capabilities. The Android NDK added the corresponding native methods in Android 5.0 (API 21); the functionality is much the same.

MediaCodec can handle encoding as well as decoding; it can process audio as well as video; and there are both software codecs (CPU) and hardware codecs. The codecs available on a given Android phone are normally listed in media_codecs.xml. Its location differs between phones; in my experience, on the majority of devices it is under the /system/etc/ directory.

This article focuses mainly on video decoding.

The basic internal structure of Android MediaCodec

(Figure: the general structure of MediaCodec)

As shown above, MediaCodec has two internal buffer queues: one is the InputBuffer queue, the other the OutputBuffer queue. The sizes of the two queues are generally decided by the underlying hardware codec. During decoding, the client needs to keep checking the state of the InputBuffers and the OutputBuffers: if an InputBuffer is free, the client should feed the corresponding bitstream into it; if an OutputBuffer holds output, the client should consume the video frame and release the buffer.

From start() onward, an internal codec thread keeps polling the InputBuffers and OutputBuffers: if an OutputBuffer is free and there is an unprocessed InputBuffer, it decodes one frame; otherwise it suspends.

The Android MediaCodec startup flow

1. Determine the Android runtime version

Because the NDK interface only exists on Android 5.0 or higher, we first determine the Android version: on 5.0 and above we use the NDK interface; otherwise we fall back to calling the Java API through JNI.
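
As a minimal sketch of that check on the native side (this reads the standard system property ro.build.version.sdk; the helper names are my own, not part of the NDK):

#include <cstdlib>
#include <sys/system_properties.h>

// Read the device API level, e.g. 21 for Android 5.0, 23 for Android 6.0.
static int getApiLevel() {
    char value[PROP_VALUE_MAX] = {0};
    if (__system_property_get("ro.build.version.sdk", value) <= 0) {
        return 0;  // unknown
    }
    return atoi(value);
}

// Use the NDK AMediaCodec path on Android 5.0+; otherwise fall back
// to calling the Java MediaCodec through JNI.
bool useNdkDecoder() {
    return getApiLevel() >= 21;
}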

2. Create a decoder

MediaCodec provides two ways to create a decoder. The simpler way is to create one directly by MIME type, which identifies the kind of decoder. For example, to create an H.264 decoder you can simply call the following function:

AMediaCodec* codec = AMediaCodec_createDecoderByType("video/avc");


If the phone has more than one H.264 decoder (a phone usually has a hardware decoder plus Google's software decoder), MediaCodec will pick a default according to their order. Of course, this order can be changed; under normal circumstances a phone selects the hardware decoder by default.

If you want to choose the decoder precisely, you can create it by name:

AMediaCodec* codec = AMediaCodec_createCodecByName("OMX.video.decoder.avc");

3. Configure the decoder

AMediaCodec_configure(handle, format, surface, crypto, flag);

Two of this function's parameters need attention: one is the mediaFormat and the other is the surface. crypto relates to encrypted content and is not used here. Note that flag is a parameter meant for encoding; for decoding it is generally set to 0.

The mediaFormat is how the client tells the decoder certain stream parameters in advance, including width, height, SPS, PPS and so on. For example:


AMediaFormat* videoFormat = AMediaFormat_new();
AMediaFormat_setString(videoFormat, "mime", "video/avc");
AMediaFormat_setInt32(videoFormat, AMEDIAFORMAT_KEY_WIDTH, width);   // video width
AMediaFormat_setInt32(videoFormat, AMEDIAFORMAT_KEY_HEIGHT, height); // video height
AMediaFormat_setBuffer(videoFormat, "csd-0", sps, spsSize);          // SPS
AMediaFormat_setBuffer(videoFormat, "csd-1", pps, ppsSize);          // PPS

I found that if the SPS and PPS are simply placed in the stream ahead of the first I-frame, decoding succeeds even when they are not set in the format. The benefit of setting them in advance is that configure() can check the parameters up front and return failure early if they are not supported.

The surface parameter directly determines the decoder's mode of operation. If we pass a nativeWindow, then once decoding finishes, releasing the AImage via the release method renders it directly onto the surface and it appears on screen. This saves copying the image from GPU to CPU and then back from CPU to GPU, so efficiency is high. If we pass nullptr, we have to obtain the image's address through an API call; the advantage is that we can take the image back for some CPU-side processing to meet our requirements, and then render it ourselves.
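
Here is a sketch of both configure modes, assuming videoFormat was built as shown above; ANativeWindow_fromSurface converts a Java Surface into the native window that the NDK call expects (the JNIEnv and jobject come from the caller):

#include <android/native_window_jni.h>
#include <media/NdkMediaCodec.h>

// Mode 1: render directly onto a Java Surface supplied by the app.
void configureWithSurface(AMediaCodec* codec, AMediaFormat* videoFormat,
                          JNIEnv* env, jobject javaSurface) {
    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);
    // crypto = nullptr (no DRM), flags = 0 (we are decoding).
    AMediaCodec_configure(codec, videoFormat, window, nullptr, 0);
}

// Mode 2: no surface; frames are fetched with AMediaCodec_getOutputBuffer
// and post-processed on the CPU.
void configureForCpuAccess(AMediaCodec* codec, AMediaFormat* videoFormat) {
    AMediaCodec_configure(codec, videoFormat, nullptr, nullptr, 0);
}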

4. Start the decoder

This step is relatively simple: just call the start interface. If configure was not called first, it fails.


AMediaCodec_start(handle);

Data flow

Once started, feeding data in and taking decoded data out begins. As the structure described earlier suggests, handling the data is basically a two-part affair: feeding data revolves around the InputBuffers, and fetching data revolves around the OutputBuffers. As a best practice, we found it works best to handle these two processes on two separate threads, so that they do not affect each other and reduce efficiency.
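
To make the two-thread arrangement concrete, here is a minimal skeleton (a sketch only: feedInputLoop and drainOutputLoop are hypothetical names standing for the feed and fetch loops described in the following two sections):

#include <media/NdkMediaCodec.h>
#include <thread>

// Hypothetical loop bodies; they wrap the InputBuffer and OutputBuffer
// steps described below.
void feedInputLoop(AMediaCodec* codec);
void drainOutputLoop(AMediaCodec* codec);

void runDecoder(AMediaCodec* codec) {
    // One thread feeds the bitstream, the other consumes decoded frames,
    // so a stall on one side does not block the other.
    std::thread feeder(feedInputLoop, codec);
    std::thread drainer(drainOutputLoop, codec);
    feeder.join();
    drainer.join();
}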

1. Feeding data

Feeding data takes three steps. First, get an InputBuffer index. The main purpose of this step is to check whether the InputBuffer queue is full. If it is full, the upstream data should be cached accordingly.

ssize_t idx = AMediaCodec_dequeueInputBuffer(handle, 2000);


Second, get the InputBuffer's address and fill in the data:

uint8_t* buf = AMediaCodec_getInputBuffer(handle, idx, &size);


Third, tell MediaCodec that the buffer has been filled:

AMediaCodec_queueInputBuffer(handle, idx, 0, bufferSize, pts, 0);


I will not go through the specific parameters here; the Android Developer documentation explains them in detail. One question I have here: why must we first get the InputBuffer's address, then fill in the data, then report that it is filled, which takes the two functions getInputBuffer and queueInputBuffer? Wouldn't it be better to replace them with a single function such as SendDataToInputBuffer?
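
Nothing stops us from writing such a wrapper ourselves. Here is a minimal sketch of a hypothetical SendDataToInputBuffer that simply combines the three calls above (the name and error handling are mine, not part of the NDK):

#include <cstring>
#include <media/NdkMediaCodec.h>

// Dequeue a free InputBuffer, copy one access unit into it, queue it back.
// Returns false if no InputBuffer became free within the timeout.
bool SendDataToInputBuffer(AMediaCodec* handle, const uint8_t* data,
                           size_t bufferSize, uint64_t pts) {
    ssize_t idx = AMediaCodec_dequeueInputBuffer(handle, 2000 /* us */);
    if (idx < 0) return false;  // queue full; cache upstream and retry

    size_t capacity = 0;
    uint8_t* buf = AMediaCodec_getInputBuffer(handle, (size_t)idx, &capacity);
    if (buf == nullptr || capacity < bufferSize) return false;

    memcpy(buf, data, bufferSize);
    return AMediaCodec_queueInputBuffer(handle, (size_t)idx, 0, bufferSize,
                                        pts, 0) == AMEDIA_OK;
}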

 

One more thing must be mentioned here: Android hardware decoding only supports bitstreams in AnnexB format, i.e. streams whose NAL units begin with the 00 00 00 01 start code. If the byte stream is in AVCC format, where each NAL unit is prefixed with its length, it needs to be converted in advance.
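
A minimal sketch of that conversion, assuming the common 4-byte big-endian length prefix (the real prefix size is declared in the AVCC extradata and may also be 1 or 2 bytes):

#include <cstdint>
#include <vector>

// Convert one AVCC access unit (4-byte big-endian length before each NAL
// unit) into AnnexB (00 00 00 01 start code before each NAL unit).
std::vector<uint8_t> avccToAnnexB(const uint8_t* in, size_t size) {
    static const uint8_t startCode[4] = {0, 0, 0, 1};
    std::vector<uint8_t> out;
    size_t pos = 0;
    while (pos + 4 <= size) {
        uint32_t nalLen = (uint32_t(in[pos]) << 24) |
                          (uint32_t(in[pos + 1]) << 16) |
                          (uint32_t(in[pos + 2]) << 8) |
                           uint32_t(in[pos + 3]);
        pos += 4;
        if (nalLen > size - pos) break;  // malformed input; stop
        out.insert(out.end(), startCode, startCode + 4);
        out.insert(out.end(), in + pos, in + pos + nalLen);
        pos += nalLen;
    }
    return out;
}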

2. Fetching data

Fetching data is a little more involved than feeding it. The first step is to get an index, to see whether there is a decoded frame:

ssize_t idx = AMediaCodec_dequeueOutputBuffer(handle, &info, 2000);


If there is a frame, fetch it. If surface was nullptr at configure time, you can obtain the frame data's address through this interface:

uint8_t* data = AMediaCodec_getOutputBuffer(handle, idx, &outsize);


If a surface was passed at configure time, the image can be rendered directly onto that Surface by releasing the buffer through this interface:

AMediaCodec_releaseOutputBuffer(handle, idx, bRender);


Note that dequeueOutputBuffer may also return certain negative values, each of which has a specific meaning. For example, AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED means the output format has changed. We need to watch for this information and update the decoder's output format in time.
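
Here is a sketch of one pass of the fetch loop with those status codes handled (consumeFrame is a hypothetical callback for the surface-less mode):

#include <media/NdkMediaCodec.h>

void drainOutputOnce(AMediaCodec* handle, bool haveSurface) {
    AMediaCodecBufferInfo info;
    ssize_t idx = AMediaCodec_dequeueOutputBuffer(handle, &info, 2000 /* us */);
    if (idx >= 0) {
        if (!haveSurface) {
            size_t outsize = 0;
            uint8_t* data = AMediaCodec_getOutputBuffer(handle, (size_t)idx, &outsize);
            // consumeFrame(data + info.offset, info.size, info.presentationTimeUs);
        }
        // render == true sends the frame to the configured Surface.
        AMediaCodec_releaseOutputBuffer(handle, (size_t)idx, haveSurface);
    } else if (idx == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
        AMediaFormat* fmt = AMediaCodec_getOutputFormat(handle);
        // Re-read width / height / color format from fmt here.
        AMediaFormat_delete(fmt);
    } else if (idx == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
        // The buffer set changed; nothing to do when indexing buffers as above.
    } else {
        // AMEDIACODEC_INFO_TRY_AGAIN_LATER: no decoded frame ready yet.
    }
}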

 

Business routes for hardware decoding

1. Replacing software decoding with hardware decoding

The easiest way is to fill Surface with null at configure time and then copy out the decoded data. The advantage is obvious: the logic is essentially the same as with the previous software decoder, so little has to change on the outside; the previous VideoProcess can still be used, no cooperation from the renderer is needed, and the engine stays well encapsulated. The disadvantage is one extra memory copy, from the decoder's memory into our own.

2. Using the decoder's buffer

If the business is optimized to reduce copies, that is the second route: use the decoder's output buffer as the storage. That is, after dequeueOutputBuffer returns an output buffer index, we are in no hurry to copy the image out; we wait until render time, call getOutputBuffer to obtain the image pointer, and then call glTexImage2D to generate the GPU texture from it.
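
As a sketch of that upload, assuming the decoder reports a planar YUV420 output and, for brevity, uploading only the Y plane as a luminance texture (real code must check the reported color format and upload the chroma planes as well):

#include <GLES2/gl2.h>
#include <cstdint>

// Upload the Y plane of a decoded YUV420 frame into a GL texture.
// 'data' is the pointer returned by AMediaCodec_getOutputBuffer at render time.
GLuint uploadLumaTexture(const uint8_t* data, int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
    return tex;
}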

3. Rendering directly with the GPU image

If we pass a Surface to configure, we can render directly through the GPU, eliminating the GPU <-> CPU memory copies entirely. Pass the surface at configure time, then call dequeueOutputBuffer to get the output buffer index; at render time, directly call releaseOutputBuffer(handle, idx, true), and the decoder renders the image straight onto the surface.

Although this is highly efficient, the drawbacks are also obvious. First, no processing can be done on the image. Second, this approach depends on the decoder's buffer, which can cause problems: if the decoder is destroyed early, the cached contents are gone; and business logic that demands more from the decoder buffer (such as reverse playback) simply cannot be done.

4. Using the GPU image with a SurfaceTexture, feeding an OpenGL-based rendering pipeline

For route 3's problems, the Android system also took this into account and provides us with a compromise plan. We can build our own OpenGL environment at startup, then create a Texture, build a SurfaceTexture on top of that Texture, take the Surface out of it, and pass that to configure. This way, MediaCodec's release renders onto the SurfaceTexture. We then call its update method to synchronize the image into our OpenGL Texture. After that, all kinds of post-processing can be applied, followed by swapBuffers to display, and so on.

Handled this way, almost all business logic can be satisfied. But there is one small problem: fluency suffers. Specifically, when a frame has been output to the surface and OpenGL has not yet consumed it, the decoder's output is blocked. In other words, outputBuffer and OpenGL's consumption of the surface must execute serially; if they ran in parallel, frames would overwrite each other.

So we can make one small adjustment: have OpenGL take a copy of the Texture (a GPU->GPU copy, copying the texture). Then OpenGL no longer blocks the decoder's output, but the copy brings some performance cost.
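
A sketch of such a GPU-side copy using a framebuffer object (this assumes ordinary GL_TEXTURE_2D textures; copying out of a SurfaceTexture's GL_TEXTURE_EXTERNAL_OES texture instead requires drawing a textured quad into the FBO):

#include <GLES2/gl2.h>

// Copy srcTex into dstTex entirely on the GPU through a framebuffer object.
// Both textures must be allocated GL_TEXTURE_2D objects of the same size.
void copyTexture(GLuint srcTex, GLuint dstTex, int width, int height) {
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, srcTex, 0);
    // Read back from the FBO (i.e. from srcTex) into the bound dstTex.
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}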

Rotating multiple outputs to increase fluency

Android 6.0 (API 23) added an interface, setOutputSurface. As the name implies, it allows the output Surface to be supplied dynamically, which solves the problem above perfectly. Specifically, we can create multiple Textures in advance; at any time, the OutputBuffer can be output to whichever Texture is free, which is then marked as holding data; once OpenGL has consumed the image, the Texture is returned to the free pool. This effectively establishes a buffer pool between the OutputBuffer and OpenGL's texture consumption, meeting the need for multithreaded parallelism.
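
A sketch of that rotation through the NDK counterpart, AMediaCodec_setOutputSurface (the free-surface bookkeeping is my own; each ANativeWindow would be obtained from a Surface built on one of the pre-created textures, and real code must synchronize access with the GL thread):

#include <deque>
#include <media/NdkMediaCodec.h>

// Surfaces whose backing textures are currently idle; refilled by the GL
// thread once it has consumed a texture (synchronization omitted here).
std::deque<ANativeWindow*> freeSurfaces;

// Before releasing the next decoded frame, point the decoder at a free surface.
bool renderToNextFreeSurface(AMediaCodec* handle, ssize_t idx) {
    if (freeSurfaces.empty()) return false;  // all textures still in use
    ANativeWindow* target = freeSurfaces.front();
    freeSurfaces.pop_front();
    AMediaCodec_setOutputSurface(handle, target);  // requires Android 6.0+
    AMediaCodec_releaseOutputBuffer(handle, (size_t)idx, true);
    return true;  // after OpenGL consumes the texture, push target back
}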

The obvious drawback is that this requires Android 6.0. But looking at the Android distribution dashboard today (developer.android.com/about/dashb...), most phones are on Android 6.0 or above.

Finally

Google's official documentation on Android MediaCodec is very detailed, and there should be many hidden properties still waiting to be discovered. It is worth consulting the official reference documentation:

Java documentation: developer.android.com/reference/a...

NDK documentation: developer.android.com/ndk/referen...

There is also sample code among the Android Samples, available for reference:

github.com/googlesampl…

FFmpeg's wrapper around MediaCodec is also relevant; see in particular the file libavcodec/mediacodecdec.c.

Origin: blog.csdn.net/u010029439/article/details/102519884