Android MediaCodec: getting a Bitmap from the current decoded frame

The MediaCodec decoding process itself will not be analyzed here.
This post mainly looks at what to do with the data once an output buffer is available.

The output buffer exposes the frame in two forms (only one can be used for a given buffer). One is the raw data stream, a byte[] in YUV format, but building a Bitmap requires RGB data.
So take the other form instead, the data stored in an Image:

image = mediaCodec.getOutputImage(outIndex);

After getting the Image, convert it to NV21 and wrap it in a YuvImage:

YuvImage yuvImage = new YuvImage(YUV_420_888toNV21(image), ImageFormat.NV21, width, height, null);

private static byte[] YUV_420_888toNV21(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();
    byte[] nv21 = new byte[ySize + uSize + vSize];
    // NV21 stores chroma as interleaved V/U pairs, so V is copied before U
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);
    return nv21;
}
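The copy order above reflects the NV21 memory layout: a full-resolution Y plane followed by a quarter-resolution chroma plane in which V and U bytes alternate. (Note this simple three-buffer copy only produces valid NV21 on devices where the decoder's U and V planes are already interleaved with a pixel stride of 2 and no row padding; a fully general converter would have to honor Plane.getRowStride() and getPixelStride().) As a stand-alone illustration of the layout, using no Android APIs, the hypothetical sketch below builds a tiny NV21 buffer by hand and shows where the Y, V, and U samples of one pixel land:

```java
// Hypothetical demo of the NV21 memory layout (plain Java, no Android APIs).
public class Nv21LayoutDemo {
    // Index of the luma byte for pixel (x, y) in an NV21 buffer.
    static int yIndex(int x, int y, int width) {
        return y * width + x;
    }

    // Index of the V byte for pixel (x, y); the matching U byte is at +1.
    static int vIndex(int x, int y, int width, int height) {
        int chromaRow = (y / 2) * width; // one chroma row covers two pixel rows
        int chromaCol = (x / 2) * 2;     // V and U alternate: V U V U ...
        return width * height + chromaRow + chromaCol;
    }

    public static void main(String[] args) {
        int width = 4, height = 4;
        byte[] nv21 = new byte[width * height * 3 / 2]; // Y plane + VU plane

        // Write one sample: Y=76 at pixel (2,1), its chroma pair V=255, U=85.
        nv21[yIndex(2, 1, width)] = (byte) 76;
        nv21[vIndex(2, 1, width, height)] = (byte) 255;
        nv21[vIndex(2, 1, width, height) + 1] = (byte) 85;

        System.out.println(nv21[yIndex(2, 1, width)] & 0xFF);             // 76
        System.out.println(nv21[vIndex(2, 1, width, height)] & 0xFF);     // 255
        System.out.println(nv21[vIndex(2, 1, width, height) + 1] & 0xFF); // 85
    }
}
```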
ByteArrayOutputStream stream = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 80, stream);
bitmap = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
try {
    stream.close();
} catch (IOException e) {
    e.printStackTrace();
}

This completes the YUV -> Bitmap conversion.
PS: You can use this code as-is; if you want to understand the reasoning behind it, feel free to message me privately.
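For reference, compressToJpeg performs a YUV-to-RGB conversion internally before encoding. A hedged sketch of the underlying BT.601 full-range math in plain Java (my own illustration, not Android's actual implementation):

```java
// Illustrative BT.601 full-range YUV -> RGB conversion for a single pixel.
public class YuvToRgb {
    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    // y, u, v in [0, 255]; u and v are centered at 128.
    static int[] toRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[] { clamp(r), clamp(g), clamp(b) };
    }

    public static void main(String[] args) {
        // Pure red encodes to roughly Y=76, U=85, V=255 in BT.601 full range.
        int[] rgb = toRgb(76, 85, 255);
        System.out.println(rgb[0] + "," + rgb[1] + "," + rgb[2]); // prints 254,0,0
    }
}
```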


Source: blog.csdn.net/mozushixin_1/article/details/91046306