Android tech sharing | Screen and audio capture in one line of code

I previously published One line of code to implement Android screen capture and encoding, which covered capturing the screen, encoding it, and packaging the output: a simple call handles the MediaProjection permission request, H.264 hardware encoding, and error handling. This article introduces the new features, which still take just one line of code to capture both the screen and audio.

One line of code for screen capture and encoding

The previous article already showed how to implement Android screen capture and encoding with one line of code; here is a brief recap:

ScreenShareKit.init(this).onH264({
    buffer, isKeyFrame, w, h, ts ->
    // encoded H.264 screen data
}).onStart({
    // the user approved capture; data collection begins
}).start()

This single call handles the MediaProjection permission request, H.264 hardware encoding, and error handling. An onStart callback has also been added; it fires once the user approves screen capture, making it easier to hook in business logic.

Added global screen rotation monitoring

In earlier versions, rotating the screen would distort the picture and mix up the width and height. This update adds rotation monitoring by listening to IRotationWatcher via reflection. IRotationWatcher is an AIDL interface that defines a watcher for screen rotation events. The ScreenShare library obtains access to it through reflection and registers an IRotationWatcher.Stub instance to observe rotations. When the screen rotates, the Stub receives a callback, and the library resets the encoder and swaps the width and height according to the new rotation so the output orientation stays correct. Note that this tracks rotation of the screen content itself (including other apps' layouts), not merely the physical orientation of the device.
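The width/height handling on rotation boils down to swapping the encoder dimensions whenever the screen moves between portrait and landscape. A minimal sketch of that logic (the function name is hypothetical, not the library's actual code; rotation values follow the framework's Surface.ROTATION_0..ROTATION_270 convention, i.e. 0..3):

```kotlin
// Given the rotation (0..3, as reported by IRotationWatcher) and the
// device's natural portrait capture size, return the width/height the
// encoder should be reconfigured with.
fun encoderSizeFor(rotation: Int, naturalWidth: Int, naturalHeight: Int): Pair<Int, Int> =
    if (rotation % 2 == 0) {
        naturalWidth to naturalHeight   // ROTATION_0 / ROTATION_180: keep as-is
    } else {
        naturalHeight to naturalWidth   // ROTATION_90 / ROTATION_270: swap
    }
```

The real implementation additionally tears down and recreates the MediaCodec instance, since an encoder's size cannot be changed after configuration.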

Reference: scrcpy

Added an RGBA data callback

Android devices vary widely in CPU performance, so it is hard to guarantee that hardware-encoded output is sharp and smooth on every device. This update therefore adds an ImageReader-based capture path, the same class many open-source screenshot libraries use. ImageReader is an Android framework class for receiving image data; through its setOnImageAvailableListener callback it delivers the screen's changes frame by frame as RGBA data. The library creates an ImageReader with the desired width, height, and pixel format, and hands you the RGBA data of each frame in the callback for further processing. This path uses somewhat more memory than hardware encoding, but it is well balanced: on both powerful and low-end CPUs it produces clear, smooth data.

ScreenShareKit.init(this).config(screenDataType = EncodeBuilder.SCREEN_DATA_TYPE.RGBA).onRGBA(object : RGBACallBack {
    override fun onRGBA(
        rgba: ByteArray,
        width: Int,
        height: Int,
        stride: Int,
        rotation: Int,
        rotationChanged: Boolean
    ) {
        // captured RGBA data
    }
}).onStart({
    // the user approved capture; data collection begins
}).start()
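The stride parameter in the callback matters: ImageReader buffers are often row-padded, so the row stride can be larger than width * 4, and each row must be copied without its padding before the data is fed to an encoder or a Bitmap. A small sketch of that de-padding step, assuming stride is the row stride in bytes (as Image.Plane.getRowStride() reports); the helper name is illustrative, not part of the library:

```kotlin
// Remove per-row padding from an RGBA buffer: the source has `stride`
// bytes per row, but only the first width * 4 bytes of each row are
// real pixel data. Returns a tightly packed width * 4 * height buffer.
fun stripRowPadding(rgba: ByteArray, width: Int, height: Int, stride: Int): ByteArray {
    val rowBytes = width * 4
    if (stride == rowBytes) return rgba // already tightly packed
    val packed = ByteArray(rowBytes * height)
    for (row in 0 until height) {
        System.arraycopy(rgba, row * stride, packed, row * rowBytes, rowBytes)
    }
    return packed
}
```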

Added a playback (internal) audio capture callback

Android 10 introduced the AudioPlaybackCapture API, which lets an app copy the audio that other apps are playing. It is analogous to screen capture, but the capture target is audio. The main use case is live-streaming apps that want to capture the audio a game is playing.

AudioPlaybackCaptureConfiguration config =
        new AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
                .build();

AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setSampleRate(sampleRate)
        .setChannelMask(channelConfig)
        .build();

audioRecord = new AudioRecord.Builder()
        .setAudioFormat(audioFormat)
        .setBufferSizeInBytes(bufferSizeInBytes)
        .setAudioPlaybackCaptureConfig(config)
        .build();

**NOTE:** Whether an app's audio can be captured also depends on that app's targetSdkVersion.

  • Playback capture requires Android 10 (API level 29) or higher.
  • By default, apps targeting Android 9.0 (API level 28) or lower do not allow their playback audio to be captured. To opt in, add android:allowAudioPlaybackCapture="true" to the app's AndroidManifest.xml.
  • By default, apps targeting Android 10 (API level 29) or higher allow other apps to capture their playback audio. To opt out, add android:allowAudioPlaybackCapture="false" to the app's AndroidManifest.xml.
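Concretely, an app that does not want its playback audio captured by others would declare the attribute on the `<application>` element of its manifest (the package name here is a placeholder):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Opt out of playback capture by other apps (Android 10+) -->
    <application android:allowAudioPlaybackCapture="false">
        ...
    </application>
</manifest>
```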

Usage:

ScreenShareKit.init(this).config(audioCapture = true).onAudio(object : AudioCallBack {
    override fun onAudio(buffer: ByteArray?, ts: Long) {
        // captured app audio data (PCM)
    }
}).onStart({
    // the user approved capture; data collection begins
}).start()

Reference: the official documentation on capturing playback audio.

Added a method to mute the captured playback audio

If you want to mute the audio during capture without interrupting the session, call the following method:

ScreenShareKit.setMicrophoneMute(true)

When set to true, the callback keeps firing but delivers silent (empty) buffers, which achieves the mute effect; setting it back to false resumes normal audio capture.
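This mute-without-interruption behavior can be implemented by keeping the capture loop running and zero-filling the PCM buffer before it reaches the callback (16-bit PCM silence is all zero bytes). A minimal sketch, where the PcmMuter class and its names are hypothetical, not the library's actual code:

```kotlin
// Hypothetical sketch: the capture loop keeps delivering buffers, but
// while muted each buffer is replaced with silence of the same size.
class PcmMuter {
    @Volatile
    var muted: Boolean = false

    // Returns the buffer to hand to onAudio(): the original data when
    // unmuted, or an all-zero (silent) buffer of the same size when muted.
    fun process(buffer: ByteArray): ByteArray =
        if (muted) ByteArray(buffer.size) else buffer
}
```

Keeping the loop alive (rather than stopping the AudioRecord) is what lets unmuting resume instantly with no gap in the stream's timing.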

Those are the useful new features covered in this article. With this library, screen and audio capture takes a single line of code, with support for global screen rotation monitoring, an RGBA data callback, and playback audio capture. The library is very convenient to use; project address: ScreenShare



Origin blog.csdn.net/anyRTC/article/details/130127408