iOS ReplayKit: a strategy for handling the 50 MB memory limit


Screen recording on iOS used to be a pain, but since Apple shipped ReplayKit it has become much more convenient.

On the business side it enables game live streaming, screen sharing, remote assistance, and more.

There are already plenty of such apps in the App Store, and they fall mainly into two categories:

  1. Live broadcasting of the screen to a remote endpoint
  2. Recording the screen and saving it locally

In a concrete implementation, ReplayKit 2 runs the broadcast inside an app extension (a separate child process), and the system imposes a 50 MB memory limit on that process. Once it exceeds 50 MB, the screen-recording process is killed.

Because of this limitation, most solutions in the industry cap the video quality at 720p or the frame rate at 30 fps; Tencent's live streaming SDK is one example.


To work around it, we have to dance in shackles.

Let's first take a look at what the child process is doing:

@implementation SampleHandler

- (void)broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo {
    // User has requested to start the broadcast. Setup info from the UI extension can be supplied but optional.

}

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo:
            // Handle video sample buffer
            break;
        case RPSampleBufferTypeAudioApp:
            // Handle audio sample buffer for app audio
            break;
        case RPSampleBufferTypeAudioMic:
            // Handle audio sample buffer for mic audio
            break;

        default:
            break;
    }
}

@end


There are only two important functions:

  1. broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo

    Child process start callback

  2. processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType

    Data callback for video/audio

As the signature shows, the callback hands us a CMSampleBufferRef, which by itself occupies very little memory.

However, once we convert it into bitmap data, and especially once we turn that bitmap into a raw binary stream, the memory cost becomes significant.
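For a rough sense of scale: a single 1242 × 2688 BGRA bitmap (an iPhone XS Max screen, used here purely as an illustrative example) is 1242 × 2688 × 4 bytes ≈ 13 MB, so keeping even a handful of such frames or their NSData copies alive at once already consumes a large share of a 50 MB budget.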


Therefore, to keep memory consumption under control, our idea is to ship the data from the child process to the main process, and let the main process perform the image processing and everything else.

This brings us to **inter-process communication**.

The IPC mechanisms available between the child process and the main process that could meet our requirements are:

  1. CFMachPort

    No longer usable for this purpose after iOS 7.

  2. CFNotificationCenterRef

    Can only deliver simple string data (essentially a notification name).

    Sending complex data requires a lot of extra assembly work. The third-party wrapper library MMWormhole takes this route: it archives the data to a shared file, posts the file identifier across processes, and the receiver reads the file back. This is relatively inefficient.

  3. Local Socket

    Establish a local socket between the two processes and communicate over TCP.

    Flexible to use and efficient.

    We use GCDAsyncSocket, which can stream NSData directly.


For the IPC transport, we finally settled on the local-socket implementation; a minimal sketch of wiring it up follows.
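As a sketch only (the port number, queue name, and error handling are our own illustrative assumptions, not values from the original project), the broadcast extension can connect to a socket that the host app is already listening on via GCDAsyncSocket:

// Sketch: wiring up the local socket in the broadcast extension.
// GCDAsyncSocket comes from the CocoaAsyncSocket library; port 8999 is an
// arbitrary example value, and the host app is assumed to be listening on it
// (e.g. via -acceptOnPort:error:) before the broadcast starts.
#import <CocoaAsyncSocket/GCDAsyncSocket.h>

@interface SampleHandler () <GCDAsyncSocketDelegate>
@property (nonatomic, strong) GCDAsyncSocket *socket;
@end

@implementation SampleHandler

- (void)broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo {
    dispatch_queue_t queue = dispatch_queue_create("broadcast.socket", DISPATCH_QUEUE_SERIAL);
    self.socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:queue];

    NSError *error = nil;
    if (![self.socket connectToHost:@"127.0.0.1" onPort:8999 error:&error]) {
        NSLog(@"local socket connect failed: %@", error);
    }
}

@end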

Next we need to consider how to assemble the data.

As the system API shows, the data type the callback hands us is CMSampleBufferRef.

It carries one frame of video data and is a container structure that stores the media sample and its attributes. Its components are:

CMTime: a 64-bit value with a 32-bit timescale; the time format used for media.

CMVideoFormatDesc: the video format, including width/height, color space, encoding format, SPS and PPS.

CVPixelBuffer: the uncompressed pixel data, with its pixel format and width/height.

CMBlockBuffer: the compressed image data.

CMSampleBuffer: holds one or more compressed or uncompressed media samples.

It would be ideal if we could send the CMSampleBufferRef to the main process as-is, but without decoding it there is currently no way to convert it into a format that can be transmitted between processes.
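As a quick illustration of these components (just a sketch; nothing here is specific to the project in this post), the relevant fields of the buffer ReplayKit delivers can be inspected like this:

// Sketch: reading the main parts of a CMSampleBufferRef described above.
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

static void InspectSampleBuffer(CMSampleBufferRef sampleBuffer) {
    // CMTime: 64-bit value / 32-bit timescale.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // CMVideoFormatDesc: dimensions, codec and related attributes.
    CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions((CMVideoFormatDescriptionRef)desc);

    // CVPixelBuffer: the uncompressed pixels that ReplayKit hands us.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType format = pixelBuffer ? CVPixelBufferGetPixelFormatType(pixelBuffer) : 0;

    NSLog(@"pts=%.3f size=%dx%d pixelFormat=%u",
          CMTimeGetSeconds(pts), dims.width, dims.height, (unsigned int)format);
}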

So the next problem to solve is how to decode efficiently and with a light memory footprint.

First, converting directly to a bitmap is not feasible: at large screen resolutions, every frame eats a lot of memory.

So we need an intermediate data structure for the transfer, and it has to satisfy the following conditions:

  1. It can capture the image information from the CMSampleBufferRef while staying lighter than the raw imageData
  2. After being transferred from the child process to the main process, it can be restored to an image, which can then be rotated, cropped, compressed and so on

Of course, we have plenty of decoding options: hardware or software decoding, YUV or RGB.

But whichever we choose, we have to decode first.

We also considered compressing the image directly on the CMSampleBufferRef itself, but eventually gave that up.

Based on all of the above, we ended up following the approach used by NetEase Yunxin's screen-sharing implementation: YUV decoding plus its NTESI420Frame intermediate structure, which carries the contents of the CMSampleBufferRef much like a carrier wave.

The conversion source is as follows:

+ (NTESI420Frame *)pixelBufferToI420:(CVImageBufferRef)pixelBuffer
                            withCrop:(float)cropRatio
                          targetSize:(CGSize)size
                      andOrientation:(NTESVideoPackOrientation)orientation
{
    if (pixelBuffer == NULL) {
        return nil;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

    size_t bufferWidth = 0;
    size_t bufferHeight = 0;
    size_t rowSize = 0;
    uint8_t *pixel = NULL;

    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int basePlane = 0;
        pixel = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, basePlane);
        bufferHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, basePlane);
        bufferWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, basePlane);
        rowSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, basePlane);
    } else {
        pixel = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
        bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
        rowSize = CVPixelBufferGetBytesPerRow(pixelBuffer);
    }

    NTESI420Frame *convertedI420Frame = [[NTESI420Frame alloc] initWithWidth:(int)bufferWidth height:(int)bufferHeight];

    int error = -1;

    if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        error = libyuv::ARGBToI420(
            pixel, (int)rowSize,
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth, (int)bufferHeight);
    } else if (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange == sourcePixelFormat || kCVPixelFormatType_420YpCbCr8BiPlanarFullRange == sourcePixelFormat) {
        error = libyuv::NV12ToI420(
            pixel,
            (int)rowSize,
            (const uint8 *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1),
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth,
            (int)bufferHeight);
    }

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    } else {
        rowSize = [convertedI420Frame strideOfPlane:NTESI420FramePlaneY];
        pixel = convertedI420Frame.data;
    }

    CMVideoDimensions inputDimens = { (int32_t)bufferWidth, (int32_t)bufferHeight };
    CMVideoDimensions outputDimens = [NTESVideoUtil outputVideoDimensEnhanced:inputDimens crop:cropRatio];
//        CMVideoDimensions outputDimens = {(int32_t)738,(int32_t)1312};
    CMVideoDimensions sizeDimens = { (int32_t)size.width, (int32_t)size.height };
    CMVideoDimensions targetDimens = [NTESVideoUtil outputVideoDimensEnhanced:sizeDimens crop:cropRatio];
    int cropX = (inputDimens.width - outputDimens.width) / 2;
    int cropY = (inputDimens.height - outputDimens.height) / 2;

    if (cropX % 2) {
        cropX += 1;
    }

    if (cropY % 2) {
        cropY += 1;
    }
    float scale = targetDimens.width * 1.0 / outputDimens.width;

    NTESI420Frame *croppedI420Frame = [[NTESI420Frame alloc] initWithWidth:outputDimens.width height:outputDimens.height];

    error = libyuv::ConvertToI420(pixel, bufferHeight * rowSize * 1.5,
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneY],
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneU],
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneV],
                                  cropX, cropY,
                                  (int)bufferWidth, (int)bufferHeight,
                                  croppedI420Frame.width, croppedI420Frame.height,
                                  libyuv::kRotate0, libyuv::FOURCC_I420);

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    }

    NTESI420Frame *i420Frame;

    if (scale == 1.0) {
        i420Frame = croppedI420Frame;
    } else {
        int width = outputDimens.width * scale;
        width &= 0xFFFFFFFE;
        int height = outputDimens.height * scale;
        height &= 0xFFFFFFFE;

        i420Frame = [[NTESI420Frame alloc] initWithWidth:width height:height];

        libyuv::I420Scale([croppedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneY],
                          [croppedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneU],
                          [croppedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneV],
                          croppedI420Frame.width, croppedI420Frame.height,
                          [i420Frame dataOfPlane:NTESI420FramePlaneY], (int)[i420Frame strideOfPlane:NTESI420FramePlaneY],
                          [i420Frame dataOfPlane:NTESI420FramePlaneU], (int)[i420Frame strideOfPlane:NTESI420FramePlaneU],
                          [i420Frame dataOfPlane:NTESI420FramePlaneV], (int)[i420Frame strideOfPlane:NTESI420FramePlaneV],
                          i420Frame.width, i420Frame.height,
                          libyuv::kFilterBilinear);
    }

    int dstWidth, dstHeight;
    libyuv::RotationModeEnum rotateMode = [NTESYUVConverter rotateMode:orientation];

    if (rotateMode != libyuv::kRotateNone) {
        if (rotateMode == libyuv::kRotate270 || rotateMode == libyuv::kRotate90) {
            dstWidth = i420Frame.height;
            dstHeight = i420Frame.width;
        } else {
            dstWidth = i420Frame.width;
            dstHeight = i420Frame.height;
        }
        NTESI420Frame *rotatedI420Frame = [[NTESI420Frame alloc]initWithWidth:dstWidth height:dstHeight];

        libyuv::I420Rotate([i420Frame dataOfPlane:NTESI420FramePlaneY], (int)[i420Frame strideOfPlane:NTESI420FramePlaneY],
                           [i420Frame dataOfPlane:NTESI420FramePlaneU], (int)[i420Frame strideOfPlane:NTESI420FramePlaneU],
                           [i420Frame dataOfPlane:NTESI420FramePlaneV], (int)[i420Frame strideOfPlane:NTESI420FramePlaneV],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneY],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneU],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneV],
                           i420Frame.width, i420Frame.height,
                           rotateMode);
        i420Frame = rotatedI420Frame;
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return i420Frame;
}


This function mainly performs YUV decoding of the raw image data, and then crops, scales, and rotates the decoded frame.

As the sheer amount of code suggests, much of it is redundant for our purposes. Our goal is to do as little processing as possible in the child process and keep its memory usage down, so we keep only the decoding part and strip out everything else:

+ (NTESI420Frame *)pixelBufferToI420:(CVImageBufferRef)pixelBuffer {
    if (pixelBuffer == NULL) {
        return nil;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

    size_t bufferWidth = 0;
    size_t bufferHeight = 0;
    size_t rowSize = 0;
    uint8_t *pixel = NULL;

    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int basePlane = 0;
        pixel = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, basePlane);
        bufferHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, basePlane);
        bufferWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, basePlane);
        rowSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, basePlane);
    } else {
        pixel = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
        bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
        rowSize = CVPixelBufferGetBytesPerRow(pixelBuffer);
    }
    NTESI420Frame *convertedI420Frame = [[NTESI420Frame alloc] initWithWidth:(int)bufferWidth height:(int)bufferHeight];

    int error = -1;
    if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        error = libyuv::ARGBToI420(
            pixel, (int)rowSize,
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth, (int)bufferHeight);
    } else if (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange == sourcePixelFormat || kCVPixelFormatType_420YpCbCr8BiPlanarFullRange == sourcePixelFormat) {
        error = libyuv::NV12ToI420(
            pixel,
            (int)rowSize,
            (const uint8 *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1),
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth,
            (int)bufferHeight);
    }

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return convertedI420Frame;
}


Now that we have the intermediate data carrier, the next step is to work out how to transmit it.

Before socket communication can happen, the structure obtained above has to be serialized to binary. NetEase's source looks like this:

//NTESI420Frame.m

- (NSData *)bytes {
    int structSize = sizeof(self.width) + sizeof(self.height) + sizeof(self.i420DataLength) + sizeof(self.timetag);

    void *buffer = malloc(structSize + self.i420DataLength);

    memset(buffer, 0, structSize + self.i420DataLength);
    int offset = 0;

    memcpy(buffer + offset, &_width, sizeof(_width));
    offset += sizeof(_width);

    memcpy(buffer + offset, &_height, sizeof(_height));
    offset += sizeof(_height);

    memcpy(buffer + offset, &_i420DataLength, sizeof(_i420DataLength));
    offset += sizeof(_i420DataLength);

    memcpy(buffer + offset, &_timetag, sizeof(_timetag));
    offset += sizeof(_timetag);

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneY], [self strideOfPlane:NTESI420FramePlaneY] * self.height);
    offset += [self strideOfPlane:NTESI420FramePlaneY] * self.height;

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneU], [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
    offset += [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2;

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneV], [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
    offset += [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2;
    NSData *data = [NSData dataWithBytes:buffer length:offset];
    free(buffer);
    return data;
}


Looking at the function itself, there is nothing wrong with it: it packs everything contained in the structure into one NSData binary stream, which can then be sent as a single socket frame.

But don't forget that all of this runs in the child process. On devices with very high resolutions and large screens, once a frame carries a lot of visual detail and the CPU cannot process and release these temporary buffers in time, memory can still spike past 50 MB and crash the recording process.

It is like a river: too much water, too slow a current, or too narrow a channel can all cause the banks to burst.

So our work can focus on the following three directions:

  1. Reduce the flow of water

    a. Use NTESI420Frame to carry the image information instead of the raw bitmap byte stream

    b. Use fewer temporary variables

    c. Split the data: break large payloads into smaller chunks and process them piece by piece

  2. Speed up the current

    a. Process data faster in the child process; this builds on "reduce the flow": the smaller the data, the faster it is handled

    b. Speed up the IPC transfer: use the local socket rather than CFNotificationCenterRef

    c. Process data concurrently

    d. Transmit data over multiple channels

  3. Widen the channel

    Since the 50 MB cap is imposed by the system, there is nothing we can do on this front.

Based on the above, we optimized the bytes method of NTESI420Frame as follows:

- (void)getBytesQueue:(void (^)(NSData *data,NSInteger index))complete {
    int offset = 0;
    {
        int structSize = sizeof(self.width) + sizeof(self.height) + sizeof(self.i420DataLength) + sizeof(self.timetag);

        void *buffer = malloc(structSize + self.i420DataLength);

        memset(buffer, 0, structSize + self.i420DataLength);

        memcpy(buffer + offset, &_width, sizeof(_width));
        offset += sizeof(_width);

        memcpy(buffer + offset, &_height, sizeof(_height));
        offset += sizeof(_height);

        memcpy(buffer + offset, &_i420DataLength, sizeof(_i420DataLength));
        offset += sizeof(_i420DataLength);

        memcpy(buffer + offset, &_timetag, sizeof(_timetag));
        offset += sizeof(_timetag);
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data,0);
        }
        free(buffer);
        data = NULL;
    }

    {
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneY] * self.height);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneY] * self.height);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneY], [self strideOfPlane:NTESI420FramePlaneY] * self.height);
        offset += [self strideOfPlane:NTESI420FramePlaneY] * self.height;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data,0);
        }
        free(buffer);
        data = NULL;
    }

    {
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneU], [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        offset += [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data,1);
        }
        free(buffer);
        data = NULL;
    }

    {
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneV], [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        offset += [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data,2);
        }
        free(buffer);
        data = NULL;
    }
}


Compared with the previous version, this function splits one big block of data into four parts:

  1. The frame header information
  2. The Y plane
  3. The U plane
  4. The V plane

As soon as one part is converted it is sent, which keeps each payload small, speeds up processing, lets the temporary buffers be released as early as possible, and keeps memory usage hovering around a steady average.


Now the data is ready. How do we organize it over the socket?

Because of the steps above, a single picture has been split into four parts:

  1. The frame header information
  2. The Y plane
  3. The U plane
  4. The V plane

We send each part from the child process to the main process separately, and the main process only proceeds once it has received a complete picture.

Although the parts are sent separately, they still have to form one complete frame within the socket stream, so that the main process can tell it has received an entire picture.

Therefore, after the four parts have been sent, we finish by sending a data frame similar to an HTTP header, telling the main process that one complete picture has been delivered.

- (void)sendVideoBufferToHostApp:(CMSampleBufferRef)sampleBuffer {
    if (!self.socket) {
        return;
    }
    if (self.frameCount > 0) {
        // Only process one frame at a time
        return;
    }
    long curMem = [self getCurUsedMemory];
    NSLog(@"curMem:%@", @(curMem / 1024.0 / 1024.0));
    if (evenlyMem > 0
        && ((curMem - evenlyMem) > (5 * 1024 * 1024)
            || curMem > 45 * 1024 * 1024)) {
        // If memory has spiked by more than 5 MB over the baseline, or exceeds 45 MB in total, skip this frame
        return;
    }
    self.frameCount++;

    CFRetain(sampleBuffer);
    dispatch_async(self.videoQueue, ^{ // queue optimal
        @autoreleasepool {
            // To data
            NTESI420Frame *videoFrame = [NTESYUVConverter pixelBufferToI420:CMSampleBufferGetImageBuffer(sampleBuffer)];
            CFRelease(sampleBuffer);

            // To Host App
            if (videoFrame) {
                __block NSUInteger length = 0;
                [videoFrame getBytesQueue:^(NSData *data, NSInteger index) {
                        length += data.length;
                        [self.socket writeData:data withTimeout:5 tag:0];
                }];
                @autoreleasepool {
                    NSData *headerData = [NTESSocketPacket packetWithBufferLength:length];
                    [self.socket writeData:headerData withTimeout:5 tag:0];
                }
            }
        };
        if (self->evenlyMem <= 0) {
            self->evenlyMem = [self getCurUsedMemory];
            NSLog(@"平均内存:%@", @(self->evenlyMem));
        }
        self.frameCount--;
    });
}
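The getCurUsedMemory helper used above is not shown in the post. A common way to read an extension's footprint (a sketch of one possible implementation, not necessarily the author's exact code) is task_info with the TASK_VM_INFO flavor and its phys_footprint field, which is the figure the system weighs against the 50 MB cap:

// Sketch: measuring the extension's current memory footprint via Mach APIs.
#import <mach/mach.h>

- (long)getCurUsedMemory {
    task_vm_info_data_t vmInfo;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    kern_return_t result = task_info(mach_task_self(), TASK_VM_INFO,
                                     (task_info_t)&vmInfo, &count);
    if (result != KERN_SUCCESS) {
        return -1; // could not read memory info
    }
    return (long)vmInfo.phys_footprint; // bytes
}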

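On the host-app side (which the post does not show), the main app listens on the same local port, accumulates the incoming bytes, and only rebuilds a frame once the trailing header packet arrives. A rough sketch follows; the class name, port handling, and parsing logic are illustrative assumptions, since the NTESSocketPacket format is not reproduced here:

// Rough sketch of the receiving end in the host app.
#import <CocoaAsyncSocket/GCDAsyncSocket.h>

@interface FrameReceiver : NSObject <GCDAsyncSocketDelegate>
@property (nonatomic, strong) GCDAsyncSocket *listenSocket;
@property (nonatomic, strong) GCDAsyncSocket *clientSocket; // the broadcast extension
@property (nonatomic, strong) NSMutableData *recvBuffer;    // header info + Y + U + V parts
@end

@implementation FrameReceiver

- (BOOL)startOnPort:(uint16_t)port {
    self.recvBuffer = [NSMutableData data];
    self.listenSocket = [[GCDAsyncSocket alloc] initWithDelegate:self
                                                   delegateQueue:dispatch_get_main_queue()];
    NSError *error = nil;
    return [self.listenSocket acceptOnPort:port error:&error];
}

- (void)socket:(GCDAsyncSocket *)sock didAcceptNewSocket:(GCDAsyncSocket *)newSocket {
    self.clientSocket = newSocket;            // keep a strong reference, or it is released
    [newSocket readDataWithTimeout:-1 tag:0]; // start reading
}

- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    [self.recvBuffer appendData:data];
    // When the trailing header frame is detected, recvBuffer holds one complete
    // picture (header info + Y + U + V). Rebuild the NTESI420Frame / image here
    // on the main-process side, then reset the buffer for the next frame.
    [sock readDataWithTimeout:-1 tag:0];
}

@end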

With all of the above in place, the 50 MB system limit is essentially dealt with.

Test devices:

  • iPhone 5s
  • iPhone 6s Plus
  • iPhone 7
  • iPad mini4
  • iPad Air2

We stress-tested with deliberately complex, fast-changing screen content.

To sum up, it is still the river problem, mapped onto the computing world: the data to process is too large, the CPU cannot keep up, and memory is not released in time.

Welcome to follow my public account, Programming Daxin; let's exchange ideas and improve together!


Original post: juejin.im/post/6968738257123147807