Wei Dongshan Embedded Linux Phase 3 USB Camera Monitoring: Adding a Video Recording Function to the Mobile App (2)

Following up on "Wei Dongshan Embedded Linux Phase 3 USB Camera Monitoring: Adding a Video Recording Function to the Mobile App (1)", this post walks through the modifications made to the original App framework.

1. Module division

   i) (Main module) Video capture and playback

   ii) Display mode switching

   iii) Taking pictures

   iv) Video recording

   v) fps display

   vi) Browse and delete video

2. Implementation of each module

2.1 (main module) video capture and playback

2.1.1 Reference materials:

1) The main framework (decoding, reading frames) follows Lei Xiaohua's "100 lines of code to implement the simplest video player based on FFMPEG + SDL (SDL1.x)"

      For the flow of the main framework, please refer to Lei Xiaohua's blog post above; it is not repeated here.

2) Frame display, reference: "Android uses FFmpeg (2): simple realization of video streaming"

     The flow of frame display is roughly as follows:

     [Figure: frame display flow]

2.2 Display mode switching

Implementation idea: use the scale and pad filters of avfilter to scale each decoded frame and add the necessary padding on the four sides.

The main body of the code follows "Use examples of FFMPEG filter (to achieve video scaling, cropping, watermarking, etc.)" and is not repeated here.

As for how to switch the scale (and pad) parameters between the two display modes, I have not found a clean method: testing showed that av_opt_set() works for drawtext (see 2.5 fps display below) but has no effect on scale and pad.

For now a brute-force approach is used:

1) Define two filter_descr templates, each with its own AVFilterGraph and AVFilterContext set

/* For the keep-aspect-ratio display mode */
const char *m_filter_descr_template = "scale=%d:%d,pad=%d:%d:%d:%d:blue,drawtext=fontfile=/sdcard/data/FreeSerif.ttf:fontsize=20:text=fps:x=(w-tw-%d):y=%d";
char m_filter_descr[200];
/* For the full-screen display mode */
const char *m_filter_descr2_template = "scale=%d:%d,pad=%d:%d:%d:%d:blue,drawtext=fontfile=/sdcard/data/FreeSerif.ttf:fontsize=20:text=fps:x=(w-tw-5):y=5";
char m_filter_descr2[200];

/* For the keep-aspect-ratio display mode */
AVFilterContext *m_buffersink_ctx1;
AVFilterContext *m_buffersrc_ctx1;
AVFilterGraph *m_filter_graph1;

/* For the full-screen display mode */
AVFilterContext *m_buffersink_ctx2;
AVFilterContext *m_buffersrc_ctx2;
AVFilterGraph *m_filter_graph2;

2) During initialization, first call keep_img_AR() to pre-compute the filter_descr strings for the two display modes

int keep_img_AR(int nSrcW, int nSrcH, int nDstW, int nDstH)
{
    /* Compute the scaled image size and how much black border (pad) is needed
       on top/bottom, or on left/right, to keep the aspect ratio */
    int imgW = 0, imgH = 0;
    int padW = 0, padH = 0;
    /* Must be rounded down to a multiple of 2, otherwise ffmpeg reports
       "Input area not within the padded area or zero-sized" when computing the pad */
    nDstW = nDstW / 2 * 2;
    nDstH = nDstH / 2 * 2;
    imgW = nSrcW * nDstH / nSrcH / 2 * 2;
    imgH = nSrcH * nDstW / nSrcW / 2 * 2;
    if (imgW < nDstW) {
        padW = (nDstW - imgW) / 2;
        imgH = nDstH / 2 * 2;
        //imgW = -1;
    } else if (imgH < nDstH) {
        padH = (nDstH - imgH) / 2;
        imgW = nDstW / 2 * 2;
        //imgH = -1;
    }
    sprintf(m_filter_descr, m_filter_descr_template, imgW, imgH, nDstW, nDstH, padW, padH, padW + 5, padH + 5);
    sprintf(m_filter_descr2, m_filter_descr2_template, nDstW, nDstH, nDstW, nDstH, 0, 0);

    return 1;
}

3) Then call init_filters () to initialize m_filter_graph1, m_buffersink_ctx1, m_buffersrc_ctx1 and m_filter_graph2, m_buffersink_ctx2, m_buffersrc_ctx2

     The code of init_filters() follows "Use examples of FFMPEG filter (to achieve video scaling, cropping, watermarking, etc.)" and is not repeated here.

4) Switching the playback mode then amounts to switching between the (m_filter_graph1, m_buffersrc_ctx1, m_buffersink_ctx1) and (m_filter_graph2, m_buffersrc_ctx2, m_buffersink_ctx2) triples:

/**
 * Play video keeping the original aspect ratio
 */
void playVideoKeepAspectRatio()
{
    m_play_video_mode = PLAY_VIDEO_KEEP_ASPECT_RATIO;
    m_filter_graph = m_filter_graph1;
    m_buffersrc_ctx = m_buffersrc_ctx1;
    m_buffersink_ctx = m_buffersink_ctx1;
}

/**
 * Play video filling the whole display area
 */
void playVideoFullScreen()
{
    m_play_video_mode = PLAY_VIDEO_FULL_SCREEN;
    m_filter_graph = m_filter_graph2;
    m_buffersrc_ctx = m_buffersrc_ctx2;
    m_buffersink_ctx = m_buffersink_ctx2;
}


Note: another way to implement display mode switching is to use sws_scale() and av_picture_pad(); reference: "Use ffmpeg's lib library to achieve the original width-to-height ratio / stretch of video window".

But it requires more code, and testing revealed some problems, such as:

- After adding avfilter's drawtext, the fps display jitters up and down slightly; the cause is still to be investigated

- Positioning the fps text is harder (because of the pad width)

So this method was not adopted in the end (though the scale/pad math in keep_img_AR() is based on that article).

2.3 Taking pictures

Implementation ideas:

1) Define m_pFrameCur to hold the most recently acquired frame

2) In the while loop of the video playback function videoStreamStartPlay(), call av_frame_ref(m_pFrameCur, pFrame) so that m_pFrameCur references the current frame

3) Call __save_frame_2_jpeg(file_path, m_pFrameCur, m_input_codec_ctx->pix_fmt) to save the current frame to the specified file

     Code reference: "ffmpeg realizes the collection, preview and taking pictures of mjpeg camera"; not repeated here

2.4 Video recording

Reference: How to use FFmpeg API to collect camera video and microphone audio, and realize the function of recording files

In that article's demo, the recording function is well encapsulated in a class CAVOutputStream; I use it almost unchanged as the underlying implementation of the recording function.

What I added is a state machine, video_capture_state_machine(), called from the while loop of the video playback function videoStreamStartPlay(). The code is roughly as follows:

void video_capture_state_machine(AVFrame *pFrame)
{
    switch(m_video_capture_state)
    {
        case VIDEO_CAPTURE_START:
            LOGD("VIDEO_CAPTURE_START");
            m_start_time = av_gettime();
            m_OutputStream.SetVideoCodec(AV_CODEC_ID_H264); // set the video encoder properties
            if(true == m_OutputStream.OpenOutputStream(m_save_video_path.c_str()))
                m_video_capture_state = VIDEO_CAPTURE_IN_PROGRESS;
            else
                m_video_capture_state = VIDEO_CAPTURE_IDLE;
            break;
        case VIDEO_CAPTURE_IN_PROGRESS:
            LOGD("VIDEO_CAPTURE_IN_PROGRESS");
            m_OutputStream.write_video_frame(m_input_format_ctx->streams[m_video_stream_index], m_input_format_ctx->streams[m_video_stream_index]->codec->pix_fmt, pFrame, av_gettime() - m_start_time);
            break;
        case VIDEO_CAPTURE_STOP:
            LOGD("VIDEO_CAPTURE_STOP");
            m_OutputStream.CloseOutput();
            m_video_capture_state = VIDEO_CAPTURE_IDLE;
            break;
        default:
            if(m_video_capture_state == VIDEO_CAPTURE_IDLE){
                LOGD("VIDEO_CAPTURE_IDLE");
            }
            else{
                LOGD("m_video_capture_state: %d", m_video_capture_state);
            }
            break;
    }//eo switch(m_video_capture_state)
}

The native-layer interfaces exposed to the Java layer are as follows:

/* start recording */
void videoStreamStartCapture(const char* file_path)
{
    m_save_video_path = file_path;
    m_video_capture_state = VIDEO_CAPTURE_START;
}

/* stop recording */
void videoStreamStopCapture( )
{
    m_video_capture_state = VIDEO_CAPTURE_STOP;
}

2.5 fps display

The implementation idea is the same as in 2.2 Display mode switching.

The dynamic display of the fps value is achieved with av_opt_set(filter_ctx_draw_text->priv, "text", str_fps, 0).

2.6 Browse and delete video

Implementation ideas: Basically use the original framework of the app, with only a few changes. Mainly as follows:

1) MainActivity.java

      When the user clicks the "Photo" button, an AlertDialog pops up asking which type to browse; then, according to the user's choice, call the following before startActivity(intent):

      intent.putExtra("picturePath", picturePath);

      intent.putExtra("scan_type", ScanPicActivity.SCAN_TYPE_PIC);

      or

      intent.putExtra("picturePath", videoRecordPath);

      intent.putExtra("scan_type", ScanPicActivity.SCAN_TYPE_VIDEO);

2) ScanPicActivity.java

      - In the init() function, scan_type = getIntent().getIntExtra("scan_type", SCAN_TYPE_PIC); saves the current browsing type

      - A scan_type check is added everywhere the "jpeg" string is involved. The code is omitted; see the project source for details

3) Generic.java

      Modeled on getShrinkedPic(), add the function getShrinkedPicFromVideo(); the core is ThumbnailUtils.createVideoThumbnail(). The code is omitted; see the project source for details

Reference materials:

1) Wei Dongshan Embedded Linux Training Phase 3 project actual usb camera monitoring, mobile phone App source code

2) Android official tutorial: https://developer.android.google.cn/guide/

3) Android Studio 3.x: developing and debugging Android NDK C++ code

4) NDK development notes-CMake build JNI

5) Lei Xiaohua's blog series of articles: [Summary] FFMPEG video and audio codec zero-based learning method

6) Android uses FFmpeg (2)-simple realization of video streaming

7) How to use FFmpeg API to collect camera video and microphone audio, and realize the function of recording files

8) ffmpeg realizes the collection, preview and taking pictures of mjpeg camera

9) Use examples of FFMPEG filter (to achieve video scaling, cropping, watermarking, etc.)

10) Use ffmpeg's lib library to achieve the original width-to-height ratio / stretch of video window

11) ffmpeg achieves dynamic adjustment of subtitle content

12) FFmpeg usage summary (part 2)

13) Print FFmpeg debugging information in Android logcat


Origin www.cnblogs.com/normalmanzhao2003/p/12695432.html