[Screencasting] Scrcpy Source Code Analysis 3 (Client - Screencasting Stage)

Scrcpy Source Code Analysis Series
[Screencasting] Scrcpy Source Code Analysis 1 (Compilation)
[Screencasting] Scrcpy Source Code Analysis 2 (Client - Connection Stage)
[Screencasting] Scrcpy Source Code Analysis 3 (Client - Screencasting Stage)
[Screencasting] Scrcpy Source Code Analysis 4 (Finale - Server)

In the previous article, we explored the logic of the Scrcpy client's connection stage; in this article we continue with the client's screencasting stage.

1. Audio/Video Basics and FFmpeg

The screencasting stage uses a lot of audio/video codec knowledge and FFmpeg APIs, so before continuing with the code analysis, let's quickly review them. FFmpeg's functionality is very broad, so we only introduce the parts used by Scrcpy.

1.1 Audio and video basics

1.1.1 Encoding/Decoding

Encoding (Encode) - converts an audio/video file (usually raw and uncompressed) into another format via compression.
Decoding (Decode) - restores a compressed audio/video file to its original raw format.

Usually what we call a codec (Codec) includes both encoding and decoding capabilities.

The point of encoding is that raw, uncompressed data has a very high data rate, which is bad for storage and network transmission, so it must be encoded. Common raw video formats include YUV and RAW; the common raw audio format is PCM. Common video encodings include H264 and H265; common audio encodings include AAC and MP3.
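To make the motivation concrete, here is a back-of-the-envelope comparison for a 1080p/60fps stream. The H264 bitrate used below is a typical target, not a fixed property, so treat the numbers as illustrative:

#include <stdio.h>

// Rough comparison of raw YUV420 vs. a typical H264 bitrate for a
// 1080p/60fps stream. The H264 figure is an illustrative assumption.
int main(void) {
    const double w = 1920, h = 1080, fps = 60;
    // YUV420 stores 1.5 bytes per pixel (full-res Y, quarter-res U and V)
    double raw_bytes_per_sec = w * h * 1.5 * fps;
    double h264_bits_per_sec = 8e6; // ~8 Mbps, a common 1080p60 target
    printf("raw YUV420  : %.1f MB/s\n", raw_bytes_per_sec / 1e6);      // ~186.6 MB/s
    printf("H264 ~8Mbps : %.1f MB/s\n", h264_bits_per_sec / 8 / 1e6);  // ~1.0 MB/s
    return 0;
}

Roughly two orders of magnitude of difference, which is why raw video is never sent over the network directly.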

1.1.2 Containers

A container generally refers to an encapsulation format that holds multiple streams: audio streams, video streams, subtitle streams, and so on. The data inside the audio and video streams is in the corresponding audio/video encoding format.

Mux (muxing) - combine multiple streams into one container.
Demux (demuxing) - split a container back into its individual streams.

Common containers are MP4 , FLV , MKV , AVI .
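To see what a container holds in practice, here is a minimal libavformat sketch that opens a file and prints its streams, which is essentially what a demuxer discovers (error handling mostly omitted; this is an illustration, not Scrcpy code):

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

// Open a container file and print each stream's media type and codec.
int list_streams(const char *path) {
    AVFormatContext *fmt_ctx = NULL;
    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt_ctx, NULL);
    for (unsigned i = 0; i < fmt_ctx->nb_streams; ++i) {
        const AVCodecParameters *par = fmt_ctx->streams[i]->codecpar;
        const char *type = av_get_media_type_string(par->codec_type);
        printf("stream #%u: %s (%s)\n", i, type ? type : "unknown",
               avcodec_get_name(par->codec_id));
    }
    avformat_close_input(&fmt_ctx);
    return 0;
}

For an MP4 movie this would typically print one video stream (e.g. h264) and one audio stream (e.g. aac).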

1.1.3 Audio and video playback process

The process of audio and video playback is usually:

Capture → YUV → Encode → H264 → Mux → Transmit → Demux → H264 → Decode → YUV → Play

If only audio or only video is required, the muxing/demuxing steps can be omitted.

The previous article covered the principle of Scrcpy: the Android device continuously records and encodes the screen and transmits the video stream to the PC, which decodes and renders it, a process much like the one above.

Encoding on the Android device is hardware encoding via MediaCodec; we don't need to dig into it for now. It's enough to know that Android encodes the raw YUV data into H264 and transmits it to the PC through video_socket. After receiving the video stream, the PC decodes it with FFmpeg and renders it with SDL.

1.2 FFmpeg

FFmpeg is an open-source audio/video suite that provides powerful and widely used processing capabilities, the most basic of which is encoding and decoding.

Scrcpy mainly uses FFmpeg's decoding capability, and our focus in this article is still Scrcpy, so we only briefly describe the APIs FFmpeg needs for decoding, to ease the analysis that follows.

The key FFmpeg decoding flow is as follows (the code is incomplete; just focus on the key APIs):

int ffmpeg_decode() {
	// Register all codecs (required before FFmpeg 4.0)
	avcodec_register_all();

	// Find the decoder by ID, here the H264 decoder
	AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
	// Allocate and initialize an AVCodecContext
	AVCodecContext *codecContext = avcodec_alloc_context3(codec);
	// Open the AVCodecContext with the AVCodec
	avcodec_open2(codecContext, codec, NULL);
	// Initialize the AVCodecParserContext
	AVCodecParserContext *parserContext = av_parser_init(AV_CODEC_ID_H264);

	// Allocate an AVPacket
	AVPacket *avPacket = av_packet_alloc();
	// Allocate an AVFrame
	AVFrame *frame = av_frame_alloc();

	while (!eof(input)) {
		// Parse one packet out of the raw input data
		av_parser_parse2(parserContext, codecContext, &avPacket->data, &avPacket->size,
		                 data, (int) data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
		// Decode it
		decode(codecContext, avPacket, frame);
	}

	// Release resources
	avcodec_free_context(&codecContext);
	av_parser_close(parserContext);
	av_frame_free(&frame);
	av_packet_free(&avPacket);
}

int decode(AVCodecContext *codec_ctx, AVPacket *pkt, AVFrame *frame) {
	// Send the packet to the decoder
	int ret = avcodec_send_packet(codec_ctx, pkt);

	while (ret >= 0) {
		// Fetch a decoded frame from the decoder
		ret = avcodec_receive_frame(codec_ctx, frame);
		// [TODO] the frame data is now available in frame->data
	}
}

The above is template code for decoding H264 video with FFmpeg. It has several stages:

  1. Initialization. Create the AVCodec, AVCodecContext, and AVCodecParserContext variables and perform the related initialization.
  2. Allocation. Allocate space for the AVPacket and AVFrame structures. An AVPacket is an encoded (compressed) data packet; an AVFrame is a decoded frame, and in video one frame represents one picture.
  3. Decoding. Parse a packet from the input source (file or network), send it to the decoder, and receive the decoded frame.
  4. Data processing. After receiving the frame data in AVFrame->data, process it according to business needs.

The decoding flow in Scrcpy using FFmpeg is roughly the same as the above.
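One caveat about the decode() template above: it loops on avcodec_receive_frame without checking return codes, while a real decoder must distinguish "need more input" from genuine errors. Here is a slightly more complete sketch of that half (standard FFmpeg 4.x API; the error handling shown is mine, not Scrcpy's exact code):

#include <libavcodec/avcodec.h>

// Decode one packet, draining all frames it produces.
static int decode(AVCodecContext *codec_ctx, AVPacket *pkt, AVFrame *frame) {
    int ret = avcodec_send_packet(codec_ctx, pkt);
    if (ret < 0)
        return ret; // could not submit the packet to the decoder

    for (;;) {
        ret = avcodec_receive_frame(codec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0; // decoder needs more input, or is fully drained
        if (ret < 0)
            return ret; // genuine decoding error
        // frame->data[0..2] now hold the YUV planes; hand them off here
    }
}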

2. Screen casting stage

Last time, we mentioned the await_for_server() call inside the scrcpy() function. Internally it waits for an SDL event; after receiving the connection-success event, it breaks out of the wait and continues with the subsequent logic.

// scrcpy.c
enum scrcpy_exit_code
scrcpy(struct scrcpy_options *options) {
	// Connection stage...
	await_for_server();

	// [Screencasting stage]
	// Initialize the file-push data structures
	sc_file_pusher_init(&s->file_pusher, serial, options->push_target)
	// Initialize the decoding data structures
	sc_decoder_init(&s->decoder);
	// Initialize the recording data structures
	sc_recorder_init(&s->recorder,
                              options->record_filename,
                              options->record_format,
                              info->frame_size);
	// Initialize the demuxing data structures
	sc_demuxer_init(&s->demuxer, s->server.video_socket, &demuxer_cbs, NULL);
	// Add the decoder as one sink of the demuxer
	sc_demuxer_add_sink(&s->demuxer, &dec->packet_sink);
	// Add the recorder as another sink of the demuxer
	sc_demuxer_add_sink(&s->demuxer, &rec->packet_sink);
	// Initialize the keyboard-interception data structures
	sc_keyboard_inject_init(&s->keyboard_inject, &s->controller,
                                    options->key_inject_mode,
                                    options->forward_key_repeat);
	// Initialize the mouse-interception data structures
	sc_mouse_inject_init(&s->mouse_inject, &s->controller);
	// Initialize the control socket
	sc_controller_init(&s->controller, s->server.control_socket,
                                acksync);
	// Start two control threads, one for sending, one for receiving
	sc_controller_start(&s->controller);
	// Initialize the screen-rendering data structures
	sc_screen_init(&s->screen, &screen_params);
	// Add the screen as one sink of the decoder
	sc_decoder_add_sink(&s->decoder, &s->screen.frame_sink);
	// Add v4l2 as another sink of the decoder
	sc_decoder_add_sink(&s->decoder, &s->v4l2_sink.frame_sink);
	// Start a new thread for demuxing and decoding
	sc_demuxer_start(&s->demuxer);
	// SDL event loop
	event_loop(s);

	// Close the window
	sc_screen_hide_window(&s->screen);
	// Close and release server-related resources
	sc_server_destroy(&s->server);
}

Several parts of the screencasting stage deserve attention:

  1. sc_file_pusher_init - Initialize the file-push data structures. File push means dragging a file from the PC into the mirror window and automatically syncing it to the /sdcard/Download directory.
  2. sc_decoder_init & sc_recorder_init - Initialize the decoder and recorder data structures. The main work is setting the struct sc_packet_sink_ops callbacks, which trigger the corresponding actions at three moments: open, close, and push. (Note: these callbacks operate on packets; as mentioned earlier, a packet is a compressed, encoded unit of data.)
  3. sc_demuxer_init - Initialize the demuxer data structures.
  4. sc_demuxer_add_sink - Add the decoder and recorder as sinks of the demuxer. Scrcpy's demuxer is not the same as the container demuxing described earlier: container demuxing separates multiple streams, while in Scrcpy it means sending the same data to different consumers. Packets are sent to the decoder for decoding, and if recording was requested at startup, a copy is also sent to the recorder for storage. (The sink pattern itself is sketched after this list.)
  5. sc_keyboard_inject_init & sc_mouse_inject_init - Initialize the keyboard and mouse interception data structures.
  6. sc_controller_init - Initialize the control_socket link.
  7. sc_controller_start - Start two control threads, one for sending and one for receiving.
  8. sc_screen_init - Initialize the window and create it with SDL. This sets the struct sc_frame_sink_ops callbacks, which trigger at three moments: open, close, and push. (Note: unlike the previous ones, these callbacks operate on frames, i.e. the data produced by decoding packets.)
  9. sc_decoder_add_sink - Add the window and V4L2 as sinks of the decoder. As with the demuxer, the frames produced by the decoder are sent both to the window and to the V4L2 device (a V4L2 device must be specified at startup; otherwise the V4L2 logic is not triggered).
  10. sc_demuxer_start - Start a new thread to perform demuxing and decoding.
  11. event_loop - The event loop, listening for SDL events.
  12. sc_server_destroy - Close and release server-related resources. Since the previous step is an infinite loop that only exits on a quit event, the release logic here runs only then.

Items 5, 7, 8, 10, and 11 deserve the most attention; we will cover them in order of importance: 8, 10, 11, 5, then 7.
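Items 2, 4, 8, and 9 all revolve around the same sink abstraction: a small ops vtable of open/close/push function pointers that a producer invokes on every registered consumer. A minimal sketch of the idea (simplified and renamed for illustration; the real definitions live in Scrcpy's trait headers):

#include <stdbool.h>

// Simplified sketch of Scrcpy's sink pattern: a producer (demuxer or
// decoder) fans data out to N consumers through a small ops vtable.
struct sink;

struct sink_ops {
    bool (*open)(struct sink *sink);
    void (*close)(struct sink *sink);
    bool (*push)(struct sink *sink, const void *data); // AVPacket* or AVFrame*
};

struct sink {
    const struct sink_ops *ops;
};

// A producer just iterates its registered sinks and calls push on each,
// which is essentially what push_packet_to_sinks / push_frame_to_sinks
// do in the code shown later in this article.
static bool fan_out(struct sink **sinks, unsigned count, const void *data) {
    for (unsigned i = 0; i < count; ++i)
        if (!sinks[i]->ops->push(sinks[i], data))
            return false;
    return true;
}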

2.1 sc_screen_init - Initialize the window

We list the key code of the sc_screen_init function:

// screen.c
bool
sc_screen_init(struct sc_screen *screen,
               const struct sc_screen_params *params) {
	// Set the on_new_frame callback
	static const struct sc_video_buffer_callbacks cbs = {
        .on_new_frame = sc_video_buffer_on_new_frame,
    };
	// Initialize the video buffer
	sc_video_buffer_init(&screen->vb, params->buffering_time, &cbs, screen);
	// Start a new thread for frame processing
	sc_video_buffer_start(&screen->vb);
	// Create the SDL window
	SDL_CreateWindow(params->window_title, 0, 0, 0, 0, window_flags);
	// Create the renderer
	SDL_CreateRenderer(screen->window, -1, SDL_RENDERER_ACCELERATED);

	// Set the callbacks for decoded frame data
	static const struct sc_frame_sink_ops ops = {
	        .open = sc_screen_frame_sink_open,
	        .close = sc_screen_frame_sink_close,
	        .push = sc_screen_frame_sink_push,
	    };

    screen->frame_sink.ops = &ops;
}

We see four main effects in sc_screen_init:

  1. Set on_new_frame and pass the callback into the initialization of the screen's video_buffer (the screen being the desktop window). You can simply think of this as binding the callback, which will be used later.

  2. Start a new thread for frame processing. Its job is to keep taking frame data from a frame queue and feed it to the on_new_frame function.

    // video_buffer.c
    bool
    sc_video_buffer_start(struct sc_video_buffer *vb) {
    	// Start a new thread running run_buffering
    	sc_thread_create(&vb->b.thread, run_buffering, "scrcpy-vbuf", vb);
    }
    
    static int
    run_buffering(void *data) {
    	for (;;) {
    		// Take a frame from the &vb->b.queue queue
    		sc_queue_take(&vb->b.queue, next, &vb_frame);
    		// Hand the frame over
    		sc_video_buffer_offer(vb, vb_frame->frame);
    	}
    }
    
    static bool
    sc_video_buffer_offer(struct sc_video_buffer *vb, const AVFrame *frame) {
    	// After processing, pass the frame out through the on_new_frame callback
        vb->cbs->on_new_frame(vb, previous_skipped, vb->cbs_userdata);
    }
    
  3. Create the window and renderer via SDL.

  4. Set the callbacks for decoded frame data. As mentioned earlier, packets are fanned out to the decoder and the recorder as two packet sinks, and the decoder in turn fans frames out to the screen window and the V4L2 device as two frame sinks. The callback here is the one the decoder invokes toward the screen window after decoding:

// screen.c
static const struct sc_frame_sink_ops ops = {
	        .open = sc_screen_frame_sink_open,
	        .close = sc_screen_frame_sink_close,
	        .push = sc_screen_frame_sink_push,
	    };

static bool
sc_screen_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
    return sc_video_buffer_push(&screen->vb, frame);
}

// video_buffer.c
bool
sc_video_buffer_push(struct sc_video_buffer *vb, const AVFrame *frame) {
	// Insert the frame into the &vb->b.queue queue
    sc_queue_push(&vb->b.queue, next, vb_frame);
}

Having analyzed sc_screen_init, we know the flow of this part is roughly as shown below. The remaining questions are how the decoder's push callback gets triggered from the outside, and what happens inside on_new_frame. We'll leave these open for now and fill them in later.
[Figure: sc_screen_init data flow; the decoder's push trigger and the internals of on_new_frame are still marked as unknowns]
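Under the hood, the video buffer is a classic producer/consumer hand-off: sc_video_buffer_push (called from the decoder's thread) enqueues a frame, and run_buffering (the buffering thread) blocks until one is available. A minimal sketch of that hand-off, using raw pthreads purely for illustration (Scrcpy itself uses its own sc_mutex/sc_cond wrappers, and a real queue holds a list of frames rather than a single slot):

#include <pthread.h>
#include <stdbool.h>

struct frame_queue {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    const void *frame;   // stand-in for a queued AVFrame
    bool has_frame;
};

// Producer side, i.e. roughly what sc_video_buffer_push amounts to
static void queue_push(struct frame_queue *q, const void *frame) {
    pthread_mutex_lock(&q->lock);
    q->frame = frame;
    q->has_frame = true;
    pthread_cond_signal(&q->cond);   // wake the buffering thread
    pthread_mutex_unlock(&q->lock);
}

// Consumer side, i.e. roughly what sc_queue_take does in run_buffering
static const void *queue_take(struct frame_queue *q) {
    pthread_mutex_lock(&q->lock);
    while (!q->has_frame)
        pthread_cond_wait(&q->cond, &q->lock);  // block until a frame arrives
    const void *frame = q->frame;
    q->has_frame = false;
    pthread_mutex_unlock(&q->lock);
    return frame;
}

A queue declared with PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER for its two fields is ready to use; the point is only that the consumer thread sleeps until the producer signals it.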

2.2 sc_demuxer_start - Demuxing and Decoding

We list the key code of the sc_demuxer_start function:

// demuxer.c
bool
sc_demuxer_start(struct sc_demuxer *demuxer) {
	sc_thread_create(&demuxer->thread, run_demuxer, "scrcpy-demuxer", demuxer);
}

static int
run_demuxer(void *data) {
	// FFmpeg API: initialize the AVCodec and AVCodecContext
	AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
	demuxer->codec_ctx = avcodec_alloc_context3(codec);

	// Open the sinks, triggering the .open callback of struct sc_packet_sink_ops
	sc_demuxer_open_sinks(demuxer, codec);

	// FFmpeg API: initialize the AVCodecParserContext and AVPacket
	demuxer->parser = av_parser_init(AV_CODEC_ID_H264);
	AVPacket *packet = av_packet_alloc();

	// Continuously read packets and push them to the sinks
	for (;;) {
		sc_demuxer_recv_packet(demuxer, packet);
		sc_demuxer_push_packet(demuxer, packet);
	}

	// FFmpeg API: release resources
	av_packet_free(&packet);
	av_parser_close(demuxer->parser);
	avcodec_free_context(&demuxer->codec_ctx);
}

We can see that sc_demuxer_start mainly initializes the FFmpeg data structures on a sub-thread, then reads and pushes packet data in an infinite loop. Specifically, let's look at the sc_demuxer_recv_packet and sc_demuxer_push_packet functions:

// demuxer.c
// sc_demuxer_recv_packet receives a packet
static bool
sc_demuxer_recv_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
	// Read the packet header from the network via video_socket
	net_recv_all(demuxer->socket, header, SC_PACKET_HEADER_SIZE);
	// Read the packet payload from the network via video_socket
	net_recv_all(demuxer->socket, packet->data, len);
}

// sc_demuxer_push_packet invokes the .push callback of struct sc_packet_sink_ops
static bool
sc_demuxer_push_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
	push_packet_to_sinks(demuxer, packet);
}

static bool
push_packet_to_sinks(struct sc_demuxer *demuxer, const AVPacket *packet) {
    for (unsigned i = 0; i < demuxer->sink_count; ++i) {
        struct sc_packet_sink *sink = demuxer->sinks[i];
        if (!sink->ops->push(sink, packet)) {
            return false;
        }
    }
	return true;
}
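For reference, the header read by net_recv_all is a small fixed-size preamble in front of each packet. In the Scrcpy version this series is based on, it is 12 bytes: an 8-byte PTS field whose top bits carry config/key-frame flags, followed by a 4-byte payload length. Treat the exact layout in this sketch as an assumption to verify against demuxer.c:

#include <stdint.h>

// Assumed wire format of one video packet header (12 bytes, big-endian):
//   [8 bytes] PTS, with flag bits in the highest bits
//   [4 bytes] payload length
#define SC_PACKET_FLAG_CONFIG    (UINT64_C(1) << 63)
#define SC_PACKET_FLAG_KEY_FRAME (UINT64_C(1) << 62)
#define SC_PACKET_PTS_MASK       (SC_PACKET_FLAG_KEY_FRAME - 1)

static uint64_t read_be64(const uint8_t *b) {
    uint64_t v = 0;
    for (int i = 0; i < 8; ++i)
        v = (v << 8) | b[i];
    return v;
}

static uint32_t read_be32(const uint8_t *b) {
    return (uint32_t) b[0] << 24 | (uint32_t) b[1] << 16
         | (uint32_t) b[2] << 8  | b[3];
}

// Split a 12-byte header into pts, payload length, and the config flag.
static void parse_header(const uint8_t header[12],
                         uint64_t *pts, uint32_t *len, int *is_config) {
    uint64_t pts_and_flags = read_be64(header);
    *is_config = (pts_and_flags & SC_PACKET_FLAG_CONFIG) != 0;
    *pts = pts_and_flags & SC_PACKET_PTS_MASK;
    *len = read_be32(header + 8);
}

The payload length tells the demuxer exactly how many bytes of H264 data to read for the second net_recv_all call.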

Note that the packet here has just been obtained from the network and has not yet been decoded into a frame, so the callback invoked is the .push of struct sc_packet_sink_ops, not the decoded-frame push callback from Section 2.1. The packet callbacks are registered in the sc_decoder_init function mentioned earlier:

void
sc_decoder_init(struct sc_decoder *decoder) {
    decoder->sink_count = 0;

    static const struct sc_packet_sink_ops ops = {
        .open = sc_decoder_packet_sink_open,
        .close = sc_decoder_packet_sink_close,
        .push = sc_decoder_packet_sink_push,
    };

    decoder->packet_sink.ops = &ops;
}

// The packet push callback
static bool
sc_decoder_packet_sink_push(struct sc_packet_sink *sink,
                            const AVPacket *packet) {
    struct sc_decoder *decoder = DOWNCAST(sink);
    return sc_decoder_push(decoder, packet);
}

static bool
sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
	// FFmpeg API: send the packet to the decoder
	avcodec_send_packet(decoder->codec_ctx, packet);
	// FFmpeg API: fetch the decoded frame from the decoder
	avcodec_receive_frame(decoder->codec_ctx, decoder->frame);
	// Pass the decoded frame to the sinks
	push_frame_to_sinks(decoder, decoder->frame);
}

So the main job of the packet push callback is to decode the packet into a frame, consistent with the FFmpeg decoding flow described earlier. Once the frame is ready, push_frame_to_sinks invokes the push callbacks of the decoder's frame sinks:

static bool
push_frame_to_sinks(struct sc_decoder *decoder, const AVFrame *frame) {
    for (unsigned i = 0; i < decoder->sink_count; ++i) {
        struct sc_frame_sink *sink = decoder->sinks[i];
        if (!sink->ops->push(sink, frame)) {
            return false;
        }
    }

    return true;
}

That's right: this is exactly what triggers the first question mark in the flow chart from the previous section, so the chart can now be partially filled in:
[Figure: flow chart updated; the demuxer pushes packets into the decoder, which pushes frames into the screen's frame sink]

2.3 event_loop - Event loop

// scrcpy.c
static enum scrcpy_exit_code
event_loop(struct scrcpy *s) {
    SDL_Event event;
    while (SDL_WaitEvent(&event)) {
        switch (event.type) {
            case EVENT_STREAM_STOPPED:
                LOGW("Device disconnected");
                return SCRCPY_EXIT_DISCONNECTED;
            case SDL_QUIT:
                LOGD("User requested to quit");
                return SCRCPY_EXIT_SUCCESS;
            default:
                sc_screen_handle_event(&s->screen, &event);
                break;
        }
    }
    return SCRCPY_EXIT_FAILURE;
}

The structure of event_loop is fairly clear: it simply waits for SDL events. Apart from EVENT_STREAM_STOPPED and SDL_QUIT, every event is handed to sc_screen_handle_event for processing:

// screen.c
void
sc_screen_handle_event(struct sc_screen *screen, SDL_Event *event) {
	switch (event->type) {
		// new frame event
    	case EVENT_NEW_FRAME:
    		sc_screen_update_frame(screen);
    		return;
    	// SDL window events: maximize, restore, focus lost, etc.
        case SDL_WINDOWEVENT:
        	return;
        // Keyboard events
        case SDL_KEYDOWN:
        case SDL_KEYUP:
        // Mouse events
        case SDL_MOUSEWHEEL:
        case SDL_MOUSEMOTION:
        case SDL_MOUSEBUTTONDOWN:
        // Touch events
        case SDL_FINGERMOTION:
        case SDL_FINGERDOWN:
        case SDL_FINGERUP:
        case SDL_MOUSEBUTTONUP:
        	// Some code omitted
    }
    
    sc_input_manager_handle_event(&screen->im, event);
}

sc_screen_handle_event handles the EVENT_NEW_FRAME event as well as mouse and keyboard events. Let's focus on EVENT_NEW_FRAME first. When it is received, the sc_screen_update_frame function runs; the key code is as follows:

// screen.c
static bool
sc_screen_update_frame(struct sc_screen *screen) {
	// Update the data
    update_texture(screen, frame);
	// On the first frame, open the window
	if (!screen->has_frame) {
 		sc_screen_show_initial_window(screen);
 	}
	// Render the data
    sc_screen_render(screen, false);
}

static void
update_texture(struct sc_screen *screen, const AVFrame *frame) {
	// Write the YUV data into the SDL context
    SDL_UpdateYUVTexture(screen->texture, NULL,
            frame->data[0], frame->linesize[0],
            frame->data[1], frame->linesize[1],
            frame->data[2], frame->linesize[2]);
}

static void
sc_screen_show_initial_window(struct sc_screen *screen) {
	// Show the window
	SDL_ShowWindow(screen->window);
}

static void
sc_screen_render(struct sc_screen *screen, bool update_content_rect) {
	// SDL boilerplate: render the data in the context onto the window
	SDL_RenderClear(screen->renderer);
	SDL_RenderCopy(screen->renderer, screen->texture, NULL, &screen->rect);
	SDL_RenderPresent(screen->renderer);
}
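One detail worth noting: SDL_UpdateYUVTexture only works on a texture created with a planar YUV pixel format. A sketch of the setup that must have happened earlier (Scrcpy creates its texture along these lines in screen.c; the variable names here are illustrative):

// Streaming texture in a planar YUV format, sized to the video frame.
SDL_Texture *texture = SDL_CreateTexture(renderer,
                                         SDL_PIXELFORMAT_YV12,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         frame_width, frame_height);
if (!texture) {
    SDL_Log("Could not create texture: %s", SDL_GetError());
}

With a YUV texture, the decoded planes can be uploaded directly, with no YUV-to-RGB conversion on the CPU.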

So this function's job is to open the window and render the frame data (the decoded YUV) into it, all driven by the EVENT_NEW_FRAME event. Where does that event come from? From the on_new_frame callback we saw earlier, which corresponds to the sc_video_buffer_on_new_frame function:

// screen.c
static void
sc_video_buffer_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
                             void *userdata) {
	// Emit EVENT_NEW_FRAME through SDL's event mechanism
	static SDL_Event new_frame_event = {
	          .type = EVENT_NEW_FRAME,
	      };
	SDL_PushEvent(&new_frame_event);
}
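A side note: EVENT_NEW_FRAME is not a built-in SDL event type. SDL reserves every value from SDL_USEREVENT upward for the application, and Scrcpy defines its custom events on top of that in events.h. A sketch of the pattern (the exact constant values here are assumptions):

// Application-defined SDL event types (values illustrative).
#define EVENT_NEW_FRAME      (SDL_USEREVENT + 1)
#define EVENT_STREAM_STOPPED (SDL_USEREVENT + 2)

// Producer side, callable from any thread: SDL_PushEvent is thread-safe,
// which is what lets the buffering thread notify the main event loop.
static void notify_new_frame(void) {
    SDL_Event ev = { .type = EVENT_NEW_FRAME };
    SDL_PushEvent(&ev);
}

This is also how the frame pipeline crosses threads safely: the buffering thread only pushes an event, and all rendering happens on the main thread inside event_loop.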

With this, our flow chart can be filled in completely.
[Figure: complete flow chart; demuxer → decoder → video buffer → on_new_frame → EVENT_NEW_FRAME → render]

At this point, the video stream has basically been analyzed; this part of the data travels over video_socket. As mentioned in the previous article, there is also a control_socket, mainly used to transmit control events such as mouse and keyboard input, i.e. the reverse-control feature of screencasting (controlling the phone from the PC). This is also a very important part of the screencasting business, so let's look at it next.

2.4 sc_keyboard_inject_init & sc_mouse_inject_init - Mouse and keyboard events

Since the keyboard and mouse logic are similar overall, we follow the keyboard path here and skip the details of the mouse. The main job of sc_keyboard_inject_init is to register the keyboard callbacks:

// keyboard_inject.c
void
sc_keyboard_inject_init(struct sc_keyboard_inject *ki,
                        struct sc_controller *controller,
                        enum sc_key_inject_mode key_inject_mode,
                        bool forward_key_repeat) {
	 static const struct sc_key_processor_ops ops = {
        .process_key = sc_key_processor_process_key,
        .process_text = sc_key_processor_process_text,
    };

	ki->key_processor.ops = &ops;
}

static void
sc_key_processor_process_key(struct sc_key_processor *kp,
                             const struct sc_key_event *event,
                             uint64_t ack_to_wait) {
	sc_controller_push_msg(ki->controller, &msg)
}

bool
sc_controller_push_msg(struct sc_controller *controller,
                       const struct sc_control_msg *msg) {
    // Enqueue the keyboard event
	cbuf_push(&controller->queue, *msg);
}

You can see that keyboard events are ultimately put into a queue. So where do they come from? From the event_loop of the previous section. SDL automatically captures the keyboard and mouse events received by the window; event_loop just has to listen for them, and eventually the event callback is triggered:

// input_manager.c
void
sc_input_manager_handle_event(struct sc_input_manager *im, SDL_Event *event) {
	switch (event->type) {
		// ...
		case SDL_KEYDOWN:
        case SDL_KEYUP:
            sc_input_manager_process_key(im, &event->key);
            break;
        // ...
	}
}

static void
sc_input_manager_process_key(struct sc_input_manager *im,
                             const SDL_KeyboardEvent *event) {
    // Invoke the process_key callback
	im->kp->ops->process_key(im->kp, &evt, ack_to_wait);
}

So far, the flow of keyboard and mouse events is:
[Figure: keyboard/mouse event flow; SDL window event → event_loop → input manager → key processor → controller queue]

2.5 sc_controller_start - Sending and receiving events

The controller discussed here is mainly responsible for exchanging control events with the phone. Let's see how it works:

bool
sc_controller_start(struct sc_controller *controller) {
	sc_thread_create(&controller->thread, run_controller,
                               "scrcpy-ctl", controller);

	receiver_start(&controller->receiver);
}

bool
receiver_start(struct receiver *receiver) {
	sc_thread_create(&receiver->thread, run_receiver,
                               "scrcpy-receiver", receiver);
}

The sc_controller_start function starts two threads, one for receiving and one for sending:

  • Receiving thread - mainly receives clipboard events from the phone: a copy operation triggered on the phone transfers the data to the PC, which places it on the PC clipboard. We won't go into detail here; interested readers can trace the source themselves.

  • Sending thread - sends events from the PC to the phone. This is our focus; let's look at the core logic of run_controller:

// controller.c
static int
run_controller(void *data) {
	for (;;) {
		// Take an event from the queue
		cbuf_take(&controller->queue, &msg);
		// Process the event
		process_msg(controller, &msg);
	}
}

static bool
process_msg(struct sc_controller *controller,
            const struct sc_control_msg *msg) {
    // Send the event out through control_socket
	net_send_all(controller->control_socket, serialized_msg, length);
}

The main logic of the sending thread is an infinite loop that keeps taking events from the queue and sending them out through control_socket.
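Before net_send_all, process_msg serializes the struct sc_control_msg into a byte buffer (sc_control_msg_serialize in control_msg.c). As an illustration only, an inject-keycode message is roughly a one-byte type tag followed by fixed-width big-endian fields; treat the exact layout in this sketch as an assumption to check against your Scrcpy version:

#include <stddef.h>
#include <stdint.h>

// Assumed serialization of an inject-keycode control message: 1 byte
// type, 1 byte action, then three big-endian 32-bit fields (keycode,
// repeat, metastate). The layout may differ between Scrcpy versions.
enum { MSG_TYPE_INJECT_KEYCODE = 0 };

static void write_be32(uint8_t *buf, uint32_t v) {
    buf[0] = v >> 24; buf[1] = v >> 16; buf[2] = v >> 8; buf[3] = v;
}

static size_t serialize_inject_keycode(uint8_t *buf, uint8_t action,
                                       uint32_t keycode, uint32_t repeat,
                                       uint32_t metastate) {
    buf[0] = MSG_TYPE_INJECT_KEYCODE;
    buf[1] = action;            // e.g. key down / key up
    write_be32(&buf[2], keycode);
    write_be32(&buf[6], repeat);
    write_be32(&buf[10], metastate);
    return 14;                  // number of bytes to hand to net_send_all()
}

The server on the phone reads the same layout back and injects the keycode through the Android input system.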

With this, the keyboard and mouse event flow is complete:
[Figure: completed keyboard/mouse event flow; controller queue → sending thread → control_socket → phone]

2.6 Sequence Diagram

As usual, here is a sequence diagram of the screencasting stage; different colors represent different threads.

[Figure: sequence diagram of the screencasting stage, with different colors for different threads]

3. Summary

In this article, we explored the logic of the Scrcpy client's screencasting stage, covering FFmpeg decoding, SDL window rendering, and keyboard/mouse reverse control.

With this, the client side, covering the connection stage and the screencasting stage, has been fully introduced. In the next article we will explore the server side, i.e. the logic running on the phone. See you there.

Origin: blog.csdn.net/ZivXu/article/details/128932688