Commonly used APIs of FFmpeg

The role of av_register_all() is to initialize all components; only after this function is called can the muxers, demuxers, and codecs be used. Internally this interface calls:
(1). avcodec_register_all(), whose internal execution steps are:

  • Register hardware accelerators: REGISTER_HWACCEL()

  • Register audio and video encoders: REGISTER_ENCODER()

  • Register audio and video decoders: REGISTER_DECODER()

  • Register encoder/decoder pairs in one step: REGISTER_ENCDEC()

  • Register parsers: REGISTER_PARSER()

(2). Register the muxers and demuxers:

  • Register a muxer: REGISTER_MUXER()

  • Register a demuxer: REGISTER_DEMUXER()

  • Register muxer/demuxer pairs in one step: REGISTER_MUXDEMUX()

av_malloc(), av_free(), av_realloc(): av_malloc() allocates a block of memory whose alignment is suitable for all memory accesses (including the vector instructions available on the CPU); av_free(void *ptr) releases memory that was allocated with av_malloc() or av_realloc(); void *av_realloc(void *ptr, size_t size) resizes the block pointed to by ptr to the size given by size.

FFmpeg defines many data structures; the commonly used ones fall into the following groups:

  • Protocol layer (http, rtmp, mms)

AVIOContext, URLProtocol, and URLContext mainly store the protocol type and state used by the audio/video stream. URLProtocol stores the protocol used by the input; each protocol corresponds to one URLProtocol structure.

  • Demuxing / container formats (flv, avi, rmvb, mp4)

AVFormatContext mainly stores the information contained in the audio/video container. AVInputFormat stores the container format of the input; each container format corresponds to one AVInputFormat structure.

  • Decoding (h264, mpeg2, mp3, aac)

Each AVStream stores the data of one video or audio stream. Each AVStream corresponds to an AVCodecContext, which stores the data relevant to decoding that stream, and each AVCodecContext corresponds to an AVCodec, the structure describing the encoder or decoder used by the stream; each codec corresponds to one AVCodec structure.

  • Data storage

For video, one structure stores one frame, while for audio one structure may store several frames. AVPacket stores the data before decoding (compressed), and AVFrame stores the data after decoding (raw).

Analysis of the Key Structures

  1. AVIOContext: the top-level structure for I/O operations in FFmpeg and the core of avio. FFmpeg supports opening local file paths as well as the URLs of streaming protocols. Its definition is:
typedef struct AVIOContext {
    unsigned char *buffer;  // start address of the buffer
    int buffer_size;        // maximum buffer size that can be read or written
    unsigned char *buf_ptr; // current position in the buffer for the read/write operation
    unsigned char *buf_end; // end of the data in the buffer; may be less than buffer + buffer_size if the read function returned less data than requested
    void *opaque;           // a private pointer passed to the read/write/seek/... functions
    int (*read_packet)(void *opaque, uint8_t *buf, int buf_size);  // function for reading audio/video data
    int (*write_packet)(void *opaque, uint8_t *buf, int buf_size); // function for writing audio/video data
    int64_t (*seek)(void *opaque, int64_t offset, int whence);
    int64_t pos;            // position of the current buffer within the file
    int must_flush;         // true if the next seek should flush
    int eof_reached;        // true if eof (end of file) has been reached
    int write_flag;         // true if open for writing
    int (*read_pause)(void *opaque, int pause); // pause or resume playback of a network streaming protocol
    int64_t (*read_seek)(void *opaque, int stream_index,
                         int64_t timestamp, int flags); // seek to the given timestamp
    int seekable;           // 0 if not seekable; see AVIO_SEEKABLE_XXX for other values
    int64_t maxsize;        // max filesize, used to limit allocations
    int direct;             // whether avio_seek calls the underlying seek directly
    int64_t bytes_read;     // statistics: bytes read
    int seek_count;         // statistics: number of seeks
    int writeout_count;     // statistics: number of writeouts
    int orig_buffer_size;   // original buffer size
    const char *protocol_whitelist; // ','-separated list of allowed protocols
    const char *protocol_blacklist; // ','-separated list of disallowed protocols
    // callback used as a replacement for write_packet
    int (*write_data_type)(void *opaque, uint8_t *buf, int buf_size,
                           enum AVIODataMarkerType type, int64_t time);
} AVIOContext;

① (*read_packet): function for reading audio/video data; it can be user-defined.

② (*write_packet): function for writing audio/video data; it can be user-defined.

③ (*read_pause): pauses or resumes playback of a network streaming protocol; it can be user-defined.

During decoding, the buffer stores the data read by FFmpeg: when a file is opened, the data is first read from disk into the buffer and then sent to the decoder. The opaque field here points to the URLContext.

An AVIOContext object can be initialized with avio_alloc_context(); when it is no longer needed, av_free() must be called to destroy it.

avio_alloc_context(): constructs an AVIOContext object; its prototype is:

AVIOContext *avio_alloc_context(
    unsigned char *buffer,   // buffer address
    int buffer_size,         // buffer size
    int write_flag,          // set to 1 if the buffer should be writable, 0 otherwise
    void *opaque,            // opaque pointer to user-specific data
    int (*read_packet)(void *opaque, uint8_t *buf, int buf_size),  // function for refilling the buffer, may be NULL
    int (*write_packet)(void *opaque, uint8_t *buf, int buf_size), // function for writing the buffer contents, may be NULL
    int64_t (*seek)(void *opaque, int64_t offset, int whence)      // function for seeking to the specified byte position
)

Here opaque points to the URLContext structure, which is defined as follows:

typedef struct URLContext {
	const AVClass *av_class; ///< information for av_log(). Set by url_open().
	// points to the corresponding URLProtocol
	struct URLProtocol *prot;
	int flags;
	int is_streamed;  /**< true if streamed (no seek possible), default = false */
	int max_packet_size;  /**< if non zero, the stream is packetized with this max packet size */
	void *priv_data; // generally points to the context of the specific protocol
	char *filename; /**< specified URL */
	int is_connected;
	AVIOInterruptCB interrupt_callback;
} URLContext;

The URLContext structure in turn contains a URLProtocol structure. Note: each protocol (rtp, rtmp, file, etc.) corresponds to one URLProtocol. This structure is likewise not in the public FFmpeg headers; its instances can be found in the FFmpeg source code:

URLProtocol ff_file_protocol = {
    .name                = "file",
    .url_open            = file_open,
    .url_read            = file_read,
    .url_write           = file_write,
    .url_seek            = file_seek,
    .url_close           = file_close,
    .url_get_file_handle = file_get_handle,
    .url_check           = file_check,
};

URLProtocol ff_rtmp_protocol = {
    .name                = "rtmp",
    .url_open            = rtmp_open,
    .url_read            = rtmp_read,
    .url_write           = rtmp_write,
    .url_close           = rtmp_close,
    .url_read_pause      = rtmp_read_pause,
    .url_read_seek       = rtmp_read_seek,
    .url_get_file_handle = rtmp_get_file_handle,
    .priv_data_size      = sizeof(RTMP),
    .flags               = URL_PROTOCOL_FLAG_NETWORK,
};

URLProtocol ff_udp_protocol = {
    .name                = "udp",
    .url_open            = udp_open,
    .url_read            = udp_read,
    .url_write           = udp_write,
    .url_close           = udp_close,
    .url_get_file_handle = udp_get_file_handle,
    .priv_data_size      = sizeof(UDPContext),
    .flags               = URL_PROTOCOL_FLAG_NETWORK,
};
  2. AVInputFormat: reads the media file and splits it into packets, each containing one or more encoded frames; it stores the container format of the input audio/video. Its definition is:
typedef struct AVInputFormat {
    const char *name;      // short name of the input format
    const char *long_name; // long name of the format (more readable than the short name)
    /**
     * Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS,
     * AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH,
     * AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.
     */
    int flags;
    const char *extensions; // if extensions are defined, format probing is skipped; not recommended, as the feature is not sufficiently supported yet
    const struct AVCodecTag * const *codec_tag; // codec tags, as the name suggests
    const AVClass *priv_class; ///< AVClass for the private context
    const char *mime_type; // mime type; used to check for matching mime types while probing
    /* No fields below this line are part of the public API. They may not be
     * used outside of libavformat and can be changed and removed at will.
     * New public fields should be added above. */
    struct AVInputFormat *next; // links to the next AVInputFormat
    int raw_codec_id;    // raw demuxers store their codec id here
    int priv_data_size;  // size of the private data, used to determine how much memory to allocate for it
    /**
     * Tell whether a given file has a chance of being parsed as this format.
     * The buffer provided is guaranteed to be AVPROBE_PADDING_SIZE bytes big,
     * so there is no need to check unless you need more.
     */
    int (*read_probe)(AVProbeData *);
    /**
     * Read the format header and initialize the AVFormatContext structure.
     * @return 0 on success
     */
    int (*read_header)(struct AVFormatContext *);
    /**
     * Read one packet and store it in 'pkt'. pts and flags are also set.
     * @return 0 on success, < 0 on error.
     *         On error, pkt must not have been allocated or must be freed
     *         before returning.
     */
    int (*read_packet)(struct AVFormatContext *, AVPacket *pkt);
    // Close the stream; the AVFormatContext and AVStreams are not freed by this function.
    int (*read_close)(struct AVFormatContext *);
    /**
     * Seek to a frame near the given timestamp in the stream stream_index.
     * @param stream_index must not be -1
     * @param flags selects which direction to prefer if there is no exact match
     * @return >= 0 on success
     */
    int (*read_seek)(struct AVFormatContext *,
                     int stream_index, int64_t timestamp, int flags);
    // Get the next timestamp of stream[stream_index]; returns AV_NOPTS_VALUE on error
    int64_t (*read_timestamp)(struct AVFormatContext *s, int stream_index,
                              int64_t *pos, int64_t pos_limit);
    // Start or resume playing; only meaningful for network-based formats such as RTSP.
    int (*read_play)(struct AVFormatContext *);
    int (*read_pause)(struct AVFormatContext *); // Pause playing; only meaningful for network-based formats such as RTSP.
    /**
     * Seek to the given timestamp.
     * @param stream_index the stream to seek in
     * @param ts the target timestamp
     * @param min_ts/max_ts the seek interval; ts must lie within it
     */
    int (*read_seek2)(struct AVFormatContext *s, int stream_index, int64_t min_ts, int64_t ts, int64_t max_ts, int flags);
    // Return the device list and its properties
    int (*get_device_list)(struct AVFormatContext *s, struct AVDeviceInfoList *device_list);
    // Initialize the device capabilities submodule
    int (*create_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);
    // Free the device capabilities submodule
    int (*free_device_capabilities)(struct AVFormatContext *s, struct AVDeviceCapabilitiesQuery *caps);
} AVInputFormat;

AVPacket is the structure that stores compressed (encoded) data and related information. Its main fields are:

AVBufferRef *buf        // reference-counted buffer managing the memory referenced by the data pointer
int64_t pts             // presentation timestamp
int64_t dts             // decoding timestamp
uint8_t *data           // pointer to the compressed data
int size                // size of the compressed data
int stream_index        // index of the stream this packet belongs to
int flags               // flags; the lowest bit set to 1 means the data is a keyframe
AVPacketSideData *side_data // additional packet data that the container can provide
int side_data_elems     // number of side data elements
int64_t duration        // duration of this packet
int64_t pos             // byte position in the stream
void *opaque            // for some private data of the user
AVBufferRef *opaque_ref // AVBufferRef for free use by the API user
AVRational time_base    // time base of the packet's timestamps

AVFrame is used to store raw (uncompressed) data, such as YUV or RGB for video and PCM for audio, together with related information; during decoding it also carries data such as the macroblock type table, QP table, and motion vector table. Its main fields are:

uint8_t *data[AV_NUM_DATA_POINTERS]   // raw data after decoding (YUV, RGB, PCM)
int linesize[AV_NUM_DATA_POINTERS]    // size of one row of data in data
uint8_t **extended_data               // pointers to the data planes; for video it points to data[], for audio it may hold more planes
int nb_samples                        // number of audio samples (per channel) contained in one frame
int format                            // format of the raw data after decoding
int key_frame                         // whether this is a keyframe
enum AVPictureType pict_type          // picture type (I/B/P)
AVRational sample_aspect_ratio        // sample aspect ratio
int64_t pts                           // presentation timestamp
int64_t pkt_dts                       // dts copied from the AVPacket that triggered this frame
AVRational time_base                  // time base for the timestamps in this frame
int coded_picture_number              // picture number in coded (bitstream) order
int display_picture_number            // picture number in display order
int quality                           // quality of the frame
void *opaque                          // for some private data of the user
int repeat_pict                       // when decoding, signals how much the picture must be delayed
int interlaced_frame                  // the content of the picture is interlaced
int top_field_first                   // if the content is interlaced, whether the top field is displayed first
int palette_has_changed               // tells the user application that the palette has changed from the previous frame
int64_t reordered_opaque              // reordered opaque 64 bits (generally an integer or a double precision float PTS, but can be anything)
int sample_rate                       // sample rate of the audio data
attribute_deprecated uint64_t channel_layout // channel layout of the audio data
AVBufferRef *buf[AV_NUM_DATA_POINTERS] // AVBuffer references backing the data for this frame
AVBufferRef **extended_buf            // for planar audio requiring more than AV_NUM_DATA_POINTERS AVBufferRef pointers, holds the references that cannot fit into AVFrame.buf
int nb_extended_buf                   // number of elements in extended_buf
AVFrameSideData **side_data           // side data
int nb_side_data                      // number of side data elements
int flags                             // frame flags, a combination of AV_FRAME_FLAGS
enum AVColorRange color_range         // MPEG vs JPEG YUV range
enum AVColorPrimaries color_primaries
enum AVColorTransferCharacteristic color_trc
enum AVColorSpace colorspace          // YUV colorspace type
enum AVChromaLocation chroma_location
int64_t best_effort_timestamp         // frame timestamp estimated using various heuristics, in stream time base
int64_t pkt_pos                       // reordered pos from the last AVPacket that has been input into the decoder
attribute_deprecated int64_t pkt_duration // duration of the corresponding packet, in AVStream->time_base units, 0 if unknown
AVDictionary *metadata                // metadata
int decode_error_flags                // decode error flags, a combination of FF_DECODE_ERROR_xxx flags if the decoder produced a frame despite errors during decoding
attribute_deprecated int channels     // number of audio channels, only used for audio
int pkt_size                          // size of the corresponding packet containing the compressed frame
AVBufferRef *hw_frames_ctx            // for hwaccel-format frames, a reference to the AVHWFramesContext describing the frame
AVBufferRef *opaque_ref               // AVBufferRef for free use by the API user
AVBufferRef *private_ref              // AVBufferRef for internal use by a single libav* library
AVChannelLayout ch_layout             // channel layout of the audio
int64_t duration                      // duration of the frame

avcodec_open2(): initializes the AVCodecContext with the given codec; options holds the settings applied when initializing

int avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options)

avcodec_close(): closes the given AVCodecContext and frees all data associated with it

int avcodec_close(AVCodecContext *avctx)

avcodec_find_decoder(): finds a registered decoder using the given ID:

const AVCodec *avcodec_find_decoder(enum AVCodecID id)
id: the codec ID of the decoder

avcodec_find_decoder_by_name(): returns the AVCodec object for the decoder with the given name

const AVCodec *avcodec_find_decoder_by_name(const char *name)
name: name of the decoder

avcodec_find_encoder(): finds a registered encoder using the given ID:

const AVCodec *avcodec_find_encoder(enum AVCodecID id)
id: the codec ID of the encoder

avcodec_find_encoder_by_name(): returns the AVCodec object for the encoder with the given name

const AVCodec *avcodec_find_encoder_by_name(const char *name)
name: name of the encoder

avcodec_send_packet(): supplies raw (compressed) packet data as input to the decoder:

int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)

avcodec_receive_frame(): extracts a decoded frame from the decoder

int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
avctx: the decoder context
frame: will be set to a reference-counted video or audio frame (depending on the decoder type) allocated by the decoder. Note that the function always calls av_frame_unref(frame) before doing anything else.

avformat_open_input(): opens an input stream and reads its header information; the codecs are not opened. Use avformat_close_input() to close the stream.

int avformat_open_input(AVFormatContext **ps, const char *url, const AVInputFormat *fmt, AVDictionary **options)
ps: the AVFormatContext produced once the call succeeds
url: URL of the audio/video stream to open
fmt: forces a specific AVInputFormat for the AVFormatContext; usually NULL, in which case FFmpeg auto-detects the AVInputFormat
options: additional options; usually NULL

avformat_find_stream_info(): reads audio/video data to obtain information about the streams

int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options)
ic: the input context
options: additional options

av_read_frame(): reads the next packet from the stream (for video one packet holds one frame; for audio it may hold several frames)

int av_read_frame(AVFormatContext *s, AVPacket *pkt)

avformat_close_input(): closes the input stream and frees its context

void avformat_close_input(AVFormatContext **s)
s: the input stream context

Encoding-related APIs: in applications built on the audio/video encoders, avformat_alloc_output_context2() is usually the first function called. Besides the encoding functions themselves, the commonly used file-output functions in FFmpeg are: avformat_write_header(), which writes the file header information; av_write_frame(), which writes the encoded data; and av_write_trailer(), which writes the file trailer information.

avformat_alloc_output_context2(): allocates an AVFormatContext, usually used for output

int avformat_alloc_output_context2(AVFormatContext **ctx, const AVOutputFormat *oformat, const char *format_name, const char *filename)
ctx: the AVFormatContext created once the call succeeds
oformat: the AVOutputFormat that determines the output format
format_name: the name of the output format
filename: the name of the output file

avformat_write_header(): allocates the private data of the streams and writes the stream header to the output media file

int avformat_write_header(AVFormatContext *s, AVDictionary **options)
s: the AVFormatContext used for output
options: additional options

avcodec_send_frame(): supplies an AVFrame of uncompressed video or audio to the encoder:

int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame)

avcodec_receive_packet(): gets an encoded packet from the encoder; on success it returns an AVPacket holding a compressed frame.

int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)

avcodec_send_frame()/avcodec_receive_packet() usage notes:

[1] Raw frames are sent to the encoder in increasing pts order, and the encoder outputs encoded packets in increasing dts order. In fact, the encoder only looks at the input pts, not dts; it simply buffers and encodes the frames it receives as needed.

[2] When avcodec_receive_packet() outputs a packet, it sets packet.dts starting from 0 and incrementing by 1 for each output packet. This is the dts of the video layer; the user needs to convert it into the dts of the container layer.

[3] When avcodec_receive_packet() outputs a packet, packet.pts is copied from the corresponding frame.pts. This is the pts of the video layer; the user needs to convert it into the pts of the container layer.

[4] When avcodec_send_frame() is given a NULL frame, the encoder enters flush mode.

[5] The first avcodec_send_frame(NULL) returns success; subsequent NULL sends return AVERROR_EOF.

[6] Sending NULL multiple times does not discard the contents of the encoder's buffer; avcodec_flush_buffers() discards the buffered content of the encoder immediately. Therefore, after the input is exhausted, you need to call avcodec_send_frame(NULL) and drain the remaining packets.

av_write_frame()/av_interleaved_write_frame(): used to output one packet of media data

int av_write_frame(AVFormatContext *s, AVPacket *pkt)
s: the AVFormatContext used for output
pkt: the AVPacket to output

av_write_trailer(): used to write the file trailer

int av_write_trailer(AVFormatContext *s)
  3. Image processing APIs

libswscale: the library for processing image pixel data; it can perform image pixel-format conversion and image scaling/stretching.

sws_getContext(): allocates and returns a SwsContext object; its prototype is:

struct SwsContext *sws_getContext(
    int srcW,                      // width of the source image
    int srcH,                      // height of the source image
    enum AVPixelFormat srcFormat,  // format of the source image (RGB, BGR, YUV, ...)
    int dstW,                      // width of the destination image
    int dstH,                      // height of the destination image
    enum AVPixelFormat dstFormat,  // format of the destination image
    int flags,                     // scaling algorithm to use
    SwsFilter *srcFilter,          // filter applied to the input
    SwsFilter *dstFilter,          // filter applied to the output
    const double *param            // extra parameters to tune the scaler
)

sws_scale(): processes image data; mainly used to convert the pixel format and resolution of video frames

int sws_scale(
    struct SwsContext *c,            // scaling context created with sws_getContext()
    const uint8_t *const srcSlice[], // array of pointers to the planes of the source slice
    const int srcStride[],           // array of strides for each plane of the source image
    int srcSliceY,                   // position in the source image of the slice to process
    int srcSliceH,                   // height of the source slice (number of rows)
    uint8_t *const dst[],            // array of pointers to the planes of the destination image
    const int dstStride[]            // array of strides for each plane of the destination image
)

**sws_freeContext():** releases a SwsContext

Resampling APIs (libswresample) can perform: sample-rate conversion, channel-layout conversion, and sample-format conversion. The typical workflow is:


 1. Create the context: the resampling context is the SwrContext structure
 2. Set the parameters: the conversion parameters are set on the SwrContext
 3. Initialize the SwrContext: swr_init()
 4. Allocate memory for the sample data, using helpers such as av_samples_alloc_array_and_samples() and av_samples_alloc()
 5. Perform the resampling with swr_convert()
 6. Release the related resources once resampling is done

swr_alloc(): creates a SwrContext object
av_opt_set_*(): sets the input and output audio parameters
swr_init(): initializes the SwrContext
av_samples_alloc_array_and_samples(): allocates memory of the appropriate size for the audio format
av_samples_alloc(): allocates memory of the appropriate size for the audio format; used to adjust the output memory size during conversion
swr_convert(): performs the resampling conversion

swr_alloc(): allocates a SwrContext object

struct SwrContext *swr_alloc(void)

int swr_alloc_set_opts2(
    struct SwrContext **ps,             // points to an existing Swr context if available, or to NULL if not
    AVChannelLayout *out_ch_layout,     // output channel layout
    enum AVSampleFormat out_sample_fmt, // output sample format
    int out_sample_rate,                // output sample rate
    AVChannelLayout *in_ch_layout,      // input channel layout
    enum AVSampleFormat in_sample_fmt,  // input sample format
    int in_sample_rate,                 // input sample rate
    int log_offset,                     // logging level offset
    void *log_ctx                       // parent logging context
)

swr_init(): after the parameters are set, swr_init() must be called to initialize the SwrContext object

int swr_init(struct SwrContext *s)

av_samples_alloc_array_and_samples(): allocates an array of pointers together with the sample buffers themselves

av_find_input_format(): finds the registered AVInputFormat (demuxer) with the given short name

const AVInputFormat *av_find_input_format(const char *short_name)


Origin blog.csdn.net/qq_44632658/article/details/131737947