DeepStream study notes (4): integrating the tracker module and solving RTSP stream exceptions

Introduction

This article supplements the previous one with the remaining tracker module, to make the whole pipeline more complete. In addition, it summarizes some solutions, in GStreamer and other tools, for RTSP stream exceptions.

Introduction to Gst-nvtracker

The Gst-nvtracker plugin lets DeepStream pipelines use an underlying tracker to track detected objects with unique IDs. It supports any low-level library that implements the NvDsTracker API, including the reference implementations: NvDCF, KLT, and IOU (with DeepSORT added in 6.x). Through this API, the plugin queries the low-level library for its capabilities and requirements regarding input format and memory type, and, based on the query results, converts the input frame buffers into the format the library requests. For example, the KLT tracker uses a Luma-only format, NvDCF and DeepSORT use NV12 or RGBA, and IOU requires no buffer at all. Specifically, the four tracker libraries support different tracking algorithms:

  • The KLT tracker uses a CPU-based implementation of the Kanade-Lucas-Tomasi (KLT) tracker algorithm. This library requires no configuration file.
  • The IOU tracker uses the IOU values of detector bounding boxes across two consecutive frames to associate objects between frames or assign a new ID (see the short sketch after this list). This library accepts an optional configuration file.
  • The NvDCF tracker uses an online discriminative learning algorithm based on correlation filters as the visual object tracker, plus a data-association algorithm for multi-object tracking. This library accepts an optional configuration file.
  • DeepSORT: a reimplementation of the official DeepSORT tracker, which uses deep cosine metric learning with a Re-ID neural network. This implementation lets users plug in any Re-ID network, as long as NVIDIA's TensorRT™ framework supports it.
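
To make the IOU association concrete, here is a minimal, self-contained sketch of the IOU computation such a tracker matches on; this is only an illustration, not the plugin's actual code:

def iou(box_a, box_b):
    # Boxes are (left, top, width, height), as in tracker bbox metadata.
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    # Overlap rectangle between the two boxes
    iw = max(0.0, min(ax2, bx2) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(ay2, by2) - max(box_a[1], box_b[1]))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

# Two boxes from consecutive frames: IOU close to 1 suggests the same object.
print(iou((100, 100, 50, 80), (104, 102, 50, 80)))  # high overlap -> keep same ID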

The comparison and trade-offs of these four trackers are as follows:

| Tracker | GPU compute | CPU compute | Advantages | Shortcomings | Best use cases |
| --- | --- | --- | --- | --- | --- |
| IOU | None | Very low | Lightweight | No visual features for matching, so prone to frequent tracker ID switches and failures; not suitable for fast-moving scenes | Objects are sparsely located and vary in size; the detector is expected to run every frame or very frequently (e.g. every other frame) |
| KLT | None | High | Works fairly well for simple scenes | High CPU utilization; vulnerable to changes in visual appearance caused by noise and perturbations such as shadows, non-rigid deformations, out-of-plane rotations, and partial occlusions; cannot handle low-texture objects | Objects with strong textures and simple backgrounds; ideally, high CPU resource availability |
| NvDCF | Medium | Low | Highly robust to partial occlusions, shadows, and other transient visual changes; infrequent ID switches; can be used with PGIE interval > 0 without significant accuracy loss; parameters are easily tuned to trade accuracy against performance | Slower than IOU due to the added computational cost of visual feature extraction | Multi-object, complex scenes, even with partial occlusion; PGIE interval > 0 |
| DeepSORT | High | Low | Allows a custom Re-ID model for visual appearance matching; highly discriminative, depending on the Re-ID model used | Higher computational cost, since inference is required for every object; can only track while detector bboxes are available; accuracy/performance cannot be easily tuned without switching the Re-ID model | Same as NvDCF (except that PGIE interval = 0 is preferred) |

The content above comes from the DeepStream SDK 6.1 and 5.1 documentation; the 6.1 documentation removes the KLT tracker and adds the DeepSORT description. The tracker dynamic libraries changed even more. The following .so files are included under /opt/nvidia/deepstream/deepstream-5.1/lib in DeepStream 5.1:

libnvds_tracker.so
libnvds_mot_iou.so
libnvds_mot_klt.so
libnvds_nvdcf.so

From the names it is clear which algorithm each belongs to, while in the same directory of DeepStream 6.1 only a single .so file ships:

libnvds_nvmultiobjecttracker.so

I only noticed this because two Docker images were installed on the same machine and the environment kept being switched back and forth. As for why it was switched back and forth, that is a sad story...

Starting from DeepStream 6.0, NVIDIA has unified the three tracker algorithms (IOU, NvDCF, and DeepSORT) under one architecture, which supports multi-stream, multi-object tracking in batch mode and runs efficiently on both CPU and GPU.

I won't go into the internals of libnvds_nvmultiobjecttracker.so here. If you are interested, see the workflow and core modules section of the NvMultiObjectTracker library under Gst-nvtracker to check the combinations it supports and the shared-module data-association table. As far as target tracking goes, the test flow below (based on the deepstream-test2 sample from deepstream-python-apps) only uses the tracking module, and the configuration file is still based on the NvDCF yaml.

I feel that the Python samples do not cover several of the other cases; if you want to dig deeper into the principles, you still have to read the C-side deepstream-app, whose directory contains a very comprehensive set of configuration files:

root@$$:/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app# ls | grep config_tracker
config_tracker_DeepSORT.yml
config_tracker_IOU.yml
config_tracker_NvDCF_accuracy.yml
config_tracker_NvDCF_max_perf.yml
config_tracker_NvDCF_perf.yml

So much for the introduction of the tracker plugin; the integration part starts below.

Gst-nvtracker integration process

Here I use deepstream-imagedata-multistream from the previous article as a template and wire in the tracker from test2. It is actually very simple: configure the tracker element's parameters, add it after the infer plugin, and link everything up. Let's first look at the parameter configuration.

Gst tracker parameters

According to NVIDIA's official documentation, the parameter table of the tracking module is as follows:

| Property | Meaning | Type and range | Example notes |
| --- | --- | --- | --- |
| tracker-width | Frame width, in pixels, at which the tracker operates | Integer, 0 to 4,294,967,295 | tracker-width=640 (a multiple of 32) |
| tracker-height | Frame height, in pixels, at which the tracker operates | Integer, 0 to 4,294,967,295 | tracker-height=384 (a multiple of 32) |
| ll-lib-file | Pathname of the low-level tracker library to load | String | ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so |
| ll-config-file | Configuration file for the low-level library (not required) | Path to a configuration file | ll-config-file=config_tracker_NvDCF_perf.yml |
| gpu-id | ID of the GPU on which device/unified memory is allocated and buffer copy/scaling is done (dGPU only) | Integer, 0 to 4,294,967,295 | gpu-id=0 |
| enable-batch-process | Enable/disable batch mode; only effective if the low-level library supports both batch and per-stream processing (optional; default 1) | Boolean | enable-batch-process=1 |
| enable-past-frame | Enable/disable reporting of past-frame data; only effective if the low-level library supports it (optional; default 0) | Boolean | enable-past-frame=1 |
| tracking-surface-type | Surface stream type to track (default 0) | Integer, ≥ 0 | tracking-surface-type=0 |
| display-tracking-id | Enable tracking-ID display on the OSD | Boolean | display-tracking-id=1 |
| compute-hw | Compute engine for scaling: 0 - default, 1 - GPU, 2 - VIC (Jetson only) | Integer, 0 to 2 | compute-hw=1 |
| tracking-id-reset-mode | Allows forcing a tracking-ID reset based on pipeline events. Once enabled, when such an event occurs the lower 32 bits of the tracking ID are reset to 0. 0: do not reset IDs on stream reset or EOS; 1: terminate all existing trackers and assign new IDs when a stream reset occurs (GST_NVEVENT_STREAM_RESET); 2: let tracking IDs start from 0 after an EOS event (GST_NVEVENT_STREAM_EOS) (note: only the lower 32 bits start from 0); 3: enable both 1 and 2 | Integer, 0 to 3 | tracking-id-reset-mode=0 |

The descriptions above have been adjusted somewhat according to my own understanding and translation. With a rough idea of the parameters, we can take the config_infer_primary_yoloV5.txt file from the previous article and add a [tracker] group:

[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-lib-file = /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
# ll-config-file=config_tracker_NvDCF_perf.yml
enable-past-frame=1
enable-batch-process=1
display-tracking-id=1

Note that the official sample does not set the final display-tracking-id parameter, and enable-past-frame is off. The former is required here; the latter seems of little use to me at the moment, maybe it improves results, but when I ran it I found it wasn't actually enabled anyway. The main problem was that I spent a long time debugging with no output even though the tracker loaded successfully; it turned out I was simply missing that one parameter... Set width and height to whatever you want, preferably multiples of 32. Everything else can be left at its default or omitted.

Tracker code integration

First, in the main function, initialize a tracker Element:

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

Then load the configuration we just wrote into the txt file:

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)
        if key == 'display-tracking-id' :
            tracker_tracking_id = config.getint('tracker', key)
            tracker.set_property('display_tracking_id', tracker_tracking_id )
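
Incidentally, the chain of if-branches above can be collapsed. Below is a more compact sketch with the same behavior (assuming the same key names as in the [tracker] group above), relying on the fact that the config keys double as the property names:

    # Keys holding integers are read with getint(); everything else as a string.
    int_keys = {'tracker-width', 'tracker-height', 'gpu-id',
                'enable-batch-process', 'enable-past-frame', 'display-tracking-id'}
    for key in config['tracker']:
        if key in int_keys:
            tracker.set_property(key, config.getint('tracker', key))
        else:
            tracker.set_property(key, config.get('tracker', key))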

With this element made, it can be added to the pipeline and linked up:

# add
pipeline.add(tracker)


# link
pgie.link(tracker)
tracker.link(nvvidconv)

The inference module originally linked straight to the image converter; now the tracker module is inserted in between, and everything else stays the same. At this point the main function is done, and we can obtain the tracker info from the probe's callback. The full data-retrieval code is:

# past tracking meta data
if(past_tracking_meta[0]==1):
	l_user=batch_meta.batch_user_meta_list
	while l_user is not None:
		try:
			# Note that l_user.data needs a cast to pyds.NvDsUserMeta
			# The casting is done by pyds.NvDsUserMeta.cast()
			# The casting also keeps ownership of the underlying memory
			# in the C code, so the Python garbage collector will leave
			# it alone
			user_meta=pyds.NvDsUserMeta.cast(l_user.data)
		except StopIteration:
			break
		if(user_meta and user_meta.base_meta.meta_type==pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META):
			try:
				# Note that user_meta.user_meta_data needs a cast to pyds.NvDsPastFrameObjBatch
				# The casting is done by pyds.NvDsPastFrameObjBatch.cast()
				# The casting also keeps ownership of the underlying memory
				# in the C code, so the Python garbage collector will leave
				# it alone
				pPastFrameObjBatch = pyds.NvDsPastFrameObjBatch.cast(user_meta.user_meta_data)
			except StopIteration:
				break
			for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
				print("streamId=",trackobj.streamID)
				print("surfaceStreamID=",trackobj.surfaceStreamID)
				for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
					print("numobj=",pastframeobj.numObj)
					print("uniqueId=",pastframeobj.uniqueId)
					print("classId=",pastframeobj.classId)
					print("objLabel=",pastframeobj.objLabel)
					for objlist in pyds.NvDsPastFrameObjList.list(pastframeobj):
						print('frameNum:', objlist.frameNum)
						print('tBbox.left:', objlist.tBbox.left)
						print('tBbox.width:', objlist.tBbox.width)
						print('tBbox.top:', objlist.tBbox.top)
						print('tBbox.height:', objlist.tBbox.height)
						print('confidence:', objlist.confidence)
						print('age:', objlist.age)
		try:
			l_user=l_user.next
		except StopIteration:
			break

This code is basically the same as the infer part, and it sits right after the infer data retrieval, sharing batch_meta with infer; batch_meta is all the info DeepStream pulls from the hash buffer. Effectively all that is added is an ID, so the earlier inference prints can be commented out. The past_tracking_meta check on the first line can simply be given 0: although I load this setting when building the element, the parameter is really just a switch whose 0-or-1 value the program reads from user input at startup. I usually give it 0 and skip the check; I haven't gone to production yet, and so far it doesn't seem very useful.
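
Note that the per-frame tracking ID itself does not require past-frame metadata: after nvtracker, each object's metadata carries object_id. A minimal sketch of reading it inside the same probe, using the usual frame/object loop and casting pattern, might look like this:

l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    try:
        # Cast as with the other metadata; ownership stays in the C code
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    except StopIteration:
        break
    # object_id is the unique tracking ID assigned by nvtracker
    print("class_id =", obj_meta.class_id, " object_id =", obj_meta.object_id)
    try:
        l_obj = l_obj.next
    except StopIteration:
        break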

With that, the conversion of the deepstream-imagedata-multistream sample is complete; we can rerun the whole demo and generate the pipeline graph again.
In addition, one more phenomenon: with the tracker added, the following log output appears:

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Optional NvMOT_ProcessPast not implemented
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF

These lines are printed while the model loads; they just mean some optional features are not switched on, which at most affects accuracy and matters little for testing.

With that covered, one more topic I want to address is RTSP streams. For in-depth tracker parameter tuning, see the official guide:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_NvMultiObjectTracker_parameter_tuning_guide.html

RTSP stream exceptions

This problem is a bug I hit when running RTSP streams for a long time, or rather when I tested how GStreamer handles a dropped stream. It set me thinking; but, just like that meme, after 30 minutes the thinking collapsed into "as long as it works", emmm.

This problem does not exist in the C source, because C's bus_callback function runs to roughly two or three hundred lines and handles just about every case, large and small. The Python version, in every release, is defined in common as:

import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import Gst
def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t==Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True

What this code shows: apart from warnings, the program quits as soon as it hits EOS or an ERROR (EOS can be understood as the marker for the end of a media stream, i.e. End Of Stream). I tried commenting out the warning and EOS branches, but then the whole program fell into a zombie state: the pipeline could no longer process anything and FPS dropped to 0. I did some digging on this problem. The best approach, of course, would be to understand the C-side solution, but even after reading it I had no way forward, because the APIs are not in sync: C has something called reset_pipeline_xxx (I think that was the name) and I could not find anything similar in Python, and there are many cases like this. So I ended up with the following simple solutions:

First, we can start from the pipeline. If a problem occurs somewhere like the input source, rather than an internal element reporting an error, we can use the pipeline's state machine: stop the pipeline, run our custom handling, then start it again:

pipeline.set_state(Gst.State.NULL)
# do your stuff: for example, change some elements, remove some elements, etc.
pipeline.set_state(Gst.State.PLAYING)
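
As a minimal sketch of that idea (my own code, not from any sample; the retry_interval value is an assumption), the bus handler can schedule restart attempts instead of quitting:

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

retry_interval = 5  # seconds between restart attempts (assumed value)

def try_restart(pipeline):
    # Tear down, then try to bring the pipeline back up.
    pipeline.set_state(Gst.State.NULL)
    ret = pipeline.set_state(Gst.State.PLAYING)
    # Returning True keeps the GLib timer alive (retry again later);
    # returning False stops it once the restart succeeded.
    return ret == Gst.StateChangeReturn.FAILURE

def bus_call(bus, message, loop, pipeline):
    if message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        # Instead of loop.quit(): schedule restart attempts on the main loop.
        GLib.timeout_add_seconds(retry_interval, try_restart, pipeline)
    return True

# usage: bus.connect("message", bus_call, loop, pipeline)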

A fuller version of this pattern is the StackOverflow solution to "Sink restart on failure without stopping the pipeline":

def event_probe2(pad, info, *args):
    Gst.Pad.remove_probe(pad, info.id)
    tee.link(opusenc1)
    opusenc1.set_state(Gst.State.PLAYING)
    oggmux1.set_state(Gst.State.PLAYING)
    queue1.set_state(Gst.State.PLAYING)
    shout2send.set_state(Gst.State.PLAYING)
    return Gst.PadProbeReturn.OK

def reconnect():
    pad = tee.get_static_pad('src_1')
    pad.add_probe(Gst.PadProbeType.BLOCK_DOWNSTREAM, event_probe2, None)

def event_probe(pad, info, *args):
    Gst.Pad.remove_probe(pad, info.id)
    tee.unlink(opusenc1)
    opusenc1.set_state(Gst.State.NULL)
    oggmux1.set_state(Gst.State.NULL)
    queue1.set_state(Gst.State.NULL)
    shout2send.set_state(Gst.State.NULL)
    GLib.timeout_add_seconds(interval, reconnect)
    return Gst.PadProbeReturn.OK

def message_handler(bus, message):
    if message.type == Gst.MessageType.ERROR:
        if message.src == shout2send:
            pad = tee.get_static_pad('src_1')
            pad.add_probe(Gst.PadProbeType.BLOCK_DOWNSTREAM, event_probe, None)
        else:
            print(message.parse_error())
            pipeline.set_state(Gst.State.NULL)
            exit(1)
    else:
        print(message.type)

That covers recovery when an error occurs. For EOS, a similar solution I found is "Restarting/Reconnecting RTSP source on EOS".

Part of its code is:

msg_type = msg.type
if msg_type == Gst.MessageType.EOS:
    ret = self.pipeline.set_state(Gst.State.PAUSED)
    self.loop.quit()
    Gst.debug_bin_to_dot_file(self.pipeline, Gst.DebugGraphDetails.ALL, "EOS")
    print("Setting Pipeline to Paused State")
    time.sleep(10)
    print("Trying to set back to playing state")
    if ret == Gst.StateChangeReturn.SUCCESS or ret == Gst.StateChangeReturn.NO_PREROLL:
        flush_start = self.pipeline.send_event(Gst.Event.new_flush_start())
        print("Managed to Flush Start: ", flush_start)
        flush_stop = self.pipeline.send_event(Gst.Event.new_flush_stop(True))
        print("Managed to Flush Stop: ", flush_stop)
        i = 0
        uri = configFile['source%u' % int(i)]['uri']
        padname = "sink_%u" % int(i)
        removed_state = self.remove_source_bin()
        if all(element == 1 for element in removed_state):
            self.nbin = self.create_source_bin(i, uri)
            added_state = self.pipeline.add(self.nbin)
            print("Added state: ", added_state)
            self.streammux_sinkpad = self.streammux.get_request_pad(padname)
            if not self.streammux_sinkpad:
                sys.stderr.write("Unable to create sink pad bin \n")
                print("Pad name: ", padname)
            self.srcpad = self.nbin.get_static_pad("src")
            self.srcpad.link(self.streammux_sinkpad)
            Gst.debug_bin_to_dot_file(self.pipeline, Gst.DebugGraphDetails.ALL, "Resetting_Source")

            self.bus = self.pipeline.get_bus()
            self.bus.add_signal_watch()
            self.bus.connect("message", self.bus_call, self.loop)

            self.pipeline.set_state(Gst.State.PLAYING)

            self.nbin.set_state(Gst.State.PLAYING)
            nbin_check = self.nbin.get_state(Gst.CLOCK_TIME_NONE)[0]
            if nbin_check == Gst.StateChangeReturn.SUCCESS or nbin_check == Gst.StateChangeReturn.NO_PREROLL:  
                self.uri_decode_bin.set_state(Gst.State.PLAYING)
                uridecodebin_check = self.uri_decode_bin.get_state(Gst.CLOCK_TIME_NONE)[0]
                if uridecodebin_check == Gst.StateChangeReturn.SUCCESS or uridecodebin_check == Gst.StateChangeReturn.NO_PREROLL: 
                    self.streammux.set_state(Gst.State.PLAYING)
                    streammux_check = self.streammux.get_state(Gst.CLOCK_TIME_NONE)[0]
                    if streammux_check == Gst.StateChangeReturn.SUCCESS or streammux_check == Gst.StateChangeReturn.NO_PREROLL:  
                        self.pipeline.set_state(Gst.State.PLAYING)
                        pipeline_check = self.pipeline.get_state(Gst.CLOCK_TIME_NONE)[0]
                        if pipeline_check == Gst.StateChangeReturn.SUCCESS or pipeline_check == Gst.StateChangeReturn.NO_PREROLL:  
                            print("We did it boys!")
                            Gst.debug_bin_to_dot_file(self.pipeline, Gst.DebugGraphDetails.ALL, "Trying_Playing")
                        else:
                            print("pipeline failed us")
                    else:
                        print("streammux failed us")
                else:
                    print("uridecodebin failed us")
            else:
                print("nbin failed us")

            self.loop.run()

This strikes me as a very useful reference, although I have not handled the EOS case myself yet, since my inputs are RTSP streams. The solution I later wrote is similar to the above, mainly following NVIDIA's deepstream_rt_src_add_del sample. The source-release function referenced there is:

def stop_release_source(source_id):
    global g_num_sources
    global g_source_bin_list
    global streammux
    global pipeline

    #Attempt to change status of source to be released 
    state_return = g_source_bin_list[source_id].set_state(Gst.State.NULL)

    if state_return == Gst.StateChangeReturn.SUCCESS:
        print("STATE CHANGE SUCCESS\n")
        pad_name = "sink_%u" % source_id
        print(pad_name)
        #Retrieve sink pad to be released
        sinkpad = streammux.get_static_pad(pad_name)
        #Send flush stop event to the sink pad, then release from the streammux
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
        print("STATE CHANGE SUCCESS\n")
        #Remove the source bin from the pipeline
        pipeline.remove(g_source_bin_list[source_id])
        source_id -= 1
        g_num_sources -= 1

    elif state_return == Gst.StateChangeReturn.FAILURE:
        print("STATE CHANGE FAILURE\n")
    
    elif state_return == Gst.StateChangeReturn.ASYNC:
        state_return = g_source_bin_list[source_id].get_state(Gst.CLOCK_TIME_NONE)
        pad_name = "sink_%u" % source_id
        print(pad_name)
        sinkpad = streammux.get_static_pad(pad_name)
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
        print("STATE CHANGE ASYNC\n")
        pipeline.remove(g_source_bin_list[source_id])
        source_id -= 1
        g_num_sources -= 1

def delete_sources(data):
    global loop
    global g_num_sources
    global g_eos_list
    global g_source_enabled

    #First delete sources that have reached end of stream
    for source_id in range(MAX_NUM_SOURCES):
        if (g_eos_list[source_id] and g_source_enabled[source_id]):
            g_source_enabled[source_id] = False
            stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        loop.quit()
        print("All sources stopped quitting")
        return False

    #Randomly choose an enabled source to delete
    source_id = random.randrange(0, MAX_NUM_SOURCES)
    while (not g_source_enabled[source_id]):
        source_id = random.randrange(0, MAX_NUM_SOURCES)
    #Disable the source
    g_source_enabled[source_id] = False
    #Release the source
    print("Calling Stop %d " % source_id)
    stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        loop.quit()
        print("All sources stopped quitting")
        return False

    return True

And the exception handling becomes:

    elif t == Gst.MessageType.ELEMENT:
        struct = message.get_structure()
        #Check for stream-eos message
        if struct is not None and struct.has_name("stream-eos"):
            parsed, stream_id = struct.get_uint("stream-id")
            if parsed:
                #Set eos status of stream to True, to be deleted in delete-sources
                print("Got EOS from stream %d" % stream_id)
                g_eos_list[stream_id] = True
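
In the sample, delete_sources is driven off a GLib timer, so streams flagged as EOS here get released on the next tick. Roughly (the 10-second period and the argument follow my recollection of the sample and may differ):

# Check for and release EOS-flagged sources every 10 seconds;
# delete_sources returning False stops the timer.
GLib.timeout_add_seconds(10, delete_sources, g_source_bin_list)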

I adapted this into what my own business needs, but I have found there is still a bug: if an RTSP stream drops mid-run, my whole pipeline stalls for about 5 seconds, and it only recovers once the dead stream is fully released. I am not sure where the problem lies; it needs more investigation, and I will add an explanation here once it is solved. For now I will skip it; if any expert knows the answer or has good material, please teach me in the comments or message me, I would be very grateful.

Finally, what if the stream is already broken at startup? There are many solutions out there, but I did not find a particularly suitable one, so I wrote my own: use ffprobe to filter out the streams that cannot be connected. The ffprobe function is as follows:

def get_rtsp_format(self, strFileName):
    # requires: import os, json
    # -v quiet suppresses logging; -print_format json makes the output parseable
    strCmd = 'ffprobe -v quiet -print_format json -show_format -show_streams -i "{0}"'.format(strFileName)
    mystring = os.popen(strCmd).read()
    result = json.loads(mystring)
    return result["format"]
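
When the stream is unreachable, ffprobe prints nothing and json.loads raises an exception, so the call is best wrapped. A small usage sketch (the wrapper name is mine):

def stream_is_alive(checker, url):
    # checker is whatever object carries get_rtsp_format()
    try:
        fmt = checker.get_rtsp_format(url)
        return bool(fmt)
    except (ValueError, KeyError):
        # ValueError covers json.JSONDecodeError on empty/garbled output
        return False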

Of course, there are other methods: the above could just as well be done with the ffmpeg or GStreamer command line. I simply wanted to analyze the RTSP stream with a probing tool before using it; the easiest approach is to directly "ping" the stream resource and use that as the gate condition.
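
The "ping" idea can be as simple as attempting a TCP connection to the camera's RTSP port before adding the source. A minimal sketch (the 554 default port and the timeout are assumptions):

import socket
from urllib.parse import urlparse

def rtsp_reachable(url, timeout=3.0):
    # Cheap pre-check: can we even open a TCP connection to the RTSP host/port?
    parsed = urlparse(url)
    port = parsed.port or 554  # 554 is the default RTSP port
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False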

That's all for this note.
