Use deepstream python to send statistical data generated by analytics to kafka

Overview

The deepstream-occupancy-analytics project provides a way to send analytics statistics to Kafka, but all of its changes, especially the main program, are written in C. When I wrote this article I could not find any official, systematic instructions or explanations online, only fragmentary questions and answers.

Therefore, as a comprehensive reference, this document takes line-crossing statistics as an example and provides a method for the Python version to send statistical data; it also details which C programs and DeepStream Python bindings need to be modified and compiled. You can refer to the main changes to customize the content and format of the data you want to collect and send.

The changes are not extensive, but for people who have never worked with C they will take some time, so the exploration process is recorded below. Please refer to deepstream_python_nvdsanalytics_to_kafka for the full program.

Someone on the NVIDIA forum replied that the ability to send custom data from DeepStream Python will be provided in a future release.

The main changes are divided into three points:

  1. Append custom fields to the NvDsEventMsgMeta structure, such as lc_curr_straight and lc_cum_straight
  2. Modify the eventmsg_payload program and compile it to generate libnvds_msgconv.so
  3. Make the corresponding change in bindschema.cpp and compile the DeepStream Python bindings

Finally, you only need to add the following code to the python program to send customized statistical data:

# line crossing current count of frame
obj_lc_curr_cnt = user_meta_data.objLCCurrCnt
# line crossing cumulative count
obj_lc_cum_cnt = user_meta_data.objLCCumCnt
msg_meta.lc_curr_straight = obj_lc_curr_cnt["straight"]
msg_meta.lc_cum_straight = obj_lc_cum_cnt["straight"] 

The keys of obj_lc_curr_cnt and obj_lc_cum_cnt are defined in config_nvdsanalytics.txt
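
For context, here is a condensed, illustrative sketch of the probe in which the snippet above sits, adapted from the deepstream-test4 and deepstream-nvdsanalytics sample apps. It assumes the nvdsanalytics config defines a line-crossing entry named straight, and it omits the meta copy/release callbacks that deepstream-test4 registers; treat it as a sketch rather than the repository's actual run.py.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def analytics_probe(pad, info, u_data):
    # illustrative probe; attach it to a pad downstream of the nvdsanalytics element
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type(
                    "NVIDIA.DSANALYTICSFRAME.USER_META"):
                user_meta_data = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                # allocate the event msg meta that nvmsgconv will serialize
                msg_meta = pyds.alloc_nvds_event_msg_meta()
                msg_meta.sensorStr = "device_test"  # reused as deviceId, see Main changes
                # the custom fields added to NvDsEventMsgMeta in this article
                obj_lc_curr_cnt = user_meta_data.objLCCurrCnt
                obj_lc_cum_cnt = user_meta_data.objLCCumCnt
                msg_meta.lc_curr_straight = obj_lc_curr_cnt["straight"]
                msg_meta.lc_cum_straight = obj_lc_cum_cnt["straight"]
                # hand the meta to the pipeline so nvmsgconv/nvmsgbroker pick it up
                user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
                if user_event_meta:
                    user_event_meta.user_meta_data = msg_meta
                    user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
                    # deepstream-test4 also registers pyds.user_copyfunc / user_releasefunc here
                    pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK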

There is a simpler solution: if latency is not critical for your scenario and you do not need to process a large number of video streams at the same time, you can use a Python library such as kafka-python to send the obtained analytics directly, without going through the nvmsgconv and nvmsgbroker plug-ins, as sketched below.
If latency does matter, or large-scale video streams need to be processed, you need to follow the rest of this article to modify the C source code and recompile, because the probe function is blocking and is not suitable for complex processing logic.
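
As a rough sketch of that simpler path, assuming the kafka-python package, a broker at localhost:9092 and the ds-kafka topic used later in this article, the counts read in the probe could be pushed directly:

import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def send_counts(obj_lc_curr_cnt, obj_lc_cum_cnt, device_id="device_test"):
    # send() is asynchronous, so the probe is not blocked waiting on the broker;
    # the "straight" key comes from config_nvdsanalytics.txt
    producer.send("ds-kafka", {
        "deviceId": device_id,
        "lc_curr_straight": obj_lc_curr_cnt["straight"],
        "lc_cum_straight": obj_lc_cum_cnt["straight"],
    })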

Operating environment

  • nvidia-docker2
  • deepstream-6.1

How to run

If you only want to insert a custom message, refer directly to the Main changes section below.

Build the docker image and run

  • Clone the code repository, then in the deepstream_python_nvdsanalytics_to_kafka directory run sh docker/build.sh <image_name> to build the image, e.g.:
    sh docker/build.sh deepstream:6.1-triton-jupyter-python-custom

  • Run the docker image and enter the jupyter environment

    docker run --gpus device=0 -p 8888:8888 -d --shm-size=1g -w /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/mount/ -v ~/deepstream_python_nvdsanalytics_to_kafka/:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/mount deepstream:6.1-triton-jupyter-python-custom
    

    Open http://<host_ip>:8888 in a browser to enter the Jupyter development environment

  • (Optional) On the Kubernetes master node, run sh /docker/ds-jupyter-statefulset.sh to start a DeepStream instance. This assumes that nvidia-device-plugin has already been deployed in the cluster

Run the deepstream python pipeline to push messages

The deepstream python pipeline is located at /pyds_kafka_example/run.py; its main references are deepstream-test4 and deepstream-nvdsanalytics

The main structure of the pipeline is as follows:
[pipeline structure diagram]

  • Before running, you need to modify the value of partition-key in pyds_kafka_example/cfg_kafka.txt and set it to deviceId, so that the nvmsgbroker plug-in uses the deviceId value from the message body as the Kafka partition key.

  • Install java
    apt update && apt install -y openjdk-11-jdk

  • If you don't have a separate Kafka cluster, refer to the following to deploy Kafka inside the DeepStream instance and create a topic

    tar -xzf kafka_2.13-3.2.1.tgz
    cd kafka_2.13-3.2.1
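    # run ZooKeeper and the Kafka broker in separate terminals (or pass -daemon as the first argument)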
    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties
    bin/kafka-topics.sh --create --topic ds-kafka --bootstrap-server localhost:9092
    
  • Enter the pyds_kafka_example directory and run the deepstream python pipeline (the sketch after this list shows how the options below are used), e.g.:

    python3 run.py -i /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-6.1/lib/libnvds_kafka_proto.so --conn-str="localhost;9092;ds-kafka" -s 0 --no-display
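
The sketch below shows how these command-line options presumably map onto the message plug-ins inside run.py; the property names are the standard nvmsgconv/nvmsgbroker ones, and everything else here is illustrative rather than a copy of the repository's code.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# -s 0 presumably selects the full DeepStream JSON schema via nvmsgconv's payload-type
msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
msgconv.set_property("payload-type", 0)

msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
# -p: path to the Kafka protocol adapter library
msgbroker.set_property("proto-lib",
                       "/opt/nvidia/deepstream/deepstream-6.1/lib/libnvds_kafka_proto.so")
# --conn-str: "host;port;topic"
msgbroker.set_property("conn-str", "localhost;9092;ds-kafka")
# cfg_kafka.txt carries the partition-key = deviceId setting mentioned above
msgbroker.set_property("config", "cfg_kafka.txt")
msgbroker.set_property("sync", False)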
    

Consume kafka data

# go to kafka_2.13-3.2.1 directory and run
bin/kafka-console-consumer.sh --topic ds-kafka --from-beginning --bootstrap-server localhost:9092

You should see output like the following:

{
  "messageid" : "34359fe1-fa36-4268-b6fc-a302dbab8be9",
  "@timestamp" : "2022-08-20T09:05:01.695Z",
  "deviceId" : "device_test",
  "analyticsModule" : {
    "id" : "XYZ",
    "description" : "\"Vehicle Detection and License Plate Recognition\"",
    "source" : "OpenALR",
    "version" : "1.0",
    "lc_curr_straight" : 1,
    "lc_cum_straight" : 39
  }
}
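
If you prefer to consume the messages from Python rather than the console consumer, a minimal kafka-python sketch, assuming the same broker and topic, looks like this:

import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "ds-kafka",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    # print the deviceId and the custom line-crossing counters from each message
    module = record.value["analyticsModule"]
    print(record.value["deviceId"], module["lc_curr_straight"], module["lc_cum_straight"])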

Main changes

Add analytics msg meta in the NvDsEventMsgMeta structure

At line 232 of nvdsmeta_schema.h, insert the custom analytics msg meta into the NvDsEventMsgMeta structure

  guint lc_curr_straight;
  guint lc_cum_straight;

Compile libnvds_msgconv.so

  • deepstream_schema

    In the /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv directory, on line 93 of the file nvmsgconv/deepstream_schema/deepstream_schema.h, add the same analytics msg meta definition to the NvDsAnalyticsObject structure

      guint lc_curr_straight;
      guint lc_cum_straight;
    
  • eventmsg_payload

    The most important step in customizing the message body is to add the custom analytics msg meta to the generate_analytics_module_object function at line 186 of nvmsgconv/deepstream_schema/eventmsg_payload.cpp

      // custom analytics data
      // json_object_set_int_member (analyticsObj, <key in the message body>, <value in the message body>);
      json_object_set_int_member (analyticsObj, "lc_curr_straight", meta->lc_curr_straight);
      json_object_set_int_member (analyticsObj, "lc_cum_straight", meta->lc_cum_straight);
    

    In the generate_event_message function at line 536, unneeded objects can be commented out to reduce the size of the sent message.

    // // place object
    // placeObj = generate_place_object (privData, meta);
    
    // // sensor object
    // sensorObj = generate_sensor_object (privData, meta);
    
    // analytics object
    analyticsObj = generate_analytics_module_object (privData, meta);
    
    // // object object
    // objectObj = generate_object_object (privData, meta);
    
    // // event object
    // eventObj = generate_event_object (privData, meta);
    
    // root object
    rootObj = json_object_new ();
    json_object_set_string_member (rootObj, "messageid", msgIdStr);
    // json_object_set_string_member (rootObj, "mdsversion", "1.0");
    json_object_set_string_member (rootObj, "@timestamp", meta->ts);
    
    // use the original param sensorStr in NvDsEventMsgMeta to carry the deviceId generated by the python script
    json_object_set_string_member (rootObj, "deviceId", meta->sensorStr);
    // json_object_set_object_member (rootObj, "place", placeObj);
    // json_object_set_object_member (rootObj, "sensor", sensorObj);
    json_object_set_object_member (rootObj, "analyticsModule", analyticsObj);
    
    // not use these metadata
    // json_object_set_object_member (rootObj, "object", objectObj);
    // json_object_set_object_member (rootObj, "event", eventObj);
    
    // if (meta->videoPath)
    //   json_object_set_string_member (rootObj, "videoPath", meta->videoPath);
    // else
    //   json_object_set_string_member (rootObj, "videoPath", "");
    
  • Recompile the customized libnvds_msgconv.so

    cd /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv \
    && make \
    && cp libnvds_msgconv.so /opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so
    

Compile Python bindings

Before compiling the DeepStream Python bindings, add the corresponding msg definition in <your own path>/deepstream_python_apps/bindings/src/bindschema.cpp

  .def_readwrite("lc_curr_straight", &NvDsEventMsgMeta::lc_curr_straight)
  .def_readwrite("lc_cum_straight", &NvDsEventMsgMeta::lc_cum_straight);

Then compile the DeepStream Python bindings and install them through pip. For more details, please refer to /docker/Dockerfile

Origin blog.csdn.net/weixin_41817841/article/details/126451689