Smart construction site detection system based on YOLOv5 and PyQt5

1. Background

        The smart construction site inspection system uses computer vision and artificial intelligence for intelligent monitoring and management. Built on the YOLOv5 object detection algorithm and the PyQt5 graphical user interface library, it monitors and identifies people, objects and other targets in construction site scenes in real time, and provides the corresponding alarm and management functions.

        In traditional construction site management, a large amount of manpower is needed for inspection and monitoring to keep the site safe and orderly. This approach has several problems: the monitoring range is limited, efficiency is low, and violations are easily missed. To raise the intelligence and efficiency of construction site monitoring, smart construction site inspection systems based on computer vision and artificial intelligence have emerged.

        YOLOv5 is a deep-learning-based object detection algorithm that can efficiently detect multiple targets in an image and return their locations and categories. Compared with traditional detection algorithms, it offers higher accuracy and better real-time performance.

        PyQt5 is a Python GUI toolkit that provides a rich set of interface components and functions for building intuitive, user-friendly interfaces. In the smart construction site detection system, PyQt5 makes it easy to construct the user interface, so users can view the monitoring results directly and configure and manage the system.

        The smart construction site detection system based on YOLOv5 + PyQt5 captures a real-time video stream of the construction site with a camera and analyzes each frame with the detection model. The system can check whether workers are wearing safety helmets, reflective vests and other required protective equipment. Once a violation is detected, it raises an alarm in time so that the relevant personnel can take measures before a safety incident occurs.

 2. Challenges and motivations 

        The smart construction site detection system based on YOLOv5 and PyQt5 provides real-time object detection and monitoring for construction site safety management and accident prevention. Building it, however, involves several challenges, and it is driven by several clear motivations.

(1) Challenges:

1. Dataset acquisition: Building an efficient smart construction site detection system requires a large amount of labeled image data to train the target detection model. However, due to the diversity and complexity of construction sites, acquiring and labeling large-scale construction site image data can be a daunting task.

2. Complex scenes: Construction sites, mines and similar environments are complex and constantly changing, often crowded with workers, and subject to occlusion, lighting changes and background clutter, all of which challenge the accuracy and stability of the detection algorithm.

3. Real-time requirements: The smart construction site detection system needs to monitor the safety problems in the construction site in real time and respond in time. Therefore, algorithms and systems need to have efficient real-time processing capabilities while ensuring accuracy.

4. Hardware and resource limitations: Real-time detection and monitoring demand sufficient computing resources and suitable hardware. Deploying the system across a large construction site also runs into cost and technical constraints.

5. System integration: Integrating YOLOv5 and PyQt5 into a complete smart construction site inspection system may require in-depth development and debugging work. This involves combining object detection algorithms with interface design and user interaction to ensure system stability and functional integrity.

(2) Motivations:

1. Construction site safety management: The smart construction site detection system can monitor potential safety hazards, such as workers not wearing safety helmets or intruding into dangerous areas, and issue an alarm in time or trigger preventive measures, reducing the likelihood of accidents and improving the safety and management level of the site.

2. Improve work efficiency: Traditional construction site inspections require a lot of human resources and time, while a smart inspection system detects and monitors targets automatically, reducing the workload of manual inspections. The system provides real-time feedback on site status and safety conditions, helping managers find problems promptly and take the corresponding measures, thereby improving work efficiency and management effectiveness.

3. Data analysis and statistics: The smart construction site detection system can collect and analyze data in the construction site, such as personnel density, operation activities, safety violations, etc., and provide valuable information and statistical reports for site managers to optimize site management and decision making.

4. Remote monitoring and management: The smart construction site detection system can realize remote monitoring and management functions, enabling construction site managers to remotely access and control the system through the Internet. In this way, they can monitor the status of the construction site anytime and anywhere, keep abreast of the construction site situation, and carry out necessary management and scheduling, improving the real-time monitoring and management capabilities of the construction site.

        To sum up, the smart construction site inspection system based on YOLOv5 and PyQt5 must meet requirements for real-time performance, accuracy and efficiency. It aims to improve site safety management and work efficiency and to support data analysis and statistics, but reaching these goals also means overcoming challenges such as dataset acquisition, complex scenes, and real-time constraints.

3. Interface and functions

(1) Interface built with PyQt5

 (2) Function introduction

1. When the system runs, it displays Beijing time in real time (a minimal sketch of this clock follows the list below).

2. Users can choose among three models: helmet detection, fire detection and reflective vest detection.

3. Users can adjust the confidence and IoU thresholds. Confidence indicates how reliable a prediction is; IoU (intersection over union) measures the degree of overlap between two regions, and is widely used in object detection, semantic segmentation and instance segmentation. Here it controls how aggressively overlapping detections are merged.

4. Prompt area: used to display alarm time, alarm type, number of people and other information.

5. The user can choose among three detection modes: image detection, video detection and camera detection. The original picture or video is displayed on the left side of the interface, and the detection result is displayed on the right.

6. After the detection is completed, the user can click the "Download" button in the upper right corner to save the detection picture, or click the "Alarm Data Download" button below to save the data information.
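
The real-time clock mentioned in item 1 is not part of the code shown later; a minimal sketch of how it could be implemented with a QTimer and a QLabel (clock_label, init_clock and update_clock are hypothetical names, not from the original project):

def init_clock(self):
    # Hypothetical helper: refresh a QLabel with the current Beijing time once per second
    self.clock_timer = QTimer(self)
    self.clock_timer.timeout.connect(self.update_clock)
    self.clock_timer.start(1000)  # update every second

def update_clock(self):
    now = datetime.now()  # assumes the host clock is set to Beijing time (UTC+8)
    self.clock_label.setText(now.strftime("%Y-%m-%d %H:%M:%S"))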

4. Main code
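
The snippets below are methods of the main window class. As a rough sketch, they assume imports along these lines (the exact module layout of the original project may differ):

import shutil
import threading
from datetime import datetime

import cv2
import torch
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QFileDialog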

(1) Switch detection model

def selectionChanged(self, index):
    select_weight = self.comboBox.itemText(index)
    print('selected weight:', select_weight)
    if select_weight == '安全帽检测':
        self.model = torch.hub.load("./", "custom", path="runs/train/exp3/weights/helmet_head_person_s.pt",
                                    source="local")  # load the helmet detection model
        self.weight = "det_helmet"
    elif select_weight == '反光背心检测':
        self.model = torch.hub.load("./", "custom", path="runs/train/reflect_clothes.pt",
                                    source="local")  # load the reflective vest detection model
        self.weight = "det_reflect_clothes"
    elif select_weight == '火灾检测':
        self.model = torch.hub.load("./", "custom", path="runs/train/det_fire.pt",
                                    source="local")  # load the fire detection model
        self.weight = "det_fire"

 (2) Image detection

def image_pred(self, file_path):
    results = self.model(file_path)  # run the detection model on the image
    image = results.render()[0]  # rendered image with boxes drawn
    self.judge(results)  # analyze the detection results and raise alarms if needed
    return convert2QImage(image)  # convert to QImage for display

def open_image(self):
    print("点击了检测图片")
    self.textBrowser.clear()  # clear the textBrowser message box
    self.timer.stop()  # stop the video timer
    file_path = QFileDialog.getOpenFileName(self, directory="./data/images", filter="*.jpg;*.png;*.jpeg")  # choose an image
    if file_path[0]:
        file_path = file_path[0]
        self.input.setPixmap(QPixmap(file_path))  # show the original image on the left
        qimage = self.image_pred(file_path)  # run image detection
        self.output.setPixmap(QPixmap.fromImage(qimage))  # show the detection result on the right
        self.result_image_path = file_path
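
Both image and video detection pass frames through a convert2QImage helper that is not shown in the listings. A minimal sketch, assuming the input is an H x W x 3 RGB numpy array as produced by results.render() and cv2.cvtColor:

import numpy as np
from PyQt5.QtGui import QImage

def convert2QImage(img):
    # Convert an RGB numpy array into a QImage for display in a QLabel
    img = np.ascontiguousarray(img)
    height, width, channels = img.shape
    bytes_per_line = channels * width
    return QImage(img.tobytes(), width, height, bytes_per_line, QImage.Format_RGB888)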

(3) Video detection

def video_pred(self):
    ret, frame = self.video.read()
    if not ret:  # no more frames: stop the timer that drives frame updates in PyQt5
        self.timer.stop()
        return
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # convert each frame from BGR to RGB
    self.input.setPixmap(QPixmap.fromImage(convert2QImage(frame)))  # show the original frame

    self.results = self.model(frame)  # run the detection model on the frame
    image = self.results.render()[0]
    self.output.setPixmap(QPixmap.fromImage(convert2QImage(image)))  # show the detected frame
    return self.judge(self.results)  # analyze the detection results

def open_video(self):
    print("点击了视频检测")
    file_path = QFileDialog.getOpenFileName(self, directory="./data", filter="*.mp4")  # choose a video file
    if file_path[0]:
        file_path = file_path[0]
        self.video = cv2.VideoCapture(file_path)  # open the video
        self.timer.start()  # start the timer that drives frame-by-frame detection
 (4) Camera detection

def open_camera(self):
    print("点击了摄像头检测")
    self.video = cv2.VideoCapture(0)  # use the laptop camera (in a real project, networked cameras in different areas would monitor the site)
    self.timer.start()
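
Both video detection and camera detection rely on self.timer, which the snippets assume was created during initialization. A minimal sketch of that setup (the method name and interval value are assumptions):

def init_timer(self):
    # Hypothetical setup: self.timer drives frame-by-frame detection for video and camera modes
    self.timer = QTimer(self)
    self.timer.setInterval(30)  # roughly 30 ms per frame; tune to the source frame rate and inference speed
    # self.timer.timeout is connected to video_pred in bind_slots, shown in section (8)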

(5) Determine the type of alarm

def judge(self, result):
    alarm = None
    person_count = 0
    helmet_count = 0
    preds = result.pandas().xyxy[0]
    # get the current time
    self.current_time = datetime.now()
    formatted_datetime = self.current_time.strftime("%Y-%m-%d %H:%M:%S")
    labels = preds.values  # each row: xmin, ymin, xmax, ymax, confidence, class, name
    # helmet detection alarm
    if self.weight == 'det_helmet':
        for label in labels:
            if label[6] == 'person':
                person_count += 1
            if label[6] == 'helmet':
                helmet_count += 1
        print("person:", person_count)
        print("helmet:", helmet_count)
        self.textBrowser.append(f"人数为:{person_count}")
        self.textBrowser.append(f"头盔个数为:{helmet_count}")
        if helmet_count < person_count:
            print("没佩戴头盔!")
            alarm = "没佩戴头盔!"

    # reflective vest detection alarm
    if self.weight == 'det_reflect_clothes':
        for label in labels:
            if label[6] != 'reflective_clothes':
                print("检测到没穿戴反光背心!")
                alarm = "没穿戴反光背心!"

    # open flame detection alarm
    if self.weight == 'det_fire':
        for label in labels:
            if label[6] == 'fire':
                print("检测到火焰!")
                alarm = "检测到火焰!"

    # show the result in the prompt panel
    self.textBrowser.append(formatted_datetime)
    self.textBrowser.append(f"警告:{alarm}")
    # record the alarm
    if alarm is not None:
        return self.record(alarm)

(Note: person counting could also be added to the reflective vest detection, but the vest model used here was not trained by the author and does not label persons.)

(6) Record the alarm time and type

def record(self, type):  # record the alarm time and type
    current_time = datetime.now()  # get the current time
    dtString = current_time.strftime("%Y-%m-%d %H:%M:%S")  # format the timestamp as a string
    with open('Attendance.csv', 'a') as f:
        f.write(f'{type},{dtString}\n')  # append the alarm type and time to the record file
    alarm_thread = threading.Thread(target=self.sound_alarm)  # play the alarm sound in a background thread
    alarm_thread.start()
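
record starts a thread that calls self.sound_alarm, which is not shown. A minimal sketch, assuming a Windows host and a bundled alarm.wav file (both are assumptions; the original project may play the sound differently):

import winsound  # Windows-only standard library module

def sound_alarm(self):
    # Play a bundled alarm sound once; "alarm.wav" is a placeholder file name
    winsound.PlaySound("alarm.wav", winsound.SND_FILENAME)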

(7) Download data

def Download_data(self):
    source_file_path = "Attendance.csv"  # the alarm record file
    save_path, _ = QFileDialog.getSaveFileName(self, "Save File", "", "CSV Files (*.csv)")
    if save_path:
        try:
            shutil.copyfile(source_file_path, save_path)
            print("File downloaded successfully!")
            self.textBrowser.append("报警数据下载成功!")
        except Exception as e:
            print("Error while downloading file:", str(e))
    else:
        print("No save path selected.")

(8) Signal and slot function binding

def bind_slots(self):  # bind signals to slot functions
    self.det_image.clicked.connect(self.open_image)  # image detection
    self.det_video.clicked.connect(self.open_video)  # video detection
    self.det_camera.clicked.connect(self.open_camera)  # camera detection
    self.comboBox.currentIndexChanged.connect(self.selectionChanged)  # model selection
    self.dL_data.clicked.connect(self.Download_data)  # download alarm data
    self.download.clicked.connect(self.Download_image)  # save the detection image
    self.slider.valueChanged.connect(self.Conf_change)  # adjust confidence
    self.Iou_Slider.valueChanged.connect(self.Conf_change)  # adjust IoU
    self.timer.timeout.connect(self.video_pred)  # frame-by-frame video processing
5. Experimental results

 (1) Model selection

Next, image detection, camera detection and video detection are each used to evaluate the recognition performance of the three scene models described above.

 (2) Image detection

1. Helmet detection 

2. Reflective vest detection

3. Flame detection 

 (3) Camera detection

Here the laptop camera is used to complete the test.

1. Helmet detection

(Note: the person count does not appear here because this screenshot was taken on the 24th, before the person-counting feature was added; the screenshot has not been updated.)

 2. Reflective vest detection

(4) Video detection

To save space, only the flame detection test video is shown here.

flame detection

The video shows that the system can detect flames and trigger the alarm, but there is noticeable delay and stuttering. This is limited by the machine's performance (the Xiaomi laptop used here runs inference on its CPU, which took about six hours to train on 120 images), as well as by I/O latency and concurrent processing. Using a high-performance graphics card, or optimizing the YOLO model, can reduce the delay.
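
One low-effort way to reduce the delay on CPU-only machines is to run inference at a smaller input size, which the YOLOv5 hub interface supports through its size argument; a sketch (the value 320 is only an example):

# In video_pred, trade some accuracy for speed by shrinking the inference size
self.results = self.model(frame, size=320)  # the YOLOv5 hub model defaults to size=640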

(5) Alarm data download

 The alarm data is as follows:

 6. Summary

        The intelligent construction site detection system based on YOLOv5 and PyQt5 combines an object detection algorithm with interface design to monitor the construction site environment intelligently. The system broadly achieves recognition and detection under the different scene models, but it still has many shortcomings. The handling of alarm records is too simple and could be extended with the number of people detected, the detection location, and so on. Helmet detection is prone to false alarms because helmets are small and easily occluded. During video detection, the limited graphics performance of the computer leads to slow processing, frame freezes and delay. The tests also do not consider the complexity of real environments; image preprocessing should be applied before detecting pictures and videos to improve the recognition rate and processing speed. This experiment comes from my computer vision course assignment, and it is also a small hands-on implementation of the article "Personnel Recognition and Helmet Detection" written in May; it can be regarded as applying what I have learned.

Origin blog.csdn.net/weixin_44686138/article/details/131253672