Intelligent blood cell detection and counting software (Python + YOLOv5 deep learning model + fresh interface version)


Abstract: The intelligent blood cell detection and counting software applies deep learning to detect red blood cells, sickle cells, and other cell types of different shapes in blood cell images, and to count them visually to assist medical cell analysis. This post introduces the software in detail; alongside the algorithm principles, it provides the Python implementation code and the PyQt UI. In the interface, images and videos can be selected for detection and recognition; multiple targets in an image can be recognized and classified, with fast detection speed and high recognition accuracy. The post provides the complete Python code and a usage tutorial, suitable for beginners to reference. For the complete code and resource files, please see the download link at the end of the article. The outline of this post is as follows:

➷ Click here to jump to the download page for all the complete code files at the end of the article ☇

Demonstration and introduction of the intelligent blood cell detection and counting software (Python + YOLOv5 deep learning model + fresh interface version)


Foreword

        Object detection uses artificial intelligence to identify and locate objects in images, and it has a wide range of real-world applications: intelligent fire monitoring, tumor detection in medical imaging, automatic face detection in digital cameras, and many other fields. Traditional object detection algorithms cannot exploit the deep features of an image, so they are susceptible to interference from occlusion and illumination changes, resulting in missed and false detections. Deep learning addresses this problem well: deep learning algorithms learn from samples and extract deeper image features by processing and combining features. Supported by deep learning theory, this post studies the application of object detection to blood cells.

        Acquiring and labeling medical images is labor-intensive, and insufficient training data leads to model overfitting, especially when identifying the various cell types in blood. Moreover, blood cell images have low contrast, and the number and shape of the different cell types vary widely, all of which affects detection accuracy. Building on deep learning object detection algorithms, this post identifies the various cell types in blood and achieves high detection accuracy.

        The system uses login and registration for user management. It can detect blood cells in images, videos, and real-time camera feeds, and it supports recording, displaying, and storing the results; each detection is logged in a table. The blogger designed the interface in a simple, consistent style, and its functions cover recognition from images, videos, and cameras. I hope you like it. The initial interface is shown below:

[Figure: initial interface of the system]

        A screenshot of the interface during detection (click the image to enlarge) is shown below; the system can identify multiple categories in the frame, and detection can also be run on a camera or video:

[Figure: interface during detection, with multiple categories recognized]

         For a detailed demonstration of the functions, see the blogger's Bilibili video or the animations in the next section. If you find it useful, please like, follow, and bookmark! The UI design involved considerable work, and beautifying the interface took careful polishing; suggestions and opinions are welcome in the comments below.


1. Effect demonstration

        For software, ease of use matters, and so does appearance. First, let's look at the recognition effect through animations. The system's main function is to recognize blood cells in images, videos, and camera feeds; the recognition results are displayed visually on the interface and in the images. It also provides a display-selection function for multiple targets. The demonstration is as follows.

(1) System introduction

        The intelligent blood cell detection and counting software is mainly used to detect and count blood cells under the microscope. Based on deep learning, it identifies four common blood cell types in an image: platelets, red blood cells, white blood cells, and sickle cells. It outputs the bounding-box coordinates and category of each cell to assist automatic cell statistics and medical research. The software provides login and registration for user management; it can recognize cell images, videos, and other files collected by electron microscopes, detect the various cell shapes, and record the recognition results in an interface table for easy review; the camera can be turned on to monitor and count the cell types in the current field of view in real time, with results recorded, displayed, and stored.

(2) Technical features

         (1) The YOLOv5 object detection algorithm detects blood cells, and the model can be replaced;
         (2) The camera detects blood cells in real time, with results displayed, recorded, and saved;
         (3) Individual blood cells are detected in images, videos, and other media;
         (4) User login, registration, and detection result visualization are supported.

(3) User registration and login interface

        A login interface is designed, where you can register an account and password and then log in. The interface follows a popular modern UI design: an animated image on the left, and fields for the account, password, verification code, etc. on the right.

[Figure: user registration and login interface]

(4) Select image recognition

        The system allows you to select an image file for recognition. After clicking the image-selection button and choosing an image, all recognition results are displayed; a drop-down box lets you inspect a single result to examine a specific target. The interface for this function is shown below:

[Figure: image selection and recognition results]

(5) Video recognition effect display

        Often we need to identify blood cells in a video, so a video-selection function is provided. Click the video button and choose the video to detect; the system automatically analyzes the video frame by frame, identifies the blood cells, and records the per-class classification and counting results in the table at the lower right. The effect is shown below:

[Figure: video detection with per-class counts in the results table]

(6) Display of camera detection effect
        In real scenarios, cameras are often used to capture real-time images while the blood cells in them need to be identified, so this function is included here. As shown below, after clicking the camera button the system enters the ready state, displays the live feed, and begins detecting blood cells in it; the recognition results are displayed as shown below:

[Figure: real-time camera detection]


2. Blood cell dataset and model training

(1) Dataset preparation

        The blood cell dataset in this experiment contains 2853 training images, 219 validation images, and 81 test images, 3153 images in total. Some selected samples from the dataset are shown in the figure.

[Figure: sample images from the blood cell dataset]

        Each image provides its class label information, the bounding boxes of the cells in the image, and the attribute information of the targets. Decompressing the dataset yields the following:

[Figure: directory structure of the decompressed dataset]
         The category information of the blood cell dataset is as follows, covering platelets, red blood cells, white blood cells, and sickle cells:

Chinese_name = {'Platelets': "血小板", 'RBC': "红细胞", 'WBC': "白细胞", 'sickle cell': "镰状细胞"}

         The original data is annotated in XML files, one annotation per target cell; these annotations need to be converted into the format YOLOv5 requires. That is, each image corresponds to a txt file that stores the categories and coordinates of all cells in the image, one cell per line, as shown below. A conversion script is sketched after the figure.

[Figure: YOLO-format txt annotation file]
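
A script along the following lines performs this conversion. It is a sketch that assumes Pascal VOC-style XML (with <size> and <object>/<bndbox> tags) and illustrative paths; adjust them to the actual dataset layout:

import glob
import os
import xml.etree.ElementTree as ET

classes = ['Platelets', 'RBC', 'WBC', 'sickle cell']

def convert_annotation(xml_path, txt_dir):
    # Convert one VOC XML file into a YOLO txt file, one cell per line
    root = ET.parse(xml_path).getroot()
    w = float(root.find('size/width').text)
    h = float(root.find('size/height').text)
    lines = []
    for obj in root.iter('object'):
        cls = obj.find('name').text
        if cls not in classes:
            continue
        b = obj.find('bndbox')
        xmin, ymin = float(b.find('xmin').text), float(b.find('ymin').text)
        xmax, ymax = float(b.find('xmax').text), float(b.find('ymax').text)
        # YOLO format: class x_center y_center width height, normalized to [0, 1]
        x_c, y_c = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{classes.index(cls)} {x_c:.6f} {y_c:.6f} {bw:.6f} {bh:.6f}")
    name = os.path.splitext(os.path.basename(xml_path))[0]
    with open(os.path.join(txt_dir, name + '.txt'), 'w') as f:
        f.write('\n'.join(lines))

for xml_file in glob.glob('./annotations/*.xml'):        # illustrative source directory
    convert_annotation(xml_file, './Haemocytes/labels/train')
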
         Create a dataset storage directory at the same level as this project, and inside it create the training set and test set paths together with the configuration file dataset.yaml:

[Figure: dataset directory layout with dataset.yaml]

         The images folders store the image data and the labels folders store the label data, with corresponding file names. The dataset configuration file is set as follows:

train: ./Haemocytes/images/train
val: ./Haemocytes/images/valid
test: ./Haemocytes/images/test

nc: 4
names: ['Platelets', 'RBC', 'WBC', 'sickle cell']

         (1) train specifies the image path of the training set;
         (2) val specifies the image path of the validation set;
         (3) nc specifies the number of target categories, here 4: platelets, red blood cells, white blood cells, and sickle cells;
         (4) names lists the corresponding category names. A quick sanity check of this configuration is sketched below.
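
As a quick sanity check of this configuration (a sketch assuming PyYAML is installed), the file can be loaded and verified before training:

import yaml  # PyYAML

with open('dataset.yaml', encoding='utf-8') as f:
    cfg = yaml.safe_load(f)

assert cfg['nc'] == len(cfg['names']), "nc must equal the number of class names"
print(cfg['names'])  # ['Platelets', 'RBC', 'WBC', 'sickle cell']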

         Next, configure the relevant parameters of train.py in the project path: the pre-trained model weights, the model structure file, the dataset configuration file, the number of training epochs, the batch_size, and the input image resolution.

    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='models/yolov5s.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='dataset.yaml', help='data.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
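
With these defaults in place, training can be launched from the project root; equivalently, the parameters can be passed on the command line (the batch size below is illustrative and should be adjusted to fit GPU memory):

python train.py --weights yolov5s.pt --cfg models/yolov5s.yaml --data dataset.yaml --epochs 300 --batch-size 16 --img-size 640 640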

        During training, mAP@0.5, a commonly used object detection metric, quickly reached a high level, and mAP@0.5:0.95 also kept improving, indicating that the model performs well from a training/validation perspective. We then read in a test folder for prediction, selecting best.pt, the weights that performed best on the validation set, and obtained the PR curve shown below.
[Figure: PR curve on the validation set]
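
To reproduce this evaluation, the best weights can be run through YOLOv5's own evaluation script (named test.py in older releases and val.py in newer ones; the weights path below is illustrative):

python val.py --weights best.pt --data dataset.yaml --img 640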

        In deep learning, we usually monitor training through the loss curves. YOLOv5 training involves three losses: bounding-box loss (box_loss), objectness loss (obj_loss), and classification loss (cls_loss). After training ends, summary graphs of the training process can also be found in the logs directory. The figure below shows the training curves for the blood cell recognition model.
[Figure: training loss and mAP curves]

        Taking the PR curve as an example, we can see that the model's mean average precision on the validation set is 0.794.

3. Blood cell detection and identification

        Running testVideo produces the prediction results: blood cells are framed in each video frame, and OpenCV drawing operations output the category and confidence score of each cell on the image. Below is the script for reading and detecting a blood cell video. Each frame is preprocessed and passed to predict for detection, then the bounding-box positions are computed and drawn on the image.
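
For context, a minimal sketch of the setup this loop assumes is given below. The weight and video paths, the video writer settings, and the plot_one_box helper are illustrative rather than the exact project code; the imports match the YOLOv5 release this project is based on (letterbox later moved to utils.augmentations):

import cv2
import numpy as np
import torch
from models.experimental import attempt_load      # YOLOv5 repository
from utils.datasets import letterbox              # utils.augmentations in newer YOLOv5 releases
from utils.general import non_max_suppression, scale_coords

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('best.pt', map_location=device)   # trained weights from Section 2 (path illustrative)
names = model.module.names if hasattr(model, 'module') else model.names
colors = [[np.random.randint(0, 255) for _ in range(3)] for _ in names]
imgsz = 640

def plot_one_box(img, xyxy, label=None, color=(0, 255, 0)):
    # Minimal drawing helper matching the call in the loop; the project ships its own version
    c1, c2 = (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3]))
    cv2.rectangle(img, c1, c2, color, 2)
    if label:
        cv2.putText(img, label, (c1[0], c1[1] - 4), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)

vs = cv2.VideoCapture('test.mp4')                  # input video (path illustrative)
vw, vh = 850, 500                                  # size of the saved output frames
output_video = cv2.VideoWriter('result.avi', cv2.VideoWriter_fourcc(*'XVID'), 25.0, (vw, vh))
W = H = None
frameIndex = 0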

while True:
        # Read the next frame from the video file
        (grabbed, image) = vs.read()
        # If grabbed is False, the last frame has been reached; exit
        if not grabbed:
            print("[INFO] finished running...")
            output_video.release()
            vs.release()
            exit()

        # Get the frame dimensions
        if W is None or H is None:
            (H, W) = image.shape[:2]

        image = cv2.resize(image, (850, 500))
        img0 = image.copy()
        img = letterbox(img0, new_shape=imgsz)[0]  # resize with padding to the model input size
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW
        img = np.ascontiguousarray(img)

        pred, useTime = predict(img)

        det = pred[0]
        p, s, im0 = None, '', img0
        if det is not None and len(det):  # proceed only if there are detections
            det[:, :4] = scale_coords(img.shape[1:], det[:, :4], im0.shape).round()  # rescale boxes to im0's size
            number_i = 0  # running detection index
            detInfo = []
            for *xyxy, conf, cls in reversed(det):  # iterate over the detections
                c1, c2 = (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3]))
                # Record the detection info: class name, box coordinates, confidence
                detInfo.append([names[int(cls)], [c1[0], c1[1], c2[0], c2[1]], '%.2f' % conf])
                number_i += 1

                label = '%s %.2f' % (names[int(cls)], conf)

                # Draw the detected target on the frame
                plot_one_box(image, xyxy, label=label, color=colors[int(cls)])

        # Show the annotated frame in real time
        cv2.imshow('Stream', image)
        image = cv2.resize(image, (vw, vh))
        output_video.write(image)  # save the annotated video
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

        frameIndex += 1
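
The predict function called above is not shown in this excerpt; a possible implementation, assuming the model, device, and non_max_suppression from the setup sketch (the confidence and IoU thresholds are illustrative), is:

import time

def predict(img):
    # img: CHW, RGB uint8 array produced by the preprocessing in the loop
    im = torch.from_numpy(img).to(device).float() / 255.0   # 0-255 -> 0.0-1.0
    if im.ndimension() == 3:
        im = im.unsqueeze(0)                                 # add a batch dimension
    t0 = time.time()
    with torch.no_grad():
        out = model(im, augment=False)[0]                    # raw predictions
        out = non_max_suppression(out, conf_thres=0.25, iou_thres=0.45)  # NMS per image
    return out, time.time() - t0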

        The execution result is shown below: the blood cell types and confidence values are marked in the figure, and prediction is fast. Based on this model, we can build a system with an interface that selects an image, video, or camera and then calls the model for detection.

[Figure: detection results with classes and confidence scores]
        The blogger tested the entire system in detail and finally produced a smooth, refreshing interface version, which is what the demo part of this post shows. The complete UI, test images and videos, code files, and the Python offline dependency package (easy to install and run, though you can also configure the environment yourself) have all been packaged and uploaded; interested readers can obtain them via the download link.

[Figure: packaged resource files]


Download link

    To obtain the complete program files involved in this post (including test images, videos, py and UI files, etc., as shown below): they have been packaged and uploaded to the blogger's Mianbaoduo platform; see the blog and video for reference. All involved files are packaged together, and the project runs with one click. A screenshot of the complete files is shown below:

[Figure: screenshot of the complete project files]

    The resources in the folder are shown below, and the Python offline dependency package is also provided in the link below. After correctly installing Anaconda and PyCharm, readers can copy the offline dependency package into the project directory for installation. A detailed demonstration of using the offline dependencies can be found in my Bilibili videos: installing software from scratch and configuring the environment to run deep learning projects on Win11, and a Python environment configuration tutorial with PyCharm and Anaconda on Win10.

[Figure: folder contents and offline dependency package]

Note: this code was developed with PyCharm + Python 3.8 and has been tested to run successfully. The main programs for the interface are runMain.py and LoginUI.py. Run testPicture.py to test images and testVideo.py to test videos. To ensure the program runs smoothly, configure the Python dependency versions according to requirements.txt. Python version: 3.8; do not use other versions. See the requirements.txt file for details.
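
Assuming a Python 3.8 environment is active, the dependencies can be installed in the usual way (or from the offline package in the link below if there is no network access):

pip install -r requirements.txt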

The complete resource includes the dataset and training code. For environment configuration, and for how to modify the text, images, and logo in the interface, please refer to the video. To download the complete project files, see the reference blog post or the video introduction: ➷➷➷

Reference blog post: https://zhuanlan.zhihu.com/p/615303348

Reference video demonstration: https://www.bilibili.com/video/BV1p84y1A7Yj/

Offline dependency library download link: https://pan.baidu.com/s/1hW9z9ofV1FRSezTSj59JSg?pwd=oy4n (extraction code: oy4n)


Methods for modifying the text, icons, and background images in the interface:

        In Qt Designer you can fully modify the interface's controls and settings, and then convert the ui file into a py file to call and display the interface. If you only need to change the text, icons, and background images in the interface, you can edit the ConfigUI.config file directly. The steps are as follows:
        (1) Open the UI_rec/tools/ConfigUI.config file. If it shows garbled characters, open it with GBK encoding.
        (2) To modify the interface text, simply select the string you want to change and replace it with your own.
        (3) To modify the background, icons, etc., you only need to change the image path. For example, the background image is set in the original file as follows:

mainWindow = :/images/icons/back-image.png

        To use your own image named background2.png (placed in the UI_rec/icons/ folder) as the background, modify this entry as follows:

mainWindow = ./icons/background2.png

Conclusion

        Due to the blogger's limited ability, even though the methods in this post have been tested, omissions are inevitable. I hope you will kindly point out any mistakes, so that the next revision can be presented in a more complete and rigorous form. If you know a better way to achieve something, please let me know as well.


Source: https://blog.csdn.net/qq_32892383/article/details/129395726