Detailed steps to encrypt and package yolov5 project code with pyinstaller on Ubuntu

0. Background

Recently I needed to package some yolov5-based project code of my own into binaries on Ubuntu 18.04, which makes deployment easier and minimizes exposure of the source code.

I looked at many tutorials online, but most of them target Windows; there do not seem to be detailed packaging steps for Ubuntu 18.04.

1. Create a virtual environment

Here I use Anaconda to create a clean Python environment. My Python version is 3.8; other Python versions should make little difference. The dependencies in this environment are what pyinstaller packages later.

Several libraries are used in the following steps. If a command reports an error about a missing library, install it yourself.

First, download yolov5 v4.0. There is nothing special about v4.0; it is simply the version I use, and other versions should also work.

git clone https://gitee.com/monkeycc/yolov5.git -b v4.0

Second, create a virtual environment. Here you need to confirm that your GPU model and CUDA version are compatible with the PyTorch version (you can check on the PyTorch website). If they do not match, errors may appear later. My CUDA version is 11.1, so I choose PyTorch 1.10.0.

Do not run pip install -r requirements.txt directly; it may break the torch installation in your other virtual environments!

conda create -n pyinstaller python=3.8
conda activate pyinstaller 

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install pyyaml numpy opencv-python matplotlib scipy tqdm pandas seaborn -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pyinstaller -i https://pypi.tuna.tsinghua.edu.cn/simple
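
Optionally, before continuing, check that the CUDA build of PyTorch installed correctly; the one-liner below should print the torch version followed by True if the GPU is usable:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"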

cd yolov5

Finally, download the weights (link: https://pan.baidu.com/s/1uVa5eylGETYN0tUB2adjcQ
extraction code: ruwf). Here I download yolov5s.pt to ./yolov5/weights. Put a test image test.png into the project directory ./yolov5, then run the following command to test; normally there should be no problem.

python detect.py --source test.png --weights weights/yolov5s.pt

2. pyinstaller packaging

The pyinstaller command needs to be executed in ./yolov5, the directory where the target Python code detect.py is located. The process is divided into two steps: first generate the spec file, then modify the parameters of the spec file and regenerate the binary.

2.1. Generate and modify the spec file

Execute the following commands

cd yolov5 

pyinstaller -D detect.py

The generated spec file detect.spec is located in the yolov5 directory. The next step is to modify detect.spec, mainly to specify external libraries and external resources.

For the meaning of the spec file parameters, refer to the blogs "[Python third-party library] pyinstaller tutorial and spec resource file introduction" and "pyinstaller spec file detailed explanation"; here I only cover the parameters we need to modify.

  • The scripts parameter of Analysis is a list of .py files; by default it contains the target Python code detect.py. Since I only need to run detect.py here, nothing else needs to be added. If you want to run other Python scripts in addition to detect.py, add them to this list.

  • The pathex parameter of Analysis is a list of directories. When it is empty, it defaults to the absolute path of ./yolov5, the directory containing the target Python code detect.py; this is usually where the directories of custom libraries are added. Because the custom library utils required by the yolov5 project already lives in ./yolov5, pathex can be left at its default here.

  • The datas parameter of Analysis is a list of resources or resource directories. Any resources other than Python code, such as images/image directories, databases/database directories, configuration files/configuration directories, weight files/weight directories, and so on, need to be listed here. For example, the weight file yolov5s.pt is stored in the ./yolov5/weights directory, where the root directory is ./yolov5; the root directory where the packaged binary actually runs is ./yolov5/dist/detect, so weights needs to be copied into that run-time root directory. (Equivalent command-line flags are shown in the sketch after this list.)

  • The hiddenimports parameter of Analysis is a list of module names. When the packaged program reports ModuleNotFoundError: No module named 'xxx', i.e. a module cannot be imported, add that module name to this list.
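
The datas and hiddenimports entries can also be passed on the command line when the spec file is first generated, instead of editing the file by hand afterwards. A rough equivalent of the manual edits above (on Linux, --add-data separates source and destination with a colon) would be:

pyinstaller -D detect.py \
    --add-data "models:models" \
    --add-data "weights:weights" \
    --add-data "data:data" \
    --hidden-import utils \
    --hidden-import utils.autoanchor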

The final complete detect.spec file is as follows:

# -*- mode: python ; coding: utf-8 -*-


block_cipher = None


a = Analysis(
    ['detect.py'],
    pathex=[],
    binaries=[],
    datas=[('models','./models'), ('weights', './weights'), ('data', './data')],
    hiddenimports=['utils', 'utils.autoanchor'],
    hookspath=[],
    hooksconfig={
    
    },
    runtime_hooks=[],
    excludes=[],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    [],
    exclude_binaries=True,
    name='detect',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
coll = COLLECT(
    exe,
    a.binaries,
    a.zipfiles,
    a.datas,
    strip=False,
    upx=True,
    upx_exclude=[],
    name='detect',
)
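
Note that the tests below run the binary from inside its own directory, so relative paths such as weights/yolov5s.pt resolve against the current working directory. If you want the program to find its bundled resources no matter where it is launched from, a common optional pattern is to resolve paths via sys._MEIPASS, which PyInstaller sets at run time; the helper below is only an illustrative sketch and is not part of the original detect.py:

import os
import sys

def resource_path(relative_path):
    # When frozen by PyInstaller, bundled data files live under sys._MEIPASS;
    # during normal development they live next to this source file instead.
    base_dir = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base_dir, relative_path)

# e.g. weights_path = resource_path('weights/yolov5s.pt')
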
2.2. Rebuild binaries

After modifying detect.spec, regenerate the binary as follows

pyinstaller detect.spec

During the build, it will ask whether to delete all existing files in the ./yolov5/dist/detect directory; simply enter y to confirm.
If nothing goes wrong, the build completes successfully shortly afterwards.

The directory where the generated binaries are located is yolov5/dist/detect.

3. Test

Now exit the Python environment and run the binary, as follows

conda deactivate

cd dist/detect

./detect --source ../../test.png --weights weights/yolov5s.pt

The test results are located in yolov5/dist/detect/runs.

4. Encrypted packaging

Compile with encryption to prevent the code from being decompiled.

Install the pycrypto and tinyaes third-party libraries.

pip install pycrypto -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install tinyaes -i https://pypi.tuna.tsinghua.edu.cn/simple
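
PyInstaller 4.x/5.x use tinyaes as the AES backend for this bytecode-encryption feature; an optional quick check that it imports cleanly is:

python -c "import tinyaes; print('tinyaes OK')"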

Encryption and packaging follow the steps below

  • Create the entry script main.py, which calls the other Python code (detect.py) as a library;
  • Modify detect.py so that it can be called by the entry script main.py;
  • Generate the main.spec file with the command pyinstaller -D main.py;
  • Modify the main.spec file and regenerate the binary.
4.1. Create the entry script main.py

The entry script is created mainly so that the business code is pulled in as a library, without any business code appearing in the entry script itself; at the same time, the encryption mainly targets the business code imported as a library, so that it is not easily decompiled.

# coding=utf8

from mydetect import detect
import argparse
import torch


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--source', type=str, default='data/images', help='source')  # file/folder, 0 for webcam
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default='runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    opt = parser.parse_args()
    print(opt)

    with torch.no_grad():
        weights = opt.weights
        source = opt.source 
        img_size = opt.img_size
        conf_thres = opt.conf_thres
        iou_thres = opt.iou_thres
        device = opt.device
        view_img = opt.view_img
        save_txt = opt.save_txt
        save_conf = opt.save_conf
        classes = opt.classes
        agnostic_nms = opt.agnostic_nms
        augment = opt.augment
        update = opt.update
        project = opt.project
        name = opt.name
        exist_ok = opt.exist_ok
        
        detect(weights, source, img_size, conf_thres, iou_thres, device, view_img, save_txt, save_conf, classes, agnostic_nms, augment, update, project, name, exist_ok)
4.2. Modify detect.py

Copy detect.py to mydetect.py, and modify mydetect.py as follows

# coding=utf8
import argparse
import time
from pathlib import Path

import cv2
import torch
import torch.backends.cudnn as cudnn
from numpy import random

from models.experimental import attempt_load
from utils.datasets import LoadStreams, LoadImages
from utils.general import check_img_size, non_max_suppression, apply_classifier, scale_coords, xyxy2xywh, \
    strip_optimizer, set_logging, increment_path
from utils.plots import plot_one_box
from utils.torch_utils import select_device, load_classifier, time_synchronized

# The main change here is to the parameters of detect()
def detect(weights, source, img_size, conf_thres, iou_thres, device, view_img, save_txt, save_conf, classes, agnostic_nms, augment, update, project, name, exist_ok, save_img=False):
    #source, weights, view_img, save_txt, imgsz = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size
    imgsz = img_size
    webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
        ('rtsp://', 'rtmp://', 'http://'))

    # Directories
    save_dir = Path(increment_path(Path(project) / name, exist_ok=exist_ok))  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Initialize
    set_logging()
    device = select_device(device)
    half = device.type != 'cpu'  # half precision only supported on CUDA

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model
    imgsz = check_img_size(imgsz, s=model.stride.max())  # check img_size
    if half:
        model.half()  # to FP16

    # Second-stage classifier
    classify = False
    if classify:
        modelc = load_classifier(name='resnet101', n=2)  # initialize
        modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval()

    # Set Dataloader
    vid_path, vid_writer = None, None
    if webcam:
        view_img = True
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz)
    else:
        save_img = True
        dataset = LoadImages(source, img_size=imgsz)

    # Get names and colors
    names = model.module.names if hasattr(model, 'module') else model.names
    colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]

    # Run inference
    t0 = time.time()
    img = torch.zeros((1, 3, imgsz, imgsz), device=device)  # init img
    _ = model(img.half() if half else img) if device.type != 'cpu' else None  # run once
    for path, img, im0s, vid_cap in dataset:
        img = torch.from_numpy(img).to(device)
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        # Inference
        t1 = time_synchronized()
        pred = model(img, augment=augment)[0]

        # Apply NMS
        pred = non_max_suppression(pred, conf_thres, iou_thres, classes=classes, agnostic=agnostic_nms)
        t2 = time_synchronized()

        # Apply Classifier
        if classify:
            pred = apply_classifier(pred, modelc, img, im0s)

        # Process detections
        for i, det in enumerate(pred):  # detections per image
            if webcam:  # batch_size >= 1
                p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count
            else:
                p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # img.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt
            s += '%gx%g ' % img.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f'{n} {names[int(c)]}s, '  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                        with open(txt_path + '.txt', 'a') as f:
                            f.write(('%g ' * len(line)).rstrip() % line + '\n')

                    if save_img or view_img:  # Add bbox to image
                        label = f'{names[int(cls)]} {conf:.2f}'
                        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3)

            # Print time (inference + NMS)
            print(f'{s}Done. ({t2 - t1:.3f}s)')

            # Stream results
            if view_img:
                cv2.imshow(str(p), im0)
                cv2.waitKey(1)  # give the window time to refresh

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:  # 'video'
                    if vid_path != save_path:  # new video
                        vid_path = save_path
                        if isinstance(vid_writer, cv2.VideoWriter):
                            vid_writer.release()  # release previous video writer

                        fourcc = 'mp4v'  # output video codec
                        fps = vid_cap.get(cv2.CAP_PROP_FPS)
                        w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                        h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
                    vid_writer.write(im0)

    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
        print(f"Results saved to {save_dir}{s}")

    print(f'Done. ({time.time() - t0:.3f}s)')


if __name__ == '__main__':
    print("运行mydetect.py!")

4.3. Generate the main.spec file

Note that the pyinstaller command needs to be run inside the Python environment created earlier.

pyinstaller -D main.py

The generated main.spec file is located in the ./yolov5 directory.
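
As an alternative to editing the spec by hand in the next step, PyInstaller 4.x/5.x can insert the cipher line automatically if the key is passed when the spec is generated (this --key option, together with the whole encryption feature, was removed in PyInstaller 6.0):

pyinstaller --key 123456 -D main.py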

4.4. Modify the main.spec file

Compared with the unencrypted case, the change here is block_cipher = pyi_crypto.PyiBlockCipher(key='123456'). The complete main.spec file is as follows:

# -*- mode: python ; coding: utf-8 -*-


block_cipher = pyi_crypto.PyiBlockCipher(key='123456')


a = Analysis(
    ['main.py'],
    pathex=[],
    binaries=[],
    datas=[('models','./models'), ('weights', './weights'), ('data', './data')],
    hiddenimports=['utils', 'utils.autoanchor'],
    hookspath=[],
    hooksconfig={
    
    },
    runtime_hooks=[],
    excludes=[],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)

exe = EXE(
    pyz,
    a.scripts,
    [],
    exclude_binaries=True,
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
coll = COLLECT(
    exe,
    a.binaries,
    a.zipfiles,
    a.datas,
    strip=False,
    upx=True,
    upx_exclude=[],
    name='main',
)
4.5. Generate binaries

pyinstaller main.spec
4.6. Testing

Enter yolov5/dist/main and run the binary:

./main --source ../../test.png --weights weights/yolov5s.pt

If there is no problem, you can find the inference results in ./yolov5/dist/main/runs.

Origin: blog.csdn.net/qq_30841655/article/details/128583336