Introduction to Deep Learning from Scratch (9): Common Data Preprocessing and Augmentation Methods for Object Detection

Course Name | Introduction to Deep Learning from Scratch

Instructor | Sun Gaofeng, Senior R&D Engineer, Baidu Deep Learning Technology Platform Department

Schedule | Every Tuesday and Thursday evening, 20:00-21:00

01

Overview

This is Baidu's official introductory deep learning course, aimed at learners with little or no background in deep learning, helping everyone go from 0 to 1+ in the field. From this course, you will learn:

  1. Fundamentals of deep learning

  2. Building neural networks and implementing gradient descent with numpy

  3. Principles and practice of the main directions in computer vision

  4. Principles and practice of the main directions in natural language processing

  5. Principles and practice of personalized recommendation algorithms

In the previous lecture, Sun Gaofeng, Senior R&D Engineer at Baidu's Deep Learning Technology Platform Department, introduced the basic concepts of object detection. In this lecture, he continues with the data preprocessing and augmentation methods commonly used in object detection, using the forestry pest dataset as an example.

02

The Forestry Pest Dataset and Common Data Preprocessing Methods

This lesson uses the insect dataset from the forestry pest control project developed jointly by Baidu and Forestry University; for more information about the project and the dataset, please refer to the related reports. This section introduces the dataset as well as the data preprocessing methods commonly used in computer vision tasks.

Reading the annotations of the AI insect-recognition dataset

The structure of the AI insect-recognition dataset is as follows:

  • It provides 2183 images in total: 1693 for training, 245 for validation, and 245 for testing.

  • It covers 7 insect classes: Boerner, Leconte, Linnaeus, acuminatus, armandi, coleoptera, and linnaeus.

  • Both the images and their annotations are included. Please unzip the data first and place it under the insects directory.

# Script to unzip the data. Uncomment it on the first run to extract the files into the work directory.
# !unzip -d /home/aistudio/work /home/aistudio/data/data19638/insects.zip

After unzipping the data, the structure of the insects directory is shown below.

[Figure: directory tree of the unzipped insects dataset, containing the train, val, and test folders]

insects contains three folders: train, val, and test. The train/annotations/xmls directory stores the annotations of the images. Each xml file describes one image, including the image size, the names of the insects it contains, their positions in the image, and other information.

<annotation>
        <folder>刘霏霏</folder>
        <filename>100.jpeg</filename>
        <path>/home/fion/桌面/刘霏霏/100.jpeg</path>
        <source>
                <database>Unknown</database>
        </source>
        <size>
                <width>1336</width>
                <height>1336</height>
                <depth>3</depth>
        </size>
        <segmented>0</segmented>
        <object>
                <name>Boerner</name>
                <pose>Unspecified</pose>
                <truncated>0</truncated>
                <difficult>0</difficult>
                <bndbox>
                        <xmin>500</xmin>
                        <ymin>893</ymin>
                        <xmax>656</xmax>
                        <ymax>966</ymax>
                </bndbox>
        </object>
        <object>
                <name>Leconte</name>
                <pose>Unspecified</pose>
                <truncated>0</truncated>
                <difficult>0</difficult>
                <bndbox>
                        <xmin>622</xmin>
                        <ymin>490</ymin>
                        <xmax>756</xmax>
                        <ymax>610</ymax>
                </bndbox>
        </object>
        <object>
                <name>armandi</name>
                <pose>Unspecified</pose>
                <truncated>0</truncated>
                <difficult>0</difficult>
                <bndbox>
                        <xmin>432</xmin>
                        <ymin>663</ymin>
                        <xmax>517</xmax>
                        <ymax>729</ymax>
                </bndbox>
        </object>
        <object>
                <name>coleoptera</name>
                <pose>Unspecified</pose>
                <truncated>0</truncated>
                <difficult>0</difficult>
                <bndbox>
                        <xmin>624</xmin>
                        <ymin>685</ymin>
                        <xmax>697</xmax>
                        <ymax>771</ymax>
                </bndbox>
        </object>
        <object>
                <name>linnaeus</name>
                <pose>Unspecified</pose>
                <truncated>0</truncated>
                <difficult>0</difficult>
                <bndbox>
                        <xmin>783</xmin>
                        <ymin>700</ymin>
                        <xmax>856</xmax>
                        <ymax>802</ymax>
                </bndbox>
        </object>
</annotation>

The main fields of the xml file listed above are explained as follows:

- size: the image dimensions

- object: an object contained in the image; a single image may contain multiple objects

  • name: the insect species name

  • bndbox: the ground-truth bounding box of the object

  • difficult: whether the object is difficult to recognize

Next we will read the xml files from the dataset and extract the annotation information of each image. Before reading the actual annotation files, we first need to convert the insect class names (strings) into numeric class labels, because the inputs used in neural network computation must be numeric. The list of insect class names is: ['Boerner', 'Leconte', 'Linnaeus', 'acuminatus', 'armandi', 'coleoptera', 'linnaeus'], and we adopt the convention that in this list 'Boerner' corresponds to class 0, 'Leconte' to class 1, ..., and 'linnaeus' to class 6. The program below builds a dict that maps the name strings to the numeric class labels.

INSECT_NAMES = ['Boerner', 'Leconte', 'Linnaeus',
                'acuminatus', 'armandi', 'coleoptera', 'linnaeus']

def get_insect_names():
    """
    return a dict, as following,
        {'Boerner': 0,
         'Leconte': 1,
         'Linnaeus': 2,
         'acuminatus': 3,
         'armandi': 4,
         'coleoptera': 5,
         'linnaeus': 6
        }
    It can map the insect name into an integer label.
    """
    insect_category2id = {}
    for i, item in enumerate(INSECT_NAMES):
        insect_category2id[item] = i
    return insect_category2id

cname2cid = get_insect_names()
cname2cid
{'Boerner': 0,
 'Leconte': 1,
 'Linnaeus': 2,
 'acuminatus': 3,
 'armandi': 4,
 'coleoptera': 5,
 'linnaeus': 6}

Calling the get_insect_names function returns a dict whose key-value pairs map insect names to their numeric class labels.
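
For later visualization it can be handy to invert this mapping as well; below is an illustrative sketch (the cid2cname helper is our own addition, not part of the original course code):

# Hypothetical helper: invert the name-to-id dict so that integer labels
# predicted by the network can be mapped back to insect names for display
cid2cname = {v: k for k, v in get_insect_names().items()}
print(cid2cname[1])  # 'Leconte'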

The following program reads the annotation information of all files under the annotations/xmls directory.

import os
import numpy as np
import xml.etree.ElementTree as ET

def get_annotations(cname2cid, datadir):
    filenames = os.listdir(os.path.join(datadir, 'annotations', 'xmls'))
    records = []
    ct = 0
    for fname in filenames:
        fid = fname.split('.')[0]
        fpath = os.path.join(datadir, 'annotations', 'xmls', fname)
        img_file = os.path.join(datadir, 'images', fid + '.jpeg')
        tree = ET.parse(fpath)

        if tree.find('id') is None:
            im_id = np.array([ct])
        else:
            im_id = np.array([int(tree.find('id').text)])

        objs = tree.findall('object')
        im_w = float(tree.find('size').find('width').text)
        im_h = float(tree.find('size').find('height').text)
        gt_bbox = np.zeros((len(objs), 4), dtype=np.float32)
        gt_class = np.zeros((len(objs), ), dtype=np.int32)
        is_crowd = np.zeros((len(objs), ), dtype=np.int32)
        difficult = np.zeros((len(objs), ), dtype=np.int32)
        for i, obj in enumerate(objs):
            cname = obj.find('name').text
            gt_class[i] = cname2cid[cname]
            _difficult = int(obj.find('difficult').text)
            x1 = float(obj.find('bndbox').find('xmin').text)
            y1 = float(obj.find('bndbox').find('ymin').text)
            x2 = float(obj.find('bndbox').find('xmax').text)
            y2 = float(obj.find('bndbox').find('ymax').text)
            x1 = max(0, x1)
            y1 = max(0, y1)
            x2 = min(im_w - 1, x2)
            y2 = min(im_h - 1, y2)
            # Here the ground-truth box is represented in xywh format
            gt_bbox[i] = [(x1+x2)/2.0, (y1+y2)/2.0, x2-x1+1., y2-y1+1.]
            is_crowd[i] = 0
            difficult[i] = _difficult

        voc_rec = {
            'im_file': img_file,
            'im_id': im_id,
            'h': im_h,
            'w': im_w,
            'is_crowd': is_crowd,
            'gt_class': gt_class,
            'gt_bbox': gt_bbox,
            'gt_poly': [],
            'difficult': difficult
            }
        if len(objs) != 0:
            records.append(voc_rec)
        ct += 1
    return records

TRAINDIR = '/home/aistudio/work/insects/train'
TESTDIR = '/home/aistudio/work/insects/test'
VALIDDIR = '/home/aistudio/work/insects/val'
cname2cid = get_insect_names()
records = get_annotations(cname2cid, TRAINDIR)
len(records)
1693

records[0]
{'difficult': array([0, 0, 0, 0, 0], dtype=int32),
 'gt_bbox': array([[600. , 344.5, 135. , 172. ],
        [540.5, 705. ,  56. , 129. ],
        [661. , 831. ,  81. ,  71. ],
        [782.5, 545.5,  48. ,  82. ],
        [823. , 678. ,  59. ,  75. ]], dtype=float32),
 'gt_class': array([1, 0, 4, 2, 5], dtype=int32),
 'gt_poly': [],
 'h': 1224.0,
 'im_file': '/home/aistudio/work/insects/train/images/693.jpeg',
 'im_id': array([0]),
 'is_crowd': array([0, 0, 0, 0, 0], dtype=int32),
 'w': 1224.0}

Through the program above, all the annotation data of the training set has been read out and stored in the records list. Each element is the annotation of one image, including the image file path, the image id, the image height and width, and the classes and positions of the objects the image contains.
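
As a quick sanity check (an illustrative snippet, not part of the original course code), we can count how many images and ground-truth boxes were read:

# Illustrative check: count the images and ground-truth boxes in records
num_images = len(records)
num_boxes = sum(len(r['gt_bbox']) for r in records)
print('images:', num_images, 'boxes:', num_boxes)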

Data reading and preprocessing

Data preprocessing is a very important step in training neural networks. Appropriate preprocessing helps the model converge better and prevents overfitting. First the data needs to be read from disk and then preprocessed; to keep the network running fast, the preprocessing itself usually needs to be accelerated as well.

Data reading

All the image descriptions have already been stored in records, where each element describes one image. The program below shows how to read an image and its annotations according to a description in records.

### Data reading
import cv2

def get_bbox(gt_bbox, gt_class):
    # For a typical detection task, one image often contains several objects.
    # We set MAX_NUM = 50, i.e. at most 50 ground-truth boxes are kept per image;
    # if an image has fewer than 50 boxes, the remaining entries of gt_bbox and
    # gt_class are padded with 0.
    MAX_NUM = 50
    gt_bbox2 = np.zeros((MAX_NUM, 4))
    gt_class2 = np.zeros((MAX_NUM,))
    for i in range(len(gt_bbox)):
        # keep at most MAX_NUM boxes (check before writing to avoid an index error)
        if i >= MAX_NUM:
            break
        gt_bbox2[i, :] = gt_bbox[i, :]
        gt_class2[i] = gt_class[i]
    return gt_bbox2, gt_class2

def get_img_data_from_file(record):
    """
    record is a dict as following,
      record = {
            'im_file': img_file,
            'im_id': im_id,
            'h': im_h,
            'w': im_w,
            'is_crowd': is_crowd,
            'gt_class': gt_class,
            'gt_bbox': gt_bbox,
            'gt_poly': [],
            'difficult': difficult
            }
    """
    im_file = record['im_file']
    h = record['h']
    w = record['w']
    is_crowd = record['is_crowd']
    gt_class = record['gt_class']
    gt_bbox = record['gt_bbox']
    difficult = record['difficult']

    img = cv2.imread(im_file)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # check that h and w in record equal those read from the image file
    assert img.shape[0] == int(h), \
        "image height of {} inconsistent in record({}) and img file({})".format(
            im_file, h, img.shape[0])
    assert img.shape[1] == int(w), \
        "image width of {} inconsistent in record({}) and img file({})".format(
            im_file, w, img.shape[1])

    gt_boxes, gt_labels = get_bbox(gt_bbox, gt_class)

    # gt_bbox uses relative (normalized) coordinates
    gt_boxes[:, 0] = gt_boxes[:, 0] / float(w)
    gt_boxes[:, 1] = gt_boxes[:, 1] / float(h)
    gt_boxes[:, 2] = gt_boxes[:, 2] / float(w)
    gt_boxes[:, 3] = gt_boxes[:, 3] / float(h)

    return img, gt_boxes, gt_labels, (h, w)

record = records[0]
img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)
img.shape
(1224, 1224, 3)

gt_boxes.shape
(50, 4)

gt_labels
array([1., 0., 4., 2., 5., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

scales
(1224.0, 1224.0)

The get_img_data_from_file() function returns the image data img, the ground-truth box coordinates gt_boxes, the class labels gt_labels of the objects those boxes contain, and the image size scales.
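
Since gt_boxes stores relative xywh values, the following illustrative snippet (not part of the original course code) converts the first box back to pixel xyxy coordinates, which makes the convention explicit:

# Illustrative: convert the first relative xywh box back to pixel xyxy
h, w = scales
cx, cy, bw, bh = gt_boxes[0]  # relative center-x, center-y, width, height
x1, y1 = (cx - bw / 2) * w, (cy - bh / 2) * h
x2, y2 = (cx + bw / 2) * w, (cy + bh / 2) * h
print(x1, y1, x2, y2)  # pixel corner coordinates of the first ground-truth box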

Data preprocessing

In computer vision, images are often transformed randomly to produce samples that are similar to, but not identical with, the originals. The main purpose is to enlarge the training dataset, suppress overfitting, and improve the model's generalization ability. The commonly used methods are given in the programs below.

Randomly adjusting brightness, contrast, and color

import numpy as np
import cv2
from PIL import Image, ImageEnhance
import random

# Randomly change brightness, contrast, color, etc.
def random_distort(img):
    # randomly change brightness
    def random_brightness(img, lower=0.5, upper=1.5):
        e = np.random.uniform(lower, upper)
        return ImageEnhance.Brightness(img).enhance(e)
    # randomly change contrast
    def random_contrast(img, lower=0.5, upper=1.5):
        e = np.random.uniform(lower, upper)
        return ImageEnhance.Contrast(img).enhance(e)
    # randomly change color
    def random_color(img, lower=0.5, upper=1.5):
        e = np.random.uniform(lower, upper)
        return ImageEnhance.Color(img).enhance(e)

    ops = [random_brightness, random_contrast, random_color]
    np.random.shuffle(ops)

    img = Image.fromarray(img)
    img = ops[0](img)
    img = ops[1](img)
    img = ops[2](img)
    img = np.asarray(img)

    return img

Random padding

# Random padding: paste the image onto a larger canvas at a random offset
def random_expand(img,
                  gtboxes,
                  max_ratio=4.,
                  fill=None,
                  keep_ratio=True,
                  thresh=0.5):
    if random.random() > thresh:
        return img, gtboxes

    if max_ratio < 1.0:
        return img, gtboxes

    h, w, c = img.shape
    ratio_x = random.uniform(1, max_ratio)
    if keep_ratio:
        ratio_y = ratio_x
    else:
        ratio_y = random.uniform(1, max_ratio)
    oh = int(h * ratio_y)
    ow = int(w * ratio_x)
    off_x = random.randint(0, ow - w)
    off_y = random.randint(0, oh - h)

    out_img = np.zeros((oh, ow, c))
    if fill and len(fill) == c:
        for i in range(c):
            out_img[:, :, i] = fill[i] * 255.0

    out_img[off_y:off_y + h, off_x:off_x + w, :] = img
    gtboxes[:, 0] = ((gtboxes[:, 0] * w) + off_x) / float(ow)
    gtboxes[:, 1] = ((gtboxes[:, 1] * h) + off_y) / float(oh)
    gtboxes[:, 2] = gtboxes[:, 2] / ratio_x
    gtboxes[:, 3] = gtboxes[:, 3] / ratio_y

    return out_img.astype('uint8'), gtboxes

Random cropping

Before performing random cropping we first define two functions, multi_box_iou_xywh and box_crop; both functions are saved in the box_utils.py file.

import numpy as np

def multi_box_iou_xywh(box1, box2):
    """
    In this case, box1 or box2 can contain multi boxes.
    Only two cases can be processed in this method:
        1, box1 and box2 have the same shape, box1.shape == box2.shape
        2, either box1 or box2 contains only one box, len(box1) == 1 or len(box2) == 1
    If the shape of box1 and box2 does not match, and both of them contain multi boxes, it will be wrong.
    """
    assert box1.shape[-1] == 4, "Box1 shape[-1] should be 4."
    assert box2.shape[-1] == 4, "Box2 shape[-1] should be 4."

    b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
    b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
    b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
    b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2

    inter_x1 = np.maximum(b1_x1, b2_x1)
    inter_x2 = np.minimum(b1_x2, b2_x2)
    inter_y1 = np.maximum(b1_y1, b2_y1)
    inter_y2 = np.minimum(b1_y2, b2_y2)
    inter_w = inter_x2 - inter_x1
    inter_h = inter_y2 - inter_y1
    inter_w = np.clip(inter_w, a_min=0., a_max=None)
    inter_h = np.clip(inter_h, a_min=0., a_max=None)

    inter_area = inter_w * inter_h
    b1_area = (b1_x2 - b1_x1) * (b1_y2 - b1_y1)
    b2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1)

    return inter_area / (b1_area + b2_area - inter_area)

def box_crop(boxes, labels, crop, img_shape):
    x, y, w, h = map(float, crop)
    im_w, im_h = map(float, img_shape)

    boxes = boxes.copy()
    boxes[:, 0], boxes[:, 2] = (boxes[:, 0] - boxes[:, 2] / 2) * im_w, (
        boxes[:, 0] + boxes[:, 2] / 2) * im_w
    boxes[:, 1], boxes[:, 3] = (boxes[:, 1] - boxes[:, 3] / 2) * im_h, (
        boxes[:, 1] + boxes[:, 3] / 2) * im_h

    crop_box = np.array([x, y, x + w, y + h])
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
    mask = np.logical_and(crop_box[:2] <= centers, centers <= crop_box[2:]).all(
        axis=1)

    boxes[:, :2] = np.maximum(boxes[:, :2], crop_box[:2])
    boxes[:, 2:] = np.minimum(boxes[:, 2:], crop_box[2:])
    boxes[:, :2] -= crop_box[:2]
    boxes[:, 2:] -= crop_box[:2]

    mask = np.logical_and(mask, (boxes[:, :2] < boxes[:, 2:]).all(axis=1))
    boxes = boxes * np.expand_dims(mask.astype('float32'), axis=1)
    labels = labels * mask.astype('float32')
    boxes[:, 0], boxes[:, 2] = (boxes[:, 0] + boxes[:, 2]) / 2 / w, (
        boxes[:, 2] - boxes[:, 0]) / w
    boxes[:, 1], boxes[:, 3] = (boxes[:, 1] + boxes[:, 3]) / 2 / h, (
        boxes[:, 3] - boxes[:, 1]) / h

    return boxes, labels, mask.sum()
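
As a quick sanity check of multi_box_iou_xywh, the following illustrative snippet (with made-up boxes, not part of the original course code) verifies that a box has IoU 1 with itself and IoU 0 with a disjoint box:

# Illustrative sanity check of multi_box_iou_xywh on toy boxes
box_a = np.array([[0.5, 0.5, 0.2, 0.2]])    # one xywh box, relative coords
box_b = np.array([[0.5, 0.5, 0.2, 0.2],
                  [0.9, 0.9, 0.1, 0.1]])    # identical box and a disjoint box
print(multi_box_iou_xywh(box_a, box_b))     # approximately [1.0, 0.0]
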
# Random cropping
def random_crop(img,
                boxes,
                labels,
                scales=[0.3, 1.0],
                max_ratio=2.0,
                constraints=None,
                max_trial=50):
    if len(boxes) == 0:
        return img, boxes, labels

    if not constraints:
        constraints = [(0.1, 1.0), (0.3, 1.0), (0.5, 1.0), (0.7, 1.0),
                       (0.9, 1.0), (0.0, 1.0)]

    img = Image.fromarray(img)
    w, h = img.size
    crops = [(0, 0, w, h)]
    for min_iou, max_iou in constraints:
        for _ in range(max_trial):
            scale = random.uniform(scales[0], scales[1])
            aspect_ratio = random.uniform(max(1 / max_ratio, scale * scale), \
                                          min(max_ratio, 1 / scale / scale))
            crop_h = int(h * scale / np.sqrt(aspect_ratio))
            crop_w = int(w * scale * np.sqrt(aspect_ratio))
            crop_x = random.randrange(w - crop_w)
            crop_y = random.randrange(h - crop_h)
            crop_box = np.array([[(crop_x + crop_w / 2.0) / w,
                                  (crop_y + crop_h / 2.0) / h,
                                  crop_w / float(w), crop_h / float(h)]])

            iou = multi_box_iou_xywh(crop_box, boxes)
            if min_iou <= iou.min() and max_iou >= iou.max():
                crops.append((crop_x, crop_y, crop_w, crop_h))
                break

    while crops:
        crop = crops.pop(np.random.randint(0, len(crops)))
        crop_boxes, crop_labels, box_num = box_crop(boxes, labels, crop, (w, h))
        if box_num < 1:
            continue
        img = img.crop((crop[0], crop[1], crop[0] + crop[2],
                        crop[1] + crop[3])).resize(img.size, Image.LANCZOS)
        img = np.asarray(img)
        return img, crop_boxes, crop_labels
    img = np.asarray(img)
    return img, boxes, labels

Random scaling

# Random scaling: resize to the target size with a randomly chosen interpolation method
def random_interp(img, size, interp=None):
    interp_method = [
        cv2.INTER_NEAREST,
        cv2.INTER_LINEAR,
        cv2.INTER_AREA,
        cv2.INTER_CUBIC,
        cv2.INTER_LANCZOS4,
    ]
    if interp not in interp_method:
        interp = interp_method[random.randint(0, len(interp_method) - 1)]
    h, w, _ = img.shape
    im_scale_x = size / float(w)
    im_scale_y = size / float(h)
    img = cv2.resize(
        img, None, None, fx=im_scale_x, fy=im_scale_y, interpolation=interp)
    return img

Random flipping

# Random horizontal flip
def random_flip(img, gtboxes, thresh=0.5):
    if random.random() > thresh:
        img = img[:, ::-1, :]
        gtboxes[:, 0] = 1.0 - gtboxes[:, 0]
    return img, gtboxes

Randomly shuffling the order of the ground-truth boxes

# Randomly shuffle the order of the ground-truth boxes
def shuffle_gtbox(gtbox, gtlabel):
    gt = np.concatenate(
        [gtbox, gtlabel[:, np.newaxis]], axis=1)
    idx = np.arange(gt.shape[0])
    np.random.shuffle(idx)
    gt = gt[idx, :]
    return gt[:, :4], gt[:, 4]

Image augmentation

# Putting all the image augmentation methods together
def image_augment(img, gtboxes, gtlabels, size, means=None):
    # randomly change brightness, contrast, color, etc.
    img = random_distort(img)
    # random padding
    img, gtboxes = random_expand(img, gtboxes, fill=means)
    # random cropping
    img, gtboxes, gtlabels = random_crop(img, gtboxes, gtlabels)
    # random scaling
    img = random_interp(img, size)
    # random flipping
    img, gtboxes = random_flip(img, gtboxes)
    # randomly shuffle the order of the ground-truth boxes
    gtboxes, gtlabels = shuffle_gtbox(gtboxes, gtlabels)

    return img.astype('float32'), gtboxes.astype('float32'), gtlabels.astype('int32')

img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)
size = 512
img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)
img.shape
(512, 512, 3)

gt_boxes.shape
(50, 4)

gt_labels.shape
(50,)

The pixel values of the img obtained here still need to be adjusted: divide by 255., subtract the mean and divide by the standard deviation, and then rearrange the dimensions from [H, W, C] to [C, H, W].

img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)
size = 512
img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
mean = np.array(mean).reshape((1, 1, -1))
std = np.array(std).reshape((1, 1, -1))
img = (img / 255.0 - mean) / std
img = img.astype('float32').transpose((2, 0, 1))
img

The steps above can be wrapped into a single function, get_img_data:

def get_img_data(record, size=640):
    img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)
    img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    mean = np.array(mean).reshape((1, 1, -1))
    std = np.array(std).reshape((1, 1, -1))
    img = (img / 255.0 - mean) / std
    img = img.astype('float32').transpose((2, 0, 1))
    return img, gt_boxes, gt_labels, scales

TRAINDIR = '/home/aistudio/work/insects/train'
TESTDIR = '/home/aistudio/work/insects/test'
VALIDDIR = '/home/aistudio/work/insects/val'
cname2cid = get_insect_names()
records = get_annotations(cname2cid, TRAINDIR)

record = records[0]
img, gt_boxes, gt_labels, scales = get_img_data(record, size=480)
img.shape
(3, 480, 480)

gt_boxes.shape
(50, 4)

gt_labels
array([0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0,
       5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0], dtype=int32)

scales
(1224.0, 1224.0)

Batch data reading and acceleration

The programs above showed how to read and preprocess the data of a single image; the code below implements batched data reading.

# Pick a random image size for the samples within one batch
def get_img_size(mode):
    if (mode == 'train') or (mode == 'valid'):
        inds = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
        ii = np.random.choice(inds)
        img_size = 320 + ii * 32
    else:
        img_size = 608
    return img_size

# Convert a batch of data from list form into a tuple of arrays
def make_array(batch_data):
    img_array = np.array([item[0] for item in batch_data], dtype='float32')
    gt_box_array = np.array([item[1] for item in batch_data], dtype='float32')
    gt_labels_array = np.array([item[2] for item in batch_data], dtype='int32')
    img_scale = np.array([item[3] for item in batch_data], dtype='int32')
    return img_array, gt_box_array, gt_labels_array, img_scale

# Batched data reading. Images within one batch must have the same size,
# while the size varies randomly from batch to batch,
# as produced by the get_img_size function defined above
def data_loader(datadir, batch_size=10, mode='train'):
    cname2cid = get_insect_names()
    records = get_annotations(cname2cid, datadir)

    def reader():
        if mode == 'train':
            np.random.shuffle(records)
        batch_data = []
        img_size = get_img_size(mode)
        for record in records:
            img, gt_bbox, gt_labels, im_shape = get_img_data(record,
                                                             size=img_size)
            batch_data.append((img, gt_bbox, gt_labels, im_shape))
            if len(batch_data) == batch_size:
                yield make_array(batch_data)
                batch_data = []
                img_size = get_img_size(mode)
        if len(batch_data) > 0:
            yield make_array(batch_data)

    return reader
d = data_loader('/home/aistudio/work/insects/train', batch_size=2, mode='train')
img, gt_boxes, gt_labels, im_shape = next(d())
img.shape, gt_boxes.shape, gt_labels.shape, im_shape.shape
((2, 3, 608, 608), (2, 50, 4), (2, 50), (2, 2))
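
To see the batch-level size randomization in action, this illustrative snippet (not part of the original course code) reads a few batches and prints their shapes; the sizes are multiples of 32 between 320 and 608:

# Illustrative: consecutive batches can have different spatial sizes
loader = data_loader('/home/aistudio/work/insects/train', batch_size=2, mode='train')
for i, (img, gt_boxes, gt_labels, im_shape) in enumerate(loader()):
    print(img.shape)  # e.g. (2, 3, 416, 416), then (2, 3, 352, 352), ...
    if i >= 2:
        break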

Since data preprocessing is time-consuming, it may become the bottleneck of network training speed, so the preprocessing part needs to be optimized. The Paddle API paddle.reader.xmap_readers enables multithreaded data reading; the concrete implementation is shown below.

import functools
import paddle

# Multithreaded data reading with paddle.reader.xmap_readers
def multithread_loader(datadir, batch_size=10, mode='train'):
    cname2cid = get_insect_names()
    records = get_annotations(cname2cid, datadir)

    def reader():
        if mode == 'train':
            np.random.shuffle(records)
        img_size = get_img_size(mode)
        batch_data = []
        for record in records:
            batch_data.append((record, img_size))
            if len(batch_data) == batch_size:
                yield batch_data
                batch_data = []
                img_size = get_img_size(mode)
        if len(batch_data) > 0:
            yield batch_data

    def get_data(samples):
        batch_data = []
        for sample in samples:
            record = sample[0]
            img_size = sample[1]
            img, gt_bbox, gt_labels, im_shape = get_img_data(record, size=img_size)
            batch_data.append((img, gt_bbox, gt_labels, im_shape))
        return make_array(batch_data)

    mapper = functools.partial(get_data)

    return paddle.reader.xmap_readers(mapper, reader, 8, 10)
d = multithread_loader('/home/aistudio/work/insects/train', batch_size=2, mode='train')
img, gt_boxes, gt_labels, im_shape = next(d())
img.shape, gt_boxes.shape, gt_labels.shape, im_shape.shape
((2, 3, 480, 480), (2, 50, 4), (2, 50), (2, 2))

At this point, we have walked through inspecting the data in the dataset, extracting the annotation information, reading images and annotations from files, data augmentation, batched reading, and acceleration. multithread_loader returns img, gt_boxes, gt_labels, and im_shape, which can next be fed into a neural network and used in a concrete algorithm.

Before moving on to the algorithm itself, here is the reading code for the test data as a supplement. Test data has no annotations and does not need image augmentation; the code is shown below.

# Test data reading
# Convert a batch of test data from list form into a tuple of arrays
def make_test_array(batch_data):
    img_name_array = np.array([item[0] for item in batch_data])
    img_data_array = np.array([item[1] for item in batch_data], dtype='float32')
    img_scale_array = np.array([item[2] for item in batch_data], dtype='int32')
    return img_name_array, img_data_array, img_scale_array

# Test data reading
def test_data_loader(datadir, batch_size=10, test_image_size=608, mode='test'):
    """
    Load the images used for testing; test data has no ground-truth labels
    """
    image_names = os.listdir(datadir)

    def reader():
        batch_data = []
        img_size = test_image_size
        for image_name in image_names:
            file_path = os.path.join(datadir, image_name)
            img = cv2.imread(file_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            H = img.shape[0]
            W = img.shape[1]
            img = cv2.resize(img, (img_size, img_size))

            mean = [0.485, 0.456, 0.406]
            std = [0.229, 0.224, 0.225]
            mean = np.array(mean).reshape((1, 1, -1))
            std = np.array(std).reshape((1, 1, -1))
            out_img = (img / 255.0 - mean) / std
            out_img = out_img.astype('float32').transpose((2, 0, 1))
            img = out_img
            im_shape = [H, W]

            batch_data.append((image_name.split('.')[0], img, im_shape))
            if len(batch_data) == batch_size:
                yield make_test_array(batch_data)
                batch_data = []
        if len(batch_data) > 0:
            yield make_test_array(batch_data)

    return reader
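
A minimal usage sketch of test_data_loader (illustrative; it assumes the test images live under /home/aistudio/work/insects/test/images):

# Illustrative usage, assuming the test images are stored in this directory
test_loader = test_data_loader('/home/aistudio/work/insects/test/images', batch_size=2)
img_name, img_data, img_scale = next(test_loader())
print(img_name.shape, img_data.shape, img_scale.shape)  # e.g. (2,), (2, 3, 608, 608), (2, 2)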


03

Summary

In this lesson, Mr. Sun used the forestry pest dataset as an example to explain the data preprocessing and augmentation methods commonly used in object detection. Starting from the next lesson, we will walk through the concrete implementation of the YOLOv3 algorithm. Later lessons will continue to bring richer content to help learners master deep learning methods quickly.

【How to Learn】

1. How do I watch the accompanying videos? How do I practice with the code?

The videos and code have been published on the AI Studio platform. The videos can be watched on both PC and mobile, and you are encouraged to run the code yourself. Scan the QR code or open the following link: https://aistudio.baidu.com/aistudio/course/introduce/888

2. What if I have questions while studying?

Join the deep learning camp QQ group: 726887660, where the class advisor and PaddlePaddle engineers will answer questions and share learning materials.

3. How can I learn more?

Through the PaddlePaddle deep learning camp, Baidu will keep updating the "Introduction to Deep Learning from Scratch" course, taught personally by senior Baidu deep learning R&D engineers, live every Tuesday and Thursday from 8:00 to 9:00 pm, combining live streaming, recordings, hands-on practice, and Q&A. Stay tuned!

Search for AI Studio, go to Courses, and select "Baidu Architects Teach Deep Learning Hands-On", or click "Read the original" at the end of this article to watch.


Reposted from blog.csdn.net/PaddlePaddle/article/details/104079068