The dataset-preparation guide on the official SSD site is too brief and omits the details, so this post digs into how the process actually works.
Step one of the official VOC dataset tutorial:
# Create the trainval.txt, test.txt, and test_name_size.txt in data/VOC0712/
./data/VOC0712/create_list.sh
The comments show that this step gathers the image paths and the annotation paths for the training and test sets. What test_name_size.txt is for is less clear; a quick look shows that the following lines in create_list.sh generate it:
# Generate image name and size infomation.
if [ $dataset == "test" ]
then
  $bash_dir/../../build/tools/get_image_size $root_dir $dst_file $bash_dir/$dataset"_name_size.txt"
fi
The comment indicates that this file records each test image's name and size.
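As a sketch of what get_image_size emits, assuming one line per test image in the form "name height width" (the helper below is hypothetical, for illustration only):

```python
# Hypothetical sketch of the test_name_size.txt format:
# one line per test image, "name height width"
# (assumption based on the get_image_size tool's purpose).
def name_size_lines(images):
    """images: iterable of (name, height, width) tuples."""
    return ["%s %d %d" % (name, h, w) for name, h, w in images]


lines = name_size_lines([("000001", 500, 353), ("000002", 500, 500)])
print("\n".join(lines))
```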
With that, the function of create_list.sh is essentially clear.
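Concretely, each line of the generated trainval.txt / test.txt pairs an image path with its annotation path, both relative to the dataset root. A minimal sketch (the helper and the paths are illustrative):

```python
# Sketch of the list-file format produced by create_list.sh:
# "<image path> <annotation path>", one pair per line.
def make_list_line(year, img_id):
    img = "VOC%s/JPEGImages/%s.jpg" % (year, img_id)
    anno = "VOC%s/Annotations/%s.xml" % (year, img_id)
    return "%s %s" % (img, anno)


print(make_list_line("2007", "000001"))
```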
./data/VOC0712/create_data.sh
Next, what create_data.sh does.
cur_dir=$(cd $( dirname ${BASH_SOURCE[0]} ) && pwd )
root_dir=$cur_dir/../..
cd $root_dir
redo=1
data_root_dir="$HOME/dataset/VOC/VOCdevkit"
dataset_name="VOC0712"
mapfile="$root_dir/data/$dataset_name/labelmap_voc.prototxt"
anno_type="detection"
db="lmdb"
min_dim=0
max_dim=0
width=0
height=0
extra_cmd="--encode-type=jpg --encoded"
if [ $redo ]
then
  extra_cmd="$extra_cmd --redo"
fi
for subset in test trainval
do
  python2 $root_dir/scripts/create_annoset.py --anno-type=$anno_type --label-map-file=$mapfile --min-dim=$min_dim --max-dim=$max_dim --resize-width=$width --resize-height=$height --check-label $extra_cmd $data_root_dir $root_dir/data/$dataset_name/$subset.txt $data_root_dir/$dataset_name/$db/$dataset_name"_"$subset"_"$db examples/$dataset_name
done
The core work is done by create_annoset.py; the rest of the script only sets up parameters.
Running create_data.sh produces the following output:
build/tools/convert_annoset
--anno_type=detection
--label_type=xml
--label_map_file=./../data/VOC0712/labelmap_voc.prototxt
--check_label=True
--min_dim=0
--max_dim=0
--resize_height=0
--resize_width=0
--backend=lmdb
--shuffle=False
--check_size=False
--encode_type=jpg
--encoded=True
--gray=False
/dataset/VOC/VOCdevkit/
./../data/VOC0712/test.txt
/dataset/VOC/VOCdevkit/VOC0712/lmdb/VOC0712_test_lmdb
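The shell layer's job, then, is just flag expansion: create_annoset.py essentially assembles a convert_annoset command line like the one above. A rough, hypothetical sketch of that assembly (not the actual create_annoset.py code):

```python
# Hypothetical sketch: build the convert_annoset command line from the
# flags passed down by create_data.sh (only a subset of flags shown).
def build_cmd(anno_type, backend, root, listfile, out_db, encode_type="jpg"):
    return ["build/tools/convert_annoset",
            "--anno_type=%s" % anno_type,
            "--backend=%s" % backend,
            "--encode_type=%s" % encode_type,
            "--encoded=True",
            root, listfile, out_db]


cmd = build_cmd("detection", "lmdb", "/dataset/VOC/VOCdevkit/",
                "./../data/VOC0712/test.txt",
                "/dataset/VOC/VOCdevkit/VOC0712/lmdb/VOC0712_test_lmdb")
print(" ".join(cmd))
```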
These parameters are then passed on to tools/convert_annoset.cpp, so convert_annoset.cpp is the program that actually does the work. The pipeline feels needlessly layered here: sh+cpp would have been enough, yet it is sh+python+cpp.
The code of convert_annoset.cpp:
// This program converts a set of images and annotations to a lmdb/leveldb by
// storing them as AnnotatedDatum proto buffers.
// Usage:
// convert_annoset [FLAGS] ROOTFOLDER/ LISTFILE DB_NAME
//
// where ROOTFOLDER is the root folder that holds all the images and
// annotations, and LISTFILE should be a list of files as well as their labels
// or label files.
// For classification task, the file should be in the format as
// imgfolder1/img1.JPEG 7
// ....
// For detection task, the file should be in the format as
// imgfolder1/img1.JPEG annofolder1/anno1.xml
// ....
#include <algorithm>
#include <fstream> // NOLINT(readability/streams)
#include <map>
#include <string>
#include <utility>
#include <vector>
#include "boost/scoped_ptr.hpp"
#include "boost/variant.hpp"
#include "gflags/gflags.h"
#include "glog/logging.h"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"
#include "caffe/util/io.hpp"
#include "caffe/util/rng.hpp"
using namespace caffe; // NOLINT(build/namespaces)
using std::pair;
using boost::scoped_ptr;
DEFINE_bool(gray, false,
    "When this option is on, treat images as grayscale ones");
DEFINE_bool(shuffle, false,
    "Randomly shuffle the order of images and their labels");
DEFINE_string(backend, "lmdb",
    "The backend {lmdb, leveldb} for storing the result");
DEFINE_string(anno_type, "classification",
    "The type of annotation {classification, detection}.");
DEFINE_string(label_type, "xml",
    "The type of annotation file format.");
DEFINE_string(label_map_file, "",
    "A file with LabelMap protobuf message.");
DEFINE_bool(check_label, false,
    "When this option is on, check that there is no duplicated name/label.");
DEFINE_int32(min_dim, 0,
    "Minimum dimension images are resized to (keep same aspect ratio)");
DEFINE_int32(max_dim, 0,
    "Maximum dimension images are resized to (keep same aspect ratio)");
DEFINE_int32(resize_width, 0, "Width images are resized to");
DEFINE_int32(resize_height, 0, "Height images are resized to");
DEFINE_bool(check_size, false,
    "When this option is on, check that all the datum have the same size");
DEFINE_bool(encoded, false,
    "When this option is on, the encoded image will be save in datum");
DEFINE_string(encode_type, "",
    "Optional: What type should we encode the image as ('png','jpg',...).");

int main(int argc, char** argv) {
#ifdef USE_OPENCV
  ::google::InitGoogleLogging(argv[0]);
  // Print output to stderr (while still logging)
  FLAGS_alsologtostderr = 1;
#ifndef GFLAGS_GFLAGS_H_
  namespace gflags = google;
#endif
  gflags::SetUsageMessage("Convert a set of images and annotations to the "
      "leveldb/lmdb format used as input for Caffe.\n"
      "Usage:\n"
      "    convert_annoset [FLAGS] ROOTFOLDER/ LISTFILE DB_NAME\n");
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  if (argc < 4) {
    gflags::ShowUsageWithFlagsRestrict(argv[0], "tools/convert_annoset");
    return 1;
  }
  const bool is_color = !FLAGS_gray;
  const bool check_size = FLAGS_check_size;
  const bool encoded = FLAGS_encoded;
  const string encode_type = FLAGS_encode_type;
  const string anno_type = FLAGS_anno_type;
  AnnotatedDatum_AnnotationType type;
  const string label_type = FLAGS_label_type;
  const string label_map_file = FLAGS_label_map_file;
  const bool check_label = FLAGS_check_label;
  std::map<std::string, int> name_to_label;
  std::ifstream infile(argv[2]);
  std::vector<std::pair<std::string, boost::variant<int, std::string> > > lines;
  std::string filename;
  int label;
  std::string labelname;
  if (anno_type == "classification") {
    while (infile >> filename >> label) {
      lines.push_back(std::make_pair(filename, label));
    }
  } else if (anno_type == "detection") {
    type = AnnotatedDatum_AnnotationType_BBOX;
    LabelMap label_map;
    CHECK(ReadProtoFromTextFile(label_map_file, &label_map))
        << "Failed to read label map file.";
    CHECK(MapNameToLabel(label_map, check_label, &name_to_label))
        << "Failed to convert name to label.";
    while (infile >> filename >> labelname) {
      lines.push_back(std::make_pair(filename, labelname));
    }
  }
  if (FLAGS_shuffle) {
    // randomly shuffle data
    LOG(INFO) << "Shuffling data";
    shuffle(lines.begin(), lines.end());
  }
  LOG(INFO) << "A total of " << lines.size() << " images.";
  if (encode_type.size() && !encoded)
    LOG(INFO) << "encode_type specified, assuming encoded=true.";
  int min_dim = std::max<int>(0, FLAGS_min_dim);
  int max_dim = std::max<int>(0, FLAGS_max_dim);
  int resize_height = std::max<int>(0, FLAGS_resize_height);
  int resize_width = std::max<int>(0, FLAGS_resize_width);
  // Create new DB
  scoped_ptr<db::DB> db(db::GetDB(FLAGS_backend));
  db->Open(argv[3], db::NEW);
  scoped_ptr<db::Transaction> txn(db->NewTransaction());
  // Storing to db
  std::string root_folder(argv[1]);
  AnnotatedDatum anno_datum;
  Datum* datum = anno_datum.mutable_datum();
  int count = 0;
  int data_size = 0;
  bool data_size_initialized = false;
  for (int line_id = 0; line_id < lines.size(); ++line_id) {
    bool status = true;
    std::string enc = encode_type;
    if (encoded && !enc.size()) {
      // Guess the encoding type from the file name
      string fn = lines[line_id].first;
      size_t p = fn.rfind('.');
      if ( p == fn.npos )
        LOG(WARNING) << "Failed to guess the encoding of '" << fn << "'";
      enc = fn.substr(p);
      std::transform(enc.begin(), enc.end(), enc.begin(), ::tolower);
    }
    filename = root_folder + lines[line_id].first;
    if (anno_type == "classification") {
      label = boost::get<int>(lines[line_id].second);
      status = ReadImageToDatum(filename, label, resize_height, resize_width,
          min_dim, max_dim, is_color, enc, datum);
    } else if (anno_type == "detection") {
      labelname = root_folder + boost::get<std::string>(lines[line_id].second);
      status = ReadRichImageToAnnotatedDatum(filename, labelname, resize_height,
          resize_width, min_dim, max_dim, is_color, enc, type, label_type,
          name_to_label, &anno_datum);
      anno_datum.set_type(AnnotatedDatum_AnnotationType_BBOX);
    }
    if (status == false) {
      LOG(WARNING) << "Failed to read " << lines[line_id].first;
      continue;
    }
    if (check_size) {
      if (!data_size_initialized) {
        data_size = datum->channels() * datum->height() * datum->width();
        data_size_initialized = true;
      } else {
        const std::string& data = datum->data();
        CHECK_EQ(data.size(), data_size) << "Incorrect data field size "
            << data.size();
      }
    }
    // sequential
    string key_str = caffe::format_int(line_id, 8) + "_" + lines[line_id].first;
    // Put in db
    string out;
    CHECK(anno_datum.SerializeToString(&out));
    txn->Put(key_str, out);
    if (++count % 1000 == 0) {
      // Commit db
      txn->Commit();
      txn.reset(db->NewTransaction());
      LOG(INFO) << "Processed " << count << " files.";
    }
  }
  // write the last batch
  if (count % 1000 != 0) {
    txn->Commit();
    LOG(INFO) << "Processed " << count << " files.";
  }
#else
  LOG(FATAL) << "This tool requires OpenCV; compile with USE_OPENCV.";
#endif  // USE_OPENCV
  return 0;
}
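One detail worth noting from the write loop: each record's key is the 8-digit zero-padded line index joined to the file name (caffe::format_int(line_id, 8) + "_" + filename), which keeps keys unique and lexicographically ordered. The same key can be reproduced in Python:

```python
# Reproduce the db key built by convert_annoset:
# an 8-digit zero-padded line index, an underscore, then the file name.
def make_key(line_id, filename):
    return "%08d_%s" % (line_id, filename)


print(make_key(0, "VOC2007/JPEGImages/000001.jpg"))
# -> 00000000_VOC2007/JPEGImages/000001.jpg
```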
Working through the C++ above directly is rather tedious, so let's take another angle and inspect the generated lmdb file itself,
mainly following this article: https://blog.csdn.net/Touch_Dream/article/details/80598901
The analysis itself is not repeated here; see the article above. The conclusion: a VOC lmdb record consists of three parts:
1) datum, which stores the image and its related fields
2) annotation_group, which stores the annotation information
3) type, which marks the kind of annotation
This can be verified experimentally with the Python code from that article.
1) Contents of datum:
channels: 3
height: 500
width: 353
data:...
label: -1
encoded: true
datum stores the image information; the data field (omitted here) holds the pixel data. The label field plays no role for detection and is therefore set to -1 (it is used to label classification data). encoded is true because the JPEG is stored in compressed form and must be decoded on read.
2) Contents of annotation_group:
[group_label: 12
annotation {
instance_id: 0
bbox {
xmin: 0.135977342725
ymin: 0.479999989271
xmax: 0.552407920361
ymax: 0.741999983788
difficult: false
}
}
, group_label: 15
annotation {
instance_id: 0
bbox {
xmin: 0.0226628892124
ymin: 0.0240000002086
xmax: 0.997167110443
ymax: 0.995999991894
difficult: false
}
}
]
The annotations are stored as a list. group_label is the object class; instance_id plays no role here and is set to 0; bbox stores the box coordinates along with the difficult flag. Note that the box coordinates are normalized by the image width and height.
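The normalization is easy to check against the dump above: for the 353×500 image, a pixel box of roughly (48, 240, 195, 371) divides out to the stored floats (the pixel values are inferred from the dump, not taken from the original xml). A minimal sketch:

```python
# Normalize an absolute-pixel VOC box to the [0, 1] coordinates
# stored in the lmdb (relative to image width and height).
def normalize_bbox(xmin, ymin, xmax, ymax, width, height):
    return (xmin / float(width), ymin / float(height),
            xmax / float(width), ymax / float(height))


box = normalize_bbox(48, 240, 195, 371, 353, 500)
print(box)  # approximately (0.135977, 0.48, 0.552408, 0.742)
```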
3) Contents of type
The value of type is simply 0. Per the comment in the proto definition:
// If there are "rich" annotations, specify the type of annotation.
// Currently it only supports bounding box.
// If there are no "rich" annotations, use label in datum instead.
So type indicates whether the annotations are "rich"; currently only bounding boxes are supported, and when there are no rich annotations, the label inside datum is used instead.