A case where Caffe reports high training accuracy but classify.py puts every image into class 1

First of all, there is a classification.cpp under caffe/examples/cpp_classification/ that can also be used for classification. When you compile the Caffe environment at the beginning (e.g. with make all), the build generates a classification.bin under build/examples/cpp_classification. The
command format is

/path/to/caffe/build/examples/cpp_classification/classification.bin  
/path/to/deploy.prototxt
/path/to/model/_iter_300.caffemodel   
/path/to/meanfile/mean.binaryproto    
/path/to/wordstxt/word.txt   
/path/to/image/0758.jpg 
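One note on the mean file in this command: classification.bin reads the .binaryproto produced by compute_image_mean, while classify.py's --mean_file argument expects an .npy array loaded with numpy. A minimal conversion sketch, assuming pycaffe is installed (the file names are just placeholders for the ones above):

import numpy as np
import caffe
from caffe.proto import caffe_pb2

# Parse the binary protobuf mean produced by compute_image_mean
blob = caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

# blobproto_to_array returns a 1 x C x H x W array; drop the batch axis
mean = caffe.io.blobproto_to_array(blob)[0]
np.save('mean.npy', mean)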

You can run the classification.bin test above first. The problem I hit at the time was that this command reported 100% accuracy, yet classify.py put everything into class 1. This error cost me more than a day. I checked many possible causes, including:

  1. Adjusting the learning rate (tried before testing with the cpp .bin; this is also the usual suspect when the loss stays stuck at 83... or goes negative)
  2. Changing the network (tried LeNet and AlexNet; neither helped)
  3. Trying matcaffe (sorry... I gave up because of GCC version issues)
  4. The lmdb files. This really did turn up a problem: when generating the lmdb you must pass a list of images annotated with their labels, and I had left the labels out, so every image ended up in class 1... A genuine bug, but fixing it still did not cure the misclassification, so I kept investigating
  5. The mean file. Supposedly the mean file can be processed in different ways; I did not test this, but if you are desperate it is a direction worth checking
  6. A problem in the code itself
  7. The behavior of caffe.io.load_image(); see the introduction at http://blog.csdn.net/smf0504/article/details/60138863 and the loader comparison after this list
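On point 7: caffe.io.load_image() goes through skimage and returns an RGB float image scaled to [0, 1], whereas OpenCV's imread() (the path Caffe itself uses when building training data) returns BGR uint8 in [0, 255]. A minimal comparison, with the image path as a placeholder:

import caffe
import cv2

# skimage-based loader: H x W x 3, float in [0, 1], channel order RGB
img_skimage = caffe.io.load_image('0758.jpg')

# OpenCV loader: H x W x 3, uint8 in [0, 255], channel order BGR
img_opencv = cv2.imread('0758.jpg')

# Make the two comparable: reverse the channel axis and rescale
img_opencv_rgb = img_opencv[:, :, ::-1] / 255.0

This is why the RGB -> BGR swap (channel_swap) and a raw_scale of 255 matter when the output of load_image() is fed to the network.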

    In the end it really was a problem in the code, which had me puzzled. The details are as follows:
    because Caffe reads images through OpenCV, the channel order is BGR, so when we read an image ourselves we have to imitate OpenCV. In classify.py this shows up as

parser.add_argument(
    "--channel_swap",
    default='2,1,0',
    help="Order to permute input channels. The default converts " +
         "RGB -> BGR since BGR is the Caffe default by way of OpenCV."
)

The default is '2,1,0', so in theory there is no need to pass this argument. But that is exactly where the problem was. I ignored it until I compared the raw image with the preprocessed one and noticed that the channels had never been swapped... When I printed channel_swap it came out as None...
I solved it by manually adding the line

channel_swap = [2, 1, 0]

I could have cried for a while... The reason I never suspected the script is that I had tested MNIST earlier without any problem, so I kept looking for mistakes in my own network definition; I did not expect the code itself to be at fault. MNIST images are grayscale, so it does not matter whether the channels are swapped, but with the color images I was using it really bites.
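For reference, here is a minimal sketch of running the same prediction through pycaffe's Classifier with channel_swap passed explicitly rather than relying on the command-line default; the file names come from the command above and the 256 x 256 input size is a placeholder:

import caffe

net = caffe.Classifier(
    'deploy.prototxt',           # network definition for deployment
    '_iter_300.caffemodel',      # trained weights
    mean=None,                   # or the array saved to mean.npy
    raw_scale=255,               # load_image() gives [0, 1]; rescale to [0, 255]
    channel_swap=(2, 1, 0),      # explicit RGB -> BGR swap
    image_dims=(256, 256))       # placeholder resize dimensions

img = caffe.io.load_image('0758.jpg')         # RGB, float in [0, 1]
probs = net.predict([img], oversample=False)
print(probs[0].argmax())                      # predicted class index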
