Running the Caffe MNIST example (LeNet)

Reference: http://caffe.berkeleyvision.org/gathered/examples/mnist.html

The steps below download the MNIST data, convert it to LMDB, and train LeNet with the GPU solver.

First, fetch the dataset:

jwy@jwy:~/caffe$ ./data/mnist/get_mnist.sh
Downloading...

Next, convert the raw images into the two LMDB databases (60000 training and 10000 test items):

jwy@jwy:~/caffe$ ./examples/mnist/create_mnist.sh
Creating lmdb...
I0526 20:47:44.249351  2884 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0526 20:47:44.249537  2884 convert_mnist_data.cpp:88] A total of 60000 items.
I0526 20:47:44.249545  2884 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0526 20:47:49.688385  2884 convert_mnist_data.cpp:108] Processed 60000 files.
I0526 20:47:50.344051  2908 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0526 20:47:50.344348  2908 convert_mnist_data.cpp:88] A total of 10000 items.
I0526 20:47:50.344357  2908 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0526 20:47:51.249704  2908 convert_mnist_data.cpp:108] Processed 10000 files.
Done.

Now train LeNet. The solver parameters and both net definitions are echoed at startup:

jwy@jwy:~/caffe$ ./examples/mnist/train_lenet.sh
I0526 20:53:03.452682  2966 caffe.cpp:204] Using GPUs 0
I0526 20:53:03.653373  2966 caffe.cpp:209] GPU 0: GeForce GTX 1050 Ti
I0526 20:53:04.087340  2966 solver.cpp:45] Initializing solver from parameters:
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 0
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
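The "inv" learning-rate policy in the solver parameters above decays the learning rate as lr = base_lr * (1 + gamma * iter)^(-power). A small sketch (plain Python, not Caffe code) reproducing the lr values that appear later in the training log:

```python
def inv_lr(iteration, base_lr=0.01, gamma=0.0001, power=0.75):
    # Caffe's "inv" policy: lr = base_lr * (1 + gamma * iter)^(-power)
    return base_lr * (1.0 + gamma * iteration) ** (-power)

print(inv_lr(0))              # 0.01       -> "Iteration 0, lr = 0.01"
print(round(inv_lr(100), 8))  # 0.00992565 -> "Iteration 100, lr = 0.00992565"
print(round(inv_lr(500), 8))  # 0.00964069 -> "Iteration 500, lr = 0.00964069"
```

With gamma this small the decay is gentle: over the full 10000 iterations the rate only falls to about 0.006.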
I0526 20:53:04.087513  2966 solver.cpp:102] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I0526 20:53:04.095394  2966 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0526 20:53:04.095438  2966 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0526 20:53:04.095557  2966 net.cpp:51] Initializing net from parameters:
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
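The "Top shape" lines in the setup log that follows come straight from the usual convolution/pooling size formulas applied to the 28x28 inputs. A quick check (a sketch, not Caffe's actual C++):

```python
import math

def conv_out(size, kernel, stride=1, pad=0):
    # Convolution output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride, pad=0):
    # Caffe pooling rounds up: ceil((size + 2*pad - kernel) / stride) + 1
    return math.ceil((size + 2 * pad - kernel) / stride) + 1

c1 = conv_out(28, 5)     # 24 -> "Top shape: 64 20 24 24"
p1 = pool_out(c1, 2, 2)  # 12 -> "Top shape: 64 20 12 12"
c2 = conv_out(p1, 5)     # 8  -> "Top shape: 64 50 8 8"
p2 = pool_out(c2, 2, 2)  # 4  -> "Top shape: 64 50 4 4"
print(c1, p1, c2, p2)    # 24 12 8 4
```

So ip1 sees a flattened 50*4*4 = 800-dimensional input per image before mapping it to 500 outputs.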
I0526 20:53:04.095737  2966 layer_factory.hpp:77] Creating layer mnist
I0526 20:53:04.095877  2966 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0526 20:53:04.095918  2966 net.cpp:84] Creating Layer mnist
I0526 20:53:04.095927  2966 net.cpp:380] mnist -> data
I0526 20:53:04.095945  2966 net.cpp:380] mnist -> label
I0526 20:53:04.102393  2966 data_layer.cpp:45] output data size: 64,1,28,28
I0526 20:53:04.104282  2966 net.cpp:122] Setting up mnist
I0526 20:53:04.104310  2966 net.cpp:129] Top shape: 64 1 28 28 (50176)
I0526 20:53:04.104315  2966 net.cpp:129] Top shape: 64 (64)
I0526 20:53:04.104317  2966 net.cpp:137] Memory required for data: 200960
I0526 20:53:04.104326  2966 layer_factory.hpp:77] Creating layer conv1
I0526 20:53:04.104351  2966 net.cpp:84] Creating Layer conv1
I0526 20:53:04.104357  2966 net.cpp:406] conv1 <- data
I0526 20:53:04.104368  2966 net.cpp:380] conv1 -> conv1
I0526 20:53:05.566699  2966 net.cpp:122] Setting up conv1
I0526 20:53:05.566740  2966 net.cpp:129] Top shape: 64 20 24 24 (737280)
I0526 20:53:05.566743  2966 net.cpp:137] Memory required for data: 3150080
I0526 20:53:05.566781  2966 layer_factory.hpp:77] Creating layer pool1
I0526 20:53:05.566792  2966 net.cpp:84] Creating Layer pool1
I0526 20:53:05.566882  2966 net.cpp:406] pool1 <- conv1
I0526 20:53:05.566889  2966 net.cpp:380] pool1 -> pool1
I0526 20:53:05.566968  2966 net.cpp:122] Setting up pool1
I0526 20:53:05.566973  2966 net.cpp:129] Top shape: 64 20 12 12 (184320)
I0526 20:53:05.566975  2966 net.cpp:137] Memory required for data: 3887360
I0526 20:53:05.566978  2966 layer_factory.hpp:77] Creating layer conv2
I0526 20:53:05.566998  2966 net.cpp:84] Creating Layer conv2
I0526 20:53:05.567000  2966 net.cpp:406] conv2 <- pool1
I0526 20:53:05.567021  2966 net.cpp:380] conv2 -> conv2
I0526 20:53:05.595108  2966 net.cpp:122] Setting up conv2
I0526 20:53:05.595130  2966 net.cpp:129] Top shape: 64 50 8 8 (204800)
I0526 20:53:05.595134  2966 net.cpp:137] Memory required for data: 4706560
I0526 20:53:05.595166  2966 layer_factory.hpp:77] Creating layer pool2
I0526 20:53:05.595178  2966 net.cpp:84] Creating Layer pool2
I0526 20:53:05.595183  2966 net.cpp:406] pool2 <- conv2
I0526 20:53:05.595190  2966 net.cpp:380] pool2 -> pool2
I0526 20:53:05.595263  2966 net.cpp:122] Setting up pool2
I0526 20:53:05.595269  2966 net.cpp:129] Top shape: 64 50 4 4 (51200)
I0526 20:53:05.595273  2966 net.cpp:137] Memory required for data: 4911360
I0526 20:53:05.595293  2966 layer_factory.hpp:77] Creating layer ip1
I0526 20:53:05.595299  2966 net.cpp:84] Creating Layer ip1
I0526 20:53:05.595304  2966 net.cpp:406] ip1 <- pool2
I0526 20:53:05.595309  2966 net.cpp:380] ip1 -> ip1
I0526 20:53:05.597863  2966 net.cpp:122] Setting up ip1
I0526 20:53:05.597903  2966 net.cpp:129] Top shape: 64 500 (32000)
I0526 20:53:05.597905  2966 net.cpp:137] Memory required for data: 5039360
I0526 20:53:05.597939  2966 layer_factory.hpp:77] Creating layer relu1
I0526 20:53:05.597962  2966 net.cpp:84] Creating Layer relu1
I0526 20:53:05.597966  2966 net.cpp:406] relu1 <- ip1
I0526 20:53:05.597971  2966 net.cpp:367] relu1 -> ip1 (in-place)
I0526 20:53:05.598219  2966 net.cpp:122] Setting up relu1
I0526 20:53:05.598227  2966 net.cpp:129] Top shape: 64 500 (32000)
I0526 20:53:05.598246  2966 net.cpp:137] Memory required for data: 5167360
I0526 20:53:05.598249  2966 layer_factory.hpp:77] Creating layer ip2
I0526 20:53:05.598275  2966 net.cpp:84] Creating Layer ip2
I0526 20:53:05.598279  2966 net.cpp:406] ip2 <- ip1
I0526 20:53:05.598285  2966 net.cpp:380] ip2 -> ip2
I0526 20:53:05.599239  2966 net.cpp:122] Setting up ip2
I0526 20:53:05.599251  2966 net.cpp:129] Top shape: 64 10 (640)
I0526 20:53:05.599256  2966 net.cpp:137] Memory required for data: 5169920
I0526 20:53:05.599261  2966 layer_factory.hpp:77] Creating layer loss
I0526 20:53:05.599269  2966 net.cpp:84] Creating Layer loss
I0526 20:53:05.599272  2966 net.cpp:406] loss <- ip2
I0526 20:53:05.599275  2966 net.cpp:406] loss <- label
I0526 20:53:05.599282  2966 net.cpp:380] loss -> loss
I0526 20:53:05.599297  2966 layer_factory.hpp:77] Creating layer loss
I0526 20:53:05.599552  2966 net.cpp:122] Setting up loss
I0526 20:53:05.599560  2966 net.cpp:129] Top shape: (1)
I0526 20:53:05.599565  2966 net.cpp:132]     with loss weight 1
I0526 20:53:05.599584  2966 net.cpp:137] Memory required for data: 5169924
I0526 20:53:05.599588  2966 net.cpp:198] loss needs backward computation.
I0526 20:53:05.599594  2966 net.cpp:198] ip2 needs backward computation.
I0526 20:53:05.599597  2966 net.cpp:198] relu1 needs backward computation.
I0526 20:53:05.599601  2966 net.cpp:198] ip1 needs backward computation.
I0526 20:53:05.599603  2966 net.cpp:198] pool2 needs backward computation.
I0526 20:53:05.599606  2966 net.cpp:198] conv2 needs backward computation.
I0526 20:53:05.599609  2966 net.cpp:198] pool1 needs backward computation.
I0526 20:53:05.599612  2966 net.cpp:198] conv1 needs backward computation.
I0526 20:53:05.599615  2966 net.cpp:200] mnist does not need backward computation.
I0526 20:53:05.599618  2966 net.cpp:242] This network produces output loss
I0526 20:53:05.599627  2966 net.cpp:255] Network initialization done.
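The running "Memory required for data" counter appears to be just the cumulative element count of every top blob times 4 bytes (float32); summing the "Top shape" counts printed above reproduces the final figure:

```python
# Element counts taken from each "Top shape: ... (N)" line of the training net
top_elements = [
    50176, 64,  # mnist: data 64x1x28x28, label 64
    737280,     # conv1
    184320,     # pool1
    204800,     # conv2
    51200,      # pool2
    32000,      # ip1
    32000,      # relu1 (in-place, but still counted)
    640,        # ip2
    1,          # loss
]
print(sum(top_elements) * 4)  # 5169924 bytes, matching the log
```

Note this counts activation blobs only, not the weights or the solver's history.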
I0526 20:53:05.599763  2966 solver.cpp:190] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I0526 20:53:05.599830  2966 net.cpp:294] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0526 20:53:05.599902  2966 net.cpp:51] Initializing net from parameters:
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
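Two details of the test net worth noticing, checked with trivial arithmetic: test_iter * batch_size covers the MNIST test set exactly once per evaluation, and the transform_param scale is just 1/256, mapping byte pixels into [0, ~1):

```python
# One test pass: test_iter batches of test-phase batch_size images
test_iter, test_batch = 100, 100
print(test_iter * test_batch)  # 10000, the full MNIST test set

# transform_param { scale: 0.00390625 } normalizes byte pixel values
print(0.00390625 == 1 / 256)   # True
```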
I0526 20:53:05.600005  2966 layer_factory.hpp:77] Creating layer mnist
I0526 20:53:05.600055  2966 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0526 20:53:05.600070  2966 net.cpp:84] Creating Layer mnist
I0526 20:53:05.600075  2966 net.cpp:380] mnist -> data
I0526 20:53:05.600100  2966 net.cpp:380] mnist -> label
I0526 20:53:05.600188  2966 data_layer.cpp:45] output data size: 100,1,28,28
I0526 20:53:05.601981  2966 net.cpp:122] Setting up mnist
I0526 20:53:05.602020  2966 net.cpp:129] Top shape: 100 1 28 28 (78400)
I0526 20:53:05.602025  2966 net.cpp:129] Top shape: 100 (100)
I0526 20:53:05.602027  2966 net.cpp:137] Memory required for data: 314000
I0526 20:53:05.602035  2966 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0526 20:53:05.602051  2966 net.cpp:84] Creating Layer label_mnist_1_split
I0526 20:53:05.602054  2966 net.cpp:406] label_mnist_1_split <- label
I0526 20:53:05.602061  2966 net.cpp:380] label_mnist_1_split -> label_mnist_1_split_0
I0526 20:53:05.602071  2966 net.cpp:380] label_mnist_1_split -> label_mnist_1_split_1
I0526 20:53:05.602120  2966 net.cpp:122] Setting up label_mnist_1_split
I0526 20:53:05.602126  2966 net.cpp:129] Top shape: 100 (100)
I0526 20:53:05.602130  2966 net.cpp:129] Top shape: 100 (100)
I0526 20:53:05.602133  2966 net.cpp:137] Memory required for data: 314800
I0526 20:53:05.602135  2966 layer_factory.hpp:77] Creating layer conv1
I0526 20:53:05.602149  2966 net.cpp:84] Creating Layer conv1
I0526 20:53:05.602152  2966 net.cpp:406] conv1 <- data
I0526 20:53:05.602159  2966 net.cpp:380] conv1 -> conv1
I0526 20:53:05.605353  2966 net.cpp:122] Setting up conv1
I0526 20:53:05.605406  2966 net.cpp:129] Top shape: 100 20 24 24 (1152000)
I0526 20:53:05.605412  2966 net.cpp:137] Memory required for data: 4922800
I0526 20:53:05.605425  2966 layer_factory.hpp:77] Creating layer pool1
I0526 20:53:05.605515  2966 net.cpp:84] Creating Layer pool1
I0526 20:53:05.605525  2966 net.cpp:406] pool1 <- conv1
I0526 20:53:05.605532  2966 net.cpp:380] pool1 -> pool1
I0526 20:53:05.605584  2966 net.cpp:122] Setting up pool1
I0526 20:53:05.605592  2966 net.cpp:129] Top shape: 100 20 12 12 (288000)
I0526 20:53:05.605594  2966 net.cpp:137] Memory required for data: 6074800
I0526 20:53:05.605597  2966 layer_factory.hpp:77] Creating layer conv2
I0526 20:53:05.605614  2966 net.cpp:84] Creating Layer conv2
I0526 20:53:05.605618  2966 net.cpp:406] conv2 <- pool1
I0526 20:53:05.605625  2966 net.cpp:380] conv2 -> conv2
I0526 20:53:05.606948  2966 net.cpp:122] Setting up conv2
I0526 20:53:05.606976  2966 net.cpp:129] Top shape: 100 50 8 8 (320000)
I0526 20:53:05.606981  2966 net.cpp:137] Memory required for data: 7354800
I0526 20:53:05.606994  2966 layer_factory.hpp:77] Creating layer pool2
I0526 20:53:05.607010  2966 net.cpp:84] Creating Layer pool2
I0526 20:53:05.607015  2966 net.cpp:406] pool2 <- conv2
I0526 20:53:05.607024  2966 net.cpp:380] pool2 -> pool2
I0526 20:53:05.607066  2966 net.cpp:122] Setting up pool2
I0526 20:53:05.607072  2966 net.cpp:129] Top shape: 100 50 4 4 (80000)
I0526 20:53:05.607076  2966 net.cpp:137] Memory required for data: 7674800
I0526 20:53:05.607079  2966 layer_factory.hpp:77] Creating layer ip1
I0526 20:53:05.607089  2966 net.cpp:84] Creating Layer ip1
I0526 20:53:05.607092  2966 net.cpp:406] ip1 <- pool2
I0526 20:53:05.607100  2966 net.cpp:380] ip1 -> ip1
I0526 20:53:05.610023  2966 net.cpp:122] Setting up ip1
I0526 20:53:05.610049  2966 net.cpp:129] Top shape: 100 500 (50000)
I0526 20:53:05.610051  2966 net.cpp:137] Memory required for data: 7874800
I0526 20:53:05.610064  2966 layer_factory.hpp:77] Creating layer relu1
I0526 20:53:05.610082  2966 net.cpp:84] Creating Layer relu1
I0526 20:53:05.610088  2966 net.cpp:406] relu1 <- ip1
I0526 20:53:05.610095  2966 net.cpp:367] relu1 -> ip1 (in-place)
I0526 20:53:05.610596  2966 net.cpp:122] Setting up relu1
I0526 20:53:05.610606  2966 net.cpp:129] Top shape: 100 500 (50000)
I0526 20:53:05.610610  2966 net.cpp:137] Memory required for data: 8074800
I0526 20:53:05.610615  2966 layer_factory.hpp:77] Creating layer ip2
I0526 20:53:05.610625  2966 net.cpp:84] Creating Layer ip2
I0526 20:53:05.610628  2966 net.cpp:406] ip2 <- ip1
I0526 20:53:05.610635  2966 net.cpp:380] ip2 -> ip2
I0526 20:53:05.610752  2966 net.cpp:122] Setting up ip2
I0526 20:53:05.610759  2966 net.cpp:129] Top shape: 100 10 (1000)
I0526 20:53:05.610762  2966 net.cpp:137] Memory required for data: 8078800
I0526 20:53:05.610767  2966 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0526 20:53:05.610774  2966 net.cpp:84] Creating Layer ip2_ip2_0_split
I0526 20:53:05.610776  2966 net.cpp:406] ip2_ip2_0_split <- ip2
I0526 20:53:05.610781  2966 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0526 20:53:05.610787  2966 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0526 20:53:05.610816  2966 net.cpp:122] Setting up ip2_ip2_0_split
I0526 20:53:05.610821  2966 net.cpp:129] Top shape: 100 10 (1000)
I0526 20:53:05.610824  2966 net.cpp:129] Top shape: 100 10 (1000)
I0526 20:53:05.610827  2966 net.cpp:137] Memory required for data: 8086800
I0526 20:53:05.610831  2966 layer_factory.hpp:77] Creating layer accuracy
I0526 20:53:05.610837  2966 net.cpp:84] Creating Layer accuracy
I0526 20:53:05.610841  2966 net.cpp:406] accuracy <- ip2_ip2_0_split_0
I0526 20:53:05.610844  2966 net.cpp:406] accuracy <- label_mnist_1_split_0
I0526 20:53:05.610849  2966 net.cpp:380] accuracy -> accuracy
I0526 20:53:05.610857  2966 net.cpp:122] Setting up accuracy
I0526 20:53:05.610860  2966 net.cpp:129] Top shape: (1)
I0526 20:53:05.610863  2966 net.cpp:137] Memory required for data: 8086804
I0526 20:53:05.610867  2966 layer_factory.hpp:77] Creating layer loss
I0526 20:53:05.610872  2966 net.cpp:84] Creating Layer loss
I0526 20:53:05.610875  2966 net.cpp:406] loss <- ip2_ip2_0_split_1
I0526 20:53:05.610879  2966 net.cpp:406] loss <- label_mnist_1_split_1
I0526 20:53:05.610904  2966 net.cpp:380] loss -> loss
I0526 20:53:05.610911  2966 layer_factory.hpp:77] Creating layer loss
I0526 20:53:05.611115  2966 net.cpp:122] Setting up loss
I0526 20:53:05.611124  2966 net.cpp:129] Top shape: (1)
I0526 20:53:05.611126  2966 net.cpp:132]     with loss weight 1
I0526 20:53:05.611135  2966 net.cpp:137] Memory required for data: 8086808
I0526 20:53:05.611140  2966 net.cpp:198] loss needs backward computation.
I0526 20:53:05.611143  2966 net.cpp:200] accuracy does not need backward computation.
I0526 20:53:05.611147  2966 net.cpp:198] ip2_ip2_0_split needs backward computation.
I0526 20:53:05.611151  2966 net.cpp:198] ip2 needs backward computation.
I0526 20:53:05.611155  2966 net.cpp:198] relu1 needs backward computation.
I0526 20:53:05.611158  2966 net.cpp:198] ip1 needs backward computation.
I0526 20:53:05.611161  2966 net.cpp:198] pool2 needs backward computation.
I0526 20:53:05.611166  2966 net.cpp:198] conv2 needs backward computation.
I0526 20:53:05.611168  2966 net.cpp:198] pool1 needs backward computation.
I0526 20:53:05.611172  2966 net.cpp:198] conv1 needs backward computation.
I0526 20:53:05.611176  2966 net.cpp:200] label_mnist_1_split does not need backward computation.
I0526 20:53:05.611181  2966 net.cpp:200] mnist does not need backward computation.
I0526 20:53:05.611183  2966 net.cpp:242] This network produces output accuracy
I0526 20:53:05.611187  2966 net.cpp:242] This network produces output loss
I0526 20:53:05.611197  2966 net.cpp:255] Network initialization done.
I0526 20:53:05.611235  2966 solver.cpp:57] Solver scaffolding done.
I0526 20:53:05.611456  2966 caffe.cpp:239] Starting Optimization
I0526 20:53:05.611460  2966 solver.cpp:293] Solving LeNet
I0526 20:53:05.611464  2966 solver.cpp:294] Learning Rate Policy: inv
I0526 20:53:05.612161  2966 solver.cpp:351] Iteration 0, Testing net (#0)
I0526 20:53:05.638193  2966 blocking_queue.cpp:49] Waiting for data
I0526 20:53:05.727545  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:05.731555  2966 solver.cpp:418]     Test net output #0: accuracy = 0.1216
I0526 20:53:05.731581  2966 solver.cpp:418]     Test net output #1: loss = 2.36362 (* 1 = 2.36362 loss)
I0526 20:53:05.735931  2966 solver.cpp:239] Iteration 0 (-0.273247 iter/s, 0.124447s/100 iters), loss = 2.32661
I0526 20:53:05.735957  2966 solver.cpp:258]     Train net output #0: loss = 2.32661 (* 1 = 2.32661 loss)
I0526 20:53:05.735975  2966 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I0526 20:53:06.072301  2966 solver.cpp:239] Iteration 100 (297.317 iter/s, 0.336342s/100 iters), loss = 0.213958
I0526 20:53:06.072345  2966 solver.cpp:258]     Train net output #0: loss = 0.213958 (* 1 = 0.213958 loss)
I0526 20:53:06.072353  2966 sgd_solver.cpp:112] Iteration 100, lr = 0.00992565
I0526 20:53:06.394886  2966 solver.cpp:239] Iteration 200 (310.036 iter/s, 0.322544s/100 iters), loss = 0.151827
I0526 20:53:06.394940  2966 solver.cpp:258]     Train net output #0: loss = 0.151827 (* 1 = 0.151827 loss)
I0526 20:53:06.394951  2966 sgd_solver.cpp:112] Iteration 200, lr = 0.00985258
I0526 20:53:06.702833  2966 solver.cpp:239] Iteration 300 (324.786 iter/s, 0.307895s/100 iters), loss = 0.199652
I0526 20:53:06.702881  2966 solver.cpp:258]     Train net output #0: loss = 0.199652 (* 1 = 0.199652 loss)
I0526 20:53:06.702906  2966 sgd_solver.cpp:112] Iteration 300, lr = 0.00978075
I0526 20:53:07.018010  2966 solver.cpp:239] Iteration 400 (317.336 iter/s, 0.315124s/100 iters), loss = 0.0894907
I0526 20:53:07.018054  2966 solver.cpp:258]     Train net output #0: loss = 0.0894907 (* 1 = 0.0894907 loss)
I0526 20:53:07.018064  2966 sgd_solver.cpp:112] Iteration 400, lr = 0.00971013
I0526 20:53:07.324208  2966 solver.cpp:351] Iteration 500, Testing net (#0)
I0526 20:53:07.428937  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:07.430645  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9724
I0526 20:53:07.430685  2966 solver.cpp:418]     Test net output #1: loss = 0.0885262 (* 1 = 0.0885262 loss)
I0526 20:53:07.433795  2966 solver.cpp:239] Iteration 500 (240.528 iter/s, 0.415753s/100 iters), loss = 0.186715
I0526 20:53:07.433852  2966 solver.cpp:258]     Train net output #0: loss = 0.186715 (* 1 = 0.186715 loss)
I0526 20:53:07.433861  2966 sgd_solver.cpp:112] Iteration 500, lr = 0.00964069
I0526 20:53:07.748772  2966 solver.cpp:239] Iteration 600 (317.54 iter/s, 0.314921s/100 iters), loss = 0.100644
I0526 20:53:07.748816  2966 solver.cpp:258]     Train net output #0: loss = 0.100644 (* 1 = 0.100644 loss)
I0526 20:53:07.748826  2966 sgd_solver.cpp:112] Iteration 600, lr = 0.0095724
I0526 20:53:08.069672  2966 solver.cpp:239] Iteration 700 (311.664 iter/s, 0.320859s/100 iters), loss = 0.161007
I0526 20:53:08.069715  2966 solver.cpp:258]     Train net output #0: loss = 0.161007 (* 1 = 0.161007 loss)
I0526 20:53:08.069725  2966 sgd_solver.cpp:112] Iteration 700, lr = 0.00950522
I0526 20:53:08.391367  2966 solver.cpp:239] Iteration 800 (310.892 iter/s, 0.321655s/100 iters), loss = 0.236455
I0526 20:53:08.391410  2966 solver.cpp:258]     Train net output #0: loss = 0.236456 (* 1 = 0.236456 loss)
I0526 20:53:08.391420  2966 sgd_solver.cpp:112] Iteration 800, lr = 0.00943913
I0526 20:53:08.708053  2966 solver.cpp:239] Iteration 900 (315.809 iter/s, 0.316647s/100 iters), loss = 0.177113
I0526 20:53:08.708115  2966 solver.cpp:258]     Train net output #0: loss = 0.177113 (* 1 = 0.177113 loss)
I0526 20:53:08.708128  2966 sgd_solver.cpp:112] Iteration 900, lr = 0.00937411
I0526 20:53:08.817097  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:09.029856  2966 solver.cpp:351] Iteration 1000, Testing net (#0)
I0526 20:53:09.156496  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:09.158327  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9804
I0526 20:53:09.158361  2966 solver.cpp:418]     Test net output #1: loss = 0.0615653 (* 1 = 0.0615653 loss)
I0526 20:53:09.161300  2966 solver.cpp:239] Iteration 1000 (220.655 iter/s, 0.453195s/100 iters), loss = 0.123391
I0526 20:53:09.161393  2966 solver.cpp:258]     Train net output #0: loss = 0.123391 (* 1 = 0.123391 loss)
I0526 20:53:09.161434  2966 sgd_solver.cpp:112] Iteration 1000, lr = 0.00931012
I0526 20:53:09.510118  2966 solver.cpp:239] Iteration 1100 (286.763 iter/s, 0.34872s/100 iters), loss = 0.00871656
I0526 20:53:09.510197  2966 solver.cpp:258]     Train net output #0: loss = 0.00871659 (* 1 = 0.00871659 loss)
I0526 20:53:09.510212  2966 sgd_solver.cpp:112] Iteration 1100, lr = 0.00924715
I0526 20:53:09.855741  2966 solver.cpp:239] Iteration 1200 (289.396 iter/s, 0.345548s/100 iters), loss = 0.0137676
I0526 20:53:09.855787  2966 solver.cpp:258]     Train net output #0: loss = 0.0137676 (* 1 = 0.0137676 loss)
I0526 20:53:09.855795  2966 sgd_solver.cpp:112] Iteration 1200, lr = 0.00918515
I0526 20:53:10.174860  2966 solver.cpp:239] Iteration 1300 (313.404 iter/s, 0.319077s/100 iters), loss = 0.0125752
I0526 20:53:10.174901  2966 solver.cpp:258]     Train net output #0: loss = 0.0125753 (* 1 = 0.0125753 loss)
I0526 20:53:10.174909  2966 sgd_solver.cpp:112] Iteration 1300, lr = 0.00912412
I0526 20:53:10.491436  2966 solver.cpp:239] Iteration 1400 (315.917 iter/s, 0.316539s/100 iters), loss = 0.0048969
I0526 20:53:10.491477  2966 solver.cpp:258]     Train net output #0: loss = 0.00489691 (* 1 = 0.00489691 loss)
I0526 20:53:10.491487  2966 sgd_solver.cpp:112] Iteration 1400, lr = 0.00906403
I0526 20:53:10.801805  2966 solver.cpp:351] Iteration 1500, Testing net (#0)
I0526 20:53:10.910562  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:10.913337  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9831
I0526 20:53:10.913365  2966 solver.cpp:418]     Test net output #1: loss = 0.0542353 (* 1 = 0.0542353 loss)
I0526 20:53:10.916178  2966 solver.cpp:239] Iteration 1500 (235.455 iter/s, 0.42471s/100 iters), loss = 0.0902566
I0526 20:53:10.916203  2966 solver.cpp:258]     Train net output #0: loss = 0.0902567 (* 1 = 0.0902567 loss)
I0526 20:53:10.916213  2966 sgd_solver.cpp:112] Iteration 1500, lr = 0.00900485
I0526 20:53:11.231348  2966 solver.cpp:239] Iteration 1600 (317.312 iter/s, 0.315147s/100 iters), loss = 0.123736
I0526 20:53:11.231379  2966 solver.cpp:258]     Train net output #0: loss = 0.123736 (* 1 = 0.123736 loss)
I0526 20:53:11.231386  2966 sgd_solver.cpp:112] Iteration 1600, lr = 0.00894657
I0526 20:53:11.538249  2966 solver.cpp:239] Iteration 1700 (325.867 iter/s, 0.306873s/100 iters), loss = 0.0267345
I0526 20:53:11.538297  2966 solver.cpp:258]     Train net output #0: loss = 0.0267346 (* 1 = 0.0267346 loss)
I0526 20:53:11.538327  2966 sgd_solver.cpp:112] Iteration 1700, lr = 0.00888916
I0526 20:53:11.854401  2966 solver.cpp:239] Iteration 1800 (316.345 iter/s, 0.31611s/100 iters), loss = 0.0102682
I0526 20:53:11.854444  2966 solver.cpp:258]     Train net output #0: loss = 0.0102682 (* 1 = 0.0102682 loss)
I0526 20:53:11.854454  2966 sgd_solver.cpp:112] Iteration 1800, lr = 0.0088326
I0526 20:53:12.081883  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:12.174082  2966 solver.cpp:239] Iteration 1900 (312.847 iter/s, 0.319645s/100 iters), loss = 0.131612
I0526 20:53:12.174130  2966 solver.cpp:258]     Train net output #0: loss = 0.131612 (* 1 = 0.131612 loss)
I0526 20:53:12.174139  2966 sgd_solver.cpp:112] Iteration 1900, lr = 0.00877687
I0526 20:53:12.488847  2966 solver.cpp:351] Iteration 2000, Testing net (#0)
I0526 20:53:12.595489  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:12.597185  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9852
I0526 20:53:12.597208  2966 solver.cpp:418]     Test net output #1: loss = 0.0458715 (* 1 = 0.0458715 loss)
I0526 20:53:12.600069  2966 solver.cpp:239] Iteration 2000 (234.768 iter/s, 0.425952s/100 iters), loss = 0.0130547
I0526 20:53:12.600131  2966 solver.cpp:258]     Train net output #0: loss = 0.0130547 (* 1 = 0.0130547 loss)
I0526 20:53:12.600138  2966 sgd_solver.cpp:112] Iteration 2000, lr = 0.00872196
I0526 20:53:12.911358  2966 solver.cpp:239] Iteration 2100 (321.262 iter/s, 0.311272s/100 iters), loss = 0.0510326
I0526 20:53:12.911401  2966 solver.cpp:258]     Train net output #0: loss = 0.0510326 (* 1 = 0.0510326 loss)
I0526 20:53:12.911411  2966 sgd_solver.cpp:112] Iteration 2100, lr = 0.00866784
I0526 20:53:13.226441  2966 solver.cpp:239] Iteration 2200 (317.416 iter/s, 0.315044s/100 iters), loss = 0.0169382
I0526 20:53:13.226485  2966 solver.cpp:258]     Train net output #0: loss = 0.0169383 (* 1 = 0.0169383 loss)
I0526 20:53:13.226495  2966 sgd_solver.cpp:112] Iteration 2200, lr = 0.0086145
I0526 20:53:13.541839  2966 solver.cpp:239] Iteration 2300 (317.101 iter/s, 0.315357s/100 iters), loss = 0.116517
I0526 20:53:13.541883  2966 solver.cpp:258]     Train net output #0: loss = 0.116517 (* 1 = 0.116517 loss)
I0526 20:53:13.541893  2966 sgd_solver.cpp:112] Iteration 2300, lr = 0.00856192
I0526 20:53:13.856551  2966 solver.cpp:239] Iteration 2400 (317.793 iter/s, 0.31467s/100 iters), loss = 0.0162969
I0526 20:53:13.856595  2966 solver.cpp:258]     Train net output #0: loss = 0.0162969 (* 1 = 0.0162969 loss)
I0526 20:53:13.856604  2966 sgd_solver.cpp:112] Iteration 2400, lr = 0.00851008
I0526 20:53:14.175319  2966 solver.cpp:351] Iteration 2500, Testing net (#0)
I0526 20:53:14.326673  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:14.329370  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9847
I0526 20:53:14.329412  2966 solver.cpp:418]     Test net output #1: loss = 0.0517607 (* 1 = 0.0517607 loss)
I0526 20:53:14.332283  2966 solver.cpp:239] Iteration 2500 (210.216 iter/s, 0.475701s/100 iters), loss = 0.0305351
I0526 20:53:14.332320  2966 solver.cpp:258]     Train net output #0: loss = 0.0305351 (* 1 = 0.0305351 loss)
I0526 20:53:14.332332  2966 sgd_solver.cpp:112] Iteration 2500, lr = 0.00845897
I0526 20:53:14.678824  2966 solver.cpp:239] Iteration 2600 (288.597 iter/s, 0.346504s/100 iters), loss = 0.056123
I0526 20:53:14.678854  2966 solver.cpp:258]     Train net output #0: loss = 0.056123 (* 1 = 0.056123 loss)
I0526 20:53:14.678879  2966 sgd_solver.cpp:112] Iteration 2600, lr = 0.00840857
I0526 20:53:14.998436  2966 solver.cpp:239] Iteration 2700 (312.908 iter/s, 0.319583s/100 iters), loss = 0.0520153
I0526 20:53:14.998476  2966 solver.cpp:258]     Train net output #0: loss = 0.0520154 (* 1 = 0.0520154 loss)
I0526 20:53:14.998486  2966 sgd_solver.cpp:112] Iteration 2700, lr = 0.00835886
I0526 20:53:15.315748  2966 solver.cpp:239] Iteration 2800 (315.184 iter/s, 0.317275s/100 iters), loss = 0.00551482
I0526 20:53:15.315791  2966 solver.cpp:258]     Train net output #0: loss = 0.00551484 (* 1 = 0.00551484 loss)
I0526 20:53:15.315800  2966 sgd_solver.cpp:112] Iteration 2800, lr = 0.00830984
I0526 20:53:15.345508  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:15.648234  2966 solver.cpp:239] Iteration 2900 (300.799 iter/s, 0.332447s/100 iters), loss = 0.021568
I0526 20:53:15.648273  2966 solver.cpp:258]     Train net output #0: loss = 0.0215681 (* 1 = 0.0215681 loss)
I0526 20:53:15.648283  2966 sgd_solver.cpp:112] Iteration 2900, lr = 0.00826148
I0526 20:53:15.964462  2966 solver.cpp:351] Iteration 3000, Testing net (#0)
I0526 20:53:16.078405  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:16.085981  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9861
I0526 20:53:16.086019  2966 solver.cpp:418]     Test net output #1: loss = 0.0425739 (* 1 = 0.0425739 loss)
I0526 20:53:16.089234  2966 solver.cpp:239] Iteration 3000 (226.777 iter/s, 0.440963s/100 iters), loss = 0.00921144
I0526 20:53:16.090874  2966 solver.cpp:258]     Train net output #0: loss = 0.00921145 (* 1 = 0.00921145 loss)
I0526 20:53:16.090970  2966 sgd_solver.cpp:112] Iteration 3000, lr = 0.00821377
I0526 20:53:16.443243  2966 solver.cpp:239] Iteration 3100 (283.748 iter/s, 0.352426s/100 iters), loss = 0.00934141
I0526 20:53:16.443294  2966 solver.cpp:258]     Train net output #0: loss = 0.0093414 (* 1 = 0.0093414 loss)
I0526 20:53:16.443300  2966 sgd_solver.cpp:112] Iteration 3100, lr = 0.0081667
I0526 20:53:16.759610  2966 solver.cpp:239] Iteration 3200 (316.119 iter/s, 0.316336s/100 iters), loss = 0.0120144
I0526 20:53:16.759650  2966 solver.cpp:258]     Train net output #0: loss = 0.0120144 (* 1 = 0.0120144 loss)
I0526 20:53:16.759660  2966 sgd_solver.cpp:112] Iteration 3200, lr = 0.00812025
I0526 20:53:17.101521  2966 solver.cpp:239] Iteration 3300 (292.52 iter/s, 0.341857s/100 iters), loss = 0.0458635
I0526 20:53:17.101608  2966 solver.cpp:258]     Train net output #0: loss = 0.0458635 (* 1 = 0.0458635 loss)
I0526 20:53:17.101622  2966 sgd_solver.cpp:112] Iteration 3300, lr = 0.00807442
I0526 20:53:17.443616  2966 solver.cpp:239] Iteration 3400 (292.383 iter/s, 0.342017s/100 iters), loss = 0.0129833
I0526 20:53:17.443660  2966 solver.cpp:258]     Train net output #0: loss = 0.0129833 (* 1 = 0.0129833 loss)
I0526 20:53:17.443670  2966 sgd_solver.cpp:112] Iteration 3400, lr = 0.00802918
I0526 20:53:17.761767  2966 solver.cpp:351] Iteration 3500, Testing net (#0)
I0526 20:53:17.897162  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:17.898878  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9857
I0526 20:53:17.899029  2966 solver.cpp:418]     Test net output #1: loss = 0.0437298 (* 1 = 0.0437298 loss)
I0526 20:53:17.902154  2966 solver.cpp:239] Iteration 3500 (218.099 iter/s, 0.458507s/100 iters), loss = 0.00534368
I0526 20:53:17.902294  2966 solver.cpp:258]     Train net output #0: loss = 0.00534369 (* 1 = 0.00534369 loss)
I0526 20:53:17.902312  2966 sgd_solver.cpp:112] Iteration 3500, lr = 0.00798454
I0526 20:53:18.232357  2966 solver.cpp:239] Iteration 3600 (302.994 iter/s, 0.330039s/100 iters), loss = 0.0281394
I0526 20:53:18.232447  2966 solver.cpp:258]     Train net output #0: loss = 0.0281394 (* 1 = 0.0281394 loss)
I0526 20:53:18.232465  2966 sgd_solver.cpp:112] Iteration 3600, lr = 0.00794046
I0526 20:53:18.607868  2966 solver.cpp:239] Iteration 3700 (266.36 iter/s, 0.375431s/100 iters), loss = 0.0125563
I0526 20:53:18.607975  2966 solver.cpp:258]     Train net output #0: loss = 0.0125563 (* 1 = 0.0125563 loss)
I0526 20:53:18.608063  2966 sgd_solver.cpp:112] Iteration 3700, lr = 0.00789695
I0526 20:53:18.771580  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:18.973671  2966 solver.cpp:239] Iteration 3800 (273.446 iter/s, 0.365703s/100 iters), loss = 0.00594559
I0526 20:53:18.973739  2966 solver.cpp:258]     Train net output #0: loss = 0.00594559 (* 1 = 0.00594559 loss)
I0526 20:53:18.973745  2966 sgd_solver.cpp:112] Iteration 3800, lr = 0.007854
I0526 20:53:19.298178  2966 solver.cpp:239] Iteration 3900 (308.201 iter/s, 0.324463s/100 iters), loss = 0.0212954
I0526 20:53:19.298220  2966 solver.cpp:258]     Train net output #0: loss = 0.0212954 (* 1 = 0.0212954 loss)
I0526 20:53:19.298229  2966 sgd_solver.cpp:112] Iteration 3900, lr = 0.00781158
I0526 20:53:19.611858  2966 solver.cpp:351] Iteration 4000, Testing net (#0)
I0526 20:53:19.726984  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:19.728912  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9891
I0526 20:53:19.728941  2966 solver.cpp:418]     Test net output #1: loss = 0.0322071 (* 1 = 0.0322071 loss)
I0526 20:53:19.731956  2966 solver.cpp:239] Iteration 4000 (230.55 iter/s, 0.433746s/100 iters), loss = 0.0188879
I0526 20:53:19.731986  2966 solver.cpp:258]     Train net output #0: loss = 0.0188879 (* 1 = 0.0188879 loss)
I0526 20:53:19.731994  2966 sgd_solver.cpp:112] Iteration 4000, lr = 0.0077697
I0526 20:53:20.042589  2966 solver.cpp:239] Iteration 4100 (321.962 iter/s, 0.310596s/100 iters), loss = 0.0131814
I0526 20:53:20.042637  2966 solver.cpp:258]     Train net output #0: loss = 0.0131814 (* 1 = 0.0131814 loss)
I0526 20:53:20.042644  2966 sgd_solver.cpp:112] Iteration 4100, lr = 0.00772833
I0526 20:53:20.361515  2966 solver.cpp:239] Iteration 4200 (313.599 iter/s, 0.318879s/100 iters), loss = 0.0151852
I0526 20:53:20.361557  2966 solver.cpp:258]     Train net output #0: loss = 0.0151852 (* 1 = 0.0151852 loss)
I0526 20:53:20.361567  2966 sgd_solver.cpp:112] Iteration 4200, lr = 0.00768748
I0526 20:53:20.674365  2966 solver.cpp:239] Iteration 4300 (319.681 iter/s, 0.312812s/100 iters), loss = 0.0477351
I0526 20:53:20.674412  2966 solver.cpp:258]     Train net output #0: loss = 0.0477351 (* 1 = 0.0477351 loss)
I0526 20:53:20.674419  2966 sgd_solver.cpp:112] Iteration 4300, lr = 0.00764712
I0526 20:53:20.994652  2966 solver.cpp:239] Iteration 4400 (312.263 iter/s, 0.320242s/100 iters), loss = 0.00818447
I0526 20:53:20.994691  2966 solver.cpp:258]     Train net output #0: loss = 0.00818447 (* 1 = 0.00818447 loss)
I0526 20:53:20.994699  2966 sgd_solver.cpp:112] Iteration 4400, lr = 0.00760726
I0526 20:53:21.305419  2966 solver.cpp:351] Iteration 4500, Testing net (#0)
I0526 20:53:21.413980  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:21.415732  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9865
I0526 20:53:21.415756  2966 solver.cpp:418]     Test net output #1: loss = 0.0391329 (* 1 = 0.0391329 loss)
I0526 20:53:21.418720  2966 solver.cpp:239] Iteration 4500 (235.827 iter/s, 0.424039s/100 iters), loss = 0.00663482
I0526 20:53:21.418745  2966 solver.cpp:258]     Train net output #0: loss = 0.00663481 (* 1 = 0.00663481 loss)
I0526 20:53:21.418756  2966 sgd_solver.cpp:112] Iteration 4500, lr = 0.00756788
I0526 20:53:21.738775  2966 solver.cpp:239] Iteration 4600 (312.481 iter/s, 0.320019s/100 iters), loss = 0.0199677
I0526 20:53:21.738818  2966 solver.cpp:258]     Train net output #0: loss = 0.0199676 (* 1 = 0.0199676 loss)
I0526 20:53:21.738826  2966 sgd_solver.cpp:112] Iteration 4600, lr = 0.00752897
I0526 20:53:22.012771  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:22.069990  2966 solver.cpp:239] Iteration 4700 (301.953 iter/s, 0.331177s/100 iters), loss = 0.00672691
I0526 20:53:22.070024  2966 solver.cpp:258]     Train net output #0: loss = 0.00672689 (* 1 = 0.00672689 loss)
I0526 20:53:22.070032  2966 sgd_solver.cpp:112] Iteration 4700, lr = 0.00749052
I0526 20:53:22.381690  2966 solver.cpp:239] Iteration 4800 (320.855 iter/s, 0.311668s/100 iters), loss = 0.0105764
I0526 20:53:22.381767  2966 solver.cpp:258]     Train net output #0: loss = 0.0105764 (* 1 = 0.0105764 loss)
I0526 20:53:22.381775  2966 sgd_solver.cpp:112] Iteration 4800, lr = 0.00745253
I0526 20:53:22.707139  2966 solver.cpp:239] Iteration 4900 (307.336 iter/s, 0.325377s/100 iters), loss = 0.0102207
I0526 20:53:22.707185  2966 solver.cpp:258]     Train net output #0: loss = 0.0102206 (* 1 = 0.0102206 loss)
I0526 20:53:22.707195  2966 sgd_solver.cpp:112] Iteration 4900, lr = 0.00741498
I0526 20:53:23.023452  2966 solver.cpp:468] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodel
I0526 20:53:23.032524  2966 sgd_solver.cpp:280] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstate
I0526 20:53:23.035620  2966 solver.cpp:351] Iteration 5000, Testing net (#0)
I0526 20:53:23.145226  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:23.146982  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9886
I0526 20:53:23.147011  2966 solver.cpp:418]     Test net output #1: loss = 0.0334444 (* 1 = 0.0334444 loss)
I0526 20:53:23.150020  2966 solver.cpp:239] Iteration 5000 (225.812 iter/s, 0.442847s/100 iters), loss = 0.0506286
I0526 20:53:23.150048  2966 solver.cpp:258]     Train net output #0: loss = 0.0506285 (* 1 = 0.0506285 loss)
I0526 20:53:23.150058  2966 sgd_solver.cpp:112] Iteration 5000, lr = 0.00737788
I0526 20:53:23.465286  2966 solver.cpp:239] Iteration 5100 (317.216 iter/s, 0.315242s/100 iters), loss = 0.0253424
I0526 20:53:23.465337  2966 solver.cpp:258]     Train net output #0: loss = 0.0253424 (* 1 = 0.0253424 loss)
I0526 20:53:23.465343  2966 sgd_solver.cpp:112] Iteration 5100, lr = 0.0073412
I0526 20:53:23.784103  2966 solver.cpp:239] Iteration 5200 (313.721 iter/s, 0.318755s/100 iters), loss = 0.00687777
I0526 20:53:23.784148  2966 solver.cpp:258]     Train net output #0: loss = 0.00687772 (* 1 = 0.00687772 loss)
I0526 20:53:23.784157  2966 sgd_solver.cpp:112] Iteration 5200, lr = 0.00730495
I0526 20:53:24.097098  2966 solver.cpp:239] Iteration 5300 (319.533 iter/s, 0.312957s/100 iters), loss = 0.00152222
I0526 20:53:24.097147  2966 solver.cpp:258]     Train net output #0: loss = 0.00152217 (* 1 = 0.00152217 loss)
I0526 20:53:24.097156  2966 sgd_solver.cpp:112] Iteration 5300, lr = 0.00726911
I0526 20:53:24.409426  2966 solver.cpp:239] Iteration 5400 (320.225 iter/s, 0.31228s/100 iters), loss = 0.00699226
I0526 20:53:24.409498  2966 solver.cpp:258]     Train net output #0: loss = 0.0069922 (* 1 = 0.0069922 loss)
I0526 20:53:24.409510  2966 sgd_solver.cpp:112] Iteration 5400, lr = 0.00723368
I0526 20:53:24.718976  2966 solver.cpp:351] Iteration 5500, Testing net (#0)
I0526 20:53:24.834223  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:24.837232  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9883
I0526 20:53:24.837262  2966 solver.cpp:418]     Test net output #1: loss = 0.0351423 (* 1 = 0.0351423 loss)
I0526 20:53:24.840135  2966 solver.cpp:239] Iteration 5500 (232.207 iter/s, 0.43065s/100 iters), loss = 0.00880486
I0526 20:53:24.840165  2966 solver.cpp:258]     Train net output #0: loss = 0.00880479 (* 1 = 0.00880479 loss)
I0526 20:53:24.840175  2966 sgd_solver.cpp:112] Iteration 5500, lr = 0.00719865
I0526 20:53:25.150647  2966 solver.cpp:239] Iteration 5600 (322.075 iter/s, 0.310487s/100 iters), loss = 0.000624476
I0526 20:53:25.150697  2966 solver.cpp:258]     Train net output #0: loss = 0.000624411 (* 1 = 0.000624411 loss)
I0526 20:53:25.150703  2966 sgd_solver.cpp:112] Iteration 5600, lr = 0.00716402
I0526 20:53:25.219350  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:25.475374  2966 solver.cpp:239] Iteration 5700 (307.996 iter/s, 0.32468s/100 iters), loss = 0.00480867
I0526 20:53:25.475419  2966 solver.cpp:258]     Train net output #0: loss = 0.00480861 (* 1 = 0.00480861 loss)
I0526 20:53:25.475426  2966 sgd_solver.cpp:112] Iteration 5700, lr = 0.00712977
I0526 20:53:25.797672  2966 solver.cpp:239] Iteration 5800 (310.308 iter/s, 0.322261s/100 iters), loss = 0.0414849
I0526 20:53:25.797731  2966 solver.cpp:258]     Train net output #0: loss = 0.0414848 (* 1 = 0.0414848 loss)
I0526 20:53:25.797739  2966 sgd_solver.cpp:112] Iteration 5800, lr = 0.0070959
I0526 20:53:26.110270  2966 solver.cpp:239] Iteration 5900 (319.955 iter/s, 0.312544s/100 iters), loss = 0.0081392
I0526 20:53:26.110317  2966 solver.cpp:258]     Train net output #0: loss = 0.00813913 (* 1 = 0.00813913 loss)
I0526 20:53:26.110325  2966 sgd_solver.cpp:112] Iteration 5900, lr = 0.0070624
I0526 20:53:26.423714  2966 solver.cpp:351] Iteration 6000, Testing net (#0)
I0526 20:53:26.531909  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:26.534649  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9903
I0526 20:53:26.534677  2966 solver.cpp:418]     Test net output #1: loss = 0.0299221 (* 1 = 0.0299221 loss)
I0526 20:53:26.537436  2966 solver.cpp:239] Iteration 6000 (234.12 iter/s, 0.427131s/100 iters), loss = 0.00366409
I0526 20:53:26.537466  2966 solver.cpp:258]     Train net output #0: loss = 0.00366403 (* 1 = 0.00366403 loss)
I0526 20:53:26.537473  2966 sgd_solver.cpp:112] Iteration 6000, lr = 0.00702927
I0526 20:53:26.851928  2966 solver.cpp:239] Iteration 6100 (317.997 iter/s, 0.314468s/100 iters), loss = 0.00321985
I0526 20:53:26.851976  2966 solver.cpp:258]     Train net output #0: loss = 0.00321979 (* 1 = 0.00321979 loss)
I0526 20:53:26.852001  2966 sgd_solver.cpp:112] Iteration 6100, lr = 0.0069965
I0526 20:53:27.172431  2966 solver.cpp:239] Iteration 6200 (312.051 iter/s, 0.32046s/100 iters), loss = 0.010021
I0526 20:53:27.172482  2966 solver.cpp:258]     Train net output #0: loss = 0.0100209 (* 1 = 0.0100209 loss)
I0526 20:53:27.172489  2966 sgd_solver.cpp:112] Iteration 6200, lr = 0.00696408
I0526 20:53:27.490736  2966 solver.cpp:239] Iteration 6300 (314.195 iter/s, 0.318274s/100 iters), loss = 0.0118692
I0526 20:53:27.490779  2966 solver.cpp:258]     Train net output #0: loss = 0.0118691 (* 1 = 0.0118691 loss)
I0526 20:53:27.490787  2966 sgd_solver.cpp:112] Iteration 6300, lr = 0.00693201
I0526 20:53:27.810791  2966 solver.cpp:239] Iteration 6400 (312.481 iter/s, 0.320019s/100 iters), loss = 0.00745714
I0526 20:53:27.810840  2966 solver.cpp:258]     Train net output #0: loss = 0.00745708 (* 1 = 0.00745708 loss)
I0526 20:53:27.810847  2966 sgd_solver.cpp:112] Iteration 6400, lr = 0.00690029
I0526 20:53:28.127219  2966 solver.cpp:351] Iteration 6500, Testing net (#0)
I0526 20:53:28.235361  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:28.238287  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9898
I0526 20:53:28.238328  2966 solver.cpp:418]     Test net output #1: loss = 0.033051 (* 1 = 0.033051 loss)
I0526 20:53:28.241364  2966 solver.cpp:239] Iteration 6500 (232.262 iter/s, 0.430548s/100 iters), loss = 0.0104185
I0526 20:53:28.241413  2966 solver.cpp:258]     Train net output #0: loss = 0.0104185 (* 1 = 0.0104185 loss)
I0526 20:53:28.241423  2966 sgd_solver.cpp:112] Iteration 6500, lr = 0.0068689
I0526 20:53:28.453863  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:28.598322  2966 solver.cpp:239] Iteration 6600 (280.18 iter/s, 0.356913s/100 iters), loss = 0.0180983
I0526 20:53:28.598382  2966 solver.cpp:258]     Train net output #0: loss = 0.0180983 (* 1 = 0.0180983 loss)
I0526 20:53:28.598393  2966 sgd_solver.cpp:112] Iteration 6600, lr = 0.00683784
I0526 20:53:28.924755  2966 solver.cpp:239] Iteration 6700 (306.396 iter/s, 0.326375s/100 iters), loss = 0.0056943
I0526 20:53:28.924806  2966 solver.cpp:258]     Train net output #0: loss = 0.00569423 (* 1 = 0.00569423 loss)
I0526 20:53:28.924818  2966 sgd_solver.cpp:112] Iteration 6700, lr = 0.00680711
I0526 20:53:29.277735  2966 solver.cpp:239] Iteration 6800 (283.344 iter/s, 0.352928s/100 iters), loss = 0.00439319
I0526 20:53:29.277822  2966 solver.cpp:258]     Train net output #0: loss = 0.00439311 (* 1 = 0.00439311 loss)
I0526 20:53:29.277834  2966 sgd_solver.cpp:112] Iteration 6800, lr = 0.0067767
I0526 20:53:29.596837  2966 solver.cpp:239] Iteration 6900 (313.454 iter/s, 0.319026s/100 iters), loss = 0.00596661
I0526 20:53:29.596886  2966 solver.cpp:258]     Train net output #0: loss = 0.00596654 (* 1 = 0.00596654 loss)
I0526 20:53:29.596894  2966 sgd_solver.cpp:112] Iteration 6900, lr = 0.0067466
I0526 20:53:29.944209  2966 solver.cpp:351] Iteration 7000, Testing net (#0)
I0526 20:53:30.065789  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:30.067525  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9895
I0526 20:53:30.067554  2966 solver.cpp:418]     Test net output #1: loss = 0.0316801 (* 1 = 0.0316801 loss)
I0526 20:53:30.070441  2966 solver.cpp:239] Iteration 7000 (211.164 iter/s, 0.473566s/100 iters), loss = 0.00476039
I0526 20:53:30.070467  2966 solver.cpp:258]     Train net output #0: loss = 0.00476033 (* 1 = 0.00476033 loss)
I0526 20:53:30.070477  2966 sgd_solver.cpp:112] Iteration 7000, lr = 0.00671681
I0526 20:53:30.427279  2966 solver.cpp:239] Iteration 7100 (280.256 iter/s, 0.356817s/100 iters), loss = 0.0145102
I0526 20:53:30.427325  2966 solver.cpp:258]     Train net output #0: loss = 0.0145101 (* 1 = 0.0145101 loss)
I0526 20:53:30.427336  2966 sgd_solver.cpp:112] Iteration 7100, lr = 0.00668733
I0526 20:53:30.771559  2966 solver.cpp:239] Iteration 7200 (290.495 iter/s, 0.34424s/100 iters), loss = 0.0032593
I0526 20:53:30.771603  2966 solver.cpp:258]     Train net output #0: loss = 0.00325925 (* 1 = 0.00325925 loss)
I0526 20:53:30.771613  2966 sgd_solver.cpp:112] Iteration 7200, lr = 0.00665815
I0526 20:53:31.105365  2966 solver.cpp:239] Iteration 7300 (299.609 iter/s, 0.333769s/100 iters), loss = 0.0249073
I0526 20:53:31.105412  2966 solver.cpp:258]     Train net output #0: loss = 0.0249072 (* 1 = 0.0249072 loss)
I0526 20:53:31.105419  2966 sgd_solver.cpp:112] Iteration 7300, lr = 0.00662927
I0526 20:53:31.426230  2966 solver.cpp:239] Iteration 7400 (311.699 iter/s, 0.320822s/100 iters), loss = 0.00565549
I0526 20:53:31.426262  2966 solver.cpp:258]     Train net output #0: loss = 0.00565543 (* 1 = 0.00565543 loss)
I0526 20:53:31.426270  2966 sgd_solver.cpp:112] Iteration 7400, lr = 0.00660067
I0526 20:53:31.739367  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:31.751914  2966 solver.cpp:351] Iteration 7500, Testing net (#0)
I0526 20:53:31.870637  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:31.872447  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9893
I0526 20:53:31.872550  2966 solver.cpp:418]     Test net output #1: loss = 0.0326281 (* 1 = 0.0326281 loss)
I0526 20:53:31.875960  2966 solver.cpp:239] Iteration 7500 (222.371 iter/s, 0.449699s/100 iters), loss = 0.00179423
I0526 20:53:31.876109  2966 solver.cpp:258]     Train net output #0: loss = 0.00179417 (* 1 = 0.00179417 loss)
I0526 20:53:31.876158  2966 sgd_solver.cpp:112] Iteration 7500, lr = 0.00657236
I0526 20:53:32.204651  2966 solver.cpp:239] Iteration 7600 (304.35 iter/s, 0.328569s/100 iters), loss = 0.00562589
I0526 20:53:32.204694  2966 solver.cpp:258]     Train net output #0: loss = 0.00562585 (* 1 = 0.00562585 loss)
I0526 20:53:32.204702  2966 sgd_solver.cpp:112] Iteration 7600, lr = 0.00654433
I0526 20:53:32.530794  2966 solver.cpp:239] Iteration 7700 (306.648 iter/s, 0.326107s/100 iters), loss = 0.0345423
I0526 20:53:32.530840  2966 solver.cpp:258]     Train net output #0: loss = 0.0345422 (* 1 = 0.0345422 loss)
I0526 20:53:32.530869  2966 sgd_solver.cpp:112] Iteration 7700, lr = 0.00651658
I0526 20:53:32.903666  2966 solver.cpp:239] Iteration 7800 (268.217 iter/s, 0.372833s/100 iters), loss = 0.00340463
I0526 20:53:32.903729  2966 solver.cpp:258]     Train net output #0: loss = 0.00340458 (* 1 = 0.00340458 loss)
I0526 20:53:32.903745  2966 sgd_solver.cpp:112] Iteration 7800, lr = 0.00648911
I0526 20:53:33.268342  2966 solver.cpp:239] Iteration 7900 (274.247 iter/s, 0.364634s/100 iters), loss = 0.00657078
I0526 20:53:33.268391  2966 solver.cpp:258]     Train net output #0: loss = 0.00657073 (* 1 = 0.00657073 loss)
I0526 20:53:33.268440  2966 sgd_solver.cpp:112] Iteration 7900, lr = 0.0064619
I0526 20:53:33.611172  2966 solver.cpp:351] Iteration 8000, Testing net (#0)
I0526 20:53:33.722162  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:33.723891  2966 solver.cpp:418]     Test net output #0: accuracy = 0.991
I0526 20:53:33.723929  2966 solver.cpp:418]     Test net output #1: loss = 0.0299871 (* 1 = 0.0299871 loss)
I0526 20:53:33.726871  2966 solver.cpp:239] Iteration 8000 (218.104 iter/s, 0.458498s/100 iters), loss = 0.0046433
I0526 20:53:33.726907  2966 solver.cpp:258]     Train net output #0: loss = 0.00464324 (* 1 = 0.00464324 loss)
I0526 20:53:33.726915  2966 sgd_solver.cpp:112] Iteration 8000, lr = 0.00643496
I0526 20:53:34.063024  2966 solver.cpp:239] Iteration 8100 (297.524 iter/s, 0.336107s/100 iters), loss = 0.00935549
I0526 20:53:34.063122  2966 solver.cpp:258]     Train net output #0: loss = 0.00935543 (* 1 = 0.00935543 loss)
I0526 20:53:34.063135  2966 sgd_solver.cpp:112] Iteration 8100, lr = 0.00640827
I0526 20:53:34.396461  2966 solver.cpp:239] Iteration 8200 (299.986 iter/s, 0.333349s/100 iters), loss = 0.00689925
I0526 20:53:34.396505  2966 solver.cpp:258]     Train net output #0: loss = 0.0068992 (* 1 = 0.0068992 loss)
I0526 20:53:34.396514  2966 sgd_solver.cpp:112] Iteration 8200, lr = 0.00638185
I0526 20:53:34.730932  2966 solver.cpp:239] Iteration 8300 (299.018 iter/s, 0.334428s/100 iters), loss = 0.0434783
I0526 20:53:34.730994  2966 solver.cpp:258]     Train net output #0: loss = 0.0434782 (* 1 = 0.0434782 loss)
I0526 20:53:34.731003  2966 sgd_solver.cpp:112] Iteration 8300, lr = 0.00635567
I0526 20:53:35.075024  2966 solver.cpp:239] Iteration 8400 (290.667 iter/s, 0.344036s/100 iters), loss = 0.00595309
I0526 20:53:35.075070  2966 solver.cpp:258]     Train net output #0: loss = 0.00595304 (* 1 = 0.00595304 loss)
I0526 20:53:35.075081  2966 sgd_solver.cpp:112] Iteration 8400, lr = 0.00632975
I0526 20:53:35.198303  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:35.417398  2966 solver.cpp:351] Iteration 8500, Testing net (#0)
I0526 20:53:35.527163  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:35.530733  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9895
I0526 20:53:35.530761  2966 solver.cpp:418]     Test net output #1: loss = 0.0312877 (* 1 = 0.0312877 loss)
I0526 20:53:35.533597  2966 solver.cpp:239] Iteration 8500 (218.083 iter/s, 0.458541s/100 iters), loss = 0.00686427
I0526 20:53:35.533622  2966 solver.cpp:258]     Train net output #0: loss = 0.00686422 (* 1 = 0.00686422 loss)
I0526 20:53:35.533639  2966 sgd_solver.cpp:112] Iteration 8500, lr = 0.00630407
I0526 20:53:35.851109  2966 solver.cpp:239] Iteration 8600 (314.971 iter/s, 0.31749s/100 iters), loss = 0.000872626
I0526 20:53:35.851148  2966 solver.cpp:258]     Train net output #0: loss = 0.000872567 (* 1 = 0.000872567 loss)
I0526 20:53:35.851156  2966 sgd_solver.cpp:112] Iteration 8600, lr = 0.00627864
I0526 20:53:36.169440  2966 solver.cpp:239] Iteration 8700 (314.174 iter/s, 0.318295s/100 iters), loss = 0.00289198
I0526 20:53:36.169492  2966 solver.cpp:258]     Train net output #0: loss = 0.00289192 (* 1 = 0.00289192 loss)
I0526 20:53:36.169502  2966 sgd_solver.cpp:112] Iteration 8700, lr = 0.00625344
I0526 20:53:36.483927  2966 solver.cpp:239] Iteration 8800 (318.025 iter/s, 0.314441s/100 iters), loss = 0.000743419
I0526 20:53:36.483978  2966 solver.cpp:258]     Train net output #0: loss = 0.000743358 (* 1 = 0.000743358 loss)
I0526 20:53:36.483983  2966 sgd_solver.cpp:112] Iteration 8800, lr = 0.00622847
I0526 20:53:36.799194  2966 solver.cpp:239] Iteration 8900 (317.221 iter/s, 0.315237s/100 iters), loss = 0.000569501
I0526 20:53:36.799244  2966 solver.cpp:258]     Train net output #0: loss = 0.000569436 (* 1 = 0.000569436 loss)
I0526 20:53:36.799257  2966 sgd_solver.cpp:112] Iteration 8900, lr = 0.00620374
I0526 20:53:37.115048  2966 solver.cpp:351] Iteration 9000, Testing net (#0)
I0526 20:53:37.223479  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:37.225188  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9906
I0526 20:53:37.225229  2966 solver.cpp:418]     Test net output #1: loss = 0.0301926 (* 1 = 0.0301926 loss)
I0526 20:53:37.228317  2966 solver.cpp:239] Iteration 9000 (233.053 iter/s, 0.429087s/100 iters), loss = 0.0162924
I0526 20:53:37.228353  2966 solver.cpp:258]     Train net output #0: loss = 0.0162924 (* 1 = 0.0162924 loss)
I0526 20:53:37.228361  2966 sgd_solver.cpp:112] Iteration 9000, lr = 0.00617924
I0526 20:53:37.545066  2966 solver.cpp:239] Iteration 9100 (315.737 iter/s, 0.316719s/100 iters), loss = 0.00710421
I0526 20:53:37.545107  2966 solver.cpp:258]     Train net output #0: loss = 0.00710414 (* 1 = 0.00710414 loss)
I0526 20:53:37.545115  2966 sgd_solver.cpp:112] Iteration 9100, lr = 0.00615496
I0526 20:53:37.865255  2966 solver.cpp:239] Iteration 9200 (312.351 iter/s, 0.320152s/100 iters), loss = 0.00479173
I0526 20:53:37.865294  2966 solver.cpp:258]     Train net output #0: loss = 0.00479167 (* 1 = 0.00479167 loss)
I0526 20:53:37.865303  2966 sgd_solver.cpp:112] Iteration 9200, lr = 0.0061309
I0526 20:53:38.185989  2966 solver.cpp:239] Iteration 9300 (311.819 iter/s, 0.320699s/100 iters), loss = 0.00574932
I0526 20:53:38.186033  2966 solver.cpp:258]     Train net output #0: loss = 0.00574926 (* 1 = 0.00574926 loss)
I0526 20:53:38.186043  2966 sgd_solver.cpp:112] Iteration 9300, lr = 0.00610706
I0526 20:53:38.417049  2979 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:38.515170  2966 solver.cpp:239] Iteration 9400 (303.817 iter/s, 0.329145s/100 iters), loss = 0.0248899
I0526 20:53:38.515221  2966 solver.cpp:258]     Train net output #0: loss = 0.0248899 (* 1 = 0.0248899 loss)
I0526 20:53:38.515228  2966 sgd_solver.cpp:112] Iteration 9400, lr = 0.00608343
I0526 20:53:38.829449  2966 solver.cpp:351] Iteration 9500, Testing net (#0)
I0526 20:53:38.939790  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:38.942500  2966 solver.cpp:418]     Test net output #0: accuracy = 0.9892
I0526 20:53:38.942539  2966 solver.cpp:418]     Test net output #1: loss = 0.0346093 (* 1 = 0.0346093 loss)
I0526 20:53:38.945417  2966 solver.cpp:239] Iteration 9500 (232.435 iter/s, 0.430228s/100 iters), loss = 0.00292056
I0526 20:53:38.945453  2966 solver.cpp:258]     Train net output #0: loss = 0.00292051 (* 1 = 0.00292051 loss)
I0526 20:53:38.945461  2966 sgd_solver.cpp:112] Iteration 9500, lr = 0.00606002
I0526 20:53:39.259587  2966 solver.cpp:239] Iteration 9600 (318.317 iter/s, 0.314152s/100 iters), loss = 0.00238432
I0526 20:53:39.259625  2966 solver.cpp:258]     Train net output #0: loss = 0.00238427 (* 1 = 0.00238427 loss)
I0526 20:53:39.259632  2966 sgd_solver.cpp:112] Iteration 9600, lr = 0.00603682
I0526 20:53:39.582947  2966 solver.cpp:239] Iteration 9700 (309.285 iter/s, 0.323326s/100 iters), loss = 0.00316796
I0526 20:53:39.582990  2966 solver.cpp:258]     Train net output #0: loss = 0.00316791 (* 1 = 0.00316791 loss)
I0526 20:53:39.582999  2966 sgd_solver.cpp:112] Iteration 9700, lr = 0.00601382
I0526 20:53:39.913578  2966 solver.cpp:239] Iteration 9800 (302.487 iter/s, 0.330592s/100 iters), loss = 0.0155842
I0526 20:53:39.913622  2966 solver.cpp:258]     Train net output #0: loss = 0.0155842 (* 1 = 0.0155842 loss)
I0526 20:53:39.913633  2966 sgd_solver.cpp:112] Iteration 9800, lr = 0.00599102
I0526 20:53:40.224758  2966 solver.cpp:239] Iteration 9900 (321.395 iter/s, 0.311143s/100 iters), loss = 0.00502401
I0526 20:53:40.224807  2966 solver.cpp:258]     Train net output #0: loss = 0.00502396 (* 1 = 0.00502396 loss)
I0526 20:53:40.224841  2966 sgd_solver.cpp:112] Iteration 9900, lr = 0.00596843
I0526 20:53:40.542547  2966 solver.cpp:468] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0526 20:53:40.550356  2966 sgd_solver.cpp:280] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0526 20:53:40.554579  2966 solver.cpp:331] Iteration 10000, loss = 0.00530681
I0526 20:53:40.554625  2966 solver.cpp:351] Iteration 10000, Testing net (#0)
I0526 20:53:40.663066  2980 data_layer.cpp:73] Restarting data prefetching from start.
I0526 20:53:40.664521  2966 solver.cpp:418]     Test net output #0: accuracy = 0.99
I0526 20:53:40.664681  2966 solver.cpp:418]     Test net output #1: loss = 0.0302442 (* 1 = 0.0302442 loss)
I0526 20:53:40.664695  2966 solver.cpp:336] Optimization Done.
I0526 20:53:40.664700  2966 caffe.cpp:250] Optimization Done.
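The learning rates printed by the `sgd_solver.cpp` lines above follow the solver's `lr_policy: "inv"` with the parameters shown at startup (`base_lr: 0.01`, `gamma: 0.0001`, `power: 0.75`), i.e. `lr = base_lr * (1 + gamma * iter)^(-power)`. A minimal sketch checking the formula against the logged values:

```python
def inv_lr(iteration, base_lr=0.01, gamma=0.0001, power=0.75):
    """Caffe's "inv" policy: base_lr * (1 + gamma * iter)^(-power)."""
    return base_lr * (1.0 + gamma * iteration) ** (-power)

# Compare with the log (printed to 6 significant digits):
print(inv_lr(2800))   # log shows lr = 0.00830984 at iteration 2800
print(inv_lr(5000))   # log shows lr = 0.00737788 at iteration 5000
```

This decaying schedule is why the lr falls smoothly from 0.01 toward 0.006 over the 10000 iterations.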
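The update those `sgd_solver.cpp` lines implement is plain SGD with the solver's `momentum: 0.9` and `weight_decay: 0.0005`: the velocity is updated as `v = momentum * v - lr * (grad + weight_decay * w)` and then `w += v`. A hedged sketch of this rule on a toy quadratic (the function and parameter names here are illustrative, not Caffe's API):

```python
def sgd_momentum_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One Caffe-style SGD step: weight decay is folded into the gradient,
    then the momentum buffer v is updated and applied to the weight w."""
    v = momentum * v - lr * (grad + weight_decay * w)
    return w + v, v

# Toy check: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, v, 2.0 * (w - 3.0), lr=0.01)
print(w)  # converges toward 3 (weight decay pulls it very slightly toward 0)
```

The momentum buffer is what lets the loss above keep dropping even as the learning rate decays.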

Reposted from blog.csdn.net/jwy2014/article/details/80464928