Running torch version of neural style in Docker

All of the relevant code is on GitHub; see https://github.com/lijingpeng/deep-learning-notes


I have previously written about the TensorFlow implementation of neural-style, but it is much slower than the Torch version, so this article describes how to run the Torch version of neural-style. To avoid wrestling with various annoying dependencies while building the environment, Docker is used again here; the Dockerfile comes from here, and setting up the environment is covered here.

The Torch version of neural-style comes from jcjohnson/neural-style. It supports both CPU and GPU and depends on torch7 and loadcaffe, both of which are already installed in the Docker environment.

Download the trained VGG network

First clone the code:

git clone https://github.com/jcjohnson/neural-style

neural-style requires a pre-trained VGG network, which needs to be downloaded in advance:

sh models/download_models.sh

The following two files will be downloaded:
VGG_ILSVRC_19_layers.caffemodel
VGG_ILSVRC_19_layers_deploy.prototxt
The caffemodel file is relatively large, so it is recommended to download it locally with a download tool.
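After the download finishes, it may be worth confirming that both files are actually in place before moving on. A small sketch (the MODEL_DIR variable is my own convenience name; adjust it to wherever you saved the files):

```shell
# Check that both VGG files from the download step are present.
MODEL_DIR=${MODEL_DIR:-models}
for f in VGG_ILSVRC_19_layers.caffemodel VGG_ILSVRC_19_layers_deploy.prototxt; do
  if [ -f "$MODEL_DIR/$f" ]; then
    echo "ok: $MODEL_DIR/$f"
  else
    echo "missing: $MODEL_DIR/$f"
  fi
done
```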

Run in Docker

Step 1: Run Docker

Since the VGG model files, image files, and so on are stored on the local computer, we need to map them into the container when starting Docker:

docker run -it -p 8888:8888 -p 6006:6006 -v /Users/frank:/root/sharedfolder floydhub/dl-docker-load:cpu

Here I have mapped the user's home folder directly; you can adjust this according to where your own files are stored.
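The -v mapping means that any file under /Users/frank on the host appears under /root/sharedfolder inside the container. A small sketch of the path translation (the content.png path is a hypothetical example):

```shell
# With -v /Users/frank:/root/sharedfolder, a host path maps into the
# container by swapping the prefix:
HOST_PREFIX=/Users/frank
CONTAINER_PREFIX=/root/sharedfolder
host_path=/Users/frank/Downloads/content.png   # example file on the host
container_path="$CONTAINER_PREFIX${host_path#$HOST_PREFIX}"
echo "$container_path"   # /root/sharedfolder/Downloads/content.png
```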

Step 2: Enter the directory of the neural style code

Assuming the cloned code is stored at /Users/frank/Downloads/neural-style on the host, then inside the container:

cd ~/sharedfolder/Downloads/neural-style

Step 3: Execute

th neural_style.lua \
  -style_image examples/inputs/starry_night.jpg \
  -content_image ~/sharedfolder/Downloads/content.png \
  -output_image ~/sharedfolder/Downloads/nn_out.png \
  -model_file ~/sharedfolder/Downloads/VGG_ILSVRC_19_layers.caffemodel \
  -proto_file ~/sharedfolder/Downloads/VGG_ILSVRC_19_layers_deploy.prototxt \
  -gpu -1 -optimizer adam -num_iterations 800 -print_iter 1
  • -style_image: location of the style image file
  • -content_image: location of the content image, i.e. the file whose style you want to change
  • -output_image: location of the output file
  • -model_file: path to the downloaded caffemodel file
  • -proto_file: the prototxt configuration file for the caffemodel
  • -gpu -1: -1 means use the CPU instead of the GPU
  • -optimizer adam: use the adam optimizer, which is faster, though its results are generally not as good as L-BFGS
  • -num_iterations: number of iterations
  • -print_iter 1: print progress to the console once per iteration
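Since the command is long and most of the paths share one prefix, it can help to wrap it in a small script. This is a hypothetical sketch (run_style.sh and the DATA variable are my own names; the paths assume the folder mapping used earlier):

```shell
#!/bin/sh
# run_style.sh -- hypothetical wrapper around the neural_style.lua call above,
# with the shared path prefix pulled into one variable for easy adjustment.
DATA=$HOME/sharedfolder/Downloads

CMD="th neural_style.lua \
 -style_image examples/inputs/starry_night.jpg \
 -content_image $DATA/content.png \
 -output_image $DATA/nn_out.png \
 -model_file $DATA/VGG_ILSVRC_19_layers.caffemodel \
 -proto_file $DATA/VGG_ILSVRC_19_layers_deploy.prototxt \
 -gpu -1 -optimizer adam -num_iterations 800 -print_iter 1"

echo "$CMD"                     # print the full command so path typos are easy to spot
if [ "${RUN:-0}" = "1" ]; then  # set RUN=1 to actually execute it
  $CMD
fi
```

Save it, check the printed command looks right, then run it with RUN=1 sh run_style.sh.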

For more parameter settings, please refer to the neural-style project page.

Next comes a long wait, and it will be very long indeed if you are running on the CPU...

Finally, view the model's output at the location specified by -output_image. Note that you should not write the output to a folder inside the Docker container: if Docker crashes or shuts down during execution, everything stored inside it will be lost. Be sure to put the result in the mapped host folder.

 

Related posts:

http://www.cnblogs.com/lijingpeng/p/6031634.html
http://www.cnblogs.com/lijingpeng/p/6009476.html
http://blog.csdn.net/lijingpengchina/article/details/53039051
