Configuring a PyTorch environment on CentOS 7

A Huawei Cloud server with 4 cores, 8 GB of RAM and no graphics card, so compute performance is just making do. It was picked up for under 1000 during the Double 11 sale, which is still reasonably cost-effective, and the plan is to set up an environment on it for training DenseNet.

The server comes with Python 2.7, which will no longer be maintained after next year, so I installed Anaconda instead.

wget https://repo.continuum.io/archive/Anaconda3-5.3.0-Linux-x86_64.sh

I found this download too slow, so I switched to the Tsinghua University mirror, which is much faster.

wget  https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-4.3.0-Linux-x86_64.sh

(Here 4.3.0 can be replaced with 5.3.0. The 4.3.0 release seems to ship Python 3.6; later, when installing pytorch and torchvision, you may also need to upgrade the Python version.)

chmod 777 Anaconda3-4.3.0-Linux-x86_64.sh

./Anaconda3-4.3.0-Linux-x86_64.sh

Keep answering yes to the prompts. After installation you will be asked whether to add it to the system environment variables; answer yes, then run: source ~/.bashrc

Run python --version to check whether the version has changed from 2.7 to 3.6.

Run conda list to check whether conda is installed.
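
If you prefer to confirm from inside the interpreter itself, a two-line sketch (only the standard library is used):

import sys
print(sys.version)  # after sourcing ~/.bashrc this should report the Anaconda 3.6.x build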

Then install pytorch and torchvision

conda config --add channels https://repo.continuum.io/pkgs/free/ 
conda config --add channels https://repo.continuum.io/pkgs/main/ 
conda config --set show_channel_urls yes
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/

conda install pytorch torchvision

You can also run conda install pytorch torchvision cudatoolkit=10.0.130, but since there is no CUDA graphics card here, it does not matter if cudatoolkit is left out.

Or alternatively:

conda install pytorch -c pytorch

conda install torchvision -c pytorch
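
Once conda finishes, a quick sanity check that both packages import and report their versions (a minimal sketch; the exact versions depend on which channel conda resolved):

import torch
import torchvision
print(torch.__version__)        # e.g. 0.4.x or 1.x depending on the channel
print(torchvision.__version__)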

After installation, run this test program:

import torch
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('Training on CPU ...')
else:
    print('Training on GPU ...')

Upload the data set to the server and start training.

1: lr_scheduler reported an error; it looks like pytorch and torchvision need to be upgraded:

conda install pytorch==0.4.0

conda upgrade torchvision
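
For context, a minimal sketch of typical lr_scheduler usage (assuming a StepLR schedule on an Adadelta optimizer; the model, step size, and epoch count are placeholders, not the actual training script):

import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

model = nn.Linear(10, 2)                                             # placeholder model
optimizer = optim.Adadelta(model.parameters())
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)   # decay lr every 7 epochs

for epoch in range(20):
    # ... forward pass, loss.backward(), optimizer.step() go here ...
    scheduler.step()                                                 # advance the schedule once per epoch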

2: pytorch reported an error: ValueError: optimizing a parameter that does not require gradients

Oddly, this error never appeared in either the Windows environment or the CentOS 7 + Python 2.7 environment.

There are two solutions. One is to change param.requires_grad = False to param.requires_grad = True, but memory may not be able to handle that.

The other is to change optimizer = optim.Adadelta(model.parameters()) to optimizer = optim.Adadelta(filter(lambda p: p.requires_grad, model.parameters())).

The second method is recommended, because the first one computes gradients for every parameter, which costs more memory and runs slower.
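
As a concrete sketch of the second fix, here is how it might look when fine-tuning a DenseNet with the pretrained feature layers frozen (densenet121 and the frozen/trainable split are assumptions for illustration):

import torch.optim as optim
from torchvision import models

model = models.densenet121(pretrained=True)       # assumed model, just for illustration

# Freeze the pretrained feature extractor; only the classifier stays trainable.
for param in model.features.parameters():
    param.requires_grad = False

# Passing every parameter would trigger the ValueError, because the frozen ones
# have requires_grad=False. Filtering keeps only the trainable parameters.
optimizer = optim.Adadelta(filter(lambda p: p.requires_grad, model.parameters()))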

3: Another error: json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes

It turned out the JSON file had an extra trailing "," after the last entry; removing it fixed the error. To track this kind of problem down, just look at the error log.
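
A tiny reproduction of that error, assuming the culprit is a trailing comma like the one found here (the keys are made up for the example):

import json

good = '{"lr": 0.001, "epochs": 30}'
bad  = '{"lr": 0.001, "epochs": 30,}'   # extra comma after the last pair

json.loads(good)   # parses fine
json.loads(bad)    # raises json.decoder.JSONDecodeError:
                   # Expecting property name enclosed in double quotes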

 
