Tutorial: creating a Python virtual environment on a server with conda

Code downloaded from GitHub, if it is a Python project, usually comes with environment dependencies. If every project's packages are installed into the same environment, each new installation may overwrite the package versions a previous project relied on. It is therefore common practice to create a separate virtual environment for each algorithm or project.

 

This GitHub project is used as an example of how to run code in a virtual environment (code address: the source code for the CIKM 2020 paper "Fast Attributed Multiplex Heterogeneous Network Embedding"; download it and unzip it on the server).
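For example, the project can be fetched directly on the server. The repository URL below is a hypothetical placeholder; substitute the actual code address linked above:

git clone https://github.com/<author>/FAME.git FAME-master

Or, if you downloaded it as a zip archive:

unzip FAME-master.zip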

1 Connect to the server
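This is typically done over SSH. A minimal example, where the username, server address, and port are placeholders to be replaced with your own:

ssh -p 22 username@server_address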

2 View the virtual environment of the server

conda env list
or: conda info -e

3 Create a virtual environment

conda create -n env_name python=X.X

The name I used is FAME_py36, and the selected Python version is 3.6. You will then be asked whether to install some built-in packages; enter y.
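For this tutorial, the concrete command is therefore:

conda create -n FAME_py36 python=3.6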

4 Activate the virtual environment

source activate FAME_py36
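On newer versions of conda (4.4 and later), the equivalent command is:

conda activate FAME_py36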

5 Install the dependency packages

pip install -r requirements.txt

PS: First cd into the FAME-master directory, otherwise the requirements file will not be found.
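Putting the two steps together (assuming the project was unzipped into your home directory; adjust the path if it lives elsewhere):

cd ~/FAME-master
pip install -r requirements.txt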

6 Run the code

python main.py

 

7 Close the virtual environment 

source deactivate

Or: deactivate (on Windows)

If you need to delete the FAME_py36 virtual environment:

conda remove -n FAME_py36 --all

If you need to delete a single package from the FAME_py36 virtual environment:

conda remove --name FAME_py36 package_name

 

If you need to install PyTorch built against a different CUDA version for GPU acceleration, you can refer to the following commands (reference URL):

Linux and Windows
# CUDA 9.2
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=9.2 -c pytorch

# CUDA 10.1
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch

# CUDA 10.2
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch

# CPU Only
conda install pytorch==1.6.0 torchvision==0.7.0 cpuonly -c pytorch

To check whether CUDA acceleration is available (refer to the website), run the following in Python:

import torch
torch.cuda.is_available()
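As a quick one-liner from the shell (run inside the activated environment), which also prints the installed torch version and the CUDA version it was built against:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"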

If it returns False, first check your CUDA version:

nvcc -V

Mine is 9.0, so there are two options: either update the CUDA driver, or install pytorch and torchvision versions that match the installed CUDA, downloading the corresponding versions from the reference website above.
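For the second option, a combination that roughly matches CUDA 9.0 looks like the following (a sketch based on the PyTorch previous-versions page; verify the exact versions on the reference website above):

# CUDA 9.0
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch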


Check the driver version: nvidia-smi

Tesla K40C configuration (screenshot omitted).

Origin blog.csdn.net/qq_39463175/article/details/111682530