The Complete Process of Configuring the GPT-2 Environment on a Server

Configuration information

Server: Tencent Cloud student plan (30 yuan for 3 months)
Operating system: CentOS 7.6 64-bit
CPU: 1 core
Memory: 2 GB
Public network bandwidth: 1 Mbps

Install Python 3.6.5

First check the Python version with python -V: CentOS ships with Python 2.7.5,
so we need to install Python 3. We will use version 3.6.5 here.

  1. Install the C compiler and build tools
yum install gcc
  2. Download the source tarball
wget https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz
  3. Decompress it
gunzip Python-3.6.5.tgz
  4. Unpack the archive
tar -xvf Python-3.6.5.tar
  5. Install dependency libraries that may be needed during the build
yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
  6. Run configure to generate the Makefile
    Enter the Python source directory: cd Python-3.6.5
./configure --prefix=/usr/local/python36 --enable-optimizations
  7. Build and install
make && make install
  8. Make the new binaries available on the PATH via symlinks
ln -s /usr/local/python36/bin/python3.6 /usr/bin/python3
ln -s /usr/local/python36/bin/pip3 /usr/bin/pip3

Check the current Python version with python3 --version; seeing 3.6.5 means the installation succeeded.
Check the pip3 version with pip3 --version; it is 9.0 and can be upgraded
to 20.0 with pip3 install --upgrade pip. Confirm that the current version is now 20.0.2.
The Python environment is now configured.
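Once the symlinks are in place, a quick check from inside the interpreter confirms which version python3 now resolves to (a minimal sketch; the exact version tuple depends on the build you installed):

```python
import sys

# Print the version of the interpreter that the python3 symlink resolves to
print("Python %d.%d.%d" % sys.version_info[:3])

# For the build above we expect at least 3.6
assert sys.version_info >= (3, 6), "python3 still points at an older interpreter"
```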

Install the necessary libraries

Upload the gpt-2 package to the server first. The required libraries are listed
in requirements.txt, so enter the folder with cd gpt-2
and install them:

pip3 install -r requirements.txt
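After pip3 finishes, a short import check catches anything that failed to install. The module names below are an assumption based on a typical gpt-2 requirements.txt; adjust the list to match the file in your checkout:

```python
import importlib.util

# Hypothetical module list; take the real names from requirements.txt
required = ["fire", "regex", "requests", "tqdm"]

# find_spec returns None for modules that are not importable
missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print("still missing:", ", ".join(missing))
else:
    print("all required modules import cleanly")
```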

We are still missing numpy:

pip3 install numpy

We also need tensorflow. The latest version of TensorFlow does not support tensorflow.contrib.rnn, which gpt-2 requires, so we need version 1.8.0:

pip3 install tensorflow==1.8.0
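Because tf.contrib was removed after the TensorFlow 1.x line, a script that depends on it can fail fast with a clear message instead of a confusing ImportError later. Here is a minimal sketch of such a guard, using only string comparison so it can run before importing tensorflow at all (the bound mirrors the 1.8.0 pin above; the function names are my own, not part of any library):

```python
def version_tuple(version):
    """Convert a version string like "1.8.0" into (1, 8, 0) for comparison."""
    return tuple(int(part) for part in version.split("."))

def check_tf_version(installed, maximum="2.0.0"):
    """Fail fast if the installed tensorflow is too new for tf.contrib code."""
    if version_tuple(installed) >= version_tuple(maximum):
        raise RuntimeError(
            "tensorflow %s has no tf.contrib; install 1.8.0 instead" % installed
        )

check_tf_version("1.8.0")  # the pinned version passes the guard
```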

Run

In testing, this server could not run the 355M model; it can only handle the 117M model.
Download the model first:

python3 download_model.py 117M

Then run:

python3 src/interactive_conditional_samples.py --top_k 40 --temperature 0.9 --model_name 117M

When you see the model prompt, the run has succeeded.
Then enter the text you want the model to continue writing.
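The --top_k 40 and --temperature 0.9 flags control sampling: temperature rescales the logits before the softmax, and top-k discards everything but the 40 most likely tokens. A self-contained numpy sketch of that scheme (the same idea, not the script's exact code):

```python
import numpy as np

def sample_token(logits, top_k=40, temperature=0.9, rng=None):
    """Sample one token id with temperature scaling and top-k filtering."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if 0 < top_k < logits.size:
        # Mask out everything below the k-th largest logit
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    # Softmax over the surviving logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(logits.size, p=probs)
```

With top_k=1 this always returns the argmax; raising the temperature flattens the distribution and makes the output more random.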


Origin: blog.csdn.net/weixin_45766122/article/details/104249114