Caffe + Ubuntu 14.04 64-bit + CUDA 6.5/7.0 Installation and Configuration Tutorial

Please credit the source when reposting: http://blog.csdn.net/wangpengfei163/article/details/50488079



With the rapid rise of deep learning, many interested practitioners have moved into this promising field. To do good work, one must first sharpen one's tools. Caffe is an excellent deep learning framework, but its installation involves quite a few steps and turns many newcomers away, so I wrote this post. Installing Caffe cost me a lot of time myself, learning slowly from zero, and I always envied those who had senior labmates to help them.

The installation steps are described below.

Updates paused as of 2016-01-09.

This tutorial covers the following topics:

Part 1: Installing the required packages

Part 2: Installing the NVIDIA driver and CUDA

Part 3: Installing and testing Caffe


Part 1: Installing the required packages

sudo apt-get install build-essential  # basic requirement
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler #required by caffe

Note: sudo apt-get install libboost-all-dev installs Boost 1.54 by default. If you want Boost 1.55 instead, use: sudo apt-get install libboost1.55-all-dev (recommended).

Part 2: Installing the NVIDIA driver and CUDA

Important: after installing Ubuntu and CUDA, do not run a system upgrade; it can leave you unable to boot into the desktop, which is a real headache.

Before installing, verify the MD5 checksum to make sure the package is intact: run md5sum <filename> and check that the printed checksum matches the one published for the download.

We use cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb as the example package.
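
For example, for the package above:

md5sum cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb   # compare the printed checksum with the one listed on the NVIDIA download page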

The CUDA website now offers an offline *.deb installer, so this tutorial covers two installation methods (*.deb and *.run).


Verify that the machine has an NVIDIA GPU:

lspci | grep -i nvidia


(1) Offline *.deb installation (recommended)

With this method you do not need to switch to a text console (tty) to install.

(2.1.1) Download the offline CUDA installer (*.deb) for your system: https://developer.nvidia.com/cuda-toolkit

(2.1.2) Install the downloaded offline CUDA package (cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb)

Add the package repository:
sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb
(for this example: sudo dpkg -i cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb)

Update the package lists:
sudo apt-get update

Install CUDA:
sudo apt-get install cuda

Reboot the machine (make sure the discrete GPU is enabled in the BIOS/boot settings):
sudo reboot

(2.1.3) Set environment variables

1) Add the following line to /etc/profile:

export PATH=/usr/local/cuda-7.0/bin:$PATH

Command:

sudo vim /etc/profile

2) Apply the change:

Command:

source /etc/profile

(2.1.4) Add the library path

1) Create a file named cuda.conf under /etc/ld.so.conf.d/ with the following content:

/usr/local/cuda-7.0/lib64

2) Refresh the linker cache so the path takes effect immediately:

sudo ldconfig   # add -v for verbose output (optional)
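
If you prefer not to open an editor, the same two steps can also be done directly from the shell (a minimal sketch; adjust the CUDA version to match your installation):

echo "/usr/local/cuda-7.0/lib64" | sudo tee /etc/ld.so.conf.d/cuda.conf   # write the lib path into cuda.conf
sudo ldconfig                                                             # refresh the linker cache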

(2.1.5) Install the CUDA Samples

Command:

sudo sh cuda-samples-linux-6.5.14-18745345.run
Accept the license prompts; the default install path is recommended. (Note: this samples installer comes from the CUDA 6.5 packages; if you installed CUDA 7.0 via the .deb method above, the samples are typically already placed under /usr/local/cuda-7.0/samples, so adjust the version in the paths below accordingly.)


Compile the CUDA Samples

Command:

cd /usr/local/cuda-6.5/samples
sudo make


After compilation finishes, go to bin/x86_64/linux/release under the samples directory:

Run:

./deviceQuery


Output:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla K40c"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Clock rate:                                745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = Tesla K40c
Result = PASS

If you see output like the above, congratulations: the NVIDIA driver and CUDA are installed correctly, and you can move on to setting up the Caffe environment.
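
As an additional sanity check (optional), you can run the bandwidthTest sample from the same release directory; it should likewise end with Result = PASS:

./bandwidthTest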


(2.1.6) Verify that the NVIDIA driver and CUDA are installed correctly

Check the installed NVIDIA driver version:

cat /proc/driver/nvidia/version

Output:

NVRM version: NVIDIA UNIX x86_64 Kernel Module  340.96  Sun Nov  8 22:33:28 PST 2015
GCC version:  gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-12ubuntu1) 

The output shows that the installed NVIDIA driver version is 340.96.


Once installation is complete, you can restart the desktop service.

Command:

sudo start lightdm



(2) Offline *.run installation


This method may take several attempts before the installation succeeds.


2.2.1) Verify that your GPU supports CUDA

Command:

lspci | grep -i nvidia
Check whether your GPU is listed at https://developer.nvidia.com/cuda-gpus.


2.2.2) Verify the system: it must be a 64-bit x86 system

Command:

uname -m && cat /etc/*release
Output:

x86_64
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
NAME="Ubuntu"
VERSION="14.04.2 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.2 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

2.2.3) Verify that gcc is installed, since it is needed to compile CUDA and Caffe

Command:

gcc --version


2.2.4) Install the NVIDIA driver and CUDA (*.run)

Before installing, verify the MD5 checksum to make sure the package is intact: run md5sum <filename> and check that the printed checksum matches the published one.

This method uses CUDA 6.5 as the example.

2.2.4.1) Download the offline CUDA installer (*.run) for your system: https://developer.nvidia.com/cuda-toolkit

2.2.4.2) Stop the desktop service

In Ubuntu, press Ctrl+Alt+F1 to switch to a tty, log in, and run: sudo service lightdm stop.

This stops the lightdm service. If you use gdm or another display manager, stop it before installing the NVIDIA driver.

2.2.4.3) Disable the Nouveau open-source driver

Nouveau is an open-source graphics driver that Ubuntu 14.04 installs by default, but it interferes with installing the NVIDIA driver, so it must be blacklisted before you proceed.

1) Edit nvidia-graphics-drivers.conf:

sudo vim /etc/modprobe.d/nvidia-graphics-drivers.conf

Add the line:

blacklist nouveau

Save and exit:

:wq!

Check:

cat /etc/modprobe.d/nvidia-graphics-drivers.conf
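
Equivalently, the blacklist entry can be appended from the shell; it is also common (though not mentioned above) to rebuild the initramfs afterwards so that nouveau is not loaded early at boot:

echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/nvidia-graphics-drivers.conf
sudo update-initramfs -u   # rebuild the initramfs so the blacklist takes effect at boot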

2) Edit the grub configuration:

sudo vim /etc/default/grub

Add the following kernel parameters to the GRUB_CMDLINE_LINUX (or GRUB_CMDLINE_LINUX_DEFAULT) line rather than as a bare line at the end of the file, since /etc/default/grub is read as a shell script:

rdblacklist=nouveau nouveau.modeset=0

Save and exit:

:wq!

Check:

cat /etc/default/grub
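
After editing /etc/default/grub, regenerate the grub configuration so the new kernel parameters are actually applied at the next boot (a standard step not shown in the original):

sudo update-grub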

2.2.4.4) Install the downloaded offline CUDA package (*.run)

1) You can install the *.run file directly with sudo sh cuda_6.5.14_linux_64.run, accepting the prompts,

or


Because the NVIDIA driver bundled inside the CUDA installer is not necessarily the latest and may not suit your particular GPU, it is better to install the pieces separately as shown below. If the NVIDIA driver version and the CUDA version do not match, the CUDA installation can fail, or you may no longer be able to start the desktop. You can download the latest driver for your GPU from the NVIDIA website; it should be at least as new as the driver bundled with the CUDA installer.


Run the following command (the extract path should be an absolute path):

sh cuda_6.5.14_linux_64.run --extract=<extract_path>

This unpacks the downloaded *.run file into three separate installers:

CUDA installer: cuda-linux64-rel-6.5.14-18749181.run

NVIDIA driver installer: NVIDIA-Linux-x86_64-340.65.run

CUDA Samples installer: cuda-samples-linux-6.5.14-18745345.run

Run each file separately. Before running them, make the files executable:

Command:

chmod +x *.run

2) Install CUDA

Command:

sudo sh cuda-linux64-rel-6.5.14-18749181.run
Accept the license prompts; the default install path is recommended.


Install the NVIDIA driver (if you do not have an NVIDIA GPU, skip this step; you can still use Caffe in CPU-only mode)


Command (not recommended):

sudo sh NVIDIA-Linux-x86_64-340.65.run

Accept the license prompts; the default path is recommended.


3) Recommended method (this applies only to CUDA 6.5; if you need a newer CUDA version, download the latest driver for your GPU from the NVIDIA website (it must be at least as new as the driver bundled with the CUDA installer) and install the driver separately. Link: http://www.nvidia.cn/Download/index.aspx?lang=cn)

1: Add the driver PPA

sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update

2: Install the 340 driver (CUDA 6.5.14 currently supports drivers only up to the 340 series; the 343 and 346 drivers are not yet supported)
sudo apt-get install nvidia-340
3: After that, also install the following package (otherwise the samples will fail at runtime):

sudo apt-get install nvidia-340-uvm
4: When everything is installed, it is best to reboot the machine so the NVIDIA driver is actually in use; see the quick check below.
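
After the reboot, a quick way to confirm that the driver is loaded is nvidia-smi, which ships with the driver and lists the detected GPUs:

nvidia-smi   # should list your GPU and report a 340.xx driver version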

(2.2.4.5) Install cuDNN (optional)

1) Download cudnn-6.5-linux-x64-v2 (see the resource links at the end of this post), then install it with the following commands:

tar -zxvf cudnn-6.5-linux-x64-v2.tgz  
cd cudnn-6.5-linux-x64-v2  
sudo cp lib* /usr/local/cuda-6.5/lib64/
sudo cp cudnn.h /usr/local/cuda-6.5/include/

2) Update the symlinks:

cd /usr/local/cuda-6.5/lib64/
sudo rm -rf libcudnn.so libcudnn.so.6.5
sudo ln -s libcudnn.so.6.5.48 libcudnn.so.6.5
sudo ln -s libcudnn.so.6.5 libcudnn.so
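
After copying the libraries and updating the symlinks, it is a good idea to refresh the linker cache so that the cuDNN libraries are picked up:

sudo ldconfig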


2.2.4.6) Set environment variables

1) Add the following line to /etc/profile:

export PATH=/usr/local/cuda-6.5/bin:$PATH

Command:

sudo vim /etc/profile

2) Apply the change:

Command:

source /etc/profile

2.2.4.7) Add the library path

1) Create a file named cuda.conf under /etc/ld.so.conf.d/ with the following content:

/usr/local/cuda-6.5/lib64

2) Refresh the linker cache so the path takes effect immediately:

sudo ldconfig   # add -v for verbose output (optional)

2.2.4.8) Install the CUDA Samples

Command:

sudo sh cuda-samples-linux-6.5.14-18745345.run
Accept the license prompts; the default install path is recommended.


Compile the CUDA Samples

Command:

cd /usr/local/cuda-6.5/samples
sudo make


After compilation finishes, go to bin/x86_64/linux/release under the samples directory (/usr/local/cuda-6.5/samples/bin/x86_64/linux/release):

Run:

./deviceQuery


Output:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla K40c"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Clock rate:                                745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = Tesla K40c
Result = PASS

If you see output like the above, congratulations: the NVIDIA driver and CUDA are installed correctly, and you can move on to setting up the Caffe environment.


2.2.4.9) Verify that the NVIDIA driver and CUDA are installed correctly

Check the installed NVIDIA driver version:

cat /proc/driver/nvidia/version

Output:

NVRM version: NVIDIA UNIX x86_64 Kernel Module  340.96  Sun Nov  8 22:33:28 PST 2015
GCC version:  gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-12ubuntu1) 

The output shows that the installed NVIDIA driver version is 340.96.


Once installation is complete, restart the desktop service.

Command:

sudo start lightdm

Part 3: Installing and testing Caffe

This part installs the tools that Caffe depends on.

(1) Install MATLAB

Assume MATLAB is installed to /usr/local/MATLAB/R2014b

and that the MATLAB crack files are located under /home/

1) Mount the ISO

Mount the MATLAB ISO file. Note that before mounting, the java/jar/install.jar file inside the ISO needs to be replaced with the install.jar from the crack folder (this can be done with a tool such as UltraISO); save the modified ISO and then mount it. To avoid permission problems, it is best to do this as root.
The mount command looks like this:
mount -o loop,rw /home/R2014b_glnxa64.iso /mnt
/mnt is the mount point; it is recommended to create your own directory for it.
2) Install MATLAB
Go into the mounted/extracted MATLAB directory and run sudo ./install

Choose the "install manually without using the internet" option.

Enter the "file installation key": 12345-67890-12345-67890 (any value works)

Activation: choose the corresponding license.lic file (located in the Crack folder)


3) Crack MATLAB
rm -rf /usr/local/MATLAB/R2014b/bin/glnxa64/libmwservices.so
Copy the .so file from the crack folder into place:
cp /home/libmwservices.so /usr/local/MATLAB/R2014b/bin/glnxa64/
4) Installation finished; launch MATLAB with:
/usr/local/MATLAB/R2014b/bin/matlab
5) Add MATLAB to PATH so it can be started simply by typing matlab:
vi /etc/profile
Append at the end of the file:
export PATH=/usr/local/MATLAB/R2014b/bin:$PATH
Save, exit, and apply the change:
source /etc/profile


(2) Install Python

sudo apt-get install python-dev python-pip
Install the Python dependency packages:
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose python-sklearn python-skimage python-h5py python-protobuf python-leveldb python-networkx
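
Alternatively (or in addition), the Python packages needed by pycaffe are listed in python/requirements.txt inside the Caffe source tree and can be installed with pip; a sketch, assuming the source has already been downloaded as described in part (5) below:

cd caffe-master/python
for req in $(cat requirements.txt); do sudo pip install $req; done   # install each listed package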


(3) Install Intel MKL or ATLAS

If you do not have an Intel MKL license, you can install the free ATLAS instead:

sudo apt-get install libatlas-base-dev
If you do have MKL, extract the downloaded archive, run install_GUI.sh, and follow the graphical installer.

After the installation finishes, two more steps are needed:

1) Create a file named intel_mkl.conf under /etc/ld.so.conf.d/ with the following content:

/opt/intel/lib
/opt/intel/mkl/lib/intel64

2) Refresh the linker cache so the paths take effect immediately:

sudo ldconfig   # add -v for verbose output (optional)


(4) Install OpenCV (optional)

Install version 2.4.10:
1) Download the installation scripts (the Install-OpenCV repository)
2) Go into the directory Install-OpenCV/Ubuntu/2.4
3) Run the script:
chmod +x *.sh
sh ./opencv2_4_10.sh
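
When the script finishes, you can sanity-check the installation; assuming the pkg-config files were installed (the default for a source build), the following should print 2.4.10:

pkg-config --modversion opencv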

(5) Build Caffe

1) Download the Caffe source package from the BVLC/caffe GitHub repository, for example as shown below.
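
If you prefer the command line, one common way to obtain the source is to clone the GitHub repository (a sketch; downloading the master-branch zip from GitHub gives you the same caffe-master layout):

git clone https://github.com/BVLC/caffe.git caffe-master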

2) Go into the caffe-master directory and make a copy of Makefile.config.example:

cp Makefile.config.example Makefile.config

3) Adjust the relevant paths in Makefile.config:


## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
# CUDA install directory
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB install directory

# MATLAB_DIR := /usr/local

# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.

# We need to be able to find Python.h and numpy/arrayobject.h.

 
 
# Python install directory
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
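
For reference, these are the lines you would typically change in this file for the setup described in this tutorial, shown here commented out; uncomment them only if you actually installed the corresponding components:

# USE_CUDNN := 1                            # if cuDNN was installed in step (2.2.4.5)
# BLAS := mkl                               # if Intel MKL was installed instead of ATLAS
# MATLAB_DIR := /usr/local/MATLAB/R2014b    # required for "make matcaffe" below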


4) Compile Caffe


make all -j4 

make test  

make runtest


5) Build the MATLAB interface


make matcaffe


6) Build the Python interface


make pycaffe
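
To actually use the compiled Python interface, the caffe-master/python directory has to be on your PYTHONPATH, for example:

export PYTHONPATH=/path/to/caffe-master/python:$PYTHONPATH   # adjust the path to your caffe-master location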


If errors occur during compilation, run make clean to remove the previous build output and then rebuild.



At this point you can start experimenting with the demos that ship with Caffe.
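
For example, the classic LeNet/MNIST demo included in the Caffe source can be run from the caffe-master directory roughly as follows (a sketch; the scripts are part of the official repository):

cd caffe-master
./data/mnist/get_mnist.sh          # download the MNIST data
./examples/mnist/create_mnist.sh   # convert it into LMDB databases
./examples/mnist/train_lenet.sh    # train LeNet (set solver_mode: CPU in the solver prototxt for CPU-only builds)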



If your machine has no usable NVIDIA GPU, you can still use Caffe by building it in CPU-only (no GPU) mode: skip Part 2 (NVIDIA driver and CUDA installation), uncomment CPU_ONLY := 1 in Makefile.config, and follow the rest of the tutorial unchanged.


If you run into any problems while following this tutorial, feel free to leave a comment below so we can figure it out together.


Hopefully NVIDIA will eventually bundle deep learning tools such as Caffe and DIGITS together with CUDA as a one-click install; good things are surely coming.


QQ group for discussing deep learning: 152768871




Resource links:

cudnn-7.0-win-x64-v4.0-rc.zip    http://download.csdn.net/detail/wangpengfei163/9411940

cudnn-7.0-win-x64-v3.0-prod.zip   http://download.csdn.net/detail/wangpengfei163/9411944

cudnn-6.5-linux-x64-v2.tgz     http://download.csdn.net/detail/wangpengfei163/9411930

Free Intel MKL qualification link: https://software.intel.com/en-us/qualify-for-free-software
