Introduction to Libtorch

Libtorch overview

Libtorch is the C++ interface of Pytorch. It implements both network training and network inference in C++.

Since most of its interfaces are consistent with Pytorch, Libtorch is a very powerful tensor library with a clear, Pytorch-like API, which is rare in C++. If you have used other C++ tensor libraries, you will have noticed that they are more verbose and have a higher learning cost: because of strong typing and the lack of general-purpose container types, C++ code tends to be more complex than Python, and library designers, driven by language conventions and performance concerns, often end up with interfaces that are efficient but hard to use. Libtorch instead exposes a function interface similar to Pytorch, so if you already know Pytorch, the learning cost of Libtorch is relatively low.

Another problem is that many basic operations from the Python ecosystem, such as numpy.einsum, have no suitable replacement in other C++ tensor libraries, which makes migrating code to C++ troublesome. Libtorch solves this: it contains essentially everything Pytorch has, so einsum is available directly as torch::einsum, which is simply a boon for C++ developers.
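As a quick taste, here is a minimal sketch (the shapes and variable names are arbitrary) of an einsum call in Libtorch that mirrors what numpy.einsum or torch.einsum would look like in Python:

// A small standalone example: matrix multiplication written as an einsum
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor a = torch::rand({3, 4});
  torch::Tensor b = torch::rand({4, 5});

  // Equivalent to numpy.einsum("ij,jk->ik", a, b); the operands are passed as a braced list
  torch::Tensor c = torch::einsum("ij,jk->ik", {a, b});

  std::cout << c.sizes() << std::endl;  // prints [3, 5]
}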

In addition, Libtorch supports the GPU and is mainly used for model inference. Its tensor operations may also have a speed advantage over other C++ tensor libraries, although the actual difference needs to be measured and compared. Of course, in C++ the tensor library is rarely the bottleneck; even the CPU code itself is usually fast enough.
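As a rough sketch of the GPU side (this assumes a CUDA build of Libtorch; the shapes are arbitrary), moving work onto the GPU follows the same pattern as in Pytorch:

#include <torch/torch.h>
#include <iostream>

int main() {
  // Use the GPU when a CUDA device is available, otherwise fall back to the CPU
  torch::Device device = torch::cuda::is_available() ? torch::Device(torch::kCUDA)
                                                     : torch::Device(torch::kCPU);

  // Tensors are moved between backends with .to(), just like in Pytorch
  torch::Tensor x = torch::rand({1000, 1000}).to(device);
  torch::Tensor y = torch::matmul(x, x);

  std::cout << "computed on: " << y.device() << std::endl;
}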

Another advantage of Libtorch is that it is easy to compile. As long as you have Pytorch installed, Libtorch can be used directly, without any complicated installation or configuration, and a simple sample program can be up and running within a minute.

In summary, Libtorch has the following attractive features:

  • A C++ tensor library as powerful as Numpy and Pytorch, pleasant to write, with GPU support
  • Neural networks can be trained (a small sketch follows this list)
  • Trained models can be run for inference, covering model deployment scenarios in a C++ environment
  • Easy to compile
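To make the training point concrete, here is a minimal, self-contained sketch of defining and training a small network with the torch::nn and torch::optim APIs; the network shape, learning rate, and random stand-in data are made up for illustration:

#include <torch/torch.h>

int main() {
  // A tiny fully connected network built from the torch::nn module API
  torch::nn::Sequential net(
      torch::nn::Linear(4, 16),
      torch::nn::ReLU(),
      torch::nn::Linear(16, 1));

  torch::optim::SGD optimizer(net->parameters(), /*lr=*/0.01);

  // Random stand-in data; a real application would load its own dataset
  torch::Tensor x = torch::rand({32, 4});
  torch::Tensor y = torch::rand({32, 1});

  for (int epoch = 0; epoch < 10; ++epoch) {
    optimizer.zero_grad();
    torch::Tensor loss = torch::mse_loss(net->forward(x), y);
    loss.backward();
    optimizer.step();
  }
}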

Libtorch installation

You can download the corresponding version of the Libtorch archive from the Pytorch official website. After downloading, unzip it and configure the environment variables yourself.

Compile Libtorch program using CMake

I wrote a small program and saved it in the example.cpp file:

// Include the Torch header; the Tensor class is defined in it
#include <torch/torch.h>
#include <iostream>

int main() {
  // Build a 1-D tensor with arange, then reshape it into a 5x5 matrix.
  // The braces are needed because the shape is passed as a brace-enclosed list rather than a Python tuple.
  torch::Tensor foo = torch::arange(25).reshape({5, 5});

  // Compute the trace of the matrix
  torch::Tensor bar = torch::einsum("ii", foo);

  // Print the matrix and its trace
  std::cout << "==> matrix is:\n " << foo << std::endl;
  std::cout << "==> trace of it is:\n " << bar << std::endl;
}

Then create a build folder and a CMakeLists.txt file in the same directory, and write the following content into the file:

# Minimum required CMake version
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)

# Project name; the build directory will contain Project_Name.sln --> OXI_Model_Project.sln
project(OXI_Model_Project)

# Path to Libtorch, down to the Torch directory
set(Torch_DIR D:/LibTorch/libtorch/share/cmake/Torch)

# Find and load the Torch package
find_package(Torch REQUIRED)

# Append TORCH_CXX_FLAGS to CMAKE_CXX_FLAGS so the correct compile options and flags are used
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

# Build the executable oxi_model from example.cpp; the executable name is up to you
add_executable(oxi_model example.cpp)

# Link TORCH_LIBRARIES into oxi_model so the Torch libraries are linked correctly
target_link_libraries(oxi_model "${TORCH_LIBRARIES}")

# Set the C++ standard of the target oxi_model to C++14; Libtorch is implemented in C++14
set_property(TARGET oxi_model PROPERTY CXX_STANDARD 14)

Then open a Developer Command Prompt for VS 2022 window, cd into the build directory, and enter the following commands in sequence:

cmake -DCMAKE_PREFIX_PATH=D:\LibTorch\libtorch\share\cmake\Torch ..  // configure the build
msbuild OXI_Model_Project.sln /p:Configuration=Release /m  // build the executable; it is placed in the Release directory

Finally, double-click the executable in the Release directory, or cd into the Release directory in the command line window and enter oxi_model to run oxi_model.exe. The output is as follows:

==> matrix is:
   0   1   2   3   4
   5   6   7   8   9
  10  11  12  13  14
  15  16  17  18  19
  20  21  22  23  24
[ CPULongType{5,5} ]
==> trace of it is:
 60
[ CPULongType{} ]

You can see that the 5x5 tensor object can be printed directly with std::cout, and that the backend of the data (here CPU), the data type (Long), and the corresponding dimensions are displayed after the data.
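If you need these attributes in code rather than in printed form, they can also be queried directly on the tensor; a minimal sketch:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor foo = torch::arange(25).reshape({5, 5});

  // The same metadata that appears after the printed data can be read programmatically
  std::cout << foo.device() << std::endl;  // backend, e.g. cpu
  std::cout << foo.dtype()  << std::endl;  // element type, e.g. long (int64)
  std::cout << foo.sizes()  << std::endl;  // dimensions, e.g. [5, 5]
}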

Configure Libtorch in VS

Right-click on the project, Properties -> Configuration Properties -> VC++ Directories, and add the following to the Include Directories:

D:\LibTorch\libtorch\include
D:\LibTorch\libtorch\include\torch\csrc\api\include

Add the following to the Library Directories:

D:\LibTorch\libtorch\lib

Right-click on the project, Properties -> Linker -> Input, and add the following to Additional Dependencies:

asmjit.lib
c10.lib
clog.lib
cpuinfo.lib
dnnl.lib
fbgemm.lib
fbjni.lib
kineto.lib
libprotobuf-lite.lib
libprotobuf.lib
libprotoc.lib
pthreadpool.lib
pytorch_jni.lib
torch.lib
torch_cpu.lib
XNNPACK.lib

If a problem such as LINK : fatal error LNK1104: cannot open file "c10_cuda.lib" occurs, the corresponding file can be deleted from the additional dependencies.

If the error "The code execution cannot proceed because c10.dll was not found" appears, right-click on the project, Properties -> Debugging, and add PATH=D:\LibTorch\libtorch\lib;%PATH% to the Environment field.
