1. libtorch C++(Tensor Indexing API & Factory Functions)

This post covers how to create tensors through the PyTorch C++ API, with a walkthrough of the Factory Functions.

If you repost this, please include the link: https://blog.csdn.net/u013271656/article/details/106791654


 

1. Factory Functions:

PyTorch provides many Factory Functions (in both Python and C++). They initialize a Tensor in different ways, but they all follow the same template:

torch::<function-name>(<function-specific-options>, <sizes>, <tensor-options>)
1. <function-name> : the name of the function,

2. <function-specific-options> : required or optional parameters specific to that function,

3. <sizes> : the shape of the resulting tensor,

4. <tensor-options> : an instance of TensorOptions that configures the data type,
   device, layout and other properties of the resulting tensor.

 

2. Picking a Factory Function

All of the following functions use the template above. Here is the full list before we go through them one by one:

  • arange: Returns a tensor with a sequence of integers,

  • empty: Returns a tensor with uninitialized values,

  • eye: Returns an identity matrix,

  • full: Returns a tensor filled with a single value,

  • linspace: Returns a tensor with values linearly spaced in some interval,

  • logspace: Returns a tensor with values logarithmically spaced in some interval,

  • ones: Returns a tensor filled with all ones,

  • rand: Returns a tensor filled with values drawn from a uniform distribution on [0, 1).

  • randint: Returns a tensor with integers randomly drawn from an interval,

  • randn: Returns a tensor filled with values drawn from a unit normal distribution,

  • randperm: Returns a tensor filled with a random permutation of integers in some interval,

  • zeros: Returns a tensor filled with all zeros.


 

2.1 Specifying a Size

// Create a Tensor with 5 elements, all set to 1:
torch::Tensor tensor_a = torch::ones(5);

// Create a two-dimensional Tensor with all elements set to 1:
torch::Tensor tensor_b = torch::ones({2, 3});

// print
std::cout << "tensor_a: \n" << tensor_a << std::endl;
std::cout << "tensor_b: \n" << tensor_b << std::endl;

2.2 Passing Function-Specific Parameters

The next function worth a closer look is randint(), which takes an upper bound for the integers it generates, plus an optional lower bound (defaulting to zero). Here we create a 5x5 matrix of integers between 0 and 10:

// Here we set the upper bound to 10:
torch::Tensor tensor_a = torch::randint(/*high=*/10, {5, 5});
std::cout << "tensor_a: \n" << tensor_a << std::endl;

// Here we raise the lower bound to 3:
torch::Tensor tensor_b = torch::randint(/*low=*/3, /*high=*/10, {5, 5});
std::cout << "tensor_b: \n" << tensor_b << std::endl;

 

2.3 Configuring Properties of the Tensor

The previous section covered function-specific parameters. Such parameters can change the Tensor shape, and sometimes its values, but they never change its data type (e.g. float32 or int64). That is the job of TensorOptions:

TensorOptions is a class that encapsulates the construction axes of a Tensor. By construction axis we mean a particular property of a Tensor that can be configured before its construction (and sometimes changed afterwards). These construction axes are:

  • dtype: the data type of the elements stored in the tensor,
  • layout: either strided (dense) or sparse,
  • device: the compute device on which the tensor is stored (such as a CPU or CUDA GPU),
  • requires_grad: a boolean that enables or disables gradient recording for the tensor.

The allowed values for these axes at the moment are:

  • dtype: kUInt8, kInt8, kInt16, kInt32, kInt64, kFloat32, kFloat64,
  • layout: kStrided, kSparse,
  • device: kCPU, kCUDA (which accepts an optional device index),
  • requires_grad: true or false.

Do all axes have to be specified every time? Fortunately, no: every axis has a default value. These defaults are:

  • kFloat32 for the dtype,

  • kStrided for the layout,

  • kCPU for the device,

  • false for requires_grad.

// 1. An example of creating a TensorOptions object that represents a 32-bit float,
//    strided tensor that requires a gradient and lives on CUDA device 1:
auto options =
  torch::TensorOptions()
    .dtype(torch::kFloat32)
    .layout(torch::kStrided)
    .device(torch::kCUDA, 1)
    .requires_grad(true);


// 2. The TensorOptions object is built step by step with "builder"-style methods,
//    then passed to a factory function:
torch::Tensor tensor = torch::full({3, 4}, /*value=*/123, options);

assert(tensor.dtype() == torch::kFloat32);
assert(tensor.layout() == torch::kStrided);
assert(tensor.device().type() == torch::kCUDA); // or device().is_cuda()
assert(tensor.device().index() == 1);
assert(tensor.requires_grad());


// 3. The same TensorOptions object as before, but with the dtype and layout left at their defaults:
auto options = torch::TensorOptions().device(torch::kCUDA, 1).requires_grad(true);


// 4. In fact, we can omit all axes entirely to get a fully defaulted TensorOptions object:
auto options = torch::TensorOptions(); // or `torch::TensorOptions options;`


// 5. Of course, the TensorOptions argument can also be omitted altogether:
torch::Tensor tensor = torch::randn({3, 4});
torch::Tensor range = torch::arange(5, 10);

With the API presented so far, you may have noticed that torch::TensorOptions() is quite cumbersome to write.

For each construction axis there is a free function in the torch:: namespace to which you can pass a value for that axis. Each function returns a TensorOptions object preconfigured with that axis, while still allowing further modification via the builder-style methods shown above. In short, there is a shorthand. For example:

torch::ones(10, torch::TensorOptions().dtype(torch::kFloat32))

// is equivalent to
torch::ones(10, torch::dtype(torch::kFloat32))

// and instead of
torch::ones(10, torch::TensorOptions().dtype(torch::kFloat32).layout(torch::kStrided))

// we can just write
torch::ones(10, torch::dtype(torch::kFloat32).layout(torch::kStrided))

This saves a lot of typing. In practice you almost never have to write torch::TensorOptions at all; use torch::dtype(), torch::device(), torch::layout() and torch::requires_grad() instead.

In short, by combining the TensorOptions defaults with the free-function shorthand, tensor creation in C++ becomes just as convenient as in Python. Compare this call in Python:

# python code
torch.randn(3, 4, dtype=torch.float32, device=torch.device('cuda', 1), requires_grad=True)

// C++ code
// The equivalent call in C++:
torch::randn({3, 4}, torch::dtype(torch::kFloat32).device(torch::kCUDA, 1).requires_grad(true))

 

2.4 Conversion

Just as we can use TensorOptions to configure how a new tensor should be created, we can also use TensorOptions to convert a tensor from one set of properties to a new set. Such a conversion usually creates a new tensor. For example, given a source_tensor created with:

// Note: randn only supports floating-point dtypes, so we draw random
// integers with randint instead:
torch::Tensor source_tensor = torch::randint(/*high=*/10, {2, 3}, torch::kInt64);

// We can convert it from int64 to float32:

torch::Tensor float_tensor = source_tensor.to(torch::kFloat32);

Note: the result of the conversion, float_tensor, is a new tensor pointing to new memory, unrelated to source_tensor.

If more than one CUDA device is available, the code above copies the tensor to the default CUDA device, which you can configure with a torch::DeviceGuard. If no DeviceGuard is in place, this will be GPU 0. If you would like to specify a different GPU index, pass it to the Device constructor:

torch::Tensor gpu_two_tensor = float_tensor.to(torch::Device(torch::kCUDA, 1));

 

For CPU-to-GPU copies and the reverse, the memory copy can be made asynchronous by passing the non_blocking flag as the last argument:

torch::Tensor async_cpu_tensor = gpu_tensor.to(torch::kCPU, /*non_blocking=*/true);

These are notes taken while learning, kept here for easy reference later. Source: Libtorch Docs: https://pytorch.org/cppdocs/notes/tensor_creation.html
