Conversion of data types in NumPy and PyTorch

Data Type Conversion

  1. NumPy: For details on the data types in NumPy, see: https://numpy.org/doc/stable/reference/arrays.dtypes.html.
     You can convert between them directly with astype().

  2. Torch: For the data types in torch, see:
     https://pytorch.org/docs/stable/tensors.html
     For a Tensor, you can simply append .long(), .float(), .double(), .int(), etc. You can also use .to(), whose argument can be a dtype (the target data type) or a device such as 'cpu' or 'cuda'.

The default floating-point data type in numpy is float64, while the default data type in torch is float32.
Example 1:

import numpy as np
arr = np.random.randn(2,2)
arr_float32 = arr.astype(np.float32)
print(arr.dtype, arr_float32.dtype)
#>>> float64 float32

import torch
tensor = torch.randn(2,2)
tensor_long = tensor.long()
tensor_double = tensor.to(torch.double)
print(tensor.dtype, tensor_long.dtype, tensor_double.dtype)
#>>> torch.float32 torch.int64 torch.float64

Conversion of storage location

  • CPU → GPU: tensor.cuda()
  • GPU → CPU: tensor.cpu()

A device index can be passed as an argument in the parentheses, e.g. tensor.cuda(0). You can also use .to(), typically as follows:

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor.to(DEVICE)
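
As a minimal sketch (assuming a CUDA-capable GPU is available for the .cuda() branch; the variable names are illustrative), a tensor can be moved with an explicit device index and back again:

import torch

x = torch.randn(2, 2)           # created on the CPU by default
if torch.cuda.is_available():
    x_gpu = x.cuda(0)           # move to GPU 0 (the index is optional)
    print(x_gpu.device)         # >>> cuda:0
    x_cpu = x_gpu.cpu()         # move back to the CPU
    print(x_cpu.device)         # >>> cpu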

Data conversion with numpy and torch

  • torch → numpy: tensor.numpy()

  • numpy → torch: torch.Tensor(arr), torch.tensor(arr), or torch.from_numpy(arr)

    NOTE: Data on the GPU cannot be converted to numpy directly; it must first be moved to the CPU (see the sketch below). torch.Tensor(arr) always produces torch.float32 (single precision), which may change the original data type; this type is the most common in neural-network computation. The other two conversions keep the original data type.
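
As a minimal sketch of the GPU case in the note (assuming a CUDA device is available and torch is already imported as above), calling .numpy() on a CUDA tensor raises an error, so move it to the CPU first:

if torch.cuda.is_available():
    t_gpu = torch.randn(2, 2).cuda()
    # t_gpu.numpy() would raise an error here, since the data lives on the GPU
    arr_cpu = t_gpu.cpu().numpy()   # move to the CPU first, then convert
    print(arr_cpu.dtype)            # >>> float32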

Example 2:

arr = tensor.numpy()                     # reuses `tensor` from Example 1 (torch.float32)
arr_double = arr.astype(np.float64)
tensor_T = torch.Tensor(arr_double)      # always float32
tensor_t = torch.tensor(arr_double)      # keeps float64
tensor_n = torch.from_numpy(arr_double)  # keeps float64
print(arr.dtype, arr_double.dtype, tensor_T.dtype, tensor_t.dtype, tensor_n.dtype)
#>>> float32 float64 torch.float32 torch.float64 torch.float64

Requirements of loss functions in torch for different data types

For the usage of loss functions, see: https://pytorch.org/docs/stable/nn.html#loss-functions; only the data types are discussed here.

  • torch.nn.CrossEntropyLoss: the data used in the computation is all torch.float32, but the target must be of type torch.long (long integer), because it holds class indices that are internally treated as a one-hot vector;
  • torch.nn.BCELoss: used for binary classification, generally together with nn.Sigmoid(); here the target uses the default type, torch.float32.

Example (from the official documentation):

>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()

>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
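
Tying this back to the numpy → torch conversions above, here is a minimal sketch (the label values are made up for illustration) of turning a numpy array of class indices into the torch.long target that CrossEntropyLoss expects:

import numpy as np
import torch
import torch.nn as nn

labels = np.array([1, 0, 4])              # hypothetical class indices stored in numpy
target = torch.from_numpy(labels).long()  # ensure the torch.long dtype required by CrossEntropyLoss
input = torch.randn(3, 5, requires_grad=True)
output = nn.CrossEntropyLoss()(input, target)
output.backward()
print(target.dtype)                       # >>> torch.int64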


Origin blog.csdn.net/Huang_Fj/article/details/120798943