[Type conversion from matrix to image 1]: tensor types, data types, and conversion between data types in PyTorch

One, tensor types and data types in PyTorch

torch.Tensor is a multi-dimensional matrix whose elements all share a single data type (unlike a Python list, which can store elements of different types in the same sequence).

1.0 Default type and data type

torch.Tensor is an alias for the default tensor type (torch.FloatTensor) and is placed on the CPU by default. Declaring a tensor with torch.Tensor produces data of type torch.FloatTensor with data type torch.float32.

torch.tensor() is a factory function that infers its type from the data and also places the result on the CPU by default. Called on a list of Python integers such as [1, 2], it produces data of type torch.LongTensor with data type torch.int64.

Note: torch.cuda.Tensor and torch.cuda.tensor do not exist.

import torch
print("Test start")

print("== Default Tensor data type ==")
tensor = torch.Tensor([1, 2])
print("type(tensor):", type(tensor))
print("tensor.type():", tensor.type())
print("tensor.dtype:", tensor.dtype)
print("tensor.is_cuda:", tensor.is_cuda)


print("== Default tensor data type ==")
tensor = torch.tensor([1, 2])
print("type(tensor):", type(tensor))
print("tensor.type():", tensor.type())
print("tensor.dtype:", tensor.dtype)
print("tensor.is_cuda:", tensor.is_cuda)

(Screenshot of the printed output omitted.)

1.1 Overview of tensor types and data types

PyTorch defines eight CPU tensor types and eight corresponding GPU tensor types (a CPU tensor type and its GPU counterpart store the same data type). The specific correspondence is shown in the following table:

Data type                  dtype           CPU tensor          GPU tensor
32-bit floating point      torch.float32   torch.FloatTensor   torch.cuda.FloatTensor
64-bit floating point      torch.float64   torch.DoubleTensor  torch.cuda.DoubleTensor
16-bit floating point      torch.float16   torch.HalfTensor    torch.cuda.HalfTensor
8-bit integer (unsigned)   torch.uint8     torch.ByteTensor    torch.cuda.ByteTensor
8-bit integer (signed)     torch.int8      torch.CharTensor    torch.cuda.CharTensor
16-bit integer (signed)    torch.int16     torch.ShortTensor   torch.cuda.ShortTensor
32-bit integer (signed)    torch.int32     torch.IntTensor     torch.cuda.IntTensor
64-bit integer (signed)    torch.int64     torch.LongTensor    torch.cuda.LongTensor
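As an illustration of the table above, these dtypes can also be requested explicitly through the dtype argument of factory functions such as torch.tensor() and torch.zeros(); a minimal sketch (the last lines only run when a CUDA-capable GPU is available):

import torch

a = torch.tensor([1, 2], dtype=torch.float16)   # torch.HalfTensor, dtype torch.float16
b = torch.zeros(3, dtype=torch.int8)            # torch.CharTensor, dtype torch.int8
print(a.type(), a.dtype)
print(b.type(), b.dtype)

if torch.cuda.is_available():
    # Only runs when a CUDA-capable GPU is available.
    c = torch.ones(2, dtype=torch.int64, device="cuda")  # torch.cuda.LongTensor
    print(c.type(), c.dtype, c.is_cuda)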

1.2 Display of tensor type and data type

Calling the built-in Python type() function on any of these 16 tensor variants returns <class 'torch.Tensor'>.

To obtain the specific tensor type, use tensor_name.type().

To obtain the corresponding storage data type, use tensor_name.dtype.

Example: see 1.0

1.3 Checking whether a tensor is a CUDA tensor

To check whether a specific tensor is a CUDA tensor, use tensor_name.is_cuda.

Example: see 1.0

1.4 Example: initializing all 16 tensor types and displaying their data types

import torch
print("测试开始")

print("===torch.FloatTensor===")
tensor = torch.FloatTensor([1,2])
print("type(FloatTensor):",type(tensor))
print("FloatTensor.type():",tensor.type())
print("FloatTensor.dtype:",tensor.dtype)
print("FloatTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.FloatTensor===")
tensor = torch.cuda.FloatTensor([1,2])
print("type(cuda_FloatTensor):",type(tensor))
print("cuda_FloatTensor.type():",tensor.type())
print("cuda_FloatTensor.dtype:",tensor.dtype)
print("cuda_FloatTensor.is_cuda:",tensor.is_cuda)

print("===torch.DoubleTensor===")
tensor = torch.DoubleTensor([1,2])
print("type(DoubleTensor):",type(tensor))
print("DoubleTensor.type():",tensor.type())
print("DoubleTensor.dtype:",tensor.dtype)
print("DoubleTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.DoubleTensor===")
tensor = torch.cuda.DoubleTensor([1,2])
print("type(cuda_DoubleTensor):",type(tensor))
print("cuda_DoubleTensor.type():",tensor.type())
print("cuda_DoubleTensor.dtype:",tensor.dtype)
print("cuda_DoubleTensor.is_cuda:",tensor.is_cuda)

print("===torch.HalfTensor===")
tensor = torch.HalfTensor([1,2])
print("type(cuda_DoubleTensor):",type(tensor))
print("HalfTensor.type():",tensor.type())
print("HalfTensor.dtype:",tensor.dtype)
print("HalfTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.HalfTensor===")
tensor = torch.cuda.HalfTensor([1,2])
print("type(cuda_HalfTensor):",type(tensor))
print("cuda_HalfTensor.type():",tensor.type())
print("cuda_HalfTensor.dtype:",tensor.dtype)
print("cuda_HalfTensor.is_cuda:",tensor.is_cuda)

print("===torch.ByteTensor===")
tensor = torch.ByteTensor([1,2])
print("type(ByteTensor):",type(tensor))
print("ByteTensor.type():",tensor.type())
print("ByteTensor.dtype:",tensor.dtype)
print("ByteTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.ByteTensor===")
tensor = torch.cuda.ByteTensor([1,2])
print("type(cuda_ByteTensor):",type(tensor))
print("cuda_ByteTensor.type():",tensor.type())
print("cuda_ByteTensor.dtype:",tensor.dtype)
print("cuda_ByteTensor.is_cuda:",tensor.is_cuda)

print("===torch.CharTensor===")
tensor = torch.CharTensor([1,2])
print("type(CharTensor):",type(tensor))
print("CharTensor.type():",tensor.type())
print("CharTensor.dtype:",tensor.dtype)
print("CharTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.CharTensor===")
tensor = torch.cuda.CharTensor([1,2])
print("type(cuda_CharTensor):",type(tensor))
print("cuda_CharTensor.type():",tensor.type())
print("cuda_CharTensor.dtype:",tensor.dtype)
print("cuda_CharTensor.is_cuda:",tensor.is_cuda)

print("===torch.ShortTensor===")
tensor = torch.ShortTensor([1,2])
print("type(ShortTensor):",type(tensor))
print("ShortTensor.type():",tensor.type())
print("ShortTensor.dtype:",tensor.dtype)
print("ShortTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.ShortTensor===")
tensor = torch.cuda.ShortTensor([1,2])
print("type(cuda_ShortTensor):",type(tensor))
print("cuda_ShortTensor.type():",tensor.type())
print("cuda_ShortTensor.dtype:",tensor.dtype)
print("cuda_ShortTensor.is_cuda:",tensor.is_cuda)

print("===torch.IntTensor===")
tensor = torch.IntTensor([1,2])
print("type(IntTensor):",type(tensor))
print("IntTensor.type():",tensor.type())
print("IntTensor.dtype:",tensor.dtype)
print("IntTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.IntTensor===")
tensor = torch.cuda.IntTensor([1,2])
print("type(cuda_IntTensor):",type(tensor))
print("cuda_IntTensor.type():",tensor.type())
print("cuda_IntTensor.dtype:",tensor.dtype)
print("cuda_IntTensor.is_cuda:",tensor.is_cuda)

print("===torch.LongTensor===")
tensor = torch.LongTensor([1,2])
print("type(LongTensor):",type(tensor))
print("LongTensor.type():",tensor.type())
print("LongTensor.dtype:",tensor.dtype)
print("LongTensor.is_cuda:",tensor.is_cuda)
print("===torch.cuda.LongTensor===")
tensor = torch.cuda.LongTensor([1,2])
print("type(cuda_LongTensor):",type(tensor))
print("cuda_LongTensor.type():",tensor.type())
print("cuda_LongTensor.dtype:",tensor.dtype)
print("cuda_LongTensor.is_cuda:",tensor.is_cuda)

(Screenshot of the printed output omitted.)

Two, conversion between tensor data types

2.0 The need for data conversion

Tensors can only be operated on together when they match: they must reside on the same device (both CPU or both GPU) and use the same storage data type.
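A minimal sketch of what can go wrong without conversion; the exact error messages vary between PyTorch versions, many elementwise operations now promote dtypes automatically, and the device-mismatch part assumes a CUDA-capable GPU:

import torch

f32 = torch.FloatTensor([[1.0, 2.0]])     # torch.float32, CPU
f64 = torch.DoubleTensor([[1.0], [2.0]])  # torch.float64, CPU

try:
    torch.mm(f32, f64)                    # matrix multiply does not promote dtypes
except RuntimeError as e:
    print("dtype mismatch:", e)
print(torch.mm(f32, f64.float()))         # works after converting f64 to float32

if torch.cuda.is_available():
    gpu = f32.cuda()
    try:
        f32 + gpu                         # CPU tensor + GPU tensor is not allowed
    except RuntimeError as e:
        print("device mismatch:", e)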

2.1 Conversion of CPU tensor type and GPU tensor type

cpu_tensor.cuda()
gpu_tensor.cpu()

For example:

import torch
print("Test start")

print("== Default data type ==")
tensor = torch.FloatTensor([1, 2])
print("Is the freshly created CPU tensor on the GPU:", tensor.is_cuda)
tensor.cuda()
print("Is the CPU tensor on the GPU after .cuda() without reassignment:", tensor.is_cuda)
tensor = tensor.cuda()
print("Is the CPU tensor on the GPU after .cuda() with reassignment:", tensor.is_cuda)

print("=" * 10)
tensor = torch.cuda.FloatTensor([1, 2])
print("Is the freshly created GPU tensor on the GPU:", tensor.is_cuda)
tensor.cpu()
print("Is the GPU tensor on the GPU after .cpu() without reassignment:", tensor.is_cuda)
tensor = tensor.cpu()
print("Is the GPU tensor on the GPU after .cpu() with reassignment:", tensor.is_cuda)

(Screenshot of the printed output omitted.)
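As the output shows, .cuda() and .cpu() return new tensors rather than modifying the original in place, which is why the result must be reassigned. A minimal sketch of the equivalent, more general tensor.to() call (the GPU branch assumes a CUDA-capable device):

import torch

tensor = torch.FloatTensor([1, 2])        # starts on the CPU
if torch.cuda.is_available():
    tensor = tensor.to("cuda")            # same effect as tensor.cuda()
    print(tensor.is_cuda)                 # True
tensor = tensor.to("cpu")                 # same effect as tensor.cpu()
print(tensor.is_cuda)                     # False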

2.2 Other data type conversions that do not move data between CPU and GPU

2.2.1 tensor.int(): cast the tensor to an int type

All eight CPU types are converted to torch.IntTensor, with storage data type torch.int32.

All eight GPU types are converted to torch.cuda.IntTensor, with storage data type torch.int32.

2.2.2 tensor.long(): cast the tensor to a long type

All eight CPU types are converted to torch.LongTensor, with storage data type torch.int64.

All eight GPU types are converted to torch.cuda.LongTensor, with storage data type torch.int64.

2.2.3 tensor.half(): cast the tensor to a half-precision floating-point type

All eight CPU types are converted to torch.HalfTensor, with storage data type torch.float16.

All eight GPU types are converted to torch.cuda.HalfTensor, with storage data type torch.float16.

2.2.4 tensor.double(): cast the tensor to a double type

All eight CPU types are converted to torch.DoubleTensor, with storage data type torch.float64.

All eight GPU types are converted to torch.cuda.DoubleTensor, with storage data type torch.float64.

2.2.5 tensor.float(): cast the tensor to a float type

All eight CPU types are converted to torch.FloatTensor, with storage data type torch.float32.

All eight GPU types are converted to torch.cuda.FloatTensor, with storage data type torch.float32.

2.2.6 tensor.char(): cast the tensor to a char type

All eight CPU types are converted to torch.CharTensor, with storage data type torch.int8.

All eight GPU types are converted to torch.cuda.CharTensor, with storage data type torch.int8.

2.2.7 tensor.byte(): cast the tensor to a byte type

All eight CPU types are converted to torch.ByteTensor, with storage data type torch.uint8.

All eight GPU types are converted to torch.cuda.ByteTensor, with storage data type torch.uint8.

2.2.8 tensor.short(): cast the tensor to a short type

All eight CPU types are converted to torch.ShortTensor, with storage data type torch.int16.

All eight GPU types are converted to torch.cuda.ShortTensor, with storage data type torch.int16.
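A minimal sketch exercising several of the shorthand casts above on a CPU tensor; the same methods applied to a GPU tensor return the corresponding torch.cuda.* types:

import torch

t = torch.FloatTensor([1.7, 2.3])           # torch.float32
print(t.int().type(), t.int().dtype)        # torch.IntTensor    torch.int32 (values truncated to 1, 2)
print(t.long().type(), t.long().dtype)      # torch.LongTensor   torch.int64
print(t.half().type(), t.half().dtype)      # torch.HalfTensor   torch.float16
print(t.double().type(), t.double().dtype)  # torch.DoubleTensor torch.float64
print(t.char().type(), t.char().dtype)      # torch.CharTensor   torch.int8
print(t.byte().type(), t.byte().dtype)      # torch.ByteTensor   torch.uint8
print(t.short().type(), t.short().dtype)    # torch.ShortTensor  torch.int16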

2.3 Using the tensor.type() function

This function converts a tensor to another tensor type, and it can switch between CPU and GPU types in the same call, for example torch.IntTensor --> torch.cuda.FloatTensor.

The syntax is as follows:

tensor_new = tensor.type(dtype=None, non_blocking=False)

(In older PyTorch versions the second argument was named async; it was renamed because async became a reserved keyword in Python.)

For example:

import torch
print("Test start")
tensor = torch.FloatTensor([1, 2])
print("before type function:", tensor.type())
tensor = tensor.type(torch.int32)
print("after type function:", tensor.type())

(Screenshot of the printed output omitted.)
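tensor.type() also accepts a tensor type (or its string name) as the target, which is how a single call can cross the CPU/GPU boundary; a minimal sketch, assuming a CUDA-capable GPU:

import torch

if torch.cuda.is_available():
    tensor = torch.IntTensor([1, 2])                  # torch.int32 on the CPU
    gpu_float = tensor.type(torch.cuda.FloatTensor)   # -> torch.cuda.FloatTensor
    print(gpu_float.type(), gpu_float.dtype, gpu_float.is_cuda)
    # The string form of the type name works as well:
    gpu_float2 = tensor.type("torch.cuda.FloatTensor")
    print(gpu_float2.type())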

2.4 Using type_as(tensor) to convert a tensor to the type of a given tensor

This function converts a tensor to the type of another tensor, and it can switch between CPU and GPU types in the same call, for example torch.IntTensor --> torch.cuda.FloatTensor.

If the tensor is already of the specified type, no conversion will be performed

The syntax is as follows:

tensor_new = tensor1.type_as(tensor2)

For example:

import torch
print("Test start")
tensor1 = torch.FloatTensor([1, 2])
tensor2 = torch.IntTensor([1, 2])
print("before type_as function:", tensor1.type())
tensor1 = tensor1.type_as(tensor2)
print("after type_as function:", tensor1.type())

Operation result: (screenshot of the printed output omitted.)
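Because type_as() matches the full tensor type of its argument, it also moves data across devices when the reference tensor lives on the GPU; a minimal sketch, assuming a CUDA-capable GPU:

import torch

if torch.cuda.is_available():
    cpu_int = torch.IntTensor([1, 2])
    gpu_float = torch.cuda.FloatTensor([3.0, 4.0])
    converted = cpu_int.type_as(gpu_float)       # -> torch.cuda.FloatTensor
    print(converted.type(), converted.is_cuda)   # torch.cuda.FloatTensor True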


Origin blog.csdn.net/qq_41554005/article/details/114964670