Deep learning - comparing torch.Tensor and torch.tensor for creating tensors

torch.Tensor generates a tensor with the default data type, FloatTensor, regardless of the input, whereas torch.tensor infers the data type from the input data and generates a tensor of the corresponding type:

import torch
t1 = torch.tensor([1,2])
t2 = torch.Tensor([1,2])
print(t1)
print(type(t1))    # <class 'torch.Tensor'>
print(t1.type())   # torch.LongTensor
print("\n")
print(t2)
print(type(t2))    # <class 'torch.Tensor'>
print(t2.type())   # torch.FloatTensor

Output:
tensor([1, 2])
<class 'torch.Tensor'>
torch.LongTensor

tensor([1., 2.])
<class 'torch.Tensor'>
torch.FloatTensor


import torch
t1 = torch.tensor([1.,2.])
t2 = torch.Tensor([1.,2.])
print(t1)
print(type(t1))    # <class 'torch.Tensor'>
print(t1.type())   # torch.FloatTensor
print("\n")
print(t2)
print(type(t2))    # <class 'torch.Tensor'>
print(t2.type())   # torch.FloatTensor

Output:
tensor([1., 2.])
<class 'torch.Tensor'>
torch.FloatTensor

tensor([1., 2.])
<class 'torch.Tensor'>
torch.FloatTensor
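
As an aside, the "default" floating-point type that torch.Tensor falls back to is PyTorch's global default dtype (torch.float32 unless it has been changed), which can be checked with torch.get_default_dtype(). A minimal sketch:

import torch

# The legacy constructor torch.Tensor always uses the global default dtype,
# even for integer input data, while torch.tensor infers the dtype.
print(torch.get_default_dtype())     # torch.float32
print(torch.Tensor([1, 2]).dtype)    # torch.float32
print(torch.tensor([1, 2]).dtype)    # torch.int64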


Of course, torch.tensor can also set the data type of the generated tensor explicitly via its dtype parameter (not shown above; a brief sketch follows the device example below). One small discovery of mine: torch.Tensor cannot be given a device when creating a tensor, but torch.tensor can:

import torch
device = torch.device('cuda')
t1 = torch.tensor([1., 2.], device=device)  # runs normally
t2 = torch.Tensor([1., 2.], device=device)  # throws an error
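
For the dtype point mentioned above, here is a minimal sketch of overriding the inferred type with torch.tensor:

import torch

# torch.tensor accepts a dtype argument, so the type inferred from the data can be overridden.
t = torch.tensor([1, 2], dtype=torch.float64)
print(t)          # tensor([1., 2.], dtype=torch.float64)
print(t.type())   # torch.DoubleTensor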

(PS: I will keep updating this post as I find new differences in later use.)


Original post: blog.csdn.net/qq_50571974/article/details/124529282