This error usually occurs when nn.functional.softmax() is called with an input tensor of an integer dtype (e.g. Long / int64).
The softmax() function requires a floating-point input, such as single-precision (Float) or double-precision (Double).
To resolve this error, convert the input to a floating-point dtype first, for example with the tensor.float() or tensor.double() methods (or equivalently tensor.to(torch.float32)). For example:
import torch
import torch.nn.functional as F
# Create an integer (Long) tensor
x = torch.LongTensor([[1, 2, 3], [4, 5, 6]])
# Convert the dtype to float
x = x.float()
# Apply softmax along the last dimension
y = F.softmax(x, dim=1)
print(y)
The output is:
tensor([[0.0900, 0.2447, 0.6652],
[0.0900, 0.2447, 0.6652]])
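As an alternative to calling .float() beforehand, F.softmax also accepts a dtype argument that casts the input before the operation is performed. A minimal sketch of this variant:

```python
import torch
import torch.nn.functional as F

# Same integer (Long) tensor as above
x = torch.LongTensor([[1, 2, 3], [4, 5, 6]])

# Cast during the call instead of a separate .float() step
y = F.softmax(x, dim=1, dtype=torch.float32)

print(y.dtype)   # torch.float32
print(y)
```

Each row of the result sums to 1, and the output matches the two-step version; which form to use is mostly a matter of style.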