PyTorch tensors and their basic operations

This chapter is a summary of PyTorch's routine tensor operations. The goal is to leave you with an impression of what is available, so that you know such a function exists and can read its details when you need it. Beyond that, these operations are best learned by using them in practice.

Tensor attributes:
Tensors have three attribute classes: torch.dtype, torch.device, and torch.layout.

torch.dtype is a class representing the data type of a torch.Tensor. PyTorch has eight different data types; the following table is the complete dtype list.

Data type                | dtype                         | Tensor types
32-bit floating point    | torch.float32 or torch.float  | torch.FloatTensor / torch.cuda.FloatTensor
64-bit floating point    | torch.float64 or torch.double | torch.DoubleTensor / torch.cuda.DoubleTensor
16-bit floating point    | torch.float16 or torch.half   | torch.HalfTensor / torch.cuda.HalfTensor
8-bit integer (unsigned) | torch.uint8                   | torch.ByteTensor / torch.cuda.ByteTensor
8-bit integer (signed)   | torch.int8                    | torch.CharTensor / torch.cuda.CharTensor
16-bit integer (signed)  | torch.int16 or torch.short    | torch.ShortTensor / torch.cuda.ShortTensor
32-bit integer (signed)  | torch.int32 or torch.int      | torch.IntTensor / torch.cuda.IntTensor
64-bit integer (signed)  | torch.int64 or torch.long     | torch.LongTensor / torch.cuda.LongTensor

torch.device is a class representing the device on which a torch.Tensor is allocated; the device type is either 'cpu' or 'cuda'. If the device ordinal is not given, the tensor is allocated on the current device of that type. For example, 'cuda' is equivalent to 'cuda:X', where X is the return value of torch.cuda.current_device().

A tensor's device can be read from tensor.device, and a device can be specified either by a string or by a string plus a device ordinal.

By string:

torch.device('cuda:0')
device(type='cuda', index=0)
torch.device('cpu')
device(type='cpu')
torch.device('cuda') # the current cuda device
device(type='cuda')

By string and device ordinal:

torch.device('cuda', 0)
device(type='cuda', index=0)
torch.device('cpu', 0)
device(type='cpu', index=0)
In addition, converting a tensor between the cpu and cuda devices is done with 'to':

device_cpu = torch.device('cpu') # declare a cpu device
device_cuda = torch.device('cuda') # declare a cuda device
data = torch.Tensor([1])
data.to(device_cpu) # convert data to cpu format
data.to(device_cuda) # convert data to cuda format

torch.layout is a class that represents the memory layout of a torch.Tensor; currently only torch.strided is supported.
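
A minimal check (the .layout attribute is available on any tensor):

torch.Tensor([1, 2]).layout
torch.strided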

Creating tensors
Direct creation:
torch.tensor(data, dtype=None, device=None, requires_grad=False)

data - can be a list, tuple, NumPy ndarray, scalar, or other type

dtype - the desired data type of the returned tensor

device - the desired device of the returned tensor

requires_grad - whether autograd should record operations on the returned tensor; defaults to False

It should be noted that torch.tensor() always copies data. If you have tensor data and want to avoid a copy, use torch.Tensor.detach(); if you are getting the data from NumPy, use torch.from_numpy(), noting that from_numpy() shares memory with the source array.
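
A small sketch of the copy semantics (assuming numpy has been imported, as in the from_numpy example further below): torch.tensor() copies, so mutating the tensor leaves the source array untouched.

a = numpy.array([1, 2, 3])
t = torch.tensor(a) # copies the data
t[0] = -1
a # the source array is unchanged
array([1, 2, 3])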

torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000, 1.2000],
[ 2.2000, 3.1000],
[ 4.9000, 5.2000]])

torch.tensor([0, 1]) # Type inference on data
tensor([ 0, 1])

torch.tensor([[0.11111, 0.222222, 0.3333333]],
dtype=torch.float64,
device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor
tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')

torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
tensor(3.1416)

torch.tensor([]) # Create an empty tensor (of size (0,))
tensor([])

Getting data from NumPy
torch.from_numpy(ndarray)

Note: the returned tensor shares data with the ndarray; any operation on the tensor affects the ndarray, and vice versa.

a = numpy.array([1, 2, 3])
t = torch.from_numpy(a)
t
tensor([ 1, 2, 3])
t[0] = -1
a
array([-1, 2, 3])

Creating specific tensors
Based on numeric requirements:

torch.zeros(*sizes, out=None, …) # returns a tensor of zeros with shape sizes

torch.zeros_like(input, …) # returns a tensor of zeros with the same shape as input

torch.ones(*sizes, out=None, …) # returns a tensor of ones with shape sizes

torch.ones_like(input, …) # returns a tensor of ones with the same shape as input

torch.full(size, fill_value, …) # returns a tensor of shape size whose elements are all fill_value

torch.full_like(input, fill_value, …) # returns a tensor with the same shape as input whose elements are all fill_value

torch.arange(start=0, end, step=1, …) # returns a 1-D tensor from start to end (exclusive) with step size step

torch.linspace(start, end, steps=100, …) # returns a 1-D tensor of steps points evenly spaced between start and end

torch.logspace(start, end, steps=100, …) # returns a 1-D tensor of steps points logarithmically spaced between 10^start and 10^end

Based on matrix requirements:

torch.eye(n, m=None, out=None, …) # returns a 2-D identity matrix with ones on the diagonal

torch.empty(*sizes, out=None, …) # returns a tensor of shape sizes filled with uninitialized values

torch.empty_like(input, …) # returns a tensor with the same shape as input, filled with uninitialized values
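
A quick sketch of a few of these constructors (exact output formatting may vary slightly by PyTorch version):

torch.full((2, 3), 3.14)
tensor([[3.1400, 3.1400, 3.1400],
[3.1400, 3.1400, 3.1400]])
torch.arange(0, 5, 2)
tensor([0, 2, 4])
torch.linspace(0, 1, steps=5)
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
torch.eye(2)
tensor([[1., 0.],
[0., 1.]])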

Using random generation:
torch.normal(mean, std, out=None) # returns random values drawn from normal distributions with the given means and standard deviations

torch.rand(*sizes, out=None, dtype=None, …) # returns random values uniformly distributed on [0, 1)

torch.rand_like(input, dtype=None, …) # returns a tensor with the same shape as input, filled with uniform random values

torch.randint(low=0, high, size, …) # returns random integers uniformly distributed in [low, high)

torch.randint_like(input, low=0, high, dtype=None, …) # returns a tensor with the same shape as input, filled with random integers in [low, high)

torch.randn(*sizes, out=None, …) # returns a tensor of shape sizes drawn from the standard normal distribution (mean 0, variance 1)

torch.randn_like(input, dtype=None, …) # same as above, with the shape of input

torch.randperm(n, out=None, dtype=torch.int64) # returns a random permutation of the integers from 0 to n - 1
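
A short sketch (the outputs are random, so yours will differ):

torch.rand(2, 2) # uniform on [0, 1)
tensor([[0.5125, 0.9372],
[0.1123, 0.6454]])
torch.randint(0, 10, (3,)) # integers in [0, 10)
tensor([7, 1, 4])
torch.randperm(5)
tensor([2, 0, 4, 3, 1])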


Basic operations on tensors:

Joining ops:

torch.cat(seq, dim=0, out=None) # concatenates the tensors in seq along dim; all tensors must have the same size or be empty. The opposite operations are torch.split() and torch.chunk()
torch.stack(seq, dim=0, out=None) # stacks the tensors in seq along a new dimension

# Note: the difference between .cat and .stack is that cat grows an existing dimension, which can be understood as continuation, while stack adds a new dimension, which can be understood as superposition

a = torch.Tensor([1, 2, 3])
torch.stack((a, a)).size()
torch.Size([2, 3])
torch.cat((a, a)).size()
torch.Size([6])

torch.gather(input, dim, index, out=None) # returns a new tensor gathered along dim

t = torch.Tensor([[1, 2], [3, 4]])
index = torch.LongTensor([[0, 0], [1, 0]])
torch.gather(t, 0, index) # since dim=0, the result is
| t[index[0, 0]][0]  t[index[0, 1]][1] |
| t[index[1, 0]][0]  t[index[1, 1]][1] |
i.e. tensor([[ 1., 2.],
[ 3., 2.]])

For a 3-D tensor, it works as

out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2

Slicing ops:

torch.split(tensor, split_size_or_sections, dim=0) # splits the tensor into chunks of the given size
torch.chunk(tensor, chunks, dim=0) # splits the tensor into the given number of chunks; the last one will be smaller if the size is not divisible

# Note: the difference between split and chunk is that split's split_size_or_sections specifies the size of each chunk, while chunk's chunks specifies the number of chunks

a = torch.Tensor([1,2,3])
torch.split(a,1)
(tensor([1.]), tensor([2.]), tensor([3.]))
torch.chunk(a,1)
(tensor([ 1., 2., 3.]),)

Indexing ops:

torch.index_select(input, dim, index, out=None) # returns the selected slices of the tensor along dim; index must be a LongTensor. The result does not share memory with input

torch.masked_select(input, mask, out=None) # returns the values of input selected by mask as a 1-D tensor. mask is a ByteTensor: positions that are 1 are returned, positions that are 0 are not. The result does not share memory with input

x = torch.randn(3, 4)
x
tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
[-1.2035, 1.2252, 0.5002, 0.6248],
[ 0.1307, -2.0608, 0.1244, 2.0139]])
mask = x.ge(0.5)
mask
tensor([[ 0, 0, 0, 0],
[ 0, 1, 1, 1],
[ 0, 0, 0, 1]], dtype=torch.uint8)
torch.masked_select(x, mask)
tensor([ 1.2252, 0.5002, 0.6248, 2.0139])

Mutation ops:

torch.transpose(input, dim0, dim1, out=None) # returns the tensor with dim0 and dim1 swapped
torch.t(input, out=None) # transpose of a 2-D matrix specifically; a convenience function over transpose

torch.squeeze(input, dim, out=None) # by default removes all dimensions of size 1; when dim is given, removes only that dimension if its size is 1. The returned tensor shares storage with the input, so a change to either affects the other
torch.unsqueeze(input, dim, out=None) # inserts a dimension of size 1, e.g. A x B becomes 1 x A x B
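
A small sketch of how squeeze and unsqueeze change shapes:

x = torch.zeros(2, 1, 2, 1)
torch.squeeze(x).size() # all size-1 dims removed
torch.Size([2, 2])
torch.squeeze(x, 1).size() # only dim 1 removed
torch.Size([2, 2, 1])
torch.unsqueeze(torch.Tensor([1, 2, 3]), 0).size()
torch.Size([1, 3])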

torch.reshape(input, shape) # returns a tensor with the same data as input and the given shape; note that in an expression like shape=(-1,), -1 means that dimension is inferred

a = torch.Tensor([1, 2, 3, 4, 5]) # a.size() is torch.Size([5])
b = a.reshape(1, -1) # the first dimension is 1, the second is inferred from a's size
b.size()
torch.Size([1, 5])

torch.where(condition, x, y) # chooses between the values of x and y according to condition: where condition is true take the value from x, where false take the value from y, forming a new tensor
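
A quick sketch of where:

x = torch.Tensor([[1, 2], [3, 4]])
y = torch.zeros(2, 2)
torch.where(x > 2, x, y)
tensor([[0., 0.],
[3., 4.]])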

torch.unbind(tensor, dim=0) # removes the given dim and returns a tuple of the slices along it, which is equivalent to splitting along that dim

a = torch.Tensor([[1, 2, 3], [2, 3, 4]])
torch.unbind(a, dim=0)
(tensor([1., 2., 3.]), tensor([2., 3., 4.])) # one (2, 3) tensor is split into two (3,) tensors

torch.nonzero(input, out=None) # returns the indices of the non-zero values; each row is the index of one non-zero value

torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[ 0],
[ 1],
[ 2],
[ 4]])
torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0],
[0.0, 0.0, 1.2, 0.0],
[0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0, 0],
[ 1, 1],
[ 2, 2],
[ 3, 3]])

Tensor operations
Pointwise operations
Trigonometric functions:

torch.abs(input, out=None)
torch.acos(input, out=None)
torch.asin(input, out=None)
torch.atan(input, out=None)
torch.atan2(input1, input2, out=None)
torch.cos(input, out=None)
torch.cosh(input, out=None)
torch.sin(input, out=None)
torch.sinh(input, out=None)
torch.tan(input, out=None)
torch.tanh(input, out=None)

Basic arithmetic: addition, subtraction, multiplication, and division

torch.add(input, value, out=None) # out = input + value
torch.add(input, value=1, other, out=None) # out = input + value * other
torch.addcdiv(tensor, value=1, tensor1, tensor2, out=None) # out = tensor + value * (tensor1 / tensor2)
torch.addcmul(tensor, value=1, tensor1, tensor2, out=None) # out = tensor + value * (tensor1 * tensor2)
torch.div(input, value, out=None) # out = input / value
torch.div(input, other, out=None) # elementwise out = input / other
torch.mul(input, value, out=None) # out = input * value
torch.mul(input, other, out=None) # elementwise out = input * other
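
A minimal sketch of the arithmetic ops:

a = torch.Tensor([1, 2, 3])
torch.add(a, 10)
tensor([11., 12., 13.])
torch.mul(a, 2)
tensor([2., 4., 6.])
torch.div(a, torch.Tensor([2, 2, 2]))
tensor([0.5000, 1.0000, 1.5000])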

Logarithmic operations:

torch.log(input, out=None) # y_i=log_e(x_i)
torch.log1p(input, out=None) #y_i=log_e(x_i+1)
torch.log2(input, out=None) #y_i=log_2(x_i)
torch.log10(input,out=None) #y_i=log_10(x_i)

Power function:

torch.pow(input, exponent, out=None) # y_i=input^(exponent)

Exponential operations:

torch.exp(tensor, out=None) #y_i=e^(x_i)
torch.expm1(tensor, out=None) #y_i=e^(x_i) -1

Truncation functions:

torch.ceil(input, out=None) # rounds up: the smallest integer greater than or equal to each element
torch.floor(input, out=None) # rounds down: the largest integer less than or equal to each element

torch.round(input, out=None) # rounds each element to the nearest integer

torch.trunc(input, out=None) # returns the integer part of each value
torch.frac(tensor, out=None) # returns the fractional part of each value

torch.fmod(input, divisor, out=None) # returns the remainder of input / divisor; the result has the same sign as the dividend (C-style fmod)
torch.remainder(input, divisor, out=None) # like fmod, but the result has the same sign as the divisor (Python-style modulus)
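
The difference only shows up with negative operands; a small sketch:

torch.fmod(torch.Tensor([-3]), 2)
tensor([-1.])
torch.remainder(torch.Tensor([-3]), 2)
tensor([1.])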

Other operations:

torch.erf(tensor, out=None)

torch.erfinv(tensor, out=None)

torch.sigmoid(input, out=None)

torch.clamp(input, min, max, out=None) # clamps each element into [min, max]: values below min become min, values above max become max, the rest are returned unchanged
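
A quick sketch of clamp:

torch.clamp(torch.Tensor([-2, 0.5, 3]), min=0, max=1)
tensor([0.0000, 0.5000, 1.0000])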

torch.neg(input, out=None) #out_i=-1*(input)

torch.reciprocal(input, out=None) # out_i= 1/input_i

torch.sqrt(input, out=None) # out_i=sqrt(input_i)
torch.rsqrt(input, out=None) #out_i=1/(sqrt(input_i))

torch.sign(input, out=None) # out_i = sign(input_i): 1 for values greater than 0, -1 for values less than 0

torch.lerp(start, end, weight, out=None) # linear interpolation: out = start + weight * (end - start)

Reduction operations
torch.argmax(input, dim=None, keepdim=False) # returns the index of the maximum value
torch.argmin(input, dim=None, keepdim=False) # returns the index of the minimum value

torch.cumprod(input, dim, out=None) # y_i = x_1 * x_2 * … * x_i (cumulative product along dim)
torch.cumsum(input, dim, out=None) # y_i = x_1 + x_2 + … + x_i (cumulative sum along dim)
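
A small sketch of the cumulative ops:

a = torch.Tensor([1, 2, 3])
torch.cumsum(a, dim=0)
tensor([1., 3., 6.])
torch.cumprod(a, dim=0)
tensor([1., 2., 6.])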

torch.dist(input, other, p=2) # returns the p-norm distance between input and other
torch.mean() # returns the mean
torch.sum() # returns the sum
torch.median(input) # returns the median
torch.mode(input) # returns the mode
torch.unique(input, sorted=False) # returns the unique elements as a 1-D tensor, each value returned once

output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
output
tensor([ 2, 3, 1])

torch.std() # returns the standard deviation
torch.var() # returns the variance

torch.norm(input, p=2) # returns the p-norm of input
torch.prod(input, dim, keepdim=False) # returns the product of the elements along the given dimension
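
A minimal sketch of a few reductions:

a = torch.Tensor([[1, 2], [3, 4]])
torch.sum(a)
tensor(10.)
torch.mean(a)
tensor(2.5000)
torch.argmax(a) # index into the flattened tensor
tensor(3)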

Comparison operations:
torch.eq(input, other, out=None) # elementwise equality; equal positions return 1
torch.equal(tensor1, tensor2) # True if tensor1 and tensor2 have the same size and elements

torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[ 1, 0],
[ 0, 1]], dtype=torch.uint8)

torch.ge(input, other, out=None) # input >= other
torch.gt(input, other, out=None) # input > other
torch.le(input, other, out=None) # input <= other
torch.lt(input, other, out=None) # input < other
torch.ne(input, other, out=None) # input != other (not equal)

torch.max() # returns the maximum value
torch.min() # returns the minimum value
torch.isnan(tensor) # tests elementwise whether the values are NaN
torch.sort(input, dim=None, descending=False, out=None) # sorts the input along the given dimension
torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) # returns the k largest values and their indices along the given dimension
torch.kthvalue(input, k, dim=None, keepdim=False, out=None) # returns the k-th smallest value and its index along the given dimension
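
A short sketch of topk and kthvalue, both of which return a (values, indices) pair (the exact display format varies by version):

x = torch.Tensor([1, 5, 2, 4, 3])
torch.topk(x, 2)
(tensor([5., 4.]), tensor([1, 3]))
torch.kthvalue(x, 2)
(tensor(2.), tensor(2))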

Spectral operations
torch.fft(input, signal_ndim, normalized=False)
torch.ifft(input, signal_ndim, normalized=False)
torch.rfft(input, signal_ndim, normalized=False, onesided=True)
torch.irfft(input, signal_ndim, normalized=False, onesided=True)
torch.stft(signal, frame_length, hop, …)

Other operations:
torch.cross(input, other, dim=-1, out=None) # returns the cross product

torch.dot(tensor1, tensor2) #Return the dot product of tensor1 and tensor2

torch.mm(mat1, mat2, out=None) #Return the product of matrix mat1 and mat2
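
A quick sketch of dot and mm:

torch.dot(torch.Tensor([1, 2]), torch.Tensor([3, 4])) # 1*3 + 2*4
tensor(11.)
torch.mm(torch.ones(2, 3), torch.ones(3, 4)).size()
torch.Size([2, 4])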

torch.eig(a, eigenvectors=False, out=None) # returns the eigenvalues/eigenvectors of the square matrix a

torch.det(A) #Return the determinant of matrix A

torch.trace(input) #returns the trace of the 2-d matrix (summing the diagonal elements)

torch.diag(input, diagonal=0, out=None) # if input is 1-D, returns a 2-D matrix with input as the diagonal; if input is 2-D, returns the specified diagonal as a 1-D tensor

torch.histc(input, bins=100, min=0, max=0, out=None) #Calculate the histogram of input

torch.tril(input, diagonal=0, out=None) # returns the lower triangular part of the matrix; the other elements are set to 0

torch.triu(input, diagonal=0, out=None) # returns the upper triangular part of the matrix; the other elements are set to 0
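
A small sketch of tril:

torch.tril(torch.ones(3, 3))
tensor([[1., 0., 0.],
[1., 1., 0.],
[1., 1., 1.]])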

Tips:
Getting a Python number:
Since PyTorch 0.4, getting a Python number out of a one-element tensor is done with the .item() method:

a = torch.Tensor([1, 2, 3])
a[0] # indexing directly returns tensor data
tensor(1.)
a[0].item() # gets a Python number
1

Tensor settings
Type checks:

torch.is_tensor() # returns True if the object is a pytorch tensor
torch.is_storage() # returns True if the object is a pytorch storage

There is also a small trick here: if you need to check whether a tensor is empty, you can do it as follows

a = torch.Tensor()
len(a)
0
len(a) == 0
True

Settings: through some built-in functions, you can set the tensor's precision, type, printing parameters, and so on.

torch.set_default_dtype(d) #Set the default floating point type for torch.tensor()

torch.set_default_tensor_type() # Same as above, set the default tensor type for torch.tensor()

torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
torch.set_default_dtype(torch.float64)
torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64
torch.set_default_tensor_type(torch.DoubleTensor)
torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64

torch.get_default_dtype() #Get the current default floating point type torch.dtype

torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None) # sets the options used when printing tensors
