The use of reshape
Conclusion: reshape, whether on a numpy.ndarray or a torch.Tensor, rearranges the data in row-major order (row by row). reshape can change the dimensions of the original data, but note that the call signatures differ:
Tensor.reshape(*shape)
array.reshape(shape, order='C')
i.e. array.reshape((-1, 2, 3, 2)) versus Tensor.reshape(-1, 2, 3, 2)
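The signature difference can be checked quickly; a minimal sketch using arange data (the variable names are illustrative):

```python
import numpy as np
import torch

a = np.arange(12)
t = torch.arange(12)

# numpy accepts the shape as a tuple (and also as separate ints)
assert a.reshape((3, 4)).shape == (3, 4)
assert a.reshape(3, 4).shape == (3, 4)

# torch accepts separate ints (a tuple works as well)
assert t.reshape(3, 4).shape == (3, 4)
assert tuple(t.reshape((3, 4)).shape) == (3, 4)
```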
One more note:
torch's view() and reshape() methods can both be used to change a tensor's shape (note: view() in numpy is not a reshaping method; ndarray.view() returns a new array object that looks at the same data, optionally reinterpreted with a different dtype, so don't confuse the two).
The difference between view() and reshape() lies in the conditions under which they can be used.
The view() method only works on tensors that satisfy the contiguity condition. It does not allocate new memory; it only creates a new reference onto the original storage, and the return value is a view.
The reshape() method may return either a view or a copy. When the contiguity condition is met it returns a view; otherwise it returns a copy [this is equivalent to calling contiguous().view(), which allocates new memory to hold the reshaped data].
Conclusion:
When you are not sure whether view() can be used, use reshape().
If you simply want to change a tensor's shape, use reshape();
but if memory overhead matters and you need to guarantee that the reshaped tensor shares storage with the original one, use view().
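The contiguity behavior described above can be verified directly; a minimal sketch (transposing with t() is just one way to produce a non-contiguous tensor):

```python
import torch

t = torch.arange(12).reshape(3, 4)
nc = t.t()  # transpose: same storage, but no longer contiguous

assert not nc.is_contiguous()

# view() refuses a non-contiguous tensor
try:
    nc.view(-1)
    view_failed = False
except RuntimeError:
    view_failed = True
assert view_failed

# reshape() falls back to a copy: the result no longer shares storage
r = nc.reshape(-1)
assert r.data_ptr() != nc.data_ptr()

# on a contiguous tensor, both view() and reshape() share the original storage
v = t.view(-1)
assert v.data_ptr() == t.data_ptr()
assert t.reshape(-1).data_ptr() == t.data_ptr()
```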
1. reshape(-1)
import numpy as np
import torch

print("===================test reshape(-1)==============================")
test_arr = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(test_arr)
print(test_arr.reshape(-1))
test_arr = np.array([[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]])
print(test_arr)
print(test_arr.reshape(-1))
Result: the data is flattened in row order
===================test reshape(-1)==============================
tensor([[ 1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.],
        [ 9., 10., 11., 12.]])
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12.])
[[ 1  4  7 10]
 [ 2  5  8 11]
 [ 3  6  9 12]]
[ 1  4  7 10  2  5  8 11  3  6  9 12]
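The row-order flattening (and the fact that no copy is made for a contiguous array) can be confirmed with a small numpy sketch:

```python
import numpy as np

a = np.array([[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]])

flat = a.reshape(-1)
# row-major (C-order) read: walk each row left to right
assert flat.tolist() == [1, 4, 7, 10, 2, 5, 8, 11, 3, 6, 9, 12]

# for a contiguous array, reshape returns a view, not a copy
assert np.shares_memory(a, flat)
```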
2. reshape keeping the number of dimensions
print("===================test reshape==============================")
test_arr = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(test_arr)
print(test_arr.reshape(-1, 3))
test_arr = np.array([[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]])
print(test_arr)
print(test_arr.reshape(-1, 3))
Result: the data is rearranged in row order
===================test reshape==============================
tensor([[ 1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.],
        [ 9., 10., 11., 12.]])
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 7.,  8.,  9.],
        [10., 11., 12.]])
[[ 1  4  7 10]
 [ 2  5  8 11]
 [ 3  6  9 12]]
[[ 1  4  7]
 [10  2  5]
 [ 8 11  3]
 [ 6  9 12]]
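The -1 in reshape(-1, 3) is a placeholder the library fills in from the total element count; a quick check:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)  # 12 elements

b = a.reshape(-1, 3)   # -1 is inferred as 12 // 3 = 4
assert b.shape == (4, 3)

c = a.reshape(2, -1)   # -1 is inferred as 12 // 2 = 6
assert c.shape == (2, 6)

# only one dimension may be -1, and the others must divide the size evenly
try:
    a.reshape(-1, 5)
    raised = False
except ValueError:
    raised = True
assert raised
```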
3. reshape adding dimensions
print("===================test reshape==============================")
test_arr = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(test_arr)
print(test_arr.reshape(-1, 2, 3, 2))
test_arr = np.array([[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]])
print(test_arr)
print(test_arr.reshape((-1, 2, 3, 2)))
Result: the data is rearranged in row order
===================test reshape==============================
tensor([[ 1.,  2.,  3.,  4.],
        [ 5.,  6.,  7.,  8.],
        [ 9., 10., 11., 12.]])
tensor([[[[ 1.,  2.],
          [ 3.,  4.],
          [ 5.,  6.]],

         [[ 7.,  8.],
          [ 9., 10.],
          [11., 12.]]]])
[[ 1  4  7 10]
 [ 2  5  8 11]
 [ 3  6  9 12]]
[[[[ 1  4]
   [ 7 10]
   [ 2  5]]

  [[ 8 11]
   [ 3  6]
   [ 9 12]]]]
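Since reshape returns a view whenever the contiguity condition holds, writes through the reshaped result are visible in the original; a short sketch in both libraries:

```python
import numpy as np
import torch

# numpy: reshape of a contiguous array is a view
a = np.array([[1, 2], [3, 4]])
b = a.reshape(-1)
b[0] = 99
assert a[0, 0] == 99

# torch: same behavior on a contiguous tensor
t = torch.tensor([[1., 2.], [3., 4.]])
u = t.reshape(-1)
u[0] = 99.
assert t[0, 0].item() == 99.
```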