Detailed understanding of tensors in TensorFlow

This post explains what a tensor is in the plainest possible language. If you're ever confused about tensors, take a careful look at this; just be patient. Emmm, I ran into a problem today that threw me off completely, so I'm going back over the basics.

1. One-dimensional tensor
Let's start with one dimension:

tf.constant([1.0, 3.0, 6.0])

This generates a one-dimensional tensor (a vector) with 3 elements, so the shape is [3]. Why not [1, 3]? Because this tensor has only one dimension. You can think of the relationship between dimensions and shape as a key:value relationship: each dimension has a value, and the number of dimensions shows up in the tensor as the number of nested layers of [].
A tensor of shape [1, 3] would be: tf.constant([[1.0, 5.0, 9.0]])
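You can check the shapes directly; a minimal sketch (shape and ndim are standard tf.Tensor attributes):

import tensorflow as tf

v = tf.constant([1.0, 3.0, 6.0])
print(v.ndim)   # 1 -- one layer of [], so one dimension
print(v.shape)  # (3,) -- that one dimension has 3 elements

m = tf.constant([[1.0, 5.0, 9.0]])
print(m.shape)  # (1, 3) -- two dimensions: 1 row, 3 columns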

2. Two-dimensional tensor

tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

As above, this is a two-dimensional tensor. To see how many dimensions a tensor has, count the nested [ from the innermost level outward: however many layers of [ there are, that's how many dimensions, so this one is two-dimensional.
The brackets tell us there are two dimensions, so how do we work out the shape?
Start from the innermost dimension: it has 3 elements, e.g. [1., 2., 3.] (these typically represent features, which you can think of as the columns of a DataFrame), so the innermost dimension is 3. And there are 3 such inner vectors (the samples), so the outermost dimension is also 3.

Why am I saying innermost and outermost here, rather than first dimension and second dimension? Because the numbering is relative: you could call the innermost dimension the first, or just as well call the outermost the first; it doesn't matter much. In this post we treat the innermost as the first dimension and the outermost as the second.
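A quick way to double-check the counting above; a minimal sketch:

import tensorflow as tf

m = tf.constant([[1., 2., 3.],
                 [4., 5., 6.],
                 [7., 8., 9.]])
print(m.ndim)   # 2 -- two layers of []
print(m.shape)  # (3, 3) -- 3 rows (samples), 3 columns (features)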

3. Three-dimensional tensor
First, here is a three-dimensional tensor:

tf.constant([[[1., 1.], [2., 2.]], [[3., 3.], [4., 4.]]])

The return value is:

<tf.Tensor: id=700, shape=(2, 2, 2), dtype=float32, numpy=
array([[[1., 1.],
        [2., 2.]],

       [[3., 3.],
        [4., 4.]]], dtype=float32)>

How should we read it? First, from the number of layers of [ we can tell this tensor is three-dimensional. Next, the first (innermost) dimension has two values, e.g. 1. and 1., so the first dimension is 2. The second dimension also has two values, e.g. [1., 1.] and [2., 2.], so the second dimension is 2. Finally, the third dimension has two values, namely:
[[1., 1.], [2., 2.]] and [[3., 3.], [4., 4.]]
so the third dimension is 2. This tensor therefore has 3 dimensions, and the sizes of the 3 dimensions are 2, 2, and 2; that is, the shape is (2, 2, 2). Note: the dimension index here is counted from the inside out.
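Indexing makes the nesting concrete; a minimal sketch (note that indexing starts at the outermost axis and works inward):

import tensorflow as tf

t = tf.constant([[[1., 1.], [2., 2.]],
                 [[3., 3.], [4., 4.]]])
print(t.shape)     # (2, 2, 2)
print(t[0])        # the first 2x2 block: [[1., 1.], [2., 2.]]
print(t[0, 1])     # [2., 2.]
print(t[0, 1, 0])  # 2.0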

4. More than three dimensions
I wasn't going to give an example here; anyway, you just keep stacking more layers of [] onto the array.
For example:

tf.constant([[[[1., 1.], [2., 2.]], [[3., 3.], [4., 4.]]]])

This is a four-dimensional tensor!
The return value is:

<tf.Tensor: id=701, shape=(1, 2, 2, 2), dtype=float32, numpy=
array([[[[1., 1.],
         [2., 2.]],

        [[3., 3.],
         [4., 4.]]]], dtype=float32)>

Let's analyze it: we already worked out that the three-dimensional tensor has shape (2, 2, 2), and adding one more layer of [] makes it four-dimensional. You can understand it like this: each whole three-dimensional tensor counts as one value of the fourth dimension, so however many three-dimensional tensors there are, that is the shape value of the fourth dimension.
Here there is exactly one such three-dimensional tensor, namely:

[[[1., 1.],
  [2., 2.]],

 [[3., 3.],
  [4., 4.]]]

The above is a three-dimensional tensor, and the fourth dimension of our four-dimensional tensor contains only this one three-dimensional tensor. Note that the whole three-dimensional tensor above is treated as a single value of the fourth dimension!!! Since the fourth dimension has only one value, its shape value is naturally 1.

Emmm, I said I wasn't going to give an example, and then gave one anyway...

That covers four dimensions. For n dimensions, the same train of thought and the same rule apply.
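Wrapping a tensor in one more layer of [] is the same as adding an axis of size 1; a minimal sketch using tf.expand_dims:

import tensorflow as tf

t3 = tf.constant([[[1., 1.], [2., 2.]],
                  [[3., 3.], [4., 4.]]])  # shape (2, 2, 2)
t4 = tf.expand_dims(t3, axis=0)           # add one outer layer of []
print(t4.shape)                           # (1, 2, 2, 2)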

Note!
In the shape that TF prints for a tensor, the dimensions are not counted from the inside out; they are counted from the outside in.
Example:

tf.constant([[[[1., 1.], [2., 2.]], [[3., 3.], [4., 4.]]]])

Print result:

<tf.Tensor: id=948, shape=(1, 2, 2, 2), dtype=float32, numpy=
array([[[[1., 1.],
         [2., 2.]],

        [[3., 3.],
         [4., 4.]]]], dtype=float32)>

The shape shown here is (1, 2, 2, 2). If you read the shape from left to right, it counts the tensor's dimensions from the outside in. Keep this in mind!

If you read the shape from right to left, it counts the tensor's dimensions from the inside out. Keep this in mind!
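The same rule applies when you index into the shape itself; a minimal sketch:

import tensorflow as tf

t = tf.constant([[[[1., 1.], [2., 2.]], [[3., 3.], [4., 4.]]]])
print(t.shape)      # (1, 2, 2, 2)
print(t.shape[0])   # 1 -- leftmost entry = outermost dimension
print(t.shape[-1])  # 2 -- rightmost entry = innermost dimension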

Source: blog.csdn.net/qq_42658739/article/details/110262737