Feature decoupling, torch.cumprod(), np.random.seed(), plt.scatter

1. InfoGAN
Usually, the features a model learns are mixed together: they are encoded in the data space in a complex, unordered way. If these features were decomposable, they would be more interpretable, and it would be easier for us to exploit them for encoding. So how can we obtain such decomposable features through unsupervised learning?
Earlier work has learned decomposable features with many supervised and unsupervised methods. The InfoGAN paper learns decomposable features in an unsupervised way, using continuous and discrete latent factors.
2. Feature decoupling
In practice, features are messy, and we would like the feature representation to be neat and clear: it should be obvious which dimension represents what, so that each one is easy to control. The purpose of InfoGAN is to untangle and regularize these chaotic features.
Example of feature decoupling:
We can find a neuron (a latent dimension) that controls a certain feature, then change its value to change that specific feature, as sketched below.
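A minimal sketch of this idea, using a hypothetical stand-in Generator (its architecture and the sizes ZDim, CDim, OutDim are invented for illustration, not the paper's network): keep the noise z fixed and sweep a single entry of the latent code c; if the features are disentangled, only one attribute of the output should change.

import torch
import torch.nn as nn

# Hypothetical generator: maps noise z plus a latent code c to a flat "image".
class Generator(nn.Module):
    def __init__(self, ZDim=62, CDim=2, OutDim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ZDim + CDim, 128), nn.ReLU(),
            nn.Linear(128, OutDim), nn.Tanh(),
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

G = Generator()
z = torch.randn(1, 62)               # keep the noise fixed
for v in torch.linspace(-2, 2, 5):   # sweep a single latent dimension
    c = torch.zeros(1, 2)
    c[0, 0] = v
    img = G(z, c)  # with disentangled features, only one attribute varies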
3. x.detach(): the difference between x.data and x.detach() in PyTorch
Both block the gradient from propagating back. x.data and x.detach() each return a Tensor with the same data as x, and that Tensor shares memory with the original one: if either is modified, the other changes too, and the new tensor has requires_grad = False. The difference is that in-place modifications to the result of .detach() are still tracked by autograd (so errors can be caught during backward), while .data bypasses these checks; .detach() is therefore the safer choice.
Example:

import torch
import torch.nn as nn

class TestDetach(nn.Module):
    def __init__(self, InDim, HiddenDim, OutDim):
        super().__init__()
        self.layer1 = nn.Linear(InDim, HiddenDim, bias=False)
        self.layer2 = nn.Linear(HiddenDim, OutDim, bias=False)

    def forward(self, x, DetachLayer1):
        x = torch.relu(self.layer1(x))
        if DetachLayer1:
            x = x.detach()  # block gradients from flowing back into layer1
            # x = x.data    # same data, but bypasses autograd's safety checks
        x = self.layer2(x)
        return x

With two linear layers and the output of the first layer detached, no gradient ever flows back to the first layer, so its parameters are never updated.
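A quick check (the sizes below are arbitrary):

model = TestDetach(InDim=4, HiddenDim=8, OutDim=2)
x = torch.randn(3, 4)
model(x, DetachLayer1=True).sum().backward()
print(model.layer1.weight.grad)  # None: the detach blocked the gradient
print(model.layer2.weight.grad)  # an actual gradient tensor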
4. torch.cumprod()
cumprod stands for cumulative product: each output element is the product of all input elements up to and including that position.
Example:

import torch

x = torch.tensor([1., 2., 3., 4., 5.])
y = torch.cumprod(x, dim=0)
print(y)

tensor([ 1., 2., 6., 24., 120.])
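The dim argument selects the axis along which the running product is taken. For a 2D tensor:

m = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
print(torch.cumprod(m, dim=1))  # multiply along each row
# tensor([[  1.,   2.,   6.],
#         [  4.,  20., 120.]])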

5. np.random.seed()
np.random.seed(0) makes the random numbers that follow reproducible.
It is a function with no return value that initializes the random number generator. The argument passed to seed() is the basis for the random numbers that are generated; if it is omitted, the system time is used instead. As long as the argument does not change, the generated sequence does not change, which makes experiments easier to reproduce.
That is to say, with seed(x), the random sequence never changes as long as x does not change.

Calling seed(0) before each draw means the seeds are the same, so the draws are identical.
Example:

import numpy as np
np.random.seed(0)
x = np.random.randn(2,2)
np.random.seed(0)
y = np.random.randn(2,2)
print(x)
print(y)

[[1.76405235 0.40015721]
[0.97873798 2.2408932 ]]
[[1.76405235 0.40015721]
[0.97873798 2.2408932 ]]

import numpy as np
np.random.seed(1)
x = np.random.randn(2,2)
np.random.seed(0)
y = np.random.randn(2,2)
np.random.seed(1)  # same seed as x, so z equals x
z = np.random.randn(2,2)
print(x)
print(y)
print(z) 

[[ 1.62434536 -0.61175641]
[-0.52817175 -1.07296862]]
[[1.76405235 0.40015721]
[0.97873798 2.2408932 ]]
[[ 1.62434536 -0.61175641]
[-0.52817175 -1.07296862]]

6. plt.scatter usage

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
x = np.random.rand(20)  # x coordinates
y = np.random.rand(20)  # y coordinates

colors = np.random.rand(20)  # values mapped to colors
area = (50 * np.random.rand(20)) ** 2  # marker areas
print("area", area)

plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()

[Figure: the resulting scatter plot; point sizes are given by area, colors by colors]
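The c values are mapped to colors through a colormap; you can pick one explicitly with the cmap argument and add a colorbar to see the mapping (a small variation on the example above, reusing its x, y, area, and colors):

plt.scatter(x, y, s=area, c=colors, alpha=0.5, cmap='viridis')
plt.colorbar()  # shows how the c values map to colors
plt.show()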

Origin: blog.csdn.net/weixin_44040169/article/details/128062406