Getting Started with PyTorch: Notes

  1. What is PyTorch?

    PyTorch is Torch ported to Python (Torch was originally written in the Lua language).

    It is dynamic: the computation graph is built together with the data as the program runs.

  2. torch.dot(tensor1, tensor2)  # multiply the two tensors element-wise at each position, then sum the products
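
    A minimal sketch (torch.dot operates on 1-D tensors):

    import torch

    a = torch.tensor([1.0, 2.0, 3.0])
    b = torch.tensor([4.0, 5.0, 6.0])
    # element-wise products, summed: 1*4 + 2*5 + 3*6 = 32
    print(torch.dot(a, b))  # tensor(32.)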

  3. print(net) prints the network structure.
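
    For instance (a minimal sketch with a one-layer net):

    import torch

    net = torch.nn.Linear(2, 1)
    print(net)  # Linear(in_features=2, out_features=1, bias=True)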

  4. PyTorch is dynamic: parts of the network do not need to be fixed in advance. For example:

    Source: https://morvanzhou.github.io/tutorials/machine-learning/torch/5-01-dynamic/

    The most typical example is the RNN. Sometimes the number of time steps of an RNN varies, or batch_size and time_step are not the same at training time and at test time. TensorFlow finds that a headache, and so do TensorFlow users. Haha. With Torch's dynamic computation graph, it is easier to understand and also much easier to write.
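
    A minimal sketch of what this looks like in practice (the RNN sizes here are illustrative, not from the original post): the same model accepts a different number of time steps on every call, and the graph is simply rebuilt on the fly.

    import torch

    rnn = torch.nn.RNN(input_size=1, hidden_size=16, batch_first=True)

    for time_step in (5, 20):                 # time_step differs between the two calls
        x = torch.randn(3, time_step, 1)      # (batch, time_step, input_size)
        out, h = rnn(x)
        print(out.shape)                      # torch.Size([3, 5, 16]), then torch.Size([3, 20, 16])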

  5. Using an activation function as a layer (torch.nn.ReLU()) or as a function (torch.nn.functional.relu()) makes no difference in the result.
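
    For example (a minimal sketch): the layer form and the functional form compute the same thing.

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-1.0, 0.0, 2.0])
    relu_layer = torch.nn.ReLU()   # activation as a layer object
    print(relu_layer(x))           # tensor([0., 0., 2.])
    print(F.relu(x))               # activation as a plain function; same result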

  6. Use torch.nn.Sequential to build models quickly:

    torch.nn.Sequential(
        # e.g.
        torch.nn.Linear(2, 10),
        torch.nn.ReLU(),
        torch.nn.Linear(10, 2),
    )

As used here, the layers are anonymous objects, so the printout shows no attribute names (unlike self.hidden and self.predict in a custom class, which are displayed as hidden and predict in the output).
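
For comparison, a sketch of the named version (this class is illustrative, not from the original post): when the layers are attributes of a torch.nn.Module subclass, printing the model shows the attribute names.

    import torch

    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.hidden = torch.nn.Linear(2, 10)
            self.predict = torch.nn.Linear(10, 2)

        def forward(self, x):
            return self.predict(torch.relu(self.hidden(x)))

    print(Net())
    # Net(
    #   (hidden): Linear(in_features=2, out_features=10, bias=True)
    #   (predict): Linear(in_features=10, out_features=2, bias=True)
    # )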

  7. Saving and loading a neural network

    1. Saving

      torch.save(net, "net.pkl")  # save the entire neural network model; the file is named with a .pkl extension

      torch.save(net.state_dict(), "net_params.pkl")  # save only the parameters, not the entire network

    2. Loading

      net = torch.load("net.pkl")  # load the entire network

      net2 = torch.nn.Sequential(
          # Sequential is only used as an example here; the non-anonymous approach works the same way.
          # The point: before loading the parameters, net2 must be built with exactly the same structure as the original network.
      )
      net2.load_state_dict(torch.load("net_params.pkl"))  # load only the parameters
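
    Putting the two approaches together (a sketch; the layer sizes are just the earlier example's, and on recent PyTorch versions loading a whole pickled model may additionally require torch.load(..., weights_only=False)):

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(2, 10),
        torch.nn.ReLU(),
        torch.nn.Linear(10, 2),
    )
    torch.save(net, "net.pkl")                       # the whole model
    torch.save(net.state_dict(), "net_params.pkl")   # the parameters only

    net1 = torch.load("net.pkl")                     # whole model back in one call
    net2 = torch.nn.Sequential(                      # rebuilt with exactly the original structure
        torch.nn.Linear(2, 10),
        torch.nn.ReLU(),
        torch.nn.Linear(10, 2),
    )
    net2.load_state_dict(torch.load("net_params.pkl"))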

  8. Batch training (mini-batch training)

    import torch
    import torch.utils.data as Data

    BATCH_SIZE = 5
    x = torch.linspace(1, 10, 10)
    y = torch.linspace(10, 1, 10)
    torch_dataset = Data.TensorDataset(x, y)
    loader = Data.DataLoader(
        dataset=torch_dataset,
        batch_size=BATCH_SIZE,
        shuffle=True,    # True: reshuffle so every batch draws different data; False: the batches are the same every time
        num_workers=2,   # number of worker processes used to load the data
    )
    for epoch in range(3):
        for step, (batch_x, batch_y) in enumerate(loader):
            # enumerate() pairs the items of an iterable (list, tuple, string, ...) with their indices,
            # usually in a for loop; here step is the index of the batch being drawn
            print(epoch, step, batch_x, batch_y)
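
    With 10 samples and BATCH_SIZE=5, each of the 3 epochs yields two steps (step 0 and step 1) of 5 (batch_x, batch_y) pairs each; because shuffle=True, the pairs fall into different batches from epoch to epoch.
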
  9. Hyperparameters: in machine learning, hyperparameters are values set before the learning process starts, rather than parameters obtained through training. Normally the hyperparameters need to be optimized: a set of optimal hyperparameters is chosen for the learner to improve its learning performance and effectiveness.

    Hyperparameters are therefore usually specified manually, defined as global variables before the model, and they control the model and the training process. By convention they are written in capital letters, as sketched below.
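
    For example (a sketch; the names and values are typical choices, not from the original):

    import torch

    # hyperparameters: chosen by hand before training; capitals by convention
    EPOCH = 3        # number of passes over the training data
    BATCH_SIZE = 5   # samples per mini-batch
    LR = 0.01        # learning rate

    net = torch.nn.Linear(2, 1)
    optimizer = torch.optim.SGD(net.parameters(), lr=LR)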

  10. [Screenshot: image-20191129113300302, not recoverable]

    I think the third example, using a for loop together with zip, is quite clever.
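
    The screenshot itself is lost; as a guess at what it showed, zip lets one for loop walk several sequences in lockstep:

    xs = [1, 2, 3]
    ys = ["a", "b", "c"]
    for step, (x, y) in enumerate(zip(xs, ys)):
        print(step, x, y)   # 0 1 a / 1 2 b / 2 3 c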

Source: www.cnblogs.com/jiading/p/11964700.html