Visualize a simple neural network using TorchLens

  TorchLens is a package for extracting and mapping the results of every tensor operation in a PyTorch model, in a single line of code, and it can be used to visualize any PyTorch model. It is a powerful tool: once you are proficient with it, it becomes a sharp instrument for visualizing PyTorch models. This article uses TorchLens to visualize a simple neural network and can serve as a starting point.

1. Define a simple neural network

import torch
import torch.nn as nn
import torch.optim as optim
import torchlens as tl
import os
os.environ["PATH"] += os.pathsep + 'D:/Program Files/Graphviz/bin/'


# Define the neural network class
class NeuralNetwork(nn.Module): # inherit from nn.Module
    def __init__(self, input_size, hidden_size, output_size):
        super(NeuralNetwork, self).__init__() # call the parent class constructor
        # Linear transformation from the input layer to the hidden layer
        self.input_to_hidden = nn.Linear(input_size, hidden_size)
        # Linear transformation from the hidden layer to the output layer
        self.hidden_to_output = nn.Linear(hidden_size, output_size)
        # Activation function
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Forward pass
        hidden = self.sigmoid(self.input_to_hidden(x))
        output = self.sigmoid(self.hidden_to_output(hidden))
        return output

def NeuralNetwork_train(model):
    # Train the neural network
    for epoch in range(10000):
        optimizer.zero_grad()  # zero the gradients
        outputs = model(input_data)  # forward pass
        loss = criterion(outputs, labels)  # compute the loss
        loss.backward()  # backpropagation
        optimizer.step()  # update the parameters

        # Print the loss every 1000 epochs
        if (epoch + 1) % 1000 == 0:
            print(f'Epoch [{epoch + 1}/10000], Loss: {loss.item():.4f}')

    return model


def NeuralNetwork_test(model):
    # After training, the model can be used for prediction
    with torch.no_grad():
        test_input = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
        predictions = model(test_input)
        predicted_labels = (predictions > 0.5).float()
        print("Predictions:", predicted_labels)


if __name__ == '__main__':
    # Define the network's hyperparameters
    input_size = 2  # number of input features
    hidden_size = 4  # number of hidden-layer neurons
    output_size = 1  # number of output-layer neurons

    # Create the network instance
    model = NeuralNetwork(input_size, hidden_size, output_size)

    # Define the loss function and the optimizer
    criterion = nn.BCELoss()  # binary cross-entropy loss
    optimizer = optim.SGD(model.parameters(), lr=0.1)  # stochastic gradient descent

    # Prepare example input data and labels (XOR)
    input_data = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
    labels = torch.tensor([[0], [1], [1], [0]], dtype=torch.float32)

    # model: the neural network model
    # input_data: the input data
    # layers_to_save: which layers' activations to save
    # vis_opt: rolled/unrolled, whether to unroll loops in the graph
    model_history = tl.log_forward_pass(model, input_data, layers_to_save='all', vis_opt='unrolled')  # log and visualize the forward pass
    print(model_history)
    # print(model_history['input_1'].tensor_contents)
    # print(model_history['input_1'])

    tl.show_model_graph(model, input_data)

    # model = NeuralNetwork_train(model) # train the network
    # NeuralNetwork_test(model) # test the network

1. Neural network structure
  The input layer has 2 neurons, the hidden layer has 4 neurons, and the output layer has 1 neuron.
2. log_forward_pass
  Given an input x, runs a forward pass through the model and returns a ModelHistory object containing the forward-pass log (layer activations and the corresponding layer metadata). If vis_opt is set to rolled or unrolled, the model graph is visualized as well.
3. show_model_graph
  Visualizes the model graph without saving any activations.
4. View the neural network model parameters.
  There are 17 parameters in total: weights (12) + biases (5). This count can be verified with the short sketch after this list.
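A minimal sketch, reusing the model instance defined above, that reproduces the count by iterating over model.named_parameters():

# Count the parameters of the network defined above:
# input_to_hidden.weight (4*2=8), input_to_hidden.bias (4),
# hidden_to_output.weight (1*4=4), hidden_to_output.bias (1) -> 17 total
total = 0
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.numel())
    total += param.numel()
print("Total parameters:", total)  # 17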


2. Output result analysis
1. model_history output result

Log of NeuralNetwork forward pass:
	Random seed: 1626722175
	Time elapsed: 1.742s (1.74s spent logging)
	Structure:
		- purely feedforward, no recurrence
		- no branching
		- no conditional (if-then) branching
		- 3 total modules
	Tensor info:
		- 6 total tensors (976 B) computed in forward pass.
		- 6 tensors (976 B) with saved activations.
	Parameters: 2 parameter operations (17 params total; 548 B)
	Module Hierarchy:
		input_to_hidden
		sigmoid:1
		hidden_to_output
		sigmoid:2
	Layers (all have saved activations):
		  (0) input_1
		  (1) linear_1_1
		  (2) sigmoid_1_2
		  (3) linear_2_3
		  (4) sigmoid_2_4
		  (5) output_1
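Individual layers can be indexed by name in the ModelHistory object to inspect their saved activations; the names come from the "Layers" list above. A short sketch, following the commented-out lines in the script (tensor_contents holds the saved tensor):

# Saved activation of the hidden-layer sigmoid: one row per input sample
hidden_act = model_history['sigmoid_1_2'].tensor_contents
print(hidden_act.shape)  # torch.Size([4, 4])
# Final network output saved during the logged forward pass
print(model_history['output_1'].tensor_contents)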

2. show_model_graph output result

(1) A total of 6 layers
  Namely input_1, linear_1_1, sigmoid_1_2, linear_2_3, sigmoid_2_4, and output_1.
(2) A total of 6 tensors
  Namely input_1 (160B), linear_1_1 (192B), sigmoid_1_2 (192B), linear_2_3 (144B), sigmoid_2_4 (144B), and output_1 (144B), 976B in total.
(3) input_1 4*2 (160B)
  4*2 is the shape of input_1 (4 samples, 2 features each), and 160B is the amount of memory the tensor occupies, in bytes (B). Knowing a tensor's shape and memory footprint is useful for model memory management and optimization; the remaining tensors are annotated in the same way in the graph.
(4) A total of 17 parameters
  linear_1_1 has a 4*2 weight matrix and a 4-element bias, and linear_2_3 has a 1*4 weight matrix and a 1-element bias, i.e. 8 + 4 + 4 + 1 = 17 parameters, occupying 548B of memory.

3. Problems encountered
1. Graphviz needs to be installed and configured

subprocess.CalledProcessError: Command '[WindowsPath('dot'), '-Kdot', '-Tpdf', '-O', 'graph.gv']' returned non-zero exit status 1. 

The solution is to add D:\Program Files\Graphviz\bin to the system environment variable PATH.
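Alternatively, the directory can be appended to PATH for the current process only, as at the top of the script above (the path is machine-specific; adjust it to your install location):

import os
# Make Graphviz visible to this process without changing system settings
os.environ["PATH"] += os.pathsep + 'D:/Program Files/Graphviz/bin/'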

2. AlexNet Neural Network
Because the BP neural network above is very simple, let's also visualize the slightly more complex AlexNet, as in the sketch below.
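A minimal sketch, assuming torchvision is available (AlexNet expects 3-channel 224*224 inputs; the model here is randomly initialized, which is sufficient for drawing the graph):

import torch
import torchvision.models as models
import torchlens as tl

alexnet = models.alexnet()  # untrained AlexNet from torchvision
dummy_input = torch.rand(1, 3, 224, 224)  # one 224x224 RGB image
tl.show_model_graph(alexnet, dummy_input)  # render the AlexNet graph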


References:
[1] torchlens_tutorial.ipynb: https://colab.research.google.com/drive/1ORJLGZPifvdsVPFqq1LYT3t5hV560SoW?usp=sharing#scrollTo=W_94PeNdQsUN
[2] Extracting and visualizing hidden activations and computational graphs of PyTorch models with TorchLens: https://www.nature.com/articles/s41598-023-40807-0
[3] torchlens: https://github.com/johnmarktaylor91/torchlens
[4] TorchLens Model Menagerie: https://drive.google.com/drive/folders/1BsM6WPf3eB79-CRNgZejMxjg38rN6VCb
[5] Use TorchLens to visualize a simple neural network: github.com/ai408/nlp-engineering/tree/main/20230917_NLP Engineering Public Account Article/Use torchlens to visualize a simple neural network
