[Loss is NaN] Loss becomes NaN during deep learning training


Question

When training on the unnormalized data, everything works fine.

But after normalizing the data, the loss becomes NaN during training (in particular, right after the first batch).

Since training succeeds without normalization, the network model, optimizer, and loss function must all be correct; the NaN values come from the normalization step.
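To confirm that the data (and not the model) is the culprit, it helps to check the normalized array for NaNs before feeding it to the network. A minimal sketch; the small `data` array here is a hypothetical stand-in for real image data:

```python
import numpy as np

# Hypothetical normalized array containing one NaN, standing in for a real image
data = np.array([[0.1, 0.5],
                 [np.nan, 0.9]])

# Count the NaN entries and locate the first one
n_nan = np.isnan(data).sum()
first_nan = np.argwhere(np.isnan(data))[0]  # (row, column) of the first NaN

print("NaN count:", n_nan)
print("first NaN at:", first_nan)
```

If the count is nonzero right after normalization, the input pipeline is producing the NaNs.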

Solution

The cause is that the input data contains NaN values after normalization.

Add the following code when loading the data:

import numpy as np
from imageio import imread  # assumed here; any image-reading function works

data = imread(path)                        # read the image
data = np.asarray(data, dtype=np.float64)  # convert to an array (the result must be assigned back)
nan_mask = np.isnan(data)                  # boolean mask marking the NaN entries
data[nan_mask] = 0                         # replace every NaN with 0
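Equivalently, NumPy ships a one-call replacement for this mask-and-assign pattern, `np.nan_to_num` (a sketch with a small hypothetical array):

```python
import numpy as np

data = np.array([0.2, np.nan, 0.8])
# Replace NaN with 0 in one call (the nan= keyword requires NumPy >= 1.17)
data = np.nan_to_num(data, nan=0.0)
```

Both approaches leave the non-NaN values untouched.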

Summary

The normalization method I use is min-max normalization:

$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
But why the data ends up with NaN values after normalization is still an open question; if anyone knows, please point it out in the comments.

If I find the cause later, I will update this post.
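One common cause worth checking (an assumption on my part, not confirmed by the original post): if an image, or one of its channels, is constant, then $x_{max} - x_{min} = 0$ and min-max normalization computes 0/0, which is NaN. A minimal sketch:

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization: (x - min) / (max - min)."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())

varied = min_max_normalize([1.0, 2.0, 3.0])  # works: [0.0, 0.5, 1.0]

with np.errstate(invalid="ignore"):
    # max == min, so the denominator is 0 and 0/0 yields NaN
    constant = min_max_normalize([5.0, 5.0, 5.0])

print(np.isnan(constant).all())  # every entry is NaN
```

Guarding the denominator (e.g. `max(x.max() - x.min(), eps)`) would avoid the NaNs at the source rather than patching them afterward.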


Reprinted from: blog.csdn.net/weixin_46751388/article/details/131061510