Machine learning loss functions: Python implementation

1) Mean squared error

The mean squared error is defined as E = (1/2) * Σ_k (y_k − t_k)², where y_k is the output of the neural network and t_k is the correct label.
Implementation:

import numpy as np

def mean_squared_error(y, t):
    return 0.5 * np.sum((y - t)**2)

For example:

    t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # the correct class is index 2 (one-hot encoding)
    y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0]   # network output; highest probability at index 2
    val = mean_squared_error(np.array(y1), np.array(t))
    print(val)  # 0.09750000000000003 -- relatively small MSE

    y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0]   # highest probability at index 7
    val = mean_squared_error(np.array(y2), np.array(t))
    print(val)  # 0.5975 -- relatively large MSE
    # so y1 is closer to the target than y2

2) Cross-entropy error
The cross-entropy error is defined as E = −Σ_k t_k * log(y_k).
Here, y_k is the output of the neural network and t_k is the correct (one-hot) label. Because t_k is 1 only at the correct class, the value of the cross-entropy error is determined solely by the output corresponding to the correct label.

The larger that output, the closer the error is to 0; when the output is 1, the cross-entropy error is 0. Conversely, the smaller the output at the correct label, the larger the error.

def cross_entropy_error(y, t):
    # delta prevents np.log(0), which would evaluate to -inf
    delta = 1e-7
    return -np.sum(t * np.log(y + delta))

For example:

    t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # the correct class is index 2 (one-hot encoding)
    y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0]   # network output; highest probability at index 2
    val = cross_entropy_error(np.array(y1), np.array(t))
    print(val)  # 0.510825457099338

    y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0]   # highest probability at index 7
    val = cross_entropy_error(np.array(y2), np.array(t))
    print(val)  # 2.302584092994546
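
As a quick check of the statement above (this snippet is not from the original post; y_perfect is an illustrative name), when the output at the correct class is exactly 1 the error is essentially 0:

    y_perfect = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # probability 1 at the correct class
    val = cross_entropy_error(np.array(y_perfect), np.array(t))
    print(val)  # about -1e-07, i.e. essentially 0 (slightly off because of delta)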

3) Batch processing

One-hot form

def cross_entropy_error(y, t):
    if y.ndim == 1:
        # promote a single sample to a batch of size 1 (2-D arrays)
        t = t.reshape(1, t.size)
        y = y.reshape(1, y.size)
    batch_size = y.shape[0]
    return -np.sum(t * np.log(y + 1e-7)) / batch_size
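
For instance, a minimal sketch (not from the original post) of calling this batched version on two samples, reusing y1, y2 and t from the examples above:

    y_batch = np.array([y1, y2])   # shape (2, 10): a batch of two outputs
    t_batch = np.array([t, t])     # matching one-hot labels
    val = cross_entropy_error(y_batch, t_batch)
    print(val)  # about 1.4067, the mean of the two single-sample errors above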

Non-one-hot form (integer class labels)

def cross_entropy_error(y, t):
    if y.ndim == 1:
        t = t.reshape(1, t.size)
        y = y.reshape(1, y.size)
    batch_size = y.shape[0]
    # y[np.arange(batch_size), t] picks, for each sample in the batch, the
    # predicted probability of its correct class (t holds integer class labels)
    return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7)) / batch_size

As long as the network's output at the correct label can be extracted, the cross-entropy error can be computed. Therefore, when t is a one-hot representation, the error is computed with t * np.log(y); when t is an integer label, np.log(y[np.arange(batch_size), t]) achieves the same result.
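
As a minimal sketch (not from the original post; the names cross_entropy_error_onehot and cross_entropy_error_label are made up here so the two batched definitions above can coexist), the two forms give the same result on the same batch:

    # hypothetical names for the one-hot and label-based versions defined above
    t_labels = np.array([2, 2])  # integer labels for the batch from the previous example
    val_onehot = cross_entropy_error_onehot(y_batch, t_batch)
    val_label = cross_entropy_error_label(y_batch, t_labels)
    print(val_onehot, val_label)  # both about 1.4067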

Origin blog.csdn.net/WANGYONGZIXUE/article/details/110294482