1. sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)
Here y_true is the ground-truth labels and y_pred is the predicted labels; normalize controls whether to return the fraction of correctly classified samples (True, the default) or the raw count of correct predictions (False); sample_weight assigns an optional weight to each sample.
Basic (multiclass) example:
from sklearn.metrics import accuracy_score
y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
accuracy_score(y_true, y_pred)                   # 0.5 (2 of 4 predictions correct)
accuracy_score(y_true, y_pred, normalize=False)  # 2 (raw count of correct predictions)
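sample_weight is not demonstrated above; a minimal sketch on the same data, with a made-up weight vector chosen only to show the effect:
accuracy_score(y_true, y_pred, sample_weight=[1, 1, 1, 7])  # 0.8: the correct samples carry weight 1 + 7 = 8 out of a total of 10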
Multilabel example:
import numpy as np
accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))  # 0.5
In the multilabel case, y_true and y_pred are binary indicator matrices, and accuracy_score computes subset accuracy: a sample counts as correct only if its entire set of predicted labels matches the true set exactly.
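As a quick check of the subset-accuracy rule, the same 0.5 can be reproduced by hand with NumPy:
y_true = np.array([[0, 1], [1, 1]])
y_pred = np.ones((2, 2))
# a row counts as correct only if every label in it matches
(y_true == y_pred).all(axis=1).mean()  # 0.5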
2. sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')
Here y_true is the ground-truth labels and y_pred is the predicted labels;
labels selects which classes the F1 score is computed over;
average is required for multiclass and multilabel targets and takes one of {'micro', 'macro', 'samples', 'weighted', 'binary'} or None,
where None returns the score of each class separately;
'binary' returns the result for the class given by pos_label and applies only to binary classification;
'micro' computes the metric globally from the total counts of true positives, false negatives and false positives;
'macro' computes the metric for each class and takes their unweighted mean;
'weighted' computes the metric for each class and takes the mean weighted by support (the number of true instances per class), which mitigates the class-imbalance problem of 'macro' to some extent;
'samples' computes the metric for each instance and averages them (meaningful only for multilabel input);
zero_division sets the value returned when a division by zero occurs (for example, when a class has no predicted or no true samples); the default 'warn' behaves like 0 but also emits a warning.
Examples with different average settings:
from sklearn.metrics import f1_score
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
f1_score(y_true, y_pred, average='macro')     # 0.2666...
f1_score(y_true, y_pred, average='micro')     # 0.3333...
f1_score(y_true, y_pred, average='weighted')  # 0.2666...
f1_score(y_true, y_pred, average=None)        # array([0.8, 0. , 0. ])
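To see where these numbers come from, the per-class precision, recall, F1 and support can be inspected with precision_recall_fscore_support (a verification sketch, not part of the original example):
from sklearn.metrics import precision_recall_fscore_support
p, r, f, support = precision_recall_fscore_support(y_true, y_pred)
# f = array([0.8, 0., 0.]), support = array([2, 2, 2])
f.mean()                             # 0.2666... == 'macro'
(f * support).sum() / support.sum()  # 0.2666... == 'weighted' (supports are equal, so it matches 'macro')
# 'micro' pools the raw counts instead: 2 correct predictions out of 6, so P = R = F1 = 0.3333...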
When no positive samples are either predicted or present, precision and recall are 0/0 and zero_division determines the result:
y_true = [0, 0, 0, 0, 0, 0]
y_pred = [0, 0, 0, 0, 0, 0]
f1_score(y_true, y_pred, zero_division=1)  # 1.0
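For comparison, under the default zero_division='warn' the same call returns 0.0 and emits an UndefinedMetricWarning:
f1_score(y_true, y_pred)  # 0.0, plus an UndefinedMetricWarning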
Multilabel example:
y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
f1_score(y_true, y_pred, average=None)  # array([0.66666667, 1. , 0.66666667])
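'samples' averaging, mentioned in the parameter list, is only defined for multilabel input. A sketch on the same data; zero_division=1 is used here because the first sample has no positive labels at all:
f1_score(y_true, y_pred, average='samples', zero_division=1)  # 0.8333...: per-sample F1 scores of 1, 1 and 0.5, averaged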
labels: compute the score only over the specified classes. Using the earlier multiclass y_true = [0, 1, 2, 0, 1, 2] and y_pred = [0, 2, 1, 0, 0, 1]:
f1_score(y_true, y_pred, labels=[1, 2], average='micro')  # 0.0 (classes 1 and 2 have no true positives)
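labels can also be combined with average=None to return just the selected classes' scores:
f1_score(y_true, y_pred, labels=[1, 2], average=None)  # array([0., 0.])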