Comparing machine learning accuracy with human-level performance
When an algorithm performs better than humans, its performance gradually approaches a theoretical upper limit called the Bayes optimal error rate (Bayes error): the lowest error rate that can theoretically be achieved.
Avoidable bias
Cat-classifier example: suppose the human error rate is 1%, and the training and test set error rates are 8% and 10%, respectively. Compared with the 1% human error rate, the model fits the data poorly, so we would focus on reducing bias — for example by training a larger neural network or running gradient descent longer. But if the human error rate were 7.5%, there would be little need to push the training error lower: since human error approximates the Bayes optimal error, the current error rate is already close to Bayes optimal, and even with more training it would not improve much. In that case the remaining gap on the training set is not worth worrying about, and effort is better spent elsewhere.
Avoidable bias = model training error rate − Bayes optimal error rate (approximated by the human error rate)

Avoidable bias represents how much room the algorithm still has to reduce bias; the gap between the training and test error rates reflects the variance problem.
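The decision rule above can be sketched in a few lines. This is a minimal illustration, not a library routine; the function name and the scenario numbers (taken from the cat-classifier example) are assumptions for demonstration:

```python
def diagnose(human_error, train_error, dev_error):
    """Suggest whether to focus on bias or variance.

    avoidable bias = training error - human-level error (proxy for Bayes error)
    variance       = dev/test error - training error
    """
    avoidable_bias = train_error - human_error
    variance = dev_error - train_error
    focus = "bias" if avoidable_bias > variance else "variance"
    return avoidable_bias, variance, focus

# Scenario 1: human 1%, train 8%, dev 10% -> avoidable bias dominates.
print(diagnose(0.01, 0.08, 0.10))

# Scenario 2: human 7.5%, train 8%, dev 10% -> variance dominates.
print(diagnose(0.075, 0.08, 0.10))
```

Whichever gap is larger indicates where improvement effort pays off most: a large avoidable bias suggests a bigger model or longer training, a large variance suggests more data or regularization.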
How to define human-level performance
An example: for diagnosing a medical CT scan, let a, b, c, d be the error rates of an ordinary person, a general practitioner, an expert doctor, and a team of leading experts. These four human error rates all differ, so which one should count as the "human-level error rate"? Generally the lowest is taken as the standard: the expert team's error rate of 0.5% is treated as the proxy for the Bayes optimal error rate.
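Concretely, choosing the proxy is just taking the minimum over the groups. The non-team error rates below are made-up placeholders (only the team's 0.5% comes from the example above):

```python
# Hypothetical human error rates (percent) for the CT-diagnosis example.
# Only the expert team's 0.5% is given in the text; the rest are illustrative.
human_errors = {
    "ordinary person": 3.0,
    "general practitioner": 1.0,
    "expert doctor": 0.7,
    "team of experts": 0.5,
}

# Human-level performance is defined by the best-performing group,
# so the minimum serves as the proxy for the Bayes optimal error rate.
bayes_proxy = min(human_errors.values())
print(bayes_proxy)  # -> 0.5
```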