On Occam's razor
The common modern interpretation of "Occam's razor" fails for a broad class of problems, even in mathematics:
"In machine learning, this is [Occam’s razor] often taken to mean that, given two classifiers with the same training error, the simpler of the two will likely have the lowest test error. Purported proofs of this claim appear regularly in the literature, but in fact there are many counter examples to it, and the “no free lunch” theorems imply it cannot be true. We saw one counter-example in the previous section: model ensembles. The generalization error of a boosted ensemble continues to improve by adding classifiers even after the training error has reached zero."[1]
That is, greater simplicity of a classifier does not by itself imply greater accuracy.
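The boosting phenomenon quoted above is straightforward to reproduce. Below is a minimal sketch (assuming scikit-learn is available; the synthetic dataset and all parameter values are illustrative choices, not from the source) that tracks the train and test error of an AdaBoost ensemble round by round via staged_predict. On many datasets the test error keeps falling after the training error has already reached zero.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset; any binary classification task would do.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Boost 300 decision stumps (scikit-learn's default weak learner).
ens = AdaBoostClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# staged_predict yields the partial ensemble's predictions after each
# boosting round, so a single fit gives the whole error trajectory.
for i, (p_tr, p_te) in enumerate(
        zip(ens.staged_predict(X_tr), ens.staged_predict(X_te)), start=1):
    if i % 50 == 0:
        print(f"rounds={i:3d}  "
              f"train_err={np.mean(p_tr != y_tr):.4f}  "
              f"test_err={np.mean(p_te != y_te):.4f}")
```

When the run reproduces the effect, the printed test error keeps decreasing for some rounds after train_err reaches 0.0000, which is exactly the counterexample Domingos describes: the larger ensemble is the less "simple" model, yet it generalizes better.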
Footnotes
- ↑ Domingos P. A few useful things to know about machine learning // Communications of the ACM. 2012. Vol. 55, No. 10. P. 78–87.