Accuracy paradox

The **accuracy paradox** for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall. Accuracy is often the starting point for analyzing the quality of a predictive model, as well as an obvious criterion for prediction. Accuracy measures the ratio of correct predictions to the total number of cases evaluated, so it may seem obvious that this ratio should be a key metric. Yet a predictive model may have high accuracy and still be useless.

In an example predictive model for an insurance fraud application, all cases that the model predicts as high-risk will be investigated. To evaluate the performance of the model, the insurance company has created a sample data set of 10,000 claims. All 10,000 cases in the validation sample have been carefully checked, so it is known which cases are fraudulent. To analyze the quality of the model, the insurance company uses the table of confusion. The definition of accuracy, the table of confusion for model M_{1}^{Fraud}, and the calculation of accuracy for model M_{1}^{Fraud} are shown below.

A(M) = (TN + TP) / (TN + FP + FN + TP)

where

* TN is the number of true negative cases
* FP is the number of false positive cases
* FN is the number of false negative cases
* TP is the number of true positive cases

*Formula 1: Definition of accuracy*
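Formula 1 maps directly onto a few lines of code. The following is a minimal Python sketch; the function name and keyword arguments are illustrative, not from the original article:

```python
def accuracy(tn: int, fp: int, fn: int, tp: int) -> float:
    """Formula 1: correct predictions over all cases evaluated."""
    return (tn + tp) / (tn + fp + fn + tp)
```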

|                | Predicted Negative | Predicted Positive |
|----------------|--------------------|--------------------|
| Negative Cases | 9,700              | 150                |
| Positive Cases | 50                 | 100                |

*Table 1: Table of confusion for fraud model M_{1}^{Fraud}*

A(M) = (9,700 + 100) / (9,700 + 150 + 50 + 100) = 98.0%

*Formula 2: Accuracy for model M_{1}^{Fraud}*

With an accuracy of 98.0%, model M_{1}^{Fraud} appears to perform fairly well. The paradox lies in the fact that accuracy can easily be improved to 98.5% by always predicting "no fraud". The table of confusion and the accuracy for this trivial "always predict negative" model M_{2}^{Fraud} are shown below.

|                | Predicted Negative | Predicted Positive |
|----------------|--------------------|--------------------|
| Negative Cases | 9,850              | 0                  |
| Positive Cases | 150                | 0                  |

*Table 2: Table of confusion for fraud model M_{2}^{Fraud}*

A(M) = (9,850 + 0) / (9,850 + 0 + 150 + 0) = 98.5%

*Formula 3: Accuracy for model M_{2}^{Fraud}*

Model M_{2}^{Fraud} reduces the rate of inaccurate predictions from 2% to 1.5%, an apparent improvement of 25%. The new model M_{2}^{Fraud} shows fewer incorrect predictions and markedly improved accuracy compared to the original model M_{1}^{Fraud}, but is obviously useless: it never flags a single fraudulent claim. The alternative model M_{2}^{Fraud} does not offer any value to the company for preventing fraud. The less accurate model is more useful than the more accurate model.

Model improvements should not be measured in terms of accuracy gains. It may be going too far to say that accuracy is irrelevant, but caution is advised when using accuracy in the evaluation of predictive models.
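Precision and recall expose what accuracy hides here. A minimal sketch using the counts from Tables 1 and 2; the function and argument names are illustrative:

```python
def precision(fp: int, tp: int) -> float:
    """Share of flagged (predicted-positive) cases that are truly fraudulent."""
    return tp / (tp + fp) if (tp + fp) else float("nan")  # undefined when nothing is flagged

def recall(fn: int, tp: int) -> float:
    """Share of truly fraudulent cases that the model flags."""
    return tp / (tp + fn)

# M_{1}^{Fraud}: precision 100/250 = 0.40, recall 100/150 ~ 0.67.
print(precision(fp=150, tp=100), recall(fn=50, tp=100))

# M_{2}^{Fraud} flags nothing: precision is undefined (nan) and recall is 0.0.
print(precision(fp=0, tp=0), recall(fn=150, tp=0))
```

Despite its higher accuracy, M_{2}^{Fraud} has zero recall: it catches no fraud at all, which is exactly the paradox described above.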

**See also**

* Receiver operating characteristic, for other measures of how good model predictions are.
