The Confusion Matrix

| | Reality: 1 | Reality: 0 | Total |
| --- | --- | --- | --- |
| Prediction: 1 | 50 (TP) | 20 (FP) | 70 |
| Prediction: 0 | 10 (FN) | 20 (TN) | 30 |
| Total | 60 | 40 | 100 |
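To make the arithmetic below easy to check, here is a minimal Python sketch that encodes the table as the four standard counts (the variable names are mine, chosen to match the definitions used below):

```python
# Confusion-matrix counts read directly from the table above.
TP = 50  # Prediction: 1, Reality: 1 (true positives)
FP = 20  # Prediction: 1, Reality: 0 (false positives)
FN = 10  # Prediction: 0, Reality: 1 (false negatives)
TN = 20  # Prediction: 0, Reality: 0 (true negatives)

assert TP + FP + FN + TN == 100  # matches the grand total in the table
```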
Calculation:
Accuracy: Accuracy is defined as the fraction of correct predictions out of all observations.
Accuracy = Correct Predictions / Total Cases
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP = True Positives, TN = True Negatives, FP = False Positives, and FN = False Negatives.
Accuracy = (50 + 20) / (50 + 20 + 20 + 10) = 70/100 = 0.7 (i.e., 70%)
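The same arithmetic as a runnable check (the counts are repeated so the snippet stands alone):

```python
TP, TN, FP, FN = 50, 20, 20, 10  # counts from the confusion matrix

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.7
```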
Precision: Precision is defined as the fraction of predicted positive cases that are actually positive.
Precision = True Positives / All Predicted Positives
Precision = TP / (TP + FP)
= 50 / (50 + 20) = 50/70 ≈ 0.714
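And the corresponding check for precision:

```python
TP, FP = 50, 20  # counts from the confusion matrix

precision = TP / (TP + FP)
print(round(precision, 3))  # 0.714
```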
Recall: Recall is defined as the fraction of actual positive cases that are correctly identified.
Recall = True Positives / All Actual Positives
Recall = TP / (TP + FN)
= 50 / (50 + 10) = 50/60 ≈ 0.833
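A quick check for recall:

```python
TP, FN = 50, 10  # counts from the confusion matrix

recall = TP / (TP + FN)
print(round(recall, 3))  # 0.833
```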
F1 Score: The F1 score is defined as the harmonic mean of precision and recall, measuring the balance between the two.
F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
= 2 * (0.714 * 0.833) / (0.714 + 0.833)
= 2 * (0.595 / 1.547) ≈ 0.769
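And for the F1 score, using the exact fractions rather than the rounded values:

```python
precision = 50 / 70  # TP / (TP + FP)
recall = 50 / 60     # TP / (TP + FN)

f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 3))  # 0.769
```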
Therefore,
Accuracy = 0.7, Precision ≈ 0.714, Recall ≈ 0.833, F1 Score ≈ 0.769
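If scikit-learn is available, all four metrics can be verified at once by expanding the confusion-matrix counts back into label arrays (a sketch under that assumption; the array construction is illustrative, not part of the original example):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Rebuild label arrays from the counts: 50 TP, 20 FP, 10 FN, 20 TN.
y_true = [1] * 50 + [0] * 20 + [1] * 10 + [0] * 20
y_pred = [1] * 50 + [1] * 20 + [0] * 10 + [0] * 20

print(accuracy_score(y_true, y_pred))   # 0.7
print(precision_score(y_true, y_pred))  # ≈ 0.714
print(recall_score(y_true, y_pred))     # ≈ 0.833
print(f1_score(y_true, y_pred))         # ≈ 0.769
```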
Here within the test there is a tradeoff between precision and recall, and which metric matters more depends on the cost of each kind of error. For heart-attack prediction, recall is the metric to prioritize and keep improving.
False Positive (hurts Precision): a person is predicted as high risk but does not have a heart attack.
False Negative (hurts Recall): a person is predicted as low risk but does have a heart attack.
False negatives miss actual heart patients, so they are more dangerous than false positives; this is why recall needs the most attention.
To study more about evaluation, see Evaluation Class 10.