# Confusion matrix

A confusion matrix provides a way to evaluate the output of a classifier or of a predictive model. It sorts the output of a machine learning algorithm into correctly and incorrectly classified (or predicted) values in an n x n matrix, with columns containing the classified or predicted classes and rows containing the actual classes. If, for example, an empirical data set about 24 persons indicates that 9 persons adopted a PV installation and 15 did not, and the machine learning algorithm classifies all 9 adopters correctly but only 13 of the 15 non-adopters, the confusion matrix of this result looks like the one below:

| actual cases ↓ | classified or predicted: adopter | classified or predicted: non-adopter |
|---|---|---|
| adopter | 9 — true positives (TP) | 0 — false negatives (FN) |
| non-adopter | 2 — false positives (FP) | 13 — true negatives (TN) |

The table also shows the terminology used in this context. A correctly identified adopter is a true positive (TP) instance; a correctly identified non-adopter is a true negative (TN) instance. Correctly classified instances hence appear on the main diagonal. The incorrectly classified (or incorrectly predicted) cases, that is, the errors of the classifier, appear off the diagonal, with false negatives (FN) in the upper right cell and false positives (FP) in the lower left cell.
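The four cell counts can be tallied directly from paired lists of actual and predicted labels. A minimal Python sketch, using the adopter example above (the list construction and variable names are illustrative, not part of the original example):

```python
# Example data: 9 adopters (1) and 15 non-adopters (0); the classifier
# identifies all 9 adopters correctly but only 13 of the 15 non-adopters.
actual    = [1] * 9 + [0] * 15
predicted = [1] * 9 + [0] * 13 + [1] * 2  # two non-adopters misclassified

# Tally the four cells of the confusion matrix.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)

# Rows: actual classes; columns: predicted classes.
matrix = [[tp, fn],
          [fp, tn]]
print(matrix)  # → [[9, 0], [2, 13]]
```

In practice a library routine such as scikit-learn's `confusion_matrix` does the same tallying; the hand-rolled version above just makes the cell definitions explicit.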

## Evaluation

The values from the confusion matrix can be used to calculate several indices about the quality of the result of a classification.

Accuracy $$= \frac{TP+TN}{TP+FP+TN+FN}$$

Precision (positive predictive value) $$= \frac{TP}{TP+FP}$$

Negative predictive value $$= \frac{TN}{FN+TN}$$

Specificity (True negative rate) $$= \frac{TN}{FP+TN}$$

Sensitivity (Hit rate, Recall, True positive rate) $$= \frac{TP}{TP+FN}$$

Fall-out (False positive rate) $$= \frac{FP}{FP+TN}$$

Miss rate (False negative rate) $$= \frac{FN}{FN+TP}$$

F1 score (the harmonic mean of precision and sensitivity) $$= \frac{2TP}{2TP+FP+FN}$$
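The indices above follow mechanically from the four cell counts. A small Python sketch (the function name and returned keys are illustrative):

```python
def classification_indices(tp, fn, fp, tn):
    """Compute the evaluation indices listed above from the four cell counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),        # positive predictive value
        "npv":         tn / (fn + tn),        # negative predictive value
        "specificity": tn / (fp + tn),        # true negative rate
        "sensitivity": tp / (tp + fn),        # recall, true positive rate
        "fall_out":    fp / (fp + tn),        # false positive rate
        "miss_rate":   fn / (fn + tp),        # false negative rate
        "f1":          2 * tp / (2 * tp + fp + fn),
    }

# Cell counts from the adopter example: TP = 9, FN = 0, FP = 2, TN = 13.
indices = classification_indices(tp=9, fn=0, fp=2, tn=13)
```

For the example, accuracy is 22/24 ≈ 0.917, precision 9/11 ≈ 0.818, sensitivity 1.0, and F1 0.9.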

The most commonly reported indices are accuracy and precision. Accuracy indicates the overall fraction of instances that are classified correctly, while precision indicates the fraction of positively classified instances that actually belong to the positive class. The two can diverge, especially when the classes are imbalanced: a classifier hence can be accurate but not precise and vice versa.
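The divergence between the two indices is easy to demonstrate with hypothetical cell counts for a strongly imbalanced data set (the numbers below are invented for illustration, not taken from the adopter example):

```python
# Hypothetical counts: 10 actual positives among 1000 cases.
# The classifier finds half the positives but also flags 5 negatives.
tp, fn, fp, tn = 5, 5, 5, 985

accuracy  = (tp + tn) / (tp + fp + tn + fn)
precision = tp / (tp + fp)

print(accuracy)   # → 0.99: almost everything is classified correctly...
print(precision)  # → 0.5: ...yet half the positive classifications are wrong
```

The dominant negative class inflates accuracy, while precision exposes the weakness of the positive classifications.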