The Receiver Operating Characteristic Curve, or ROC Curve, is a plot of the False Positive Rate (x-axis) against the True Positive Rate (y-axis) at a range of classification thresholds.
True Positive Rate = True Positives / (True Positives + False Negatives)
which is also referred to as Sensitivity.
False Positive Rate = False Positives / (False Positives + True Negatives)
which is sometimes called the inverted Specificity,
as False Positive Rate = 1 - Specificity.
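As a quick check of these definitions, here is a minimal sketch; the confusion-matrix counts are made up purely for illustration:

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fn = 80, 20   # actual positives: 100
fp, tn = 10, 90   # actual negatives: 100

tpr = tp / (tp + fn)          # True Positive Rate (Sensitivity) = 0.8
fpr = fp / (fp + tn)          # False Positive Rate = 0.1
specificity = tn / (tn + fp)  # Specificity = 0.9

# FPR equals 1 - Specificity: prints 0.8 0.1 0.1
print(tpr, fpr, 1 - specificity)
```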
Importance of ROC Curves
Easy comparison of different models
The Area under the Curve (AUC) can be used as a summary tool.
A no-skill classifier cannot discriminate between the classes at all and predicts a random class or a constant class in all cases. On the ROC plot it is represented by the diagonal line from (0, 0) to (1, 1), passing through the point (0.5, 0.5).
A model with perfect skill is represented by the point (0, 1): its curve travels from the bottom left of the plot to the top left, then across the top to the top right.
In Python, scikit-learn provides the roc_curve() function.
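Here is a minimal usage sketch on synthetic data; the logistic regression model and the train/test split are illustrative choices, not part of the discussion above:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Synthetic binary classification data.
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

# Fit a model and get predicted probabilities for the positive class.
model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# roc_curve() returns an (FPR, TPR) pair for each threshold.
fpr, tpr, thresholds = roc_curve(y_test, probs)
print('ROC AUC: %.3f' % roc_auc_score(y_test, probs))

# Diagonal = no-skill baseline; curve above it = skill.
plt.plot([0, 1], [0, 1], linestyle='--', label='No Skill')
plt.plot(fpr, tpr, label='Logistic')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
```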
Next come Precision-Recall Curves, which are widely used in the field of Information Retrieval.
Precision = True Positives / (True Positives + False Positives)
Recall = True Positives / (True Positives + False Negatives)
With the help of Precision-Recall Curves, we can obtain the F-Measure and the Area Under the Curve. The F-Measure is the harmonic mean of the precision and recall values (model skill at a single probability threshold), while the AUC summarizes the integral of the precision-recall curve (model skill across all thresholds).
In Python, scikit-learn provides the precision_recall_curve() function.
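A companion sketch, reusing the same synthetic setup as above (again, the model and the 0.5 threshold are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, f1_score, auc
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# precision_recall_curve() returns a (precision, recall) pair for each threshold.
precision, recall, thresholds = precision_recall_curve(y_test, probs)

# F-Measure at a single (here 0.5) threshold; AUC across all thresholds.
print('F1: %.3f' % f1_score(y_test, (probs >= 0.5).astype(int)))
print('PR AUC: %.3f' % auc(recall, precision))

plt.plot(recall, precision, label='Logistic')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.show()
```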
Please suggest any improvements or additions to be made, thanks!