Calculate the absolute number of correct/incorrect classifications and the following evaluation measures:
tpr - True positive rate (Sensitivity, Recall)
fpr - False positive rate (Fall-out)
fnr - False negative rate (Miss rate)
tnr - True negative rate (Specificity)
ppv - Positive predictive value (Precision)
for - False omission rate
lrp - Positive likelihood ratio (LR+)
fdr - False discovery rate
npv - Negative predictive value
acc - Accuracy
lrm - Negative likelihood ratio (LR-)
dor - Diagnostic odds ratio
For details on the measures used, see measures and also https://en.wikipedia.org/wiki/Receiver_operating_characteristic.
The element for the false omission rate in the resulting object is not called `for` but `fomr`, since `for` is a reserved word in R and should never be used as a variable name in an object.
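As a quick reference for how these quantities are defined, here is a minimal sketch computing them directly from the four cells of a 2 x 2 confusion matrix. The variable names tp, fn, fp, tn are illustrative, and the counts are taken from the example output further below:

```r
# Illustrative counts from a 2 x 2 confusion matrix (rows = true, columns = predicted).
tp = 95; fn = 16; fp = 10; tn = 87

tpr = tp / (tp + fn)                   # True positive rate (Sensitivity, Recall)
fpr = fp / (fp + tn)                   # False positive rate (Fall-out)
fnr = fn / (tp + fn)                   # False negative rate (Miss rate)
tnr = tn / (fp + tn)                   # True negative rate (Specificity)
ppv = tp / (tp + fp)                   # Positive predictive value (Precision)
fomr = fn / (fn + tn)                  # False omission rate (named fomr, see above)
fdr = fp / (tp + fp)                   # False discovery rate
npv = tn / (fn + tn)                   # Negative predictive value
acc = (tp + tn) / (tp + fn + fp + tn)  # Accuracy
lrp = tpr / fpr                        # Positive likelihood ratio (LR+)
lrm = fnr / tnr                        # Negative likelihood ratio (LR-)
dor = lrp / lrm                        # Diagnostic odds ratio

round(c(tpr = tpr, fpr = fpr, acc = acc, dor = dor), 2)
#> tpr  fpr  acc  dor
#> 0.86 0.10 0.88 51.66
```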
```r
calculateROCMeasures(pred)

# S3 method for ROCMeasures
print(x, abbreviations = TRUE, digits = 2, ...)
```
| Argument | Description |
|---|---|
| pred | (Prediction) Prediction object. |
| x | (ROCMeasures) Object created by calculateROCMeasures(). |
| abbreviations | (logical(1)) If TRUE, a legend explaining the measure abbreviations is printed below the matrix. Default is TRUE. |
| digits | (integer(1)) Number of digits the measures are rounded to in the printed output. Default is 2. |
| ... | (any) Further arguments. |
(ROCMeasures). A list containing two elements: confusion.matrix, the 2 x 2 confusion matrix of absolute frequencies, and measures, a list of the measures mentioned above.
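A short sketch of how the returned object might be used. The element names confusion.matrix and measures are as documented above; everything else is illustrative:

```r
r = calculateROCMeasures(pred)
r$confusion.matrix   # 2 x 2 table of absolute frequencies
r$measures$tpr       # individual measures by abbreviation, e.g. the true positive rate
r$measures$fomr      # false omission rate (note: fomr, not for)
```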
print: displays the confusion matrix together with the computed measures, as shown in the example below.
Other roc: asROCRPrediction()

Other performance: ConfusionMatrix, calculateConfusionMatrix(), estimateRelativeOverfitting(), makeCostMeasure(), makeCustomResampledMeasure(), makeMeasure(), measures, performance(), setAggregation(), setMeasurePars()
```r
lrn = makeLearner("classif.rpart", predict.type = "prob")
fit = train(lrn, sonar.task)
pred = predict(fit, task = sonar.task)
calculateROCMeasures(pred)
#>     predicted
#> true M         R
#>    M 95        16        tpr: 0.86 fnr: 0.14
#>    R 10        87        fpr: 0.1  tnr: 0.9
#>      ppv: 0.9  for: 0.16 lrp: 8.3  acc: 0.88
#>      fdr: 0.1  npv: 0.84 lrm: 0.16 dor: 51.66
#>
#>
#> Abbreviations:
#> tpr - True positive rate (Sensitivity, Recall)
#> fpr - False positive rate (Fall-out)
#> fnr - False negative rate (Miss rate)
#> tnr - True negative rate (Specificity)
#> ppv - Positive predictive value (Precision)
#> for - False omission rate
#> lrp - Positive likelihood ratio (LR+)
#> fdr - False discovery rate
#> npv - Negative predictive value
#> acc - Accuracy
#> lrm - Negative likelihood ratio (LR-)
#> dor - Diagnostic odds ratio
```
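The print arguments from the usage section can be used to adjust the display; a brief sketch, continuing from the example above (output omitted):

```r
# Round the measures to 3 digits and suppress the abbreviation legend.
r = calculateROCMeasures(pred)
print(r, abbreviations = FALSE, digits = 3)
```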