Measures the quality of a prediction w.r.t. some performance measure.
```r
performance(pred, measures, task = NULL, model = NULL, feats = NULL, simpleaggr = FALSE)
```
| Argument | Type |
|---|---|
| pred | (Prediction) |
| measures | (Measure \| list of Measure) |
| task | (Task) |
| model | (WrappedModel) |
| feats | (data.frame) |
| simpleaggr | (logical) |
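Some measures need more than the prediction itself: timetrain, for example, is read from the fitted model, so the model argument must be supplied. A minimal sketch, assuming the standard iris setup also used in the examples below:

```r
library(mlr)

task = makeClassifTask(data = iris, target = "Species")
mod = train(makeLearner("classif.lda"), task)
pred = predict(mod, task = task)

# mmce needs only the prediction ...
performance(pred, measures = mmce)

# ... but timetrain is stored in the fitted model, so pass it along
performance(pred, measures = timetrain, model = mod)
```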
Returns a named numeric vector: the performance value(s), named by the measure ID(s).
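For instance, with two measures the result is a numeric vector whose names are the measure IDs (a minimal sketch; pred is any Prediction object, e.g. from the setup above):

```r
res = performance(pred, measures = list(mmce, acc))
names(res)   # "mmce" "acc"
res["acc"]   # pick out a single value by measure ID
```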
Other performance: ConfusionMatrix, calculateConfusionMatrix(), calculateROCMeasures(), estimateRelativeOverfitting(), makeCostMeasure(), makeCustomResampledMeasure(), makeMeasure(), measures, setAggregation(), setMeasurePars()
```r
library(mlr)

training.set = seq(1, nrow(iris), by = 2)
test.set = seq(2, nrow(iris), by = 2)

task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.lda")
mod = train(lrn, task, subset = training.set)
pred = predict(mod, newdata = iris[test.set, ])
performance(pred, measures = mmce)

# Compute multiple performance measures at once
ms = list("mmce" = mmce, "acc" = acc, "timetrain" = timetrain)
performance(pred, measures = ms, task, mod)
```
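performance() also accepts the pooled prediction of a resampling run. A hedged sketch, assuming the task and lrn objects from the example above:

```r
# Evaluate on a cross-validated prediction (a ResamplePrediction)
rdesc = makeResampleDesc("CV", iters = 3)
r = resample(lrn, task, rdesc, measures = mmce)

# r$pred pools the test-set predictions from all folds
performance(r$pred, measures = mmce)
```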