Measures the quality of a prediction w.r.t. some performance measure.
Arguments
- pred
(Prediction)
Prediction object.
- measures
(Measure | list of Measure)
Performance measure(s) to evaluate. Default is the default measure for the task, see getDefaultMeasure.
- task
(Task)
Learning task, might be required by the performance measure; usually not needed except for clustering or survival.
- model
(WrappedModel)
Model built on the training data, might be required by the performance measure; usually not needed except for survival.
- feats
(data.frame)
Features of the predicted data, usually not needed except for clustering (see the clustering sketch at the end of the examples). If the prediction was generated from a task, you can also pass that instead and the features are extracted from it.
- simpleaggr
(logical)
If TRUE, aggregation of ResamplePrediction objects is skipped; this is used internally for threshold tuning (see the sketch below). Default is FALSE.
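A minimal sketch of the ResamplePrediction case (an illustration, not taken from the package manual; it assumes mlr's resample() helper and the predefined cv3 resampling description):

task = makeClassifTask(data = iris, target = "Species")
r = resample(makeLearner("classif.lda"), task, cv3)
# Default: the measure is aggregated across resampling iterations
performance(r$pred, measures = mmce)
# simpleaggr = TRUE skips that aggregation (used internally for threshold tuning)
performance(r$pred, measures = mmce, simpleaggr = TRUE)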
Value
(named numeric). Performance value(s), named by measure(s).
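Because the result is a named numeric vector, single values can be selected by measure id, e.g. (a small sketch reusing pred from the examples below):

res = performance(pred, measures = list(mmce, acc))
res["acc"]  # value of one measure, selected by its id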
See also
Other performance:
ConfusionMatrix,
calculateConfusionMatrix(),
calculateROCMeasures(),
estimateRelativeOverfitting(),
makeCostMeasure(),
makeCustomResampledMeasure(),
makeMeasure(),
measures,
setAggregation(),
setMeasurePars()
Examples
training.set = seq(1, nrow(iris), by = 2)
test.set = seq(2, nrow(iris), by = 2)
task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.lda")
mod = train(lrn, task, subset = training.set)
pred = predict(mod, newdata = iris[test.set, ])
performance(pred, measures = mmce)
# Compute multiple performance measures at once
ms = list("mmce" = mmce, "acc" = acc, "timetrain" = timetrain)
performance(pred, measures = ms, task = task, model = mod)
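# A further sketch (not part of the original examples): clustering measures
# need the task or the features, as noted for the feats argument above.
# Assumes the cluster.kmeans learner and the dunn measure (with their
# suggested packages) are available.
cluster.task = makeClusterTask(data = iris[, -5])
cluster.lrn = makeLearner("cluster.kmeans", centers = 3)
cluster.mod = train(cluster.lrn, cluster.task)
cluster.pred = predict(cluster.mod, task = cluster.task)
performance(cluster.pred, measures = dunn, task = cluster.task)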
