This creates a BenchmarkResult from a batchtools::ExperimentRegistry. To set up the benchmark, have a look at batchmark().
Usage
reduceBatchmarkResults(
  ids = NULL,
  keep.pred = TRUE,
  keep.extract = FALSE,
  show.info = getMlrOption("show.info"),
  reg = batchtools::getDefaultRegistry()
)
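The signature above only shows the defaults. As a minimal sketch of the surrounding workflow (the temporary registry, learners, task, and resampling below are illustrative assumptions, not part of this help page), the function is typically called after the jobs created by batchmark() have finished:

# Sketch only: assumes mlr and batchtools are installed and jobs run locally
library(mlr)
library(batchtools)

# Temporary registry (file.dir = NA); a persistent directory is used in practice
reg = makeExperimentRegistry(file.dir = NA, seed = 1)

# Define the benchmark experiments in the registry
batchmark(
  learners = list(makeLearner("classif.rpart"), makeLearner("classif.lda")),
  tasks = iris.task,
  resamplings = makeResampleDesc("CV", iters = 3),
  reg = reg
)

# Run the jobs and wait until they have terminated
submitJobs(reg = reg)
waitForJobs(reg = reg)

# Collect the finished jobs into a BenchmarkResult
bmr = reduceBatchmarkResults(reg = reg)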
Arguments
- ids
(data.frame or integer)
A base::data.frame (or data.table::data.table) with a column named "job.id". Alternatively, you may also pass a vector of integerish job ids. If not set, defaults to all successfully terminated jobs (the return value of batchtools::findDone()).
- keep.pred
(logical(1))
Keep the prediction data in the pred slot of the result object. If you run many experiments (on larger data sets), these objects can unnecessarily increase object size / memory usage if you do not really need them. The default is TRUE.
- keep.extract
(logical(1))
Keep the extract slot of the result object. When creating a lot of benchmark results with extensive tuning, the resulting R objects can become very large. That is why the tuning results stored in the extract slot are removed by default (keep.extract = FALSE). Note that with keep.extract = FALSE you will not be able to analyze the tuning results.
- show.info
(logical(1))
Print verbose output on the console? The default is set via configureMlr.
- reg
(batchtools::ExperimentRegistry)
Registry, created by batchtools::makeExperimentRegistry. If not explicitly passed, the last created registry is used.
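To illustrate the arguments above, here is a small sketch (assuming a populated ExperimentRegistry reg as in the workflow example) that restricts the reduction to successfully terminated jobs and drops predictions and tuning results to keep the resulting object small:

# Sketch only: reg is assumed to be an already populated ExperimentRegistry
done = batchtools::findDone(reg = reg)   # table with a "job.id" column

bmr.small = reduceBatchmarkResults(
  ids = done,
  keep.pred = FALSE,      # drop per-observation predictions to save memory
  keep.extract = FALSE,   # drop tuning results stored in the extract slot
  show.info = FALSE,
  reg = reg
)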
See also
Other benchmark:
BenchmarkResult, batchmark(), benchmark(), convertBMRToRankMatrix(), friedmanPostHocTestBMR(), friedmanTestBMR(), generateCritDifferencesData(), getBMRAggrPerformances(), getBMRFeatSelResults(), getBMRFilteredFeatures(), getBMRLearnerIds(), getBMRLearnerShortNames(), getBMRLearners(), getBMRMeasureIds(), getBMRMeasures(), getBMRModels(), getBMRPerformances(), getBMRPredictions(), getBMRTaskDescs(), getBMRTaskIds(), getBMRTuneResults(), plotBMRBoxplots(), plotBMRRanksAsBarChart(), plotBMRSummary(), plotCritDifferences()