The function resample fits a model specified by a Learner on a Task and calculates predictions and performance measures for all training and test sets specified by either a resampling description (ResampleDesc) or a resampling instance (ResampleInstance).

You can return all fitted models (parameter models) or extract specific parts of each model (parameter extract), as returning all models in full can be memory-intensive.
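As a minimal sketch of the extract mechanism (assuming the mlr package is loaded and the rpart learner is available), the following keeps only the underlying fitted model from each iteration instead of the full WrappedModel:

```r
library(mlr)

task = makeClassifTask(data = iris, target = "Species")
rdesc = makeResampleDesc("CV", iters = 3)

# extract only the underlying fitted model from each WrappedModel;
# this is lighter than setting models = TRUE
r = resample("classif.rpart", task, rdesc,
  extract = function(model) getLearnerModel(model))

# r$extract is a list with one entry per resampling iteration
str(r$extract, max.level = 1)
```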

The remaining functions on this page are convenience wrappers for the various existing resampling strategies. Note that if you need to work with precomputed training and test splits (i.e., resampling instances), you have to use resample itself.
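A resampling instance created with makeResampleInstance fixes the train/test splits, so different learners can be compared on identical splits. A short sketch, assuming mlr is loaded and both learners are installed:

```r
library(mlr)

task = makeClassifTask(data = iris, target = "Species")
rdesc = makeResampleDesc("CV", iters = 3)

# instantiate the description once: the splits are now fixed
rin = makeResampleInstance(rdesc, task = task)

# both learners are evaluated on exactly the same training and test sets
r1 = resample("classif.rpart", task, rin)
r2 = resample("classif.lda", task, rin)
```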

resample(learner, task, resampling, measures, weights = NULL,
  models = FALSE, extract, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

crossval(learner, task, iters = 10L, stratify = FALSE, measures,
  models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

repcv(learner, task, folds = 10L, reps = 10L, stratify = FALSE,
  measures, models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

holdout(learner, task, split = 2/3, stratify = FALSE, measures,
  models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

subsample(learner, task, iters = 30, split = 2/3, stratify = FALSE,
  measures, models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

bootstrapOOB(learner, task, iters = 30, stratify = FALSE, measures,
  models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

bootstrapB632(learner, task, iters = 30, stratify = FALSE, measures,
  models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

bootstrapB632plus(learner, task, iters = 30, stratify = FALSE,
  measures, models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

growingcv(learner, task, horizon = 1, initial.window = 0.5, skip = 0,
  measures, models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

fixedcv(learner, task, horizon = 1L, initial.window = 0.5, skip = 0,
  measures, models = FALSE, keep.pred = TRUE, ...,
  show.info = getMlrOption("show.info"))

Arguments

learner

(Learner | character(1))
The learner. If you pass a string, the learner will be created via makeLearner.

task

(Task)
The task.

resampling

(ResampleDesc or ResampleInstance)
Resampling strategy. If a description is passed, it is instantiated automatically.

measures

(Measure | list of Measure)
Performance measure(s) to evaluate. Default is the default measure for the task, see getDefaultMeasure.

weights

(numeric)
Optional, non-negative case weight vector to be used during fitting. If given, it must have the same length as the number of observations in the task and be in the corresponding order. It overwrites any weights specified in the task. Default is NULL, which means no weights are used unless they are specified in the task.

models

(logical(1))
Should all fitted models be returned? Default is FALSE.

extract

(function)
Function used to extract information from a fitted model during resampling. It is applied to every WrappedModel resulting from calls to train during resampling. Default is to extract nothing.

keep.pred

(logical(1))
Keep the prediction data in the pred slot of the result object. If you run many experiments (on larger data sets), these objects can unnecessarily increase object size / memory usage when you do not really need them. In this case, set this argument to FALSE. Default is TRUE.

...

(any)
Further hyperparameters passed to learner.

show.info

(logical(1))
Print verbose output on console? Default is set via configureMlr.

iters

(integer(1))
See ResampleDesc.

stratify

(logical(1))
See ResampleDesc.

folds

(integer(1))
See ResampleDesc.

reps

(integer(1))
See ResampleDesc.

split

(numeric(1))
See ResampleDesc.

horizon

(numeric(1))
See ResampleDesc.

initial.window

(numeric(1))
See ResampleDesc.

skip

(integer(1))
See ResampleDesc.

Value

(ResampleResult).

Note

If you would like to include results from the training data set, make sure to appropriately adjust the resampling strategy and the aggregation for the measure. See example code below.


Examples

task = makeClassifTask(data = iris, target = "Species")
rdesc = makeResampleDesc("CV", iters = 2)
r = resample(makeLearner("classif.qda"), task, rdesc)
#> Resampling: cross-validation
#> Measures: mmce
#> [Resample] iter 1: 0.0266667
#> [Resample] iter 2: 0.0666667
#>
#> Aggregated Result: mmce.test.mean=0.0466667
#>
print(r$aggr)
#> mmce.test.mean
#>     0.04666667
print(r$measures.test)
#>   iter       mmce
#> 1    1 0.02666667
#> 2    2 0.06666667
print(r$pred)
#> Resampled Prediction for:
#> Resample description: cross-validation with 2 iterations.
#> Predict: test
#> Stratification: FALSE
#> predict.type: response
#> threshold:
#> time (mean): 0.00
#>   id  truth response iter  set
#> 1  2 setosa   setosa    1 test
#> 2  4 setosa   setosa    1 test
#> 3  5 setosa   setosa    1 test
#> 4  6 setosa   setosa    1 test
#> 5  9 setosa   setosa    1 test
#> 6 15 setosa   setosa    1 test
#> ... (#rows: 150, #cols: 5)
# include the training set performance as well
rdesc = makeResampleDesc("CV", iters = 2, predict = "both")
r = resample(makeLearner("classif.qda"), task, rdesc,
  measures = list(mmce, setAggregation(mmce, train.mean)))
#> Resampling: cross-validation
#> Measures: mmce.train mmce.test
#> [Resample] iter 1: 0.0133333 0.0266667
#> [Resample] iter 2: 0.0000000 0.0266667
#>
#> Aggregated Result: mmce.test.mean=0.0266667,mmce.train.mean=0.0066667
#>
print(r$aggr)
#>  mmce.test.mean mmce.train.mean
#>     0.026666667     0.006666667