mlr is designed to make usage errors due to typos or invalid parameter values as unlikely as possible. Occasionally, you might want to break those barriers and get full access, for example to reduce the amount of output on the console or to turn off checks. For all available options, simply refer to the documentation of configureMlr(). In the following, we show some common use cases.

Generally, the function configureMlr() allows you to set options globally for your current R session.

It is also possible to set options locally: options referring to the behavior of learners can be attached to an individual Learner via the config argument of makeLearner(), and show.info can be passed directly to functions like resample(). Local settings take precedence over the global configuration.
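A short sketch contrasting the two (using the show.learner.output option, which is revisited below):

# Global: affects all subsequent calls in this session
configureMlr(show.learner.output = FALSE)

# Local: affects only this learner and takes precedence over the global setting
lrn = makeLearner("classif.multinom", config = list(show.learner.output = TRUE))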

Example: Reducing the output on the console

Are you bothered by all the output on the console, like in this example?

rdesc = makeResampleDesc("Holdout")
r = resample("classif.multinom", iris.task, rdesc)
## Resampling: holdout
## Measures:             mmce
## # weights:  18 (10 variable)
## initial  value 109.861229 
## iter  10 value 8.635871
## iter  20 value 0.942436
## iter  30 value 0.225516
## iter  40 value 0.144303
## iter  50 value 0.139259
## iter  60 value 0.123724
## iter  70 value 0.089635
## iter  80 value 0.084994
## iter  90 value 0.058982
## iter 100 value 0.056564
## final  value 0.056564 
## stopped after 100 iterations
## [Resample] iter 1:    0.0400000
## 
## Aggregated Result: mmce.test.mean=0.0400000
## 

You can suppress the output for this Learner (makeLearner()) and this resample() call as follows:

lrn = makeLearner("classif.multinom", config = list(show.learner.output = FALSE))
r = resample(lrn, iris.task, rdesc, show.info = FALSE)

(Note that nnet::multinom() has a trace switch that can alternatively be used to turn off the progress messages.)
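A short sketch of this alternative, assuming trace is registered as a hyperparameter of classif.multinom:

# Assumes 'trace' is exposed in the learner's parameter set
lrn = makeLearner("classif.multinom", trace = FALSE)
r = resample(lrn, iris.task, rdesc, show.info = FALSE)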

To globally suppress the output for all subsequent learners and calls to resample(), benchmark(), etc., do the following:

configureMlr(show.learner.output = FALSE, show.info = FALSE)
r = resample("classif.multinom", iris.task, rdesc)

Accessing and resetting the configuration

Function getMlrOptions() returns a base::list() with the current configuration.

getMlrOptions()
## $show.info
## [1] FALSE
## 
## $on.learner.error
## [1] "stop"
## 
## $on.learner.warning
## [1] "warn"
## 
## $on.par.without.desc
## [1] "stop"
## 
## $on.par.out.of.bounds
## [1] "stop"
## 
## $on.measure.not.applicable
## [1] "stop"
## 
## $show.learner.output
## [1] FALSE
## 
## $on.error.dump
## [1] FALSE
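
Since this is an ordinary list, a single option can be read with standard list subsetting:

getMlrOptions()$show.learner.output
## [1] FALSE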

To restore the default configuration, call configureMlr() with an empty argument list.

configureMlr()
getMlrOptions()
## $show.info
## [1] TRUE
## 
## $on.learner.error
## [1] "stop"
## 
## $on.learner.warning
## [1] "warn"
## 
## $on.par.without.desc
## [1] "stop"
## 
## $on.par.out.of.bounds
## [1] "stop"
## 
## $on.measure.not.applicable
## [1] "stop"
## 
## $show.learner.output
## [1] TRUE
## 
## $on.error.dump
## [1] FALSE

Example: Turning off parameter checking

It might happen that you want to set a parameter of a Learner (makeLearner()), but the parameter is not yet registered in the learner’s parameter set (ParamHelpers::makeParamSet()). In this case, you might want to contact us or open an issue. Until the problem is fixed, you can turn off mlr’s parameter checking; the parameter setting will then be passed to the underlying function without further ado.

# Support Vector Machine with linear kernel and new parameter 'newParam'
lrn = makeLearner("classif.ksvm", kernel = "vanilladot", newParam = 3)
## Error in setHyperPars2.Learner(learner, insert(par.vals, args)): classif.ksvm: Setting parameter newParam without available description object!
## Did you mean one of these hyperparameters instead: degree scaled kernel
## You can switch off this check by using configureMlr!

# Turn off parameter checking completely
configureMlr(on.par.without.desc = "quiet")
lrn = makeLearner("classif.ksvm", kernel = "vanilladot", newParam = 3)
train(lrn, iris.task)
##  Setting default kernel parameters
## Model for learner.id=classif.ksvm; learner.class=classif.ksvm
## Trained on: task.id = iris-example; obs = 150; features = 4
## Hyperparameters: fit=FALSE,kernel=vanilladot,newParam=3

# Option "quiet" also masks typos
lrn = makeLearner("classif.ksvm", kernl = "vanilladot")
train(lrn, iris.task)
## Model for learner.id=classif.ksvm; learner.class=classif.ksvm
## Trained on: task.id = iris-example; obs = 150; features = 4
## Hyperparameters: fit=FALSE,kernl=vanilladot

# Alternatively turn off parameter checking, but still see warnings
configureMlr(on.par.without.desc = "warn")
lrn = makeLearner("classif.ksvm", kernl = "vanilladot", newParam = 3)
## Warning in setHyperPars2.Learner(learner, insert(par.vals, args)): classif.ksvm: Setting parameter kernl without available description object!
## Did you mean one of these hyperparameters instead: kernel nu degree
## You can switch off this check by using configureMlr!
## Warning in setHyperPars2.Learner(learner, insert(par.vals, args)): classif.ksvm: Setting parameter newParam without available description object!
## Did you mean one of these hyperparameters instead: degree scaled kernel
## You can switch off this check by using configureMlr!

train(lrn, iris.task)
## Model for learner.id=classif.ksvm; learner.class=classif.ksvm
## Trained on: task.id = iris-example; obs = 150; features = 4
## Hyperparameters: fit=FALSE,kernl=vanilladot,newParam=3
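
Once the missing parameter has been added to the learner's parameter set (or the typo fixed), it is advisable to restore the strict default behavior:

# Restore the default: unknown parameters raise an error again
configureMlr(on.par.without.desc = "stop")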

Example: Handling errors in a learning method

If a learning method throws an error, the default behavior of mlr is to generate an exception as well. However, in some situations, for example if you conduct a larger benchmark experiment with multiple data sets and learners, you usually don’t want the whole experiment to stop due to one error. You can prevent this using the on.learner.error option of configureMlr().

# This call gives an error caused by the low number of observations in class "virginica"
train("classif.qda", task = iris.task, subset = 1:104)
## Error in qda.default(x, grouping, ...): some group is too small for 'qda'

# Get a warning instead of an error
configureMlr(on.learner.error = "warn")
mod = train("classif.qda", task = iris.task, subset = 1:104)
## Warning in train("classif.qda", task = iris.task, subset = 1:104): Could not train learner classif.qda: Error in qda.default(x, grouping, ...) : 
##   some group is too small for 'qda'

mod
## Model for learner.id=classif.qda; learner.class=classif.qda
## Trained on: task.id = iris-example; obs = 104; features = 4
## Hyperparameters: 
## Training failed: Error in qda.default(x, grouping, ...) : 
##   some group is too small for 'qda'
## 
## Training failed: Error in qda.default(x, grouping, ...) : 
##   some group is too small for 'qda'

# mod is an object of class FailureModel
isFailureModel(mod)
## [1] TRUE

# Retrieve the error message
getFailureModelMsg(mod)
## [1] "Error in qda.default(x, grouping, ...) : \n  some group is too small for 'qda'\n"

# predict and performance return NA's
pred = predict(mod, iris.task)
pred
## Prediction: 150 observations
## predict.type: response
## threshold: 
## time: NA
##   id  truth response
## 1  1 setosa     <NA>
## 2  2 setosa     <NA>
## 3  3 setosa     <NA>
## 4  4 setosa     <NA>
## 5  5 setosa     <NA>
## 6  6 setosa     <NA>
## ... (#rows: 150, #cols: 3)

performance(pred)
## mmce 
##   NA

If on.learner.error = "warn", a warning is issued instead of an exception, and an object of class FailureModel() is created. You can extract the error message using function getFailureModelMsg(). All further steps like prediction and performance calculation work and return NA's.
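
In a larger benchmark experiment this means a failing learner no longer halts the run; its resampling iterations simply yield NA performance values while all other learners are evaluated normally. A minimal sketch, reusing the qda failure from above (the learner and resampling choices are arbitrary):

configureMlr(on.learner.error = "warn")

# Subset the task so class "virginica" is underrepresented and qda fails
task = subsetTask(iris.task, subset = 1:104)
lrns = list(makeLearner("classif.qda"), makeLearner("classif.lda"))

# qda failures become warnings with NA results; lda is evaluated normally
bmr = benchmark(lrns, task, makeResampleDesc("CV", iters = 3))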