Optimizes the hyperparameters of a learner. Allows for different optimization methods, such as grid search, evolutionary strategies, iterated F-racing, etc. You can select such an algorithm (and its settings) by passing a corresponding control object. For a complete list of implemented algorithms, see TuneControl.

Multi-criteria tuning can be done with tuneParamsMultiCrit.
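
For orientation, a minimal multi-criteria call could look like the following sketch (assuming mlr is attached; the task, measures, and budget are illustrative):

ps = makeParamSet(
  makeNumericParam("C", lower = -8, upper = 8, trafo = function(x) 2^x)
)
ctrl = makeTuneMultiCritControlRandom(maxit = 10L)
# optimize false positive and false negative rate at the same time
res = tuneParamsMultiCrit("classif.ksvm", sonar.task, cv3, par.set = ps,
  measures = list(fpr, fnr), control = ctrl)
# res$x / res$y hold the Pareto-optimal settings and their performances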

tuneParams(
  learner,
  task,
  resampling,
  measures,
  par.set,
  control,
  show.info = getMlrOption("show.info"),
  resample.fun = resample
)

Arguments

learner

(Learner | character(1))
The learner. If you pass a string the learner will be created via makeLearner.
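
For example, the following two calls are equivalent (a minimal sketch, assuming mlr is attached):

ps = makeParamSet(makeNumericParam("C", lower = -4, upper = 4, trafo = function(x) 2^x))
ctrl = makeTuneControlRandom(maxit = 3L)
# a string is expanded via makeLearner internally ...
res1 = tuneParams("classif.ksvm", iris.task, cv3, par.set = ps, control = ctrl)
# ... so this does the same:
res2 = tuneParams(makeLearner("classif.ksvm"), iris.task, cv3, par.set = ps, control = ctrl)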

task

(Task)
The task.

resampling

(ResampleInstance | ResampleDesc)
Resampling strategy to evaluate points in hyperparameter space. If you pass a description, it is instantiated once at the beginning by default, so all points are evaluated on the same training/test sets. If you want to change that behavior, look at TuneControl.
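
For example, to fix the folds once and reuse them for every evaluated point (a sketch; ps and ctrl as in the sketch above):

rdesc = makeResampleDesc("CV", iters = 3L)
rin = makeResampleInstance(rdesc, task = iris.task)  # splits are fixed here
res = tuneParams("classif.ksvm", iris.task, rin, par.set = ps, control = ctrl)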

measures

(list of Measure | Measure)
Performance measures to evaluate. The first measure, aggregated by its first aggregation function, is optimized; the others are simply evaluated. Default is the default measure for the task, see getDefaultMeasure.
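
For instance, to optimize the misclassification rate while merely recording accuracy alongside it (a sketch; ps and ctrl as above):

res = tuneParams("classif.ksvm", iris.task, cv3, par.set = ps, control = ctrl,
  measures = list(mmce, acc))  # mmce is optimized, acc is only logged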

par.set

(ParamHelpers::ParamSet)
Collection of parameters and their constraints for optimization. Dependent parameters with a requires field must be defined with quote, not expression.
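
For example, a kernel width that only applies to the RBF kernel is declared with quote (a sketch mirroring the Examples below):

ps = makeParamSet(
  makeDiscreteParam("kernel", values = c("vanilladot", "rbfdot")),
  makeNumericParam("sigma", lower = -12, upper = 12, trafo = function(x) 2^x,
    requires = quote(kernel == "rbfdot"))  # quote(), not expression()
)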

control

(TuneControl)
Control object for search method. Also selects the optimization algorithm for tuning.

show.info

(logical(1))
Print verbose output on console? Default is set via configureMlr.

resample.fun

(closure)
The function to use for resampling. Defaults to resample. If a user-supplied function is used instead, it must take the arguments “learner”, “task”, “resampling”, “measures”, and “show.info”; see resample. Within this function it is easiest to call resample and possibly modify the result. However, it may also return a list containing only the following essential slots: “aggr” for general tuning; additionally “pred” if threshold tuning is performed (see TuneControl); and “err.msgs” and “err.dumps” for error reporting. This parameter must be left at its default when MBO tuning is performed.
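
A minimal sketch of such a wrapper, which simply delegates to resample (ps and ctrl as above; the hook comment marks where the result could be modified):

myResampleFun = function(learner, task, resampling, measures, show.info) {
  r = resample(learner, task, resampling, measures = measures, show.info = show.info)
  # inspect or adjust the ResampleResult here before returning it to the tuner
  r
}
res = tuneParams("classif.ksvm", iris.task, cv3, par.set = ps, control = ctrl,
  resample.fun = myResampleFun)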

Value

(TuneResult).

Note

If you would like to include results from the training data set, make sure to appropriately adjust the resampling strategy and the aggregation for the measure. See example code below.

Examples

set.seed(123)
# a grid search for an SVM (with a tiny number of points...)
# note how easily we can optimize on a log-scale
ps = makeParamSet(
  makeNumericParam("C", lower = -12, upper = 12, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -12, upper = 12, trafo = function(x) 2^x)
)
ctrl = makeTuneControlGrid(resolution = 2L)
rdesc = makeResampleDesc("CV", iters = 2L)
res = tuneParams("classif.ksvm", iris.task, rdesc, par.set = ps, control = ctrl)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#>          Type len Def    Constr Req Tunable Trafo
#> C     numeric   -   - -12 to 12   -    TRUE     Y
#> sigma numeric   -   - -12 to 12   -    TRUE     Y
#> With control class: TuneControlGrid
#> Imputation value: 1
#> [Tune-x] 1: C=0.000244; sigma=0.000244
#> [Tune-y] 1: mmce.test.mean=0.7333333; time: 0.0 min
#> [Tune-x] 2: C=4.1e+03; sigma=0.000244
#> [Tune-y] 2: mmce.test.mean=0.0533333; time: 0.0 min
#> [Tune-x] 3: C=0.000244; sigma=4.1e+03
#> [Tune-y] 3: mmce.test.mean=0.7333333; time: 0.0 min
#> [Tune-x] 4: C=4.1e+03; sigma=4.1e+03
#> [Tune-y] 4: mmce.test.mean=0.7333333; time: 0.0 min
#> [Tune] Result: C=4.1e+03; sigma=0.000244 : mmce.test.mean=0.0533333
print(res)
#> Tune result:
#> Op. pars: C=4.1e+03; sigma=0.000244
#> mmce.test.mean=0.0533333
# access data for all evaluated points
df = as.data.frame(res$opt.path)
df1 = as.data.frame(res$opt.path, trafo = TRUE)
print(head(df[, -ncol(df)]))
#>     C sigma mmce.test.mean dob eol error.message
#> 1 -12   -12     0.73333333   1  NA          <NA>
#> 2  12   -12     0.05333333   2  NA          <NA>
#> 3 -12    12     0.73333333   3  NA          <NA>
#> 4  12    12     0.73333333   4  NA          <NA>
print(head(df1[, -ncol(df1)]))
#>     C sigma mmce.test.mean dob eol error.message
#> 1 -12   -12     0.73333333   1  NA          <NA>
#> 2  12   -12     0.05333333   2  NA          <NA>
#> 3 -12    12     0.73333333   3  NA          <NA>
#> 4  12    12     0.73333333   4  NA          <NA>
# access data for all evaluated points - alternative
df2 = generateHyperParsEffectData(res)
df3 = generateHyperParsEffectData(res, trafo = TRUE)
print(head(df2$data[, -ncol(df2$data)]))
#>     C sigma mmce.test.mean iteration
#> 1 -12   -12     0.73333333         1
#> 2  12   -12     0.05333333         2
#> 3 -12    12     0.73333333         3
#> 4  12    12     0.73333333         4
print(head(df3$data[, -ncol(df3$data)]))
#>              C        sigma mmce.test.mean iteration
#> 1 2.441406e-04 2.441406e-04     0.73333333         1
#> 2 4.096000e+03 2.441406e-04     0.05333333         2
#> 3 2.441406e-04 4.096000e+03     0.73333333         3
#> 4 4.096000e+03 4.096000e+03     0.73333333         4
if (FALSE) {
  # we optimize the SVM over 3 kernels simultaneously
  # note how we use dependent params (requires = ...) and iterated F-racing here
  ps = makeParamSet(
    makeNumericParam("C", lower = -12, upper = 12, trafo = function(x) 2^x),
    makeDiscreteParam("kernel", values = c("vanilladot", "polydot", "rbfdot")),
    makeNumericParam("sigma", lower = -12, upper = 12, trafo = function(x) 2^x,
      requires = quote(kernel == "rbfdot")),
    makeIntegerParam("degree", lower = 2L, upper = 5L,
      requires = quote(kernel == "polydot"))
  )
  print(ps)
  ctrl = makeTuneControlIrace(maxExperiments = 5, nbIterations = 1, minNbSurvival = 1)
  rdesc = makeResampleDesc("Holdout")
  res = tuneParams("classif.ksvm", iris.task, rdesc, par.set = ps, control = ctrl)
  print(res)
  df = as.data.frame(res$opt.path)
  print(head(df[, -ncol(df)]))

  # include the training set performance as well
  rdesc = makeResampleDesc("Holdout", predict = "both")
  res = tuneParams("classif.ksvm", iris.task, rdesc, par.set = ps, control = ctrl,
    measures = list(mmce, setAggregation(mmce, train.mean)))
  print(res)
  df2 = as.data.frame(res$opt.path)
  print(head(df2[, -ncol(df2)]))
}