Fuses a base learner with a filter method. Creates a learner object, which can be used like any other learner object. Internally uses filterFeatures before every model fit.

makeFilterWrapper(
  learner,
  fw.method = "randomForestSRC_importance",
  fw.base.methods = NULL,
  fw.perc = NULL,
  fw.abs = NULL,
  fw.threshold = NULL,
  fw.fun = NULL,
  fw.fun.args = NULL,
  fw.mandatory.feat = NULL,
  cache = FALSE,
  ...
)
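
For example, a minimal sketch of wrapping a learner (the filter method and percentage are illustrative choices; see the Examples below for complete workflows):

lrn = makeFilterWrapper(
  makeLearner("classif.lda"),
  fw.method = "FSelectorRcpp_information.gain",
  fw.perc = 0.5
)
# the wrapper behaves like any other learner; filtering runs before each fit
mod = train(lrn, makeClassifTask(data = iris, target = "Species"))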

Arguments

learner

(Learner | character(1))
The learner. If you pass a string, the learner will be created via makeLearner.

fw.method

(character(1))
Filter method. See listFilterMethods. Default is “randomForestSRC_importance”.

fw.base.methods

(character)
Simple filter methods to combine in an ensemble filter. See listFilterMethods. Can only be used in combination with ensemble filters; see listFilterEnsembleMethods.

fw.perc

(numeric(1))
If set, select fw.perc*100 percent of the top scoring features. Mutually exclusive with arguments fw.abs, fw.threshold and fw.fun.

fw.abs

(numeric(1))
If set, select fw.abs top scoring features. Mutually exclusive with arguments fw.perc, fw.threshold and fw.fun.

fw.threshold

(numeric(1))
If set, select features whose score exceeds fw.threshold. Mutually exclusive with arguments fw.perc, fw.abs and fw.fun.

fw.fun

(function)
If set, select features via a custom thresholding function, which must return the number of top scoring features to select. Mutually exclusive with arguments fw.perc, fw.abs and fw.threshold.

fw.fun.args

(any)
Arguments passed to the custom thresholding function.

fw.mandatory.feat

(character)
Mandatory features which are always included regardless of their scores.

cache

(character(1) | logical)
Whether to use caching during filter value creation. See details.

...

(any)
Additional parameters passed down to the filter. If you are using more than one filter method, you need to pass the arguments in a named list via more.args. For example, more.args = list("FSelectorRcpp_information.gain" = list(equal = TRUE)).
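
For instance, a sketch of the more.args form described above, reusing the ensemble filter from the Examples section (the equal = TRUE setting is specific to FSelectorRcpp_information.gain):

lrn = makeFilterWrapper(
  makeLearner("classif.lda"),
  fw.method = "E-Borda",
  fw.base.methods = c("FSelectorRcpp_gain.ratio", "FSelectorRcpp_information.gain"),
  fw.perc = 0.5,
  # filter-specific arguments, keyed by filter method name
  more.args = list("FSelectorRcpp_information.gain" = list(equal = TRUE))
)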

Value

Learner.

Details

If fw.method is an ensemble filter (see listFilterEnsembleMethods), ensemble feature selection is performed using all base methods specified in fw.base.methods. At least two base methods need to be selected.

After training, the selected features can be retrieved with getFilteredFeatures.

Note that observation weights do not influence the filtering and are simply passed down to the next learner.

Caching

If cache = TRUE, the default mlr cache directory is used to cache filter values. The directory is operating system dependent and can be checked with getCacheDir(). Alternatively, a custom directory can be passed (as a character string) to store the cache. The cache can be cleared with deleteCacheDir(). Caching is disabled by default. Care should be taken when operating on large clusters, as write conflicts can occur if multiple workers try to write to the same cache files at the same time.
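
A sketch of both caching variants (the custom path below is illustrative):

lrn = makeLearner("classif.lda")
# cache filter values in the default mlr cache directory
lrn.cached = makeFilterWrapper(lrn, fw.perc = 0.5, cache = TRUE)
getCacheDir()     # show the OS-dependent default cache location
# or point the cache at a custom directory
lrn.custom = makeFilterWrapper(lrn, fw.perc = 0.5, cache = "/tmp/mlr_cache")
deleteCacheDir()  # clear the default cache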

See also

filterFeatures, getFilteredFeatures, listFilterMethods, listFilterEnsembleMethods, makeLearner
Examples

# \donttest{
task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.lda")
inner = makeResampleDesc("Holdout")
outer = makeResampleDesc("CV", iters = 2)
lrn = makeFilterWrapper(lrn, fw.perc = 0.5)
mod = train(lrn, task)
#> Error: Please use column names for `x`
getFilteredFeatures(mod)
#> Error in getFilteredFeatures(mod): object 'mod' not found
# now nested resampling, where we extract the features that the filter method selected
r = resample(lrn, task, outer, extract = function(model) {
  getFilteredFeatures(model)
})
#> Resampling: cross-validation
#> Measures: mmce
#> [Resample] iter 1: 0.0533333
#> [Resample] iter 2: 0.0266667
#>
#> Aggregated Result: mmce.test.mean=0.0400000
#>
print(r$extract)
#> [[1]]
#> [1] "Petal.Length" "Petal.Width"
#>
#> [[2]]
#> [1] "Petal.Length" "Petal.Width"
#>
# usage of an ensemble filter
lrn = makeLearner("classif.lda")
lrn = makeFilterWrapper(lrn, fw.method = "E-Borda",
  fw.base.methods = c("FSelectorRcpp_gain.ratio", "FSelectorRcpp_information.gain"),
  fw.perc = 0.5)
r = resample(lrn, task, outer, extract = function(model) {
  getFilteredFeatures(model)
})
#> Resampling: cross-validation
#> Measures: mmce
#> [Resample] iter 1: 0.0533333
#> [Resample] iter 2: 0.0133333
#>
#> Aggregated Result: mmce.test.mean=0.0333333
#>
print(r$extract)
#> [[1]]
#> [1] "Petal.Length" "Petal.Width"
#>
#> [[2]]
#> [1] "Petal.Length" "Petal.Width"
#>
# usage of a custom thresholding function
biggest_gap = function(values, diff) {
  gap_size = 0
  gap_location = 0
  for (i in (diff + 1):length(values)) {
    gap = values[[i - diff]] - values[[i]]
    if (gap > gap_size) {
      gap_size = gap
      gap_location = i - 1
    }
  }
  return(gap_location)
}
lrn = makeLearner("classif.lda")
lrn = makeFilterWrapper(lrn, fw.method = "randomForestSRC_importance",
  fw.fun = biggest_gap, fw.fun.args = list("diff" = 1))
r = resample(lrn, task, outer, extract = function(model) {
  getFilteredFeatures(model)
})
#> Resampling: cross-validation
#> Measures: mmce
#> [Resample] iter 1: 0.0400000
#> [Resample] iter 2: 0.0266667
#>
#> Aggregated Result: mmce.test.mean=0.0333333
#>
print(r$extract)
#> [[1]]
#> [1] "Petal.Length" "Petal.Width"
#>
#> [[2]]
#> [1] "Petal.Length" "Petal.Width"
#>
# }