Create a classification task.
Usage:

    makeClassifTask(
      id = deparse(substitute(data)),
      data,
      target,
      weights = NULL,
      blocking = NULL,
      coordinates = NULL,
      positive = NA_character_,
      fixup.data = "warn",
      check.data = TRUE
    )
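A minimal sketch of creating a task, assuming the constructor above is makeClassifTask() from the mlr package (the guard keeps the example from erroring when mlr is not installed):

```r
# Hedged example: assumes mlr's makeClassifTask(); skipped if mlr is absent.
if (requireNamespace("mlr", quietly = TRUE)) {
  library(mlr)
  # iris has four numeric features and the factor target "Species"
  task <- makeClassifTask(data = iris, target = "Species")
  print(task)
}
```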
Arguments:

id
    Id string for the object.
    Default is the name of the R variable passed to data.
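The default id comes from the unevaluated data argument. A base-R sketch of that deparse(substitute(...)) mechanism, using a hypothetical make_id() helper:

```r
# Illustrates how a default id can be derived from the *name* of the variable
# the caller passes in, via the deparse(substitute(...)) idiom.
make_id <- function(data) {
  deparse(substitute(data))
}

my_training_set <- iris
make_id(my_training_set)  # → "my_training_set"
```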
data
    A data frame containing the features and target variable(s).
target
    Name(s) of the target variable(s).
    For classification this is a single column name. For survival analysis
    these are the names of the survival time and event columns, so it has
    length 2. For multilabel classification it contains the names of the
    logical columns that encode whether a label is present or not, and its
    length corresponds to the number of classes.
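For the multilabel case, a hypothetical data frame shows what the logical target columns look like; target would then be the vector of those column names:

```r
# Hypothetical multilabel data: each logical column encodes whether one label
# is present, so target has length 2 here (one entry per class).
d <- data.frame(
  x1      = rnorm(6),
  label.a = c(TRUE, FALSE, TRUE, TRUE, FALSE, FALSE),
  label.b = c(FALSE, FALSE, TRUE, FALSE, TRUE, TRUE)
)
target <- c("label.a", "label.b")

# every target column is logical, and length(target) == number of classes
all(vapply(d[target], is.logical, logical(1)))  # → TRUE
```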
weights
    Optional, non-negative case weight vector to be used during fitting.
    Cannot be set for cost-sensitive learning.
    Default is NULL, which means no (i.e. equal) weights.
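One common way to build such a vector (an illustration, not something the task requires) is to weight each observation by the inverse frequency of its class:

```r
# Inverse-class-frequency case weights: rarer classes get larger weights.
y <- iris$Species
w <- 1 / table(y)[y]          # indexing a table by a factor uses its level codes
w <- as.numeric(w / mean(w))  # rescale so the average weight is 1

# one non-negative weight per observation
length(w) == nrow(iris)  # → TRUE
```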
blocking
    An optional factor of the same length as the number of observations.
    Observations with the same blocking level "belong together": during a
    resampling iteration they are all put either into the training set or
    into the test set.
    Default is NULL, which means no blocking.
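The blocking guarantee can be illustrated in base R by sampling whole blocks instead of individual rows, so no blocking level is split across the two sets:

```r
# Sample blocks (factor levels), not rows: observations sharing a blocking
# level land on the same side of the split.
set.seed(1)
blocking <- factor(c("a", "a", "b", "b", "b", "c", "c"))

train_blocks <- sample(levels(blocking), 2)
train_idx <- which(blocking %in% train_blocks)
test_idx  <- which(!blocking %in% train_blocks)

# empty: no blocking level appears in both sets
intersect(as.character(blocking[train_idx]), as.character(blocking[test_idx]))
```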
coordinates
    Coordinates of a spatial data set, used for spatial partitioning of the
    data in a spatial cross-validation resampling setting.
    The coordinates must be numeric, and the provided data.frame must have
    the same number of rows as data and at least two columns.
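A hypothetical coordinates data.frame satisfying these constraints (numeric, two columns, one row per observation in data; the column names x/y are just placeholders):

```r
# Fabricated x/y coordinates for illustration only: same number of rows as
# data, at least two numeric columns.
data <- iris
coordinates <- data.frame(
  x = runif(nrow(data), 0, 100),
  y = runif(nrow(data), 0, 100)
)

nrow(coordinates) == nrow(data)  # → TRUE
```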
positive
    Positive class for binary classification (otherwise ignored and set to NA).
    Default is the first factor level of the target attribute.
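The stated default corresponds to the first level of the target factor, which for a factor built with default settings is the alphabetically first value:

```r
# Default factor levels are sorted, so "ham" becomes the first level here.
y <- factor(c("spam", "ham", "spam"))
levels(y)[1]  # → "ham"
```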
fixup.data
    Should some basic cleaning up of the data be performed?
    Currently this means removing empty factor levels for the columns.
    Possible choices are:
      "no"    = Don't do it.
      "warn"  = Do it, but warn about it.
      "quiet" = Do it, but keep silent.
    Default is "warn".
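What "removing empty factor levels" means can be shown with base R's droplevels(), which performs the same kind of cleanup:

```r
# A factor column carrying an unused level, and the same column after cleanup.
d <- data.frame(f = factor(c("a", "b"), levels = c("a", "b", "unused")))
levels(d$f)              # → "a" "b" "unused"

d_fixed <- droplevels(d)
levels(d_fixed$f)        # → "a" "b"
```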
check.data
    Should the sanity of the data be checked initially, at task creation?
    You should have good reasons to turn this off; one of them might be speed.
    Default is TRUE.