Create a cost-sensitive classification task.
makeCostSensTask(
  id = deparse(substitute(data)),
  data,
  costs,
  blocking = NULL,
  coordinates = NULL,
  fixup.data = "warn",
  check.data = TRUE
)
Id string for the object.
Default is the name of the R variable passed to data.
A data frame containing the features and target variable(s).
A numeric matrix or data frame containing the misclassification costs.
We assume the general case of observation-specific costs:
the matrix has n rows, corresponding to the observations, in the same order as data.
The columns correspond to classes, and their names are the class labels
(if unnamed, y1 to yk are used as labels).
Each entry (i, j) of the matrix specifies the cost of predicting class j
for observation i.
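As an illustration, a cost matrix and task could be built as follows. This is a minimal sketch that assumes this section documents mlr's makeCostSensTask() and that the mlr package is installed; the iris data and random costs are purely illustrative. Note that data should not contain the target column here, since the costs matrix takes its role.

```r
library(mlr)

df <- iris
n <- nrow(df)

# One row of costs per observation, one column per class;
# the random values are for illustration only.
costs <- matrix(runif(n * 3, min = 0, max = 10), nrow = n)
colnames(costs) <- levels(df$Species)  # class labels as column names

# Drop the target column: the costs matrix encodes the learning signal.
df$Species <- NULL

task <- makeCostSensTask(id = "iris.costs", data = df, costs = costs)
print(task)
```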
An optional factor with one entry per observation.
Observations with the same blocking level “belong together”:
during a resampling iteration they are all placed either in the training set
or in the test set.
Default is NULL, which means no blocking.
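For example, a blocking factor might group repeated measurements of the same subject; the grouping below is a hypothetical sketch in base R.

```r
# 150 observations from 30 subjects, 5 measurements each: all measurements
# of one subject end up together in either the training or the test set.
blocking <- factor(rep(1:30, each = 5))

length(blocking)   # 150, one block assignment per observation
nlevels(blocking)  # 30 distinct blocks
```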
Coordinates of a spatial data set, used for spatial partitioning of the data
in a spatial cross-validation resampling setting.
Coordinates must be numeric.
The provided data.frame must have the same number of rows as data and at least two columns.
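A coordinates argument could then be supplied as a two-column numeric data.frame; the column names and random values below are illustrative placeholders only.

```r
# One (x, y) pair per observation; random values stand in for real coordinates.
n <- 150
coordinates <- data.frame(x = runif(n), y = runif(n))

# The constraints stated above: n rows, >= 2 columns, all numeric.
stopifnot(nrow(coordinates) == n,
          ncol(coordinates) >= 2,
          all(vapply(coordinates, is.numeric, logical(1))))
```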
Should some basic cleaning up of data be performed?
Currently this means removing empty factor levels for the columns.
Possible choices are:
“no” = Don't do it.
“warn” = Do it but warn about it.
“quiet” = Do it but keep silent.
Default is “warn”.
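The factor-level cleanup described above corresponds to what base R's droplevels() does for a single column; a small sketch:

```r
# A factor column with an unused ("empty") level "c":
f <- factor(c("a", "b", "a"), levels = c("a", "b", "c"))

levels(f)              # "a" "b" "c"
levels(droplevels(f))  # "a" "b" -- the empty level is removed
```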
Should sanity of data be checked initially at task creation?
You should have good reasons to turn this off (one might be speed).