Generate threshold vs. performance(s) for 2-class classification.
Source: R/generateThreshVsPerf.R
Generates data on threshold vs. performance(s) for 2-class classification that can be used for plotting.
Arguments
- obj
(list of Prediction | list of ResampleResult | BenchmarkResult)
A single prediction object, a list of them, a single resample result, a list of them, or a benchmark result. If you pass a list, e.g. one produced by different learners you want to compare, name its elements with the labels you want to see in the plots, typically learner short names or ids.
- measures
(Measure | list of Measure)
Performance measure(s) to evaluate. Default is the default measure for the task, see getDefaultMeasure.
- gridsize
(integer(1))
Grid resolution for the x-axis (threshold). Default is 100.
- aggregate
(logical(1))
Whether to aggregate ResamplePredictions or to plot the performance of each iteration separately. Default is TRUE.
- task.id
(character(1))
Selected task in the BenchmarkResult to produce plots for; ignored otherwise. Default is the first task.
Value
(ThreshVsPerfData). A named list containing the measured performance across the threshold grid, the measures, and whether the performance estimates were aggregated (only applicable for (list of) ResampleResults).
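For illustration, a minimal sketch of generating and inspecting this object. It assumes the mlr package is attached and uses the bundled sonar.task two-class task with an LDA learner set to predict probabilities; these particular choices are assumptions made for the example, not requirements of the function.

library(mlr)

# A probabilistic classifier is needed so that the threshold can be varied
lrn <- makeLearner("classif.lda", predict.type = "prob")
mod <- train(lrn, sonar.task)
pred <- predict(mod, task = sonar.task)

# Evaluate false positive rate, true positive rate and error rate across the threshold grid
d <- generateThreshVsPerfData(pred, measures = list(fpr, tpr, mmce))

str(d, max.level = 1)   # list components as described under Value
head(d$data)            # measured performance per threshold grid point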
See also
Other generate_plot_data: generateCalibrationData(), generateCritDifferencesData(), generateFeatureImportanceData(), generateFilterValuesData(), generateLearningCurveData(), generatePartialDependenceData(), plotFilterValues()

Other thresh_vs_perf: plotROCCurves(), plotThreshVsPerf()
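A companion sketch of the thresh_vs_perf plotting helpers applied to the same kind of object, continuing the example above; the cross-validation setup (cv3) is an assumption chosen for illustration.

# Plot performance over the threshold grid
plotThreshVsPerf(d)

# ROC-style plot from the same kind of data (uses fpr and tpr)
plotROCCurves(generateThreshVsPerfData(pred, measures = list(fpr, tpr)))

# Per-iteration curves from a resample result instead of a single prediction
r <- resample(lrn, sonar.task, cv3, show.info = FALSE)
d2 <- generateThreshVsPerfData(r, measures = list(fpr, tpr), aggregate = FALSE)
plotThreshVsPerf(d2)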