Interface Summary

Interface | Description |
---|---|
PerformanceComparator | Compares two PerformanceVectors. |

Class Summary

Class | Description |
---|---|
AbsoluteError | The absolute error: Sum(|label-predicted|)/#examples. |
AreaUnderCurve | This criterion calculates the area under the ROC curve. |
AttributeCounter | Returns a performance vector just counting the number of attributes currently used for the given example set. |
BinaryClassificationPerformance | This class encapsulates the well-known binary classification criteria precision and recall. |
CorrelationCriterion | Computes the empirical correlation coefficient 'r' between label and prediction. |
EstimatedPerformance | This class is used to store estimated performance values before, or even without, an actual performance test on a test set. |
Margin | The margin of a classifier, defined as the minimal confidence for the correct label. |
MDLCriterion | Measures the length of an example set (i.e. the number of attributes). |
MeasuredPerformance | Superclass for performance criteria that are actually measured (not estimated). |
MinMaxCriterion | This criterion should be used as a wrapper around other performance criteria (see MinMaxWrapper). |
MinMaxWrapper | Wraps a MinMaxCriterion around each performance criterion of type MeasuredPerformance. |
MultiClassificationPerformance | Measures the accuracy and classification error for both binary and multi-class classification problems. |
NormalizedAbsoluteError | Normalized absolute error is the total absolute error normalized by the error of simply predicting the average of the actual values. |
PerformanceCriterion | Each PerformanceCriterion contains a method to compute this criterion on a given set of examples, each of which must have a real and a predicted label. |
PerformanceEvaluator | A performance evaluator is an operator that expects a test ExampleSet as input, whose elements have both true and predicted labels, and delivers as output a list of performance values according to a list of performance criteria that it calculates. |
PerformanceVector | Handles several performance criteria. |
PerformanceVector.DefaultComparator | The default performance comparator compares the main criterion of two performance vectors. |
PredictionAverage | Returns the average value of the prediction. |
RelativeError | The average relative error: Sum(|label-predicted|/label)/#examples. |
RootMeanSquaredError | The root-mean-squared error. |
RootRelativeSquaredError | Root relative squared error is the square root of the total squared error made relative to what the error would have been had the prediction been the average of the actual values. |
SimpleAccuracy | This class calculates the accuracy without determining the complete contingency table. |
SimpleCriterion | Simple criteria are those whose error can be computed for each example and averaged over the number of examples. |
SquaredCorrelationCriterion | Computes the square of the empirical correlation coefficient 'r' between label and prediction. |
SquaredError | The squared error. |
WeightedPerformanceCreator | Returns a performance vector containing the weighted fitness value of the input criteria. |

Provides performance-evaluating operators and performance criteria.
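
Several of the criteria above reduce to short closed-form formulas. As a minimal, self-contained sketch of those formulas (plain Java; the class RegressionErrorSketch and its methods are invented here for illustration and are not part of this package), the errors computed by AbsoluteError, RelativeError, and RootMeanSquaredError could be written like this:

```java
/** Hypothetical sketch, not part of this package: the regression errors from the summary table. */
public final class RegressionErrorSketch {

    /** Absolute error: Sum(|label-predicted|)/#examples. */
    static double absoluteError(double[] label, double[] predicted) {
        double sum = 0.0;
        for (int i = 0; i < label.length; i++) {
            sum += Math.abs(label[i] - predicted[i]);
        }
        return sum / label.length;
    }

    /** Relative error: Sum(|label-predicted|/label)/#examples (mirrors the table's formula; assumes positive labels). */
    static double relativeError(double[] label, double[] predicted) {
        double sum = 0.0;
        for (int i = 0; i < label.length; i++) {
            sum += Math.abs(label[i] - predicted[i]) / label[i];
        }
        return sum / label.length;
    }

    /** Root-mean-squared error: sqrt(Sum((label-predicted)^2)/#examples). */
    static double rootMeanSquaredError(double[] label, double[] predicted) {
        double sum = 0.0;
        for (int i = 0; i < label.length; i++) {
            double diff = label[i] - predicted[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum / label.length);
    }

    public static void main(String[] args) {
        double[] label     = {1.0, 2.0, 3.0, 4.0};
        double[] predicted = {1.1, 1.8, 3.3, 3.9};
        System.out.println("absolute error: " + absoluteError(label, predicted));
        System.out.println("relative error: " + relativeError(label, predicted));
        System.out.println("RMSE:           " + rootMeanSquaredError(label, predicted));
    }
}
```

All three follow the pattern described for SimpleCriterion above: a per-example error that is summed and averaged over the number of examples (with a final square root in the RMSE case).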