Class Ai4r::Experiment::ClassifierEvaluator
In: lib/ai4r/experiment/classifier_evaluator.rb
Parent: Object

The ClassifierEvaluator is useful to compare different classifier algorithms. The evaluator builds each classifier from the same data examples, and provides methods to evaluate their performance in parallel. It is a convenient tool to compare the performance of different algorithms, the same algorithm with different parameters, or your own new algorithm against the classic classifiers.
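The shared build/eval pattern behind the evaluator can be sketched in plain Ruby. The classes below (MajorityClassifier, FirstClassClassifier, Evaluator) are hypothetical stand-ins that only illustrate the interface; they are not the real Ai4r implementations.

```ruby
# Stand-in classifier: predicts the most frequent class seen during build.
class MajorityClassifier
  def build(examples)
    classes = examples.map(&:last)
    @prediction = classes.max_by { |c| classes.count(c) }
    self
  end

  def eval(_item)
    @prediction
  end
end

# Stand-in classifier: predicts the class of the first training example.
class FirstClassClassifier
  def build(examples)
    @prediction = examples.first.last
    self
  end

  def eval(_item)
    @prediction
  end
end

# The evaluator builds and evaluates every registered classifier
# against the same data, collecting one result per classifier.
class Evaluator
  attr_reader :classifiers

  def initialize
    @classifiers = []
  end

  def <<(classifier)
    @classifiers << classifier
    self  # returning self allows chained << calls
  end

  def build(examples)
    classifiers.each { |c| c.build(examples) }
  end

  def eval(item)
    classifiers.map { |c| c.eval(item) }
  end
end

data = [['a', 'Y'], ['b', 'Y'], ['c', 'N']]
evaluator = Evaluator.new
evaluator << MajorityClassifier.new << FirstClassClassifier.new
evaluator.build(data)
predictions = evaluator.eval(['d'])  # one prediction per classifier
```

The key design point mirrored here is that every classifier exposes the same build/eval interface, so the evaluator can treat them uniformly.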

Methods

<<   add_classifier   build   eval   new   test  

Attributes

build_times  [R] 
classifiers  [R] 
eval_times  [R] 

Public Class methods

new()

Creates a new ClassifierEvaluator with no classifiers.

Public Instance methods

<<(classifier)

Alias for add_classifier

add_classifier(classifier)

Add a classifier instance to the test batch.

build(data_set)

Build all classifiers, using the data examples found in data_set. The last attribute of each item is treated as the item's class. Building times are measured separately, and can be accessed through the build_times attribute reader.
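Measuring each build separately can be sketched with the standard Benchmark module. SlowClassifier below is a hypothetical stand-in, not an Ai4r class:

```ruby
require 'benchmark'

# Hypothetical stand-in classifier that does some "training" work.
class SlowClassifier
  def build(examples)
    sleep 0.01                     # simulate training cost
    @prediction = examples.first.last
    self
  end
end

classifiers = [SlowClassifier.new, SlowClassifier.new]
data = [['sunny', 'Play'], ['rainy', 'NoPlay']]

# One timing per classifier, mirroring the build_times reader.
build_times = classifiers.map do |c|
  Benchmark.realtime { c.build(data) }
end

build_times.each { |t| puts format('built in %.4f s', t) }
```

Timing each classifier individually, rather than the whole batch, is what makes per-algorithm comparison possible.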

eval(data)

You can evaluate new data, predicting its class with every classifier. e.g.

  classifier.eval(['New York',  '<30', 'F'])
  => ['Y', 'Y', 'Y', 'N', 'Y', 'Y', 'N']

Evaluation times are measured separately, and can be accessed through the eval_times attribute reader.

test(data_set)

Test all classifiers using a data set. The last attribute of each item is treated as the expected class. Data items are evaluated using all classifiers: evaluation times, success rate, and number of classification errors are returned in a data set. The returned data set has a row for every classifier tested, and the following attributes:

  ["Classifier", "Testing Time", "Errors", "Success rate"]
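How the Errors and Success rate columns are derived can be sketched as follows. The predict lambda is a hypothetical stand-in for a built classifier:

```ruby
# Test data: the last attribute of each item is the expected class.
test_data = [
  ['New York', '<30', 'F', 'Y'],
  ['Chicago',  '>30', 'M', 'N'],
  ['New York', '>30', 'F', 'Y'],
  ['Chicago',  '<30', 'M', 'Y']
]

# Hypothetical stand-in for a built classifier.
predict = ->(item) { item.first == 'New York' ? 'Y' : 'N' }

# An item counts as an error when the prediction on its attributes
# (all but the last) differs from its expected class (the last).
errors = test_data.count do |item|
  predict.call(item[0...-1]) != item.last
end
success_rate = (test_data.size - errors).to_f / test_data.size

# A row like the one in the returned data set (testing time omitted here):
row = ['StandInClassifier', errors, success_rate]
```

With this data the stand-in misclassifies one of the four items, giving 1 error and a 0.75 success rate.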
