Homework 3


This is the third homework for the Multiple Classifier Systems class. The project was forked and adapted from the first homework.

Description

The goal of this homework is to perform an experiment comparing two dynamic classifier selection strategies, two dynamic ensemble selection strategies, and a two-level classifier system that takes the hardness of examples into account. Bagging is used to generate pools of 100 Perceptrons, which are combined with hard voting. Accuracy, f-measure, AUC, and g-mean are collected for each fold in a 10-fold cross-validation setting, and the means and standard deviations of these metrics are used to analyze the results on two datasets.
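For reference, the sketch below outlines how such an experiment can be assembled in Python. It is illustrative only: the use of DESlib (OLA for dynamic classifier selection, KNORA-E for dynamic ensemble selection), the binary 0/1 label encoding, and the helper names are assumptions, and the repository's scripts may implement the selection strategies and the metric computation differently.

# Illustrative sketch: Bagging pool of Perceptrons, dynamic selection, per-fold metrics.
# DESlib (OLA, KNORA-E) and binary 0/1 labels are assumptions, not the repository's code.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from deslib.dcs.ola import OLA          # dynamic classifier selection
from deslib.des.knora_e import KNORAE   # dynamic ensemble selection

def g_mean(y_true, y_pred):
    # Geometric mean of sensitivity and specificity for binary labels.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return np.sqrt(sens * spec)

def evaluate(X, y, seed=0):
    scores = []
    folds = StratifiedKFold(10, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(X, y):
        X_tr, y_tr, X_te, y_te = X[train_idx], y[train_idx], X[test_idx], y[test_idx]
        # Pool of 100 Perceptrons; without predict_proba, bagging combines them by hard voting.
        pool = BaggingClassifier(Perceptron(max_iter=100), n_estimators=100,
                                 random_state=seed).fit(X_tr, y_tr)
        selector = KNORAE(pool_classifiers=pool).fit(X_tr, y_tr)  # or OLA(pool_classifiers=pool)
        y_pred = selector.predict(X_te)
        scores.append({"accuracy": accuracy_score(y_te, y_pred),
                       "f_measure": f1_score(y_te, y_pred),
                       "auc": roc_auc_score(y_te, y_pred),  # AUC from hard predictions here
                       "g_mean": g_mean(y_te, y_pred)})
    return scores  # summarize each metric with its mean and standard deviation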

Getting Started

Requirements

Installing

  • Clone this repository onto your machine
  • Download and install all the requirements listed above, in the given order
  • Download the CM1 and JM1 software defect prediction datasets in .arff format from the PROMISE repository, keeping their original file names
  • Place both .arff files inside the data/ folder
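The prediction script presumably reads these files itself, but for reference, one way to load a PROMISE .arff file in Python is sketched below. The file name data/cm1.arff, the class attribute name defects, and its true/false values are assumptions that may not match the downloaded datasets exactly.

# Assumed file name and label column; adjust to match the downloaded datasets.
import pandas as pd
from scipy.io import arff

raw, meta = arff.loadarff("data/cm1.arff")
frame = pd.DataFrame(raw)
# Nominal .arff attributes are read as byte strings, so decode the class column.
labels = frame["defects"].str.decode("utf-8")
X = frame.drop(columns=["defects"]).to_numpy()
y = (labels == "true").astype(int).to_numpy()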

Reproducing

  • Enter the code/ folder of your local repository
  • Run the experiment to produce every ensemble's predictions
python generate_predictions.py
  • Generate all metric results
python generate_metrics.py
  • Then, compare the desired scenarios
python compare_scenarios.py [-f FILENAME] [-s SEPARATE] [-c1 COLUMN1] [-c2 COLUMN2]

Project Structure

.
├── code                                  # Code files
│   ├── compare_scenarios.py              # Compare metric results
│   ├── generate_metrics.py               # Generate metric results
│   ├── generate_predictions.py           # Generate model predictions
│   ├── two_stage_tiebreak_classifier.py  # Two-stage ensemble based on instance hardness and a tiebreaking rule (sketched below)
│   └── utils.py                          # Utility functions
├── comparisons                           # Result comparison files
├── data                                  # Dataset files
├── metrics                               # Metric files
├── predictions                           # Model prediction files
├── LICENSE.md
└── README.md
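The two-stage classifier is only named in this README, so the sketch below shows one plausible shape for it, using the pool's vote margin as a simple stand-in for instance hardness and a second-stage model as the tiebreaker. The class name, the margin threshold, and the routing rule are assumptions; two_stage_tiebreak_classifier.py may work quite differently.

# Hypothetical two-stage scheme: the voted pool decides clear-cut (easy) cases,
# while near-tied (hard) cases are routed to a second-stage tiebreaker model.
import numpy as np

class TwoStageTiebreakSketch:
    def __init__(self, pool, tiebreaker, margin=0.1):
        self.pool = pool              # fitted BaggingClassifier of Perceptrons
        self.tiebreaker = tiebreaker  # fitted second-stage model, e.g. a dynamic selector
        self.margin = margin          # vote-share gap below which a case counts as hard

    def predict(self, X):
        X = np.asarray(X)
        # Collect each base Perceptron's vote and compute per-class vote shares.
        votes = np.stack([est.predict(X) for est in self.pool.estimators_])
        classes = self.pool.classes_
        shares = np.stack([(votes == c).mean(axis=0) for c in classes], axis=1)
        # A case is "hard" when the top two classes receive nearly equal vote shares.
        ranked = np.sort(shares, axis=1)
        hard = (ranked[:, -1] - ranked[:, -2]) <= self.margin
        pred = classes[np.argmax(shares, axis=1)]  # stage 1: plain majority vote
        if hard.any():
            pred[hard] = self.tiebreaker.predict(X[hard])  # stage 2: break the ties
        return pred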

Author

License

This project is licensed under the MIT License - see the LICENSE.md file for details.
