Homework 2


This is the second homework for the Multiple Classifier Systems class. The project was forked and adapted from the first homework.

Description

The goal of this homework is to run an experiment comparing two different pruning strategies for pools of classifiers, each applied with three different validation sets. Bagging was chosen to generate pools of 100 Perceptrons, which are combined by hard (majority) voting. Metrics are collected for each fold in a 10-fold cross-validation setting: accuracy, F-measure, AUC, and g-mean. Means and standard deviations of these metrics are calculated in order to analyze the results. Two pairwise diversity measures are also computed for the pruned ensembles.
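For illustration, the following is a minimal sketch of the setup described above, using scikit-learn and a synthetic stand-in dataset. It is not the repository's code: dataset loading and the pruning step are omitted, and all variable names are placeholders.

```python
# Minimal sketch of the experimental setup (NOT the project's actual code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the CM1/JM1 datasets.
X, y = make_classification(n_samples=500, random_state=0)

scores = {"accuracy": [], "f-measure": [], "auc": [], "g-mean": []}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    # Pool of 100 Perceptrons; BaggingClassifier's predict() aggregates
    # the base classifiers' hard predictions by majority vote.
    # (Older scikit-learn versions use `base_estimator` instead of `estimator`.)
    pool = BaggingClassifier(estimator=Perceptron(), n_estimators=100,
                             random_state=0)
    pool.fit(X[train_idx], y[train_idx])
    y_pred = pool.predict(X[test_idx])

    scores["accuracy"].append(accuracy_score(y[test_idx], y_pred))
    scores["f-measure"].append(f1_score(y[test_idx], y_pred))
    scores["auc"].append(roc_auc_score(y[test_idx], y_pred))
    # g-mean: geometric mean of sensitivity and specificity.
    tp = np.sum((y_pred == 1) & (y[test_idx] == 1))
    tn = np.sum((y_pred == 0) & (y[test_idx] == 0))
    fn = np.sum((y_pred == 0) & (y[test_idx] == 1))
    fp = np.sum((y_pred == 1) & (y[test_idx] == 0))
    scores["g-mean"].append(np.sqrt((tp / (tp + fn)) * (tn / (tn + fp))))

# Means and standard deviations over the 10 folds.
for name, vals in scores.items():
    print(f"{name}: mean={np.mean(vals):.3f} std={np.std(vals):.3f}")
```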

Getting Started

Requirements

Installing

  • Clone this repository onto your machine
  • Download and install all the requirements listed above, in the given order
  • Download the CM1 and JM1 software defect prediction datasets in .arff format from the Promise repository, keeping their original filenames
  • Place both .arff files inside the data/ folder

Reproducing

  • Enter the code/ folder in your local repository
  • Run the experiment to produce every ensemble's predictions
python generate_predictions.py
  • Generate all metric results
python generate_metrics.py
  • Then, compare the desired scenarios (an example invocation follows this list)
python compare_scenarios.py [-f FILENAME] [-s SEPARATE] [-c1 COLUMN1] [-c2 COLUMN2]
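For example, with hypothetical argument values (the exact semantics of each flag are defined in compare_scenarios.py's argument parser):
python compare_scenarios.py -f metrics.csv -c1 scenario_a -c2 scenario_b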

Project Structure

.
├── code                             # Code files
│   ├── compare_scenarios.py         # Compares metric results
│   ├── generate_metrics.py          # Generates metric results
│   ├── generate_predictions.py      # Generates model predictions
│   ├── prefit_voting_classifier.py  # Voting classifier for prefit base classifiers
│   └── utils.py                     # Utility functions
├── comparisons                      # Result comparison files
├── data                             # Dataset files
├── metrics                          # Metric results files
├── predictions                      # Model prediction files
├── LICENSE.md
└── README.md
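A voting classifier for prefit base classifiers is needed because pruning selects a subset of already-trained pool members, whereas scikit-learn's own VotingClassifier refits its estimators. The following is a minimal sketch of that idea, not the implementation in prefit_voting_classifier.py:

```python
# Minimal sketch of hard voting over already-fitted ("prefit") classifiers;
# NOT the repository's actual implementation.
import numpy as np

class PrefitVotingClassifier:
    """Majority (hard) voting over a pool of prefit classifiers."""

    def __init__(self, estimators):
        self.estimators = estimators  # already fitted; never refit here

    def predict(self, X):
        # Shape (n_estimators, n_samples): one row of votes per classifier.
        votes = np.asarray([est.predict(X) for est in self.estimators])
        # Majority vote per sample (assumes non-negative integer labels).
        return np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), axis=0, arr=votes.astype(int)
        )

# Usage (hypothetical names; `kept_estimators` would be the members
# surviving pruning):
# pruned_pool = PrefitVotingClassifier(kept_estimators)
# y_pred = pruned_pool.predict(X_test)
```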

Author

License

This project is licensed under the MIT License - see the LICENSE.md file for details.