# smo-from-scratch


Implementing Sequential Minimal Optimization algorithm from John C. Platt's 1998 paper.

## Setup

For demonstration purposes, we will make use of the Gisette dataset.

To populate the `data/` folder with the necessary data, simply run*:

```shell
sh datasets.sh
```
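For reference, the script is roughly equivalent to the following (a hypothetical sketch; consult `datasets.sh` itself for the actual source URL and commands):

```shell
# Hypothetical equivalent of datasets.sh: fetch the LIBSVM copy of
# gisette_scale.bz2 and extract it into data/.
mkdir -p data
wget -O data/gisette_scale.bz2 \
  "https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/gisette_scale.bz2"
bunzip2 data/gisette_scale.bz2
```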

You should end up with the following data files:

```
data/
    gisette_scale  # extracted from gisette_scale.bz2
```
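The file is in the sparse LIBSVM/svmlight text format. As a quick sanity check (not part of this project's CLI, and assuming scikit-learn happens to be available in your environment), you could inspect it with:

```python
# Hypothetical sanity check: load the LIBSVM-formatted Gisette training split.
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("data/gisette_scale")
print(X.shape)         # expected: (6000, 5000) examples x features
print(sorted(set(y)))  # binary labels: [-1.0, 1.0]
```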

To enable reproducibility, Poetry has been used as the dependency manager. Install it with:

```shell
python3 -m pip install poetry
```

and then run:

```shell
python3 -m poetry install --no-dev
```

to install all required project dependencies in a virtual environment.
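You can confirm that the virtual environment was created correctly with Poetry's built-in environment inspector:

```shell
python3 -m poetry env info
```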

## Usage

Spawn a shell within the created virtual environment with:

```shell
python3 -m poetry shell
```

From within the shell, the following:

```shell
python cli.py --help
```

will guide you through the available options:

```
Usage: cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  fit   Perform a simple training of the SMO-based classifier, given a C.
  tune  Perform a hyperparameter tuning of the SMO-based classifier.
```

So, for example:

```shell
python cli.py fit --C=0.01
```

will train an SMO-based classifier with `C = 0.01`.
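Likewise, each subcommand documents its own flags (the exact options for `tune` are defined in `cli.py`); for example:

```shell
python cli.py tune --help
```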

## Citation

```bibtex
@techreport{platt1998sequential,
  author      = {Platt, John},
  title       = {Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines},
  institution = {Microsoft},
  year        = {1998},
  month       = {April},
  abstract    = {This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.},
  url         = {https://www.microsoft.com/en-us/research/publication/sequential-minimal-optimization-a-fast-algorithm-for-training-support-vector-machines/},
  number      = {MSR-TR-98-14},
}
```
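The core of the algorithm is the analytic solution of the smallest possible QP subproblem, over a single pair of Lagrange multipliers, that the abstract above refers to. Below is a minimal sketch of that joint update, following the equations in Platt's paper; variable names are illustrative and this is not the project's actual implementation:

```python
def smo_pairwise_step(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    """Analytically optimize one pair of Lagrange multipliers (Platt, 1998).

    a1, a2: current multipliers; y1, y2: labels in {-1, +1};
    E1, E2: prediction errors f(x_i) - y_i; Kij: kernel values; C: box bound.
    """
    # Feasible segment [L, H] that keeps both multipliers in [0, C]
    # while preserving the equality constraint y1*a1 + y2*a2 = const.
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    if L == H:
        return a1, a2  # no room to move on this pair

    eta = K11 + K22 - 2.0 * K12  # curvature of the objective along the line
    if eta <= 0:
        return a1, a2  # degenerate case; Platt evaluates the objective at L and H

    # Unconstrained optimum along the constraint line, clipped to [L, H].
    a2_new = min(max(a2 + y2 * (E1 - E2) / eta, L), H)
    # Move a1 in the opposite direction to preserve the constraint.
    a1_new = a1 + y1 * y2 * (a2 - a2_new)
    return a1_new, a2_new
```

The full algorithm wraps this step in heuristics for choosing which pair of multipliers to optimize next, plus an update of the threshold b, and terminates when no multiplier violates the KKT conditions.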

*The command `sh datasets.sh` works only on Linux-based systems - make the necessary alterations depending on your OS.
