
Releases: EpistasisLab/tpot

Sparse matrix support, early stopping, and checkpointing

27 Sep 17:56
  • TPOT now supports sparse matrices via a new built-in configuration, "TPOT sparse". It uses a custom OneHotEncoder implementation that supports missing values and continuous features.

  • We have added an "early stopping" option that halts the optimization process if no improvement is made within a set number of generations. Use the early_stop parameter to enable it.

  • TPOT now reduces the number of duplicated pipelines between generations, which saves you time during the optimization process.

  • TPOT now supports custom scoring functions via the command-line mode.

  • We have added a new optional argument, periodic_checkpoint_folder, that allows TPOT to periodically save the best pipeline found so far to a local folder during the optimization process (see the sketch after this list).

  • TPOT no longer uses sklearn.externals.joblib when n_jobs=1 to avoid the potential freezing issue that scikit-learn suffers from.

  • We have added pandas as a dependency to read input datasets instead of numpy.recfromcsv. NumPy's recfromcsv function is unable to parse datasets with complex data types.

  • Fixed a bug where DEFAULT in the parameter(s) of a nested estimator raised a KeyError when exporting pipelines.

  • Fixed a bug related to setting random_state in nested estimators. The issue occurred in pipelines using SelectFromModel (with ExtraTreesClassifier as the nested estimator) or StackingEstimator when the nested estimator had a random_state parameter.

  • Fixed a bug in TPOT's missing-value imputation function so that it imputes along columns instead of rows.

  • Refined input checking for sparse matrices in TPOT.
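
As a quick illustration, here is a minimal sketch (not part of the original notes) combining the sparse configuration, early stopping, and checkpointing; the random sparse data and all parameter values are placeholder assumptions:

```python
# Minimal sketch: sparse input + early stopping + periodic checkpoints.
# The random sparse data and parameter values below are illustrative only.
import numpy as np
from scipy.sparse import random as sparse_random
from tpot import TPOTClassifier

X = sparse_random(100, 20, density=0.1, format='csr', random_state=42)
y = np.random.randint(0, 2, size=100)

tpot = TPOTClassifier(
    config_dict='TPOT sparse',               # built-in configuration for sparse matrices
    early_stop=5,                            # stop after 5 generations without improvement
    periodic_checkpoint_folder='tpot_ckpt',  # periodically save the best pipeline here
    verbosity=2,
)
tpot.fit(X, y)
```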

More built-in configurations, missing data support, and detailed API documentation

01 Jun 22:16
  • TPOT now detects whether there are missing values in your dataset and replaces them with the median value of the column.

  • TPOT now allows you to set a group parameter in the fit function so you can use the GroupKFold cross-validation strategy.

  • TPOT now allows you to set a subsample ratio of the training instances with the subsample parameter. For example, setting subsample=0.5 tells TPOT to create a fixed subsample of half of the training data for the pipeline optimization process. This parameter can speed up the pipeline optimization process, but may give less accurate performance estimates from cross-validation.

  • TPOT now has more built-in configurations, including TPOT MDR and TPOT light, for both classification and regression problems (a combined usage sketch follows this list).

  • TPOTClassifier and TPOTRegressor now expose three useful internal attributes, fitted_pipeline_, pareto_front_fitted_pipelines_, and evaluated_individuals_. These attributes are described in the API documentation.

  • Oh, TPOT now has thorough API documentation. Check it out!

  • Fixed a reproducibility issue where setting a random seed didn't necessarily result in the same results every time. This bug was present since TPOT v0.7.

  • Refined input checking in TPOT.

  • Removed code that was not compliant with Python 2.
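
A combined usage sketch of the features above; it assumes X, y, and a per-sample groups array already exist, and the cv-splitter and groups keywords follow the current TPOT API rather than this release's exact wording:

```python
# Sketch: built-in 'TPOT light' configuration, subsampling, and group-aware CV.
# Assumes X, y, and groups (one group label per sample) are already defined.
from sklearn.model_selection import GroupKFold
from tpot import TPOTClassifier

tpot = TPOTClassifier(
    config_dict='TPOT light',    # lightweight built-in configuration
    subsample=0.5,               # optimize on a fixed half-sized subsample
    cv=GroupKFold(n_splits=5),   # keep samples from the same group together
)
tpot.fit(X, y, groups=groups)

print(tpot.fitted_pipeline_)             # best pipeline found
print(len(tpot.evaluated_individuals_))  # number of pipelines evaluated
```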

Multiprocessing support and custom operator configurations

22 Mar 20:49

TPOT 0.7 is now out, featuring multiprocessing support for Linux and macOS, customizable operator configurations, and more.

  • TPOT now has multiprocessing support (Linux and macOS only). Use the n_jobs parameter in both TPOTClassifier and TPOTRegressor to accelerate pipeline optimization with multiple processes.

  • TPOT now allows you to customize the operators and parameters explored during the optimization process via the config_dict parameter. The format of this customized dictionary can be found in the online documentation (see also the sketch after this list).

  • TPOT now allows you to specify a time limit for evaluating a single pipeline (default limit is 5 minutes) during the optimization process with the max_eval_time_mins parameter, so TPOT won't spend hours evaluating overly complex pipelines.

  • We tweaked TPOT's underlying evolutionary optimization algorithm to work even better, including adopting the mu+lambda algorithm. This algorithm gives you more control over how many pipelines are generated in each iteration via the offspring_size parameter.

  • Fixed a reproducibility issue where setting a random seed didn't necessarily result in the same results every time. This bug was present since version 0.6.

  • Refined the default operators and parameters in TPOT, so TPOT 0.7 should work even better than 0.6.

  • TPOT now supports sample weights in the fitness function if some of your samples are more important to classify correctly than others. The sample weights option works the same as in scikit-learn, e.g., tpot.fit(x_train, y_train, sample_weights=sample_weights).

  • The default scoring metric in TPOT has been changed from balanced accuracy to accuracy, the same default metric for classification algorithms in scikit-learn. Balanced accuracy can still be used by setting scoring='balanced_accuracy' when creating a TPOT instance.
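
A sketch of a custom config_dict alongside the other 0.7 options; the two operators, their parameter grids, and the training data (X_train, y_train) are illustrative assumptions:

```python
# Sketch: restrict TPOT's search space with a custom config_dict.
# Keys are operator import paths; values map parameter names to candidate values.
from tpot import TPOTClassifier

custom_config = {
    'sklearn.linear_model.LogisticRegression': {
        'C': [0.01, 0.1, 1.0, 10.0],
        'penalty': ['l1', 'l2'],
    },
    'sklearn.preprocessing.StandardScaler': {},  # operator with no tuned parameters
}

tpot = TPOTClassifier(
    config_dict=custom_config,    # search only these operators/parameters
    n_jobs=4,                     # parallel evaluation (Linux and macOS only here)
    max_eval_time_mins=5,         # per-pipeline evaluation time limit
    offspring_size=50,            # pipelines generated per iteration (mu+lambda)
    scoring='balanced_accuracy',  # restore the pre-0.7 default metric
)
tpot.fit(X_train, y_train)
```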

Support for regression problems

02 Sep 19:52
  • TPOT now supports regression problems! We have created two separate TPOTClassifier and TPOTRegressor classes to support classification and regression problems, respectively. The command-line interface also supports this feature through the -mode parameter.
  • TPOT now allows you to specify a time limit for the optimization process with the max_time_mins parameter, so you no longer need to guess how long TPOT will take to recommend a pipeline (see the sketch after this list).
  • Added a new operator that performs feature selection using ExtraTrees feature importance scores.
  • XGBoost has been added as an optional dependency to TPOT. If you have XGBoost installed, TPOT will automatically detect your installation and use XGBClassifier and XGBRegressor in its pipelines.
  • TPOT now offers a verbosity level of 3 ("science mode"), which outputs the entire Pareto front instead of only the current best score. This feature may be useful for users looking to make a trade-off between pipeline complexity and score.
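
A short sketch of the new regression interface; the dataset and parameter values are illustrative:

```python
# Sketch: the new TPOTRegressor with a wall-clock limit and "science mode" output.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor

X_train, X_test, y_train, y_test = train_test_split(
    *load_diabetes(return_X_y=True), random_state=42
)

tpot = TPOTRegressor(
    max_time_mins=60,  # cap the whole optimization at one hour
    verbosity=3,       # "science mode": report the full Pareto front
)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
```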

Full support for scikit-learn Pipelines

20 Aug 03:06

After a couple months' hiatus in refactor land, we're excited to release the latest and greatest version of TPOT, v0.5. For the past couple of months, we worked on heavily refactoring TPOT's code base from a hacky research demo into a more elegant code base that will be easier to maintain in the long run. As an added bonus, TPOT now directly optimizes over and exports to scikit-learn Pipeline objects, so your auto-generated code should be much more readable.

Major changes in v0.5:

  • Major refactor: Each operator is defined in a separate class file. Hooray for easier-to-maintain code!
  • TPOT now exports directly to scikit-learn Pipelines instead of hacky code (an illustrative example follows this list).
  • Internal representation of individuals now uses scikit-learn pipelines.
  • Parameters for each operator have been optimized so TPOT spends less time exploring useless parameters.
  • We have removed pandas as a dependency and instead use numpy matrices to store the data.
  • TPOT now uses k-fold cross-validation when evaluating pipelines, with a default k = 3. This k parameter can be tuned when creating a new TPOT instance.
  • Improved scoring function support: Even though TPOT uses balanced accuracy by default, you can now have TPOT use any of the scoring functions that cross_val_score supports.
  • Added the scikit-learn Normalizer preprocessor.
  • Minor text fixes.
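
For illustration, this is the style of scikit-learn Pipeline the exporter now emits; the specific steps, hyperparameters, and variable names below are invented for the example, not taken from an actual export:

```python
# Illustrative only: the kind of Pipeline object TPOT's exporter now produces.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.ensemble import RandomForestClassifier

exported_pipeline = Pipeline([
    ('normalizer', Normalizer(norm='l2')),              # the new Normalizer preprocessor
    ('classifier', RandomForestClassifier(n_estimators=100)),
])
exported_pipeline.fit(training_features, training_classes)
results = exported_pipeline.predict(testing_features)
```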

Major upgrade

23 Jun 13:01

In TPOT 0.4, we've made some major changes to the internals of TPOT and added some convenience functions. We've summarized the changes below.

  • Added new sklearn models and preprocessors
    • AdaBoostClassifier
    • BernoulliNB
    • ExtraTreesClassifier
    • GaussianNB
    • MultinomialNB
    • LinearSVC
    • PassiveAggressiveClassifier
    • GradientBoostingClassifier
    • RBFSampler
    • FastICA
    • FeatureAgglomeration
    • Nystroem
  • Added operator that inserts virtual features for the count of features with values of zero
  • Reworked parameterization of TPOT operators
    • Reduced parameter search space with information from a scikit-learn benchmark
    • TPOT no longer generates arbitrary parameter values, but uses a fixed parameter set instead
  • Removed XGBoost as a dependency
    • Too many users were having install issues with XGBoost
    • Replaced with scikit-learn's GradientBoostingClassifier
  • Improved descriptiveness of TPOT command line parameter documentation
  • Removed min/max/avg details during fit() when verbosity > 1
    • Replaced with tqdm progress bar
    • Added tqdm as a dependency
  • Added fit_predict() convenience function
  • Added get_params() function so TPOT can operate in scikit-learn's cross_val_score & related functions (see the sketch after this list)
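
A sketch of that interoperability; the TPOT class name and settings are 0.4-era assumptions, and the modern sklearn.model_selection import path is used here:

```python
# Sketch: TPOT as an estimator inside scikit-learn's cross_val_score.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from tpot import TPOT

X, y = load_digits(return_X_y=True)
tpot = TPOT(generations=5, population_size=20, verbosity=0)

# get_params() lets scikit-learn clone the estimator inside cross_val_score
scores = cross_val_score(tpot, X, y, cv=3)
print(scores.mean())
```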

Zenodo release

06 Mar 17:02

Zenodo requires me to make a new release to assign a DOI, so here's that release. This is not a full release.

GECCO 2016 paper release

03 Feb 13:35

This is the version of TPOT that was used in the GECCO 2016 paper, "Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science."

Export functionality and more ML models

07 Dec 18:52

New in v0.2.0:

  • TPOT now has the ability to export the optimized pipelines to sklearn code (a sketch follows this list). See the documentation for more information.
  • Logistic regression, SVM, and k-nearest neighbors classifiers were added as pipeline operators. Previously, TPOT only included decision tree and random forest classifiers.
  • TPOT can now use arbitrary scoring functions for the optimization process. See the scoring function documentation for more information.
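
A sketch of the export workflow; the TPOT settings, training data, and output file name are illustrative assumptions:

```python
# Sketch: fit TPOT, then export the best pipeline as standalone sklearn code.
from tpot import TPOT

tpot = TPOT(generations=10)
tpot.fit(X_train, y_train)                # X_train/y_train assumed to exist
tpot.export('optimized_pipeline.py')      # write the best pipeline as sklearn code
```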

EvoBIO paper release

18 Nov 14:33
v0.1.3

Delete .coveragerc