
New methods #105

Open
6 of 19 tasks
glemaitre opened this issue Jul 21, 2016 · 48 comments
Labels
Status: Help Wanted · Type: Enhancement

Comments

@glemaitre
Member

glemaitre commented Jul 21, 2016

This is a non-exhaustive list of the methods that can be added for the next release.

Oversampling:

Prototype Generation/Selection:

  • Steady State Memetic Algorithm (SSMA)
  • Adaptive Self-Generating Prototypes (ASGP)

Ensemble

Regression

  • SMOTE for regression

P. Branco, L. Torgo and R. Ribeiro (2016). A Survey of Predictive Modeling on Imbalanced Domains. ACM Comput. Surv. 49, 2, 31. DOI: http://dx.doi.org/10.1145/2907070

Branco, P., Torgo, L. and Ribeiro, R.P. (2017). "Pre-processing Approaches for Imbalanced Distributions in Regression." Special Issue on Learning in the Presence of Class Imbalance and Concept Drift, Neurocomputing Journal (submitted).

@glemaitre
Member Author

@dvro @chkoar you can add anything there. We can make a PR to add this stuff to the todo list.

We should also discuss where these methods will be added (under-/over-sampling or a new module).

@chkoar
Member

chkoar commented Jul 21, 2016

SGP should be placed in a new module/package, like in scikit-protopy. generation is a reasonable name for this kind of algorithm.

@glemaitre
Member Author

@chkoar What would be the reason to disassociate over-sampling and generation?

@chkoar
Member

chkoar commented Jul 21, 2016

Actually none. Just for semantic reasons. Obviously, prototype generation methods could be considered as over-sampling methods.

@dvro
Member

dvro commented Jul 21, 2016

@glemaitre actually, oversampling is different from prototype generation:

Prototype Selection:
given a set of samples S, a PS method selects a subset S', where S' ⊆ S and |S'| < |S|
Prototype Generation:
given a set of samples S, a PG method generates a new set S', where |S'| < |S|.
Oversampling:
given a set of samples S, an OS method generates a new set S', where |S'| > |S| and S ⊆ S'
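
A minimal sketch of these three behaviours using samplers that already exist in imbalanced-learn (treating RandomUnderSampler as prototype selection, ClusterCentroids as prototype generation, and SMOTE as oversampling); the fit_resample API reflects the current release and is an assumption relative to this old thread:

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler, ClusterCentroids
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("original:", Counter(y))  # roughly 900 vs 100

samplers = [
    RandomUnderSampler(random_state=0),  # prototype selection: S' is a subset of S
    ClusterCentroids(random_state=0),    # prototype generation: a new, smaller S'
    SMOTE(random_state=0),               # oversampling: S' is a superset of S
]
for sampler in samplers:
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```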

@chkoar
Member

chkoar commented Jul 21, 2016

Thanks for the clarification @dvro. That could be placed in the wiki!

@glemaitre glemaitre added this to the 0.2.alpha milestone Jul 27, 2016
@glemaitre glemaitre added new feature and removed Type: Enhancement labels Jul 27, 2016
@glemaitre glemaitre changed the title from New methods for Release 0.2 to New methods Aug 31, 2016
@glemaitre glemaitre modified the milestones: 0.2.alpha, 0.3.alpha Aug 31, 2016
@dabrze
Contributor

dabrze commented Jan 20, 2017

Hi,

If by SPIDER you mean the algorithms from "Selective Pre-processing of Imbalanced Data for Improving Classification Performance" and "Learning from imbalanced data in presence of noisy and borderline examples", maybe I could be of some help. I know the authors and could perhaps implement a Python version of this algorithm with their "supervision"? That might be "safer" than using only the pseudo-code from the conference papers.

@glemaitre
Member Author

Yes, it is this article. We would be happy to have a PR on that. We are going to run
a sprint at some point to develop some of the above methods.

The only important thing is to follow the scikit-learn conventions regarding the estimator,
but this is something that we will also take care of at the time of review.

@glemaitre glemaitre removed this from the 0.3.alpha milestone Aug 14, 2017
@chkoar
Member

chkoar commented Aug 31, 2017

MetaCost could be a nice addition.

@glemaitre
Member Author

Yep. You can add it to the previous list.

@mwydmuch

Hi,
I hope this is a good place to write about it:
I have an implementation of Roughly Balanced Bagging (an under-bagging method) with an extension for multiclass problems (based on this article), written as an extension of the bagging class from sklearn a few months ago. I will gladly polish this implementation to match this package's conventions for bagging classifiers and make a pull request if you are interested in such a contribution.

@chkoar
Member

chkoar commented Nov 22, 2017

@mwydmuch PRs are always welcome. With the addition of #360 we will start the ensemble methods module, and I think that we'll deprecate the current ensemble-based samplers.

@chkoar
Member

chkoar commented Nov 22, 2017

@glemaitre do you think that we should have requirements, e.g. number of citations, before we merge an implementation into the package?

@glemaitre
Member Author

glemaitre commented Nov 22, 2017 via email

@chkoar
Member

chkoar commented Nov 22, 2017

@glemaitre I was thinking of asking @mwydmuch to include a comparison with the BalancedBaggingClassifier (#360), but I thought that would be a nice addition after the implementation, not a requirement. I think that we are on the same side here. Apart from that, we actually have requirements like the dependencies, right?

@glemaitre
Member Author

Yes, regarding the dependencies, we are limiting ourselves to numpy/scipy/scikit-learn. Then, we can see if we can vendor code, but it should be avoided as much as possible.

Regarding the comparison, that is a bit my point in making a benchmark. I need to fix #360 in fact :)

@lrq3000

lrq3000 commented Aug 13, 2019

A new one:

Sharifirad, S., Nazari, A., & Ghatee, M. (2018). Modified smote using mutual information and different sorts of entropies. arXiv preprint arXiv:1803.11002.

Includes MIESMOTE, MAESMOTE, RESMOTE and TESMOTE.

Since SMOTE is mostly a meta-algorithm that interpolates new samples, with a strategy that changes depending on the author, would it be possible to implement a generic SMOTE model where the user can provide a custom function to make their own version of SMOTE? This might also ease the writing (and contribution) of new SMOTE variants.
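
A rough sketch of what such a pluggable interface could look like; generic_smote and the interpolation callable are hypothetical names for illustration, not anything that exists in imbalanced-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def generic_smote(X_min, n_new, interpolation, k=5, random_state=0):
    """Hypothetical generic SMOTE: delegate the sample-creation strategy to the user.

    `interpolation(x, neighbor, rng)` must return one synthetic sample.
    """
    rng = np.random.RandomState(random_state)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.randint(len(X_min))          # pick a minority sample
        j = idx[i, rng.randint(1, k + 1)]    # pick one of its k neighbors (column 0 is itself)
        synthetic.append(interpolation(X_min[i], X_min[j], rng))
    return np.vstack(synthetic)

# Vanilla SMOTE interpolation is then just one possible strategy.
vanilla = lambda x, nb, rng: x + rng.uniform() * (nb - x)
```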

@halimkas

Hi,
I hope this is a good place to write about it:
I have an implementation of Roughly Balanced Bagging (an under-bagging method) with an extension for multiclass problems (based on this article), written as an extension of the bagging class from sklearn a few months ago. I will gladly polish this implementation to match this package's conventions for bagging classifiers and make a pull request if you are interested in such a contribution.

Hi Marek,
Kindly share with me the Python implementation of Roughly Balanced Bagging; I will be grateful for your help.

Thank you.

Haleem

@Matgrb

Matgrb commented Jan 26, 2020

Hello,

I am writing because, in the use case I am currently working on, we would love to have a certain oversampling feature, yet it is not implemented anywhere. Therefore I would like to propose it here.

We are building an NLP model for binary classification, where one of the classes is strongly imbalanced. Therefore, one of the approaches would be to oversample using data augmentation techniques for NLP, e.g. using the nlpaug library to replace some words with synonyms. Having a class in the library that allows packaging the augmentation into an sklearn pipeline would be great! I can also see this being used in computer vision.

Let me know what you think: could this become one of the features of this library? In that case I would love to contribute. If it doesn't fit into this library, do you know any other open source project where it would fit?

Cheers,
Mateusz
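
One way this could slot into the existing API without a hard NLP dependency is imblearn's FunctionSampler with a user-supplied augmentation function. The augment_minority helper below is hypothetical and only duplicates minority texts; a real version would rewrite each copy with an augmenter such as nlpaug (integer-encoded labels assumed):

```python
import numpy as np
from imblearn import FunctionSampler
from imblearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def augment_minority(X, y, n_copies=2):
    """Hypothetical augmenter: append extra copies of the minority-class texts.

    A real implementation would pass each copy through an augmenter
    (e.g. synonym replacement) instead of duplicating it verbatim.
    """
    X, y = np.asarray(X, dtype=object), np.asarray(y)
    minority = np.bincount(y).argmin()
    X_min, y_min = X[y == minority], y[y == minority]
    X_aug = np.concatenate([X] + [X_min] * n_copies)
    y_aug = np.concatenate([y] + [y_min] * n_copies)
    return X_aug, y_aug

# The sampler is only applied during fit, never at predict time,
# because the imblearn pipeline calls fit_resample only while fitting.
pipe = Pipeline([
    ("augment", FunctionSampler(func=augment_minority, validate=False)),
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
```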

@chkoar chkoar added Type: Enhancement and Status: Help Wanted and removed new features labels Apr 16, 2020
@beeb

beeb commented Jul 31, 2020

Not sure if this is the right place, but for my work I implemented a custom version of SMOTE for Regression as described in this paper:

Torgo L., Ribeiro R.P., Pfahringer B., Branco P. (2013) SMOTE for Regression. In: Correia L., Reis L.P., Cascalho J. (eds) Progress in Artificial Intelligence. EPIA 2013. Lecture Notes in Computer Science, vol 8154. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40669-0_33

As mentioned in the original post, it would be nice to get SMOTE for Regression in imbalanced-learn.

@Sandy4321

I think that everyone is right in this discussion. However, I agree with @glemaitre that the main indexing should be by method type, not characteristic. But it would be necessary to have @chandu8542's criteria on the benchmarking to see how all algorithms perform in terms of memory, speed, etc., using some datasets at different set sizes. Of course, such a benchmark should come with narrative documentation to guide the method's choice by the user.
As always, PRs are welcome. We would gladly put our time into reviewing such a PR so that nobody ever again faces the same troubles.

Great ideas! Do you have something implemented? For example:

criteria on the benchmarking to see how all algorithms perform in terms of memory, speed, etc., using some datasets at different set sizes

@Sandy4321

Not sure if this is the right place, but for my work I implemented a custom version of SMOTE for Regression as described in this paper:

Torgo L., Ribeiro R.P., Pfahringer B., Branco P. (2013) SMOTE for Regression. In: Correia L., Reis L.P., Cascalho J. (eds) Progress in Artificial Intelligence. EPIA 2013. Lecture Notes in Computer Science, vol 8154. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40669-0_33

As mentioned in the original post, it would be nice to get SMOTE for Regression in imbalanced-learn.

Can you share a link to the code?

@Sandy4321

Hi,
I hope this is a good place to write about it:
I have an implementation of Roughly Balanced Bagging (an under-bagging method) with an extension for multiclass problems (based on this article), written as an extension of the bagging class from sklearn a few months ago. I will gladly polish this implementation to match this package's conventions for bagging classifiers and make a pull request if you are interested in such a contribution.

Hi Marek,
Kindly share with me the Python implementation of Roughly Balanced Bagging; I will be grateful for your help.

Thank you.

Haleem

Is the code shared, so far?

@halimkas

Is the code shared, so far?

Not yet!

Thank you.

Haleem

@chkoar
Member

chkoar commented Jul 31, 2020

@beeb actually they call it imbalanced regression, but in my view it is not. They call the whole thing utility-based learning, and the key point is the utility function that is used, right? In any case, you can draft an implementation and we can talk about it.

@beeb

beeb commented Jul 31, 2020

Can you share a link to the code?

Here is the code of the original paper, which is also what I took as inspiration for my modified implementation: https://rdrr.io/cran/UBL/man/smoteRegress.html

@beeb

beeb commented Jul 31, 2020

@beeb actually they call it imbalanced regression, but in my view it is not. They call the whole thing utility-based learning, and the key point is the utility function that is used, right? In any case, you can draft an implementation and we can talk about it.

I'm not sure what you are saying. It's SMOTE, but they use a function to determine whether a data point is common or "rare" depending on how far from the mean of the distribution it falls (kind of: I used the extremes of the whiskers of a box plot as the inflection points for a CubicHermiteSpline that defines "rarity"; I think they also use this in the original code). Then they oversample the rare points by selecting a random nearest neighbour and computing the new sample in between (just like SMOTE); the difference is that the label value of the new point is a weighted average of the labels of the two parents.
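
For concreteness, a condensed sketch of that procedure; the relevance callable, threshold, and parameter names are simplifications for illustration, not the UBL implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_for_regression(X, y, relevance, threshold=0.8, n_new=100, k=5, seed=0):
    """Sketch of SMOTE for regression: interpolate features between a rare point
    and one of its rare neighbours, and average the two targets accordingly."""
    rng = np.random.RandomState(seed)
    rare = np.where(relevance(y) >= threshold)[0]   # "rare" = high relevance of the target
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X[rare])
    _, idx = nn.kneighbors(X[rare])                 # idx indexes into the rare subset
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.randint(len(rare))
        j = idx[i, rng.randint(1, k + 1)]           # random neighbour, skipping self
        frac = rng.uniform()
        X_new.append(X[rare[i]] + frac * (X[rare[j]] - X[rare[i]]))
        # target of the synthetic point: average of the parents, weighted by proximity
        y_new.append((1 - frac) * y[rare[i]] + frac * y[rare[j]])
    return np.vstack(X_new), np.asarray(y_new)
```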

@chkoar
Member

chkoar commented Jul 31, 2020

@beeb yeap, I have read all their related work. Since they involve that utility function, to me it is not imbalanced regression but something like cost-sensitive/interested regression. Apart from my personal opinion, I think that this method still remains in the scope of the package, so I would love to see it implemented in imbalanced-learn. Please open a PR when you have time. It will be much appreciated.

@zoj613

zoj613 commented Jan 31, 2021

Is there any interest from the maintainers in adding Localized Random Affine Shadowsampling (LoRAS)?

To quote from the paper's abstract:

We observed that LoRAS, on average generates better ML models in terms of F1-Score and Balanced accuracy. Another key observation is that while most of the extensions of SMOTE we have tested, improve the F1-Score with respect to SMOTE on an average, they compromise on the Balanced accuracy of a classification model. LoRAS on the contrary, improves both F1 Score and the Balanced accuracy thus produces better classification models. Moreover, to explain the success of the algorithm, we have constructed a mathematical framework to prove that LoRAS oversampling technique provides a better estimate for the mean of the underlying local data distribution of the minority class data space.

If there is interest in inclusion to the library, then I can prepare a PR.

Reference:
Bej, S., Davtyan, N., Wolfien, M. et al. LoRAS: an oversampling approach for imbalanced datasets. Mach Learn 110, 279–301 (2021). https://doi.org/10.1007/s10994-020-05913-4


@hayesall
Member

hayesall commented Feb 3, 2021

Hey @zoj613 and @Sandy4321, please keep the discussion focused; it creates a lot of noise otherwise.

@zoj613 I'm -1 on including it right now.

We loosely follow scikit-learn's rule of thumb to keep the maintenance burden down. Methods should be roughly 3 years old and have 200+ citations.

@zoj613

zoj613 commented Feb 4, 2021

Hey @zoj613 and @Sandy4321, please keep the discussion focused; it creates a lot of noise otherwise.

@zoj613 I'm -1 on including it right now.

We loosely follow scikit-learn's rule of thumb to keep the maintenance burden down. Methods should be roughly 3 years old and have 200+ citations.

Fair enough. Keeping to the topic at hand, I submitted a PR at #789 implementing SMOTE-RSB from the checklist in the OP.

@glemaitre
Member Author

I think that we should prioritize the SMOTE variants that we want to include.
We could reuse the benchmark proposed there: analyticalmindsltd/smote_variants#14 (comment)

Basically, we could propose to implement the following:

  • polynom-fit-SMOTE
  • ProWSyn
  • SMOTE-IPF
  • Lee
  • SMOBD
  • G-SMOTE

Currently, we have SVM/KMeans/KNN-based SMOTE for historical reasons rather than for performance reasons.

I think that we should probably make an effort regarding the documentation. Currently, we show the differences in how the methods sample (this is already a good point). However, I think that we should have a clearer guideline on which SMOTE variant works best for which applications. What I mean is that SMOTE, SMOTENC, and SMOTEN might already cover a good basis.

@BradKML

BradKML commented Sep 7, 2022

@glemaitre are there any standard APIs to follow for the SMOTE variants?

@glemaitre
Member Author

Whenever possible, it should inherit from SMOTE.
You can check the current code hierarchy that we have for SMOTE.
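
A minimal sketch of what that inheritance could look like; the class name, the post_filter hook, and the reliance on the private _fit_resample method are assumptions for illustration, not an established extension point:

```python
from imblearn.over_sampling import SMOTE

class FilteredSMOTE(SMOTE):
    """Placeholder variant: run vanilla SMOTE, then hand the result to a
    user-supplied filter (e.g. an IPF-style noise filter)."""

    def __init__(self, post_filter=None, sampling_strategy="auto",
                 random_state=None, k_neighbors=5):
        super().__init__(sampling_strategy=sampling_strategy,
                         random_state=random_state, k_neighbors=k_neighbors)
        self.post_filter = post_filter

    def _fit_resample(self, X, y):
        # Reuse the parent SMOTE resampling, then optionally clean up the result.
        X_res, y_res = super()._fit_resample(X, y)
        if self.post_filter is not None:
            X_res, y_res = self.post_filter(X_res, y_res)
        return X_res, y_res
```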
