# smote-oversampler

Here are 59 public repositories matching this topic...

A solution for the Kaggle credit-card-fraud dataset (https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud). XGBoost is an efficient gradient-boosting method: it makes an initial prediction, then computes similarity scores and gain to build trees that shrink the gap between the predicted and actual values. Grid search was used to find the best parameter tuning.

  • Updated Aug 14, 2023
  • Jupyter Notebook
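The gradient-boosting-plus-grid-search workflow described above can be sketched as follows. This is a minimal illustration, not the repository's code: scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, and the synthetic dataset, grid values, and scoring choice are all assumptions.

```python
# Sketch: grid search over gradient-boosting hyperparameters on an
# imbalanced dataset (sklearn's GradientBoostingClassifier stands in
# for XGBoost; the data here is a synthetic stand-in for the fraud set).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in: roughly 5% positive (fraud-like) class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Small illustrative grid; scored with F1 because plain accuracy is
# misleading when one class dominates.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    scoring="f1",
    cv=3,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, round(grid.best_score_, 3))
```

In practice the grid would cover more parameters (number of trees, subsampling rate, regularization), but the pattern is the same: fit the search on the training split only, then evaluate the winning estimator on held-out data.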

Built a model to determine the risk associated with extending credit to a borrower. Performed univariate and bivariate exploration using methods such as pair plots and heatmaps to detect outliers and to examine the behaviour and correlation of the features. Imputed the missing values using a KNN imputer and implemented SMOTE to address the imbalance in the classes.

  • Updated Nov 14, 2022
  • Jupyter Notebook
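The imputation-then-oversampling step above can be sketched with scikit-learn alone. This is a toy illustration, not the repository's code: the data shapes are made up, and SMOTE is hand-rolled in a few lines (interpolate between a minority sample and one of its minority-class nearest neighbours) rather than taken from imbalanced-learn.

```python
# Sketch: KNN imputation followed by a minimal SMOTE, assuming a small
# toy matrix with missing values and a 10:2 class imbalance.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy feature matrix with ~10% missing entries (hypothetical data).
X = rng.normal(size=(12, 4))
X[rng.random(X.shape) < 0.1] = np.nan
y = np.array([0] * 10 + [1] * 2)

# Step 1: fill each missing value from the 3 nearest rows.
X = KNNImputer(n_neighbors=3).fit_transform(X)

def smote(X_min, n_new, k=1, rng=rng):
    """Minimal SMOTE: synthesize n_new points on line segments between
    minority samples and their k nearest minority neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    idx = nn.kneighbors(X_min, return_distance=False)[:, 1:]  # drop self
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))          # random minority sample
        j = rng.choice(idx[i])                # one of its neighbours
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Step 2: oversample the minority class up to parity (2 -> 10 rows).
X_new = smote(X[y == 1], n_new=8)
X_bal = np.vstack([X, X_new])
y_bal = np.concatenate([y, np.ones(len(X_new), dtype=int)])
```

Imputing before oversampling matters: SMOTE interpolates between feature vectors, so it needs complete rows. In a real pipeline the imputer and sampler would be fit on the training split only.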

Machine learning for credit card default. Precision-recall metrics are calculated because the data are imbalanced. Confusion matrices and test statistics are compared across logistic regression with over- and under-sampling, decision trees, SVM, and ensemble learning with Random Forest, AdaBoost, and Gradient Boosting. The Easy Ensemble AdaBoost classifier appear…

  • Updated Jul 24, 2020
  • Jupyter Notebook
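The comparison described above, precision and recall with and without resampling on imbalanced data, can be sketched like this. It is a minimal illustration with made-up synthetic data, using plain random oversampling (duplicating minority rows) rather than any specific sampler from the repository.

```python
# Sketch: compare precision/recall of a classifier trained on the raw
# imbalanced data vs. on a randomly oversampled training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: ~5% positive (default-like) class.
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Baseline trained on the imbalanced training set as-is.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Random oversampling: repeat minority rows until the classes balance.
rng = np.random.default_rng(1)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - len(minority))
over = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_tr, X_tr[extra]]),
    np.concatenate([y_tr, y_tr[extra]]),
)

for name, model in [("baseline", base), ("oversampled", over)]:
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: tn={tn} fp={fp} fn={fn} tp={tp} "
          f"precision={precision_score(y_te, pred):.2f} "
          f"recall={recall_score(y_te, pred):.2f}")
```

The usual pattern this exposes: oversampling trades precision for recall, which is why the comparison is done on precision-recall and confusion matrices rather than accuracy.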
