LEARNING-AND-EXPERIMENTING_DATA-SCIENCE

◦ Master the Data World: Learn, Experiment, Conquer with Data Science!

Connect with me: LinkedIn 🚀

◦ Developed with the software and tools below.

Jupyter · HTML5 · Python · JSON


📖 Table of Contents

  • 📍 Overview
  • Objectives 👼🏼
  • Prerequisites 🗝
  • 📦 Features
  • 📂 Repository Structure
  • ⚙️ Modules


📍 Overview

The Learning-and-Experimenting_Data-Science repository is a comprehensive learning resource for Data Science in which Machine Learning concepts are demystified. Aimed primarily at self-learners, it hosts a wide array of solved and explained Machine Learning projects along with deep dives into Keras-based deep learning tasks, making it a valuable go-to for anyone who wants practical, hands-on knowledge of data science and machine learning. Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data. Then, what is data?
Data is a collection of facts, such as numbers, words, measurements, observations, or simply descriptions of things.

This repository covers beginner-level topics in Machine Learning and Data Science, as well as some advanced topics. It also contains several great projects I built while learning. It took me almost four months to learn all of the basics, and this repository is very close to my heart 🖤. I learnt most of the material from Codebasics and some of it from other sources. Hats off to the man behind that YouTube channel for making this possible; he is an awesome teacher! One of the best parts of his lessons is that he gives exercises for every topic. Please visit his YouTube channel and appreciate his hard work. Note that I update this repository on a regular basis, and I warmly welcome contributions and, of course, feedback.

Objectives 👼🏼

  • Readable code that anyone can use as an e-book for their own learning
  • A reference I can return to when I get stuck

Prerequisites 🗝

  • Basic Python
  • Basic math (calculus, matrix operations, algebra, statistics)

📦 Features

| Feature | Description |
|---------|-------------|
| ⚙️ Architecture | The codebase is organized into folders by topic (e.g., Deep Learning, Feature Engineering, Data Preprocessing). |
| 🔗 Dependencies | Uses popular data science libraries such as TensorFlow, Keras, NumPy, Pandas, Matplotlib, and Seaborn. |
| 🧩 Modularity | Most of the work lives in Jupyter notebooks, with different concepts cleanly separated into different notebooks. |
| 🧪 Testing | No testing framework or testing strategy was identified; formal tests are uncommon in a repository aimed at self-learning and understanding concepts. |
| ⚡️ Performance | Performance is not evaluated explicitly, but the common data science libraries used are already optimized for these tasks. |
| 🔐 Security | No explicit security measures are defined; as a data-science-oriented repository, major security concerns do not apply. |
| 🔀 Version Control | The repository uses Git for version control, but no branch management or other advanced strategies appear to be in place. |
| 🔌 Integrations | No direct integrations with other systems or services are evident; the repository primarily works with locally stored data. |
| 📶 Scalability | Since the project is about learning data science concepts, scalability in the traditional software sense does not apply. |

📂 Repository Structure

```
└── Learning-and-Experimenting_Data-Science/
    ├── 200  Machine Learning Projects Solved and Explained _ by Aman Kharwal _ Medium.mht
    ├── Deep Learning/
    │   ├── 1. Keras Sequential Exercise Solution.ipynb
    │   ├── 1. Keras Sequential.ipynb
    │   ├── 2. Movie_Review_Classification_using_Tensorflow_&_Google_Colab.ipynb
    │   ├── 3. Activation Functions.ipynb
    │   ├── 4. Handwritten Digits recognization.ipynb
    │   ├── Intro.ipynb
    ├── Feature Engineering/
    │   ├── 1.0 Removing Outlier using Percentile.ipynb
    │   ├── 1.1 Removing Outlier using Percentile Exercise Solution.ipynb
    │   ├── 2.0 Standard Deviation, Z-score.ipynb
    │   ├── 2.1 Standard Deviation, Z-score Exercise Solution.ipynb
    │   ├── 3.0 Using IQR.ipynb
    │   ├── Dataset/
    ├── JSON, XML, Dictionary, File.ipynb
    ├── ML A-Z/
    │   ├── 1. Data Preprocessing/
    │   │   ├── categorical_data.py
    │   │   ├── data_preprocessing.py
    │   │   ├── data_preprocessing_template.py
    │   │   └── missing_data.py
    │   ├── 10. XGBoost/
    │   │   ├── Dataset/
    │   │   └── xgboost.py
    │   ├── 2. Regression/
    │   │   ├── Dataset/
    │   │   ├── Multiple_Linear_Regression.py
    │   │   ├── Polynomial_Regression.py
    │   │   ├── Random_Forest.py
    │   │   ├── Simple_Linear_Regression.py
    │   │   ├── decision_tree.py
    │   │   ├── regression_template.py
    │   │   └── svr.py
    │   ├── 3. Classification/
    │   │   ├── Classification_Template.py
    │   │   ├── Dataset/
    │   │   ├── Decision_Tree_Classification.py
    │   │   ├── Kernel_SVM.py
    │   │   ├── Knn.py
    │   │   ├── Logistic_Regression.py
    │   │   ├── Naive_bayes.py
    │   │   ├── Random_Forest_Classification.py
    │   │   └── Svm.py
    │   ├── 4. Clustering/
    │   │   ├── Dataset/
    │   │   ├── hierarchical_clustering.py
    │   │   └── k_means.py
    │   ├── 5. Association Rule Learning/
    │   │   ├── Apriori.py
    │   │   ├── Dataset/
    │   │   └── apyori.py
    │   ├── 6. Natural Language Processing/
    │   │   ├── Dataset/
    │   │   └── nlp.py
    │   ├── 7. Neural Network/
    │   │   ├── Dataset/
    │   │   ├── ann.py
    │   │   ├── cnn.py
    │   │   └── rnn.py
    │   ├── 8. Dimensionality Reduction/
    │   │   ├── Dataset/
    │   │   ├── KernelPca.py
    │   │   ├── lda.py
    │   │   └── pca.py
    │   ├── 9. Model Selection/
    │   │   ├── Dataset/
    │   │   ├── GridSearchCV.py
    │   │   └── k-cross_validation.py
    ├── Machine Learning/
    │   ├── 1. Linear Regression With One Variable.ipynb
    │   ├── 10. Support Vector Machine (SVM) 1.ipynb
    │   ├── 10. Support Vector Machine (SVM) 2.ipynb
    │   ├── 11. Random Forest 1.ipynb
    │   ├── 11. Random Forest 2.ipynb
    │   ├── 12. K Fold Cross Validation 1.ipynb
    │   ├── 12. K Fold Cross Validation 2.ipynb
    │   ├── 13. K Means Clustering 1.ipynb
    │   ├── 13. K Means Clustering 2.ipynb
    │   ├── 14. Naive Bayes 1.ipynb
    │   ├── 14. Naive Bayes 2.ipynb
    │   ├── 14. Naive Bayes 3.ipynb
    │   ├── 15. GridSearchCV Hyper Parameter Tuning 1.ipynb
    │   ├── 15. GridSearchCV Hyper Parameter Tuning 2.ipynb
    │   ├── 2. Linear Regression With Multiple Variable.ipynb
    │   ├── 3. Gradient_Descent and Cost_function_1.ipynb
    │   ├── 3. Gradient_Descent and Cost_function_2.ipynb
    │   ├── 3. Gradient_Descent and Cost_function_3.ipynb
    │   ├── 4. Saving Model Using Pickle and sklearn joblib.ipynb
    │   ├── 5. Dummy Variables & One Hot Encoding.ipynb
    │   ├── 5. Dummy Variables & One Hot Encoding2.ipynb
    │   ├── 6. Train,Test Split.ipynb
    │   ├── 7. Logistic Regression(Binary Classification) 1.ipynb
    │   ├── 7. Logistic Regression(Binary Classification) 2.ipynb
    │   ├── 8. Logistic Regression(Multi-class Classification) 1.ipynb
    │   ├── 8. Logistic Regression(Multi-class Classification) 2.ipynb
    │   ├── 9. Decision Tree 1.ipynb
    │   ├── 9. Decision Tree 2.ipynb
    │   ├── 9. Decision Tree 3.ipynb
    │   ├── gd2.PNG
    │   ├── gd3.PNG
    │   ├── gd4.PNG
    │   ├── gd5.PNG
    │   ├── model_joblib
    │   ├── model_pickle
    ├── Matplotlib/
    │   ├── 1. Format String in plot.ipynb
    │   ├── 2. Axes Labels, Legend, Grid.ipynb
    │   ├── 3. Bar chart.ipynb
    │   ├── 4. Histograms.ipynb
    │   ├── 5. Pie Chart.ipynb
    │   ├── 6. Save Chart.ipynb
    │   ├── 7. Subplot.ipynb
    ├── Numpy/
    │   ├── 1. Introduction to Numpy.ipynb
    │   ├── 2. Numpy array Operations.ipynb
    │   ├── 3. Indexing, Slicing & Boolean arrays.ipynb
    │   ├── 4. Iterate using nditer.ipynb
    ├── Pandas/
    │   ├── 1. Different Ways of Creating DataFrame.ipynb
    │   ├── 10. Crosstab or Contingency table.ipynb
    │   ├── 11. TimeSeries- DateTimeIndex, Resample.ipynb
    │   ├── 12. TimeSeries- Date_range.ipynb
    │   ├── 13. TimeSeries- Holidays or Custom Business Days.ipynb
    │   ├── 14. TimeSeries- To_DateTime.ipynb
    │   ├── 15. Period, PeriodIndex, TimeStamp.ipynb
    │   ├── 16. Shifting in Pandas.ipynb
    │   ├── 17. Timeseries- Handling Timezone.ipynb
    │   ├── 18. DataFrame Styling.ipynb
    │   ├── 19. Pandas Profiling.ipynb
    │   ├── 2. Basic Functions of Dataframe.ipynb
    │   ├── 3. Read, Write csv, excel files [replacing na values].ipynb
    │   ├── 4. Handling missing Data.ipynb
    │   ├── 5. Group By DataFrame.ipynb
    │   ├── 6. Concat Dataframe.ipynb
    │   ├── 7. Merge Dataframe.ipynb
    │   ├── 8.  Pivot, Pivot table, Melt.ipynb
    │   ├── 9. Stack, Unstack DataFrame.ipynb
    │   ├── BangaloreEDA.html
    │   ├── output_min.html
    ├── Projects/
    │   └── Real Estate Price Prediction/
    │       ├── Home Price Prediction.ipynb
    │       ├── columns.json
    ├── SQL/
    ├── SciPy/
    ├── Seaborn/
    ├── String,List,Dictionary,Tuple.ipynb
    ├── bookjson.txt
    └── bookxml.txt
```

⚙ī¸ Modules

Root
File Summary
bookjson.txt The code represents a directory tree for a data science learning and experimentation project. This includes resources addressing Machine Learning, Deep Learning, Feature Engineering, Data Preprocessing, Regression, Classification, Clustering, Neural Networks, and various visualization tools like Matplotlib and Seaborn. Moreover, it includes different project solutions and an example of a JSON file containing name, address, and phone data.
[JSON, XML, Dictionary, File.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/JSON, XML, Dictionary, File.ipynb) The code represents a Jupyter notebook that covers moving data between common formats: dictionary, JSON, and XML. It demonstrates how to convert a dictionary into JSON, write it to a text file, then read it back into a dictionary. The code also shows how to convert the dictionary into XML using the dicttoxml Python library.
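
A minimal sketch of that round trip (the dictionary contents below are made up for illustration):

```python
import json
from dicttoxml import dicttoxml  # third-party: pip install dicttoxml

# A plain Python dictionary to round-trip through JSON and XML.
book = {"name": "Shaon", "address": "Dhaka", "phone": "000-0000"}

# Dictionary -> JSON string -> text file.
with open("bookjson.txt", "w") as f:
    json.dump(book, f)

# Text file -> JSON -> dictionary again.
with open("bookjson.txt") as f:
    restored = json.load(f)
print(restored["name"])

# Dictionary -> XML bytes via dicttoxml.
xml_bytes = dicttoxml(book)
print(xml_bytes.decode())
```
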
[200 Machine Learning Projects Solved and Explained _ by Aman Kharwal _ Medium.mht](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/200 Machine Learning Projects Solved and Explained _ by Aman Kharwal _ Medium.mht) The code illustrates a directory tree of a wide-ranging Data Science learning and experimenting project. It includes machine learning projects and deep learning exercises covering Keras, TensorFlow, activation functions, and digit recognition models. Additionally, it contains feature engineering methods for outlier removal, standard deviation, and Interquartile Range (IQR) calculations, and it provides data preprocessing, an XGBoost implementation, and regression techniques in the Machine Learning A-Z section.
bookxml.txt The directory tree represents a comprehensive Data Science learning and experimenting repository. It houses practice notebooks explaining Machine Learning, Deep Learning, Feature Engineering concepts, and python Data Science libraries: Numpy, Pandas, Matplotlib. It also includes exercises on data preprocessing, various types of regression, classification, clustering, and dimensionality reduction. ML A-Z section processes project-like assignments highlighting in-depth practical implementation. There's a separate section for projects, including a Real Estate Price Prediction exercise. The code shows xml data represented as a string.
String,List,Dictionary,Tuple.ipynb The Python code under 'String,List,Dictionary,Tuple.ipynb' in the 'Learning-and-Experimenting_Data-Science' directory demonstrates basic operations on Python data structures. It provides examples of string slicing; list addition, removal, and insertion; dictionary addition, deletion, and iteration; and tuple creation with multiple data types. The immutability of strings and tuples is also highlighted.
Machine learning
File Summary
[10. Support Vector Machine (SVM) 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/10. Support Vector Machine (SVM) 1.ipynb) The provided code structure represents a collection of various Data Science and Machine Learning projects and exercises. Key areas covered include Deep Learning with exercises in Keras, TensorFlow, classification and activation functions. It also features Feature Engineering exercises focusing on outlier elimination and standard deviation. Additionally, it includes ML A-Z practices on data preprocessing, XGBoost, and regression. It also contains a notebook handling JSON, XML, and Dictionaries.
[5. Dummy Variables & One Hot Encoding.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/5. Dummy Variables & One Hot Encoding.ipynb) This code represents a directory tree of a Data Science learning and experimenting project. It includes machine learning projects, exercises and solutions centered around deep learning with TensorFlow & Keras, movie review classification, activation functions, and digit recognition. There are also exercises on outlier removal, z-score computation, and quartile usage in feature engineering. The project also has sections dedicated to data-preprocessing, XGBoost, various regression and classification algorithms in machine learning from scratch to advanced level.
[2. Linear Regression With Multiple Variable.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/2. Linear Regression With Multiple Variable.ipynb) This code represents a directory tree for a data science learning and experimentation project. It includes machine learning topics, featuring in-depth exercises and solutions on deep learning and feature engineering. It also covers pre-processing and outlier removal methods with examples. In-depth tutorials on various machine learning algorithms for regression and classification are provided alongside python code. The project also delves into Python libraries like Keras and TensorFlow and discusses concepts like activation functions and Z-scores.
[6. Train,Test Split.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/6. Train,Test Split.ipynb) The code provides a directory structure for a Data Science learning project. It contains resources and exercises related to Machine Learning, Deep Learning, Feature Engineering, and data preprocessing, such as using Keras, TensorFlow, handling categorical data, outlier removal, standard deviation calculation, and regression models. The focus is broad, covering diverse tools and techniques, also extending to XGBoost and other areas like file handling.
[13. K Means Clustering 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/13. K Means Clustering 1.ipynb) The provided directory tree represents a collection of projects and exercises in Data Science, Machine Learning, and Deep Learning. It includes resources like solutions for machine learning projects, exercises with Keras, TensorFlow, Google Colab, and handwritten digits recognition. Additionally, it contains guides about outlier removal, standard deviation, Z-score calculation, and interquartile range usage in Feature Engineering along with data preprocessing scripts in ML A-Z section.
[8. Logistic Regression(Multi-class Classification) 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/8. Logistic Regression(Multi-class Classification) 2.ipynb) This code depicts a directory tree for a data science learning and experimentation project. It includes directories and files focused on deep learning solutions (Keras, TensorFlow), feature engineering techniques (outlier removal, standard deviation, Z-score, IQR), various machine learning projects such as regression and XGBoost, as well as some generic data preprocessing files. Additionally, there's a file for handling JSON, XML, dictionary, and file inputs.
[12. K Fold Cross Validation 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/12. K Fold Cross Validation 2.ipynb) This code uses various machine learning models (Logistic Regression, Decision Tree, Support Vector Machine, RandomForest) to classify the Iris dataset. It calculates and compares the models' average scores based on 7-fold Cross Validation. The best performing model, based on resulting scores, is the Support Vector Machine model.
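
A minimal sketch of that comparison using scikit-learn's built-in Iris data; the exact models and parameters in the notebook may differ:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(),
}

# 7-fold cross validation, reporting each model's mean accuracy.
for name, model in models.items():
    scores = cross_val_score(model, iris.data, iris.target, cv=7)
    print(f"{name}: {scores.mean():.3f}")
```
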
[13. K Means Clustering 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/13. K Means Clustering 2.ipynb) This code indicates a structured file directory for projects and resources centered on learning and experimenting with data science. Inside, there are materials on machine learning, deep learning, and feature engineering which include various practical exercises and solutions. Another section exists for data preprocessing, as part of a ML A-Z resource. There are also files focusing on more specific areas like outliers, standard deviation, Z-score, and Interquartile Range (IQR).
[9. Decision Tree 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/9. Decision Tree 1.ipynb) The code represents the directory structure of a data science learning and experimentation project. It includes solved machine learning projects, deep learning exercises involving Keras and Tensorflow, notes on feature engineering techniques for outlier removal and standardization, and files dealing with JSON, XML, and others. The ML A-Z folder contains files related to various machine learning topics like data preprocessing, XGBoost, and regression and classification techniques.
[5. Dummy Variables & One Hot Encoding2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/5. Dummy Variables & One Hot Encoding2.ipynb) The code depicts a directory structure for a data science learning and experimentation project. It includes work on machine learning with solved examples and exercises, deep learning exercises with Keras and TensorFlow, feature engineering techniques such as outlier removal and standard deviation, JSON/XML/Dictionary/File operations, and regression techniques. Notably, it contains a section dedicated to data preprocessing and an application of XGBoost. Some sections come with corresponding datasets.
[11. Random Forest 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/11. Random Forest 2.ipynb) The code shows the directory structure for a data science learning and experimenting project. It contains solved machine learning projects, deep learning exercises involving Keras and Tensorflow, feature engineering notebooks detailing outlier removal methods, JSON and XML processing, and a comprehensive machine learning guide from data preprocessing to regression analysis. Additionally, there are specific exercises involving XGBoost and multiple linear regression.
[15. GridSearchCV Hyper Parameter Tuning 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/15. GridSearchCV Hyper Parameter Tuning 2.ipynb) The code explores hyperparameter tuning in Python using GridSearchCV. It uses different machine learning models (SVM, Random Forest, Logistic Regression, GaussianNB, MultinomialNB, DecisionTree) with various hyperparameters to find the best model and its parameters. The best results are achieved with the SVM model using a linear kernel and C=1.
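
A small sketch of the same idea; the parameter grid here is illustrative, not the notebook's exact grid:

```python
from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

iris = datasets.load_iris()

# Search over a small SVC grid; the notebook reports a linear kernel
# with C=1 as the winner on its data.
clf = GridSearchCV(
    SVC(),
    param_grid={"C": [1, 10, 20], "kernel": ["linear", "rbf"]},
    cv=5,
    return_train_score=False,
)
clf.fit(iris.data, iris.target)
print(clf.best_params_, clf.best_score_)
```
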
[7. Logistic Regression(Binary Classification) 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/7. Logistic Regression(Binary Classification) 1.ipynb) The directory contains various data science and machine learning resources. It includes projects on deep learning using Keras and Tensorflow, featuring activities like movie review classification and handwritten digit recognition. It also contains exercises on feature engineering techniques such as outlier removal and normalization. The 'ML A-Z' folder contains scripts for preprocessing data and various regression techniques. Lastly, there's a file explaining JSON, XML, and dictionary handling in Python.
[9. Decision Tree 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/9. Decision Tree 2.ipynb) The code represents a data science project directory with solutions and exercises. It includes machine learning projects and notebooks on different topics under deep learning and feature engineering such as Keras Sequential, movie review classification using TensorFlow, activation functions, handwritten digits recognition, and methods for outlier removal and understanding standard deviation & Z-score.
[7. Logistic Regression(Binary Classification) 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/7. Logistic Regression(Binary Classification) 2.ipynb) This directory tree summarizes a set of data science learning resources and projects. It includes machine learning projects, deep learning exercises, feature engineering techniques such as outlier removal and use of Z-score, JSON/XML/Dictionary/File operations, and pre-processing and regression methodologies like XGBoost and polynomial regression in a ML A-Z folder. Each category contains Jupyter notebooks or Python scripts for practice and solutions.
[15. GridSearchCV Hyper Parameter Tuning 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/15. GridSearchCV Hyper Parameter Tuning 1.ipynb) This code represents a directory tree for a Data Science repository that contains notebooks and scripts addressing various topics such as Machine Learning, Deep Learning, Feature Engineering and different types of regression and classification techniques. The repository includes solved exercises, project solutions and an array of datasets for hands-on practice. It also covers outlier removal, Z-score calculations, and advanced machine learning techniques including XGBoost.
[3. Gradient_Descent and Cost_function_1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/3. Gradient_Descent and Cost_function_1.ipynb) The code is a Jupyter notebook containing a tutorial on implementing gradient descent and calculating the cost function in machine learning. It explains the theory and algorithm of gradient descent, which aims to minimize the cost function, and derives the partial-derivative-based update formulas for the slope 'm' and intercept 'b' in linear regression. The notebook uses markdown cells for the theoretical explanation and images for visual illustration.
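
A compact sketch of the algorithm the notebook derives; the learning rate, epoch count, and data are illustrative:

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=1000):
    """Fit y = m*x + b by gradient descent on mean squared error."""
    m = b = 0.0
    n = len(x)
    for _ in range(epochs):
        y_pred = m * x + b
        cost = np.mean((y - y_pred) ** 2)
        # Partial derivatives of the MSE with respect to m and b.
        dm = -(2 / n) * np.sum(x * (y - y_pred))
        db = -(2 / n) * np.sum(y - y_pred)
        m -= lr * dm
        b -= lr * db
    return m, b, cost

x = np.array([1, 2, 3, 4, 5])
y = np.array([5, 7, 9, 11, 13])  # exactly y = 2x + 3
m, b, cost = gradient_descent(x, y)
print(m, b, cost)  # approaches m=2, b=3 as the cost shrinks
```
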
[3. Gradient_Descent and Cost_function_2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/3. Gradient_Descent and Cost_function_2.ipynb) This is a data science learning and experimenting directory tree containing a variety of machine learning, deep learning, and feature engineering applications. It includes abstracts of ML projects, Keras and TensorFlow exercises, outlier detection methods, encoding and decoding of data files, various regression models, and XGBoost training. It also offers related datasets and code for data preprocessing and for handling missing and categorical data.
[14. Naive Bayes 3.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/14. Naive Bayes 3.ipynb) The code provides a directory structure for a data science learning and experimenting project. It includes different sections: introduction and solution files for machine learning and deep learning projects, feature engineering techniques, and pre-processing templates. Specifically, it contains Jupyter notebooks focusing on deep learning exercises, outlier removal methods, standard deviation & IQR usage, and movie review classification. The final section, "ML A-Z", provides Python scripts for various data pre-processing techniques and machine learning algorithms such as regression, classification, and XGBoost.
[9. Decision Tree 3.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/9. Decision Tree 3.ipynb) The presented code illustrates a directory tree for a data science learning and experimenting project. This comprises a medium article on 200 solved machine learning projects, a deep learning folder with exercises and solutions using Keras, Tensorflow, and Google Colab, and a feature engineering directory addressing outlier removal and standard deviation calculation techniques.
[8. Logistic Regression(Multi-class Classification) 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/8. Logistic Regression(Multi-class Classification) 1.ipynb) The code represents a directory structure for a Data Science learning and experimentation project. It includes folders for machine learning projects, deep learning exercises with Keras and TensorFlow, feature engineering techniques like outlier removal and standard deviations, and data preprocessing. It also involves work with various data types like JSON, XML, dictionaries, and files.
[10. Support Vector Machine (SVM) 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/10. Support Vector Machine (SVM) 2.ipynb) The directory tree illustrates a repository dedicated to Machine Learning and Data Science learning and experimentation. It includes solved projects, dedicated folders for deep learning concepts, with Keras and Tensorflow exercises, and feature engineering techniques. It also includes specific preprocessing, regression, and XGBoost application examples, in Machine Learning from A-Z folder. Files are a mix of Python scripts and Jupyter notebooks, some of which include solutions to exercises.
[12. K Fold Cross Validation 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/12. K Fold Cross Validation 1.ipynb) The provided code is a machine learning notebook that explains the implementation of K-Fold Cross Validation in Python. Key features include visually explaining the concept, implementing K-Fold Cross Validation for Logistic Regression, SVM, and Random Forest models using a digits dataset. It also demonstrates parameter tuning to optimize the number of estimators in the RandomForestClassifier. Further, it showcases the use of the cross_val_score function to automate the cross-validation process.
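
A minimal sketch of the manual fold loop, using scikit-learn's digits data; details may differ from the notebook:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

digits = load_digits()
folds = StratifiedKFold(n_splits=3)
scores = []

# Manually iterate over folds instead of calling cross_val_score,
# which is the shortcut the notebook introduces afterwards.
for train_idx, test_idx in folds.split(digits.data, digits.target):
    X_train, X_test = digits.data[train_idx], digits.data[test_idx]
    y_train, y_test = digits.target[train_idx], digits.target[test_idx]
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))

print(scores)
```
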
[4. Saving Model Using Pickle and sklearn joblib.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/4. Saving Model Using Pickle and sklearn joblib.ipynb) The given code focuses on creating a linear regression model based on a housing prices dataset and then demonstrates two methods to save and load the trained model: Python's Pickle and Scikit-learn's joblib. The model is successfully loaded back from the saved state and used to make predictions, confirming that the saving and loading methods are effective.
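
A minimal sketch of both saving methods; the training data is a made-up stand-in for the housing dataset (the file names match the model_pickle and model_joblib artifacts in the tree above):

```python
import pickle
import joblib
from sklearn.linear_model import LinearRegression

# Train a tiny model as a stand-in for the housing-price regression.
model = LinearRegression().fit([[2600], [3000], [3200]], [550000, 565000, 610000])

# Method 1: Python's pickle.
with open("model_pickle", "wb") as f:
    pickle.dump(model, f)
with open("model_pickle", "rb") as f:
    mp = pickle.load(f)

# Method 2: joblib, often preferred for models holding large numpy arrays.
joblib.dump(model, "model_joblib")
mj = joblib.load("model_joblib")

print(mp.predict([[3300]]), mj.predict([[3300]]))
```
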
[1. Linear Regression With One Variable.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/1. Linear Regression With One Variable.ipynb) This code outlines the directory structure for a data science project with resources for learning and experimenting. It includes files for machine learning projects, deep learning exercises involving Keras, TensorFlow for movie review classification, activation functions, etc. In Feature Engineering, outliers are removed using percentile, SD/Z-Score and IQR. Additionally, it contains files related to data processing and handling in various formats.
[3. Gradient_Descent and Cost_function_3.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/3. Gradient_Descent and Cost_function_3.ipynb) This code represents a directory structure for a data science project, showcasing machine learning examples, deep learning exercises with Keras and Tensorflow, feature engineering examples, and data processing scripts. It includes practice notebooks for outlier handling, SD, and IQR use, files for data preprocessing, regression models in the "ML A-Z" subdirectory and XGBoost datasets and scripts. Additionally, it also handles JSON, XML, and file processing, each encapsulating different aspects of data science learning and experimenting.
[14. Naive Bayes 2.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/14. Naive Bayes 2.ipynb) The given code is part of a Python-based Naive Bayes machine learning model to predict email spam. It imports a data set of emails, preprocesses the text data into numerical values using a Count Vectorizer, and encodes "spam" as 1 in a new "spam" column. It splits the data into training and test subsets and trains the model. Furthermore, it uses a pipeline to streamline the fit and score process. Finally, it applies the model on testing data to evaluate its performance and to predict spam on new data.
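
A minimal sketch of that workflow; the four-row DataFrame stands in for the real email dataset, and the column names are assumptions:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Hypothetical columns: the real notebook loads an email spam CSV.
df = pd.DataFrame({
    "Message": ["Win a free prize now", "Meeting at 10am",
                "Free entry, click here", "See you tomorrow"],
    "Category": ["spam", "ham", "spam", "ham"],
})
df["spam"] = df["Category"].apply(lambda x: 1 if x == "spam" else 0)

X_train, X_test, y_train, y_test = train_test_split(
    df["Message"], df["spam"], test_size=0.25
)

# A pipeline bundles vectorizing and the classifier into one fit/score object.
clf = Pipeline([
    ("vectorizer", CountVectorizer()),
    ("nb", MultinomialNB()),
])
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
print(clf.predict(["free prize waiting for you"]))
```
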
[14. Naive Bayes 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/14. Naive Bayes 1.ipynb) This is a Data Science learning and experimentation directory tree. It contains various Machine Learning and Deep Learning projects, Feature Engineering exercises, JSON/XML handling methods, and ML tasks broken down into sections such as data preprocessing, regression, classification, and XGBoost. Each section comprises exercise files, solutions, and relevant datasets. It seems to be an organized repository of myriad data science concepts, techniques, and solution approaches.
[11. Random Forest 1.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Machine Learning/11. Random Forest 1.ipynb) The code describes the structure of a data science learning and experimenting repository. This includes machine learning projects with explanations, notebooks in the Deep Learning and Feature Engineering subdirectories with related exercises and solutions, and a special focus on outlier removal and normalization techniques. Sample Python files for categorical data handling and data preprocessing are also present, along with supplementary notebooks on JSON, XML, dictionary, and file management.
Real estate price prediction
File Summary
[Home Price Prediction.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Projects/Real Estate Price Prediction/Home Price Prediction.ipynb) The code presents a directory structure for a Data Science learning and experimenting project. There are four main sections: a Medium article on machine learning projects, deep learning exercises using Keras, Tensorflow and Google Colab, feature engineering notebooks on outlier removal, standard deviation and IQR, and a machine learning module from A-Z that covers data preprocessing and regression. There is also specific handling of JSON, XML, dictionary, and file-related operations.
[columns.json](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Projects/Real Estate Price Prediction/columns.json) The provided code is a JSON object containing data columns for a Real Estate Price Prediction project. It defines the features of properties, predominantly areas in and around Bangalore, India. It includes specifics like total square footage, number of baths and bedrooms, and various neighborhood locations. These data columns can potentially be used as inputs for a predictive model to forecast real estate prices.
Matplotlib
File Summary
[3. Bar chart.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/3. Bar chart.ipynb) The code represents a hierarchical directory tree for a Data Science learning project. It contains resources explaining machine learning and deep learning, including notebooks on keras, tensorflow, and activation function. Another section focuses on feature engineering topics like outliers removal, standard deviation, and the use of IQR. Included are explanation files, exercise notebooks, and corresponding solutions. There's also a section focusing on file types in data science (JSON, XML, etc).
[4. Histograms.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/4. Histograms.ipynb) The code represents a directory structure for a data science learning and experimentation project. It contains resources and exercises for machine learning, deep learning, and feature engineering via Jupyter notebooks (.ipynb). The deep learning section covers topics like Keras Sequentials, Tensorflow, activation functions, and handwritten digits recognition. Feature engineering includes outlier removal, Z-score and Standard deviation applications, and Interquartile range (IQR) usage. It also contains unspecified resources related to JSON, XML, Dictionaries, and Files.
[1. Format String in plot.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/1. Format String in plot.ipynb) The provided code exemplifies a directory tree for a data science learning and experimentation repository. It features resources like a medium article and notebooks for experiments on Machine Learning, Deep Learning and Feature Engineering topics. These topics include implementing Keras Sequential Models, classification, understanding activation functions, outlier removal, working with standard deviation, and Z-score exercises.
[6. Save Chart.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/6. Save Chart.ipynb) The code represents a directory setup for a data science project, including solved machine learning projects, deep learning exercises involving Keras, Tensorflow, and activation functions, as well as feature engineering techniques like outlier removal, standard deviation and Z-score calculations. It also contains a dataset for use in these tasks.
[5. Pie Chart.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/5. Pie Chart.ipynb) The directory contains materials for learning Data Science, specifically Machine Learning and Deep Learning. There are solved projects and explanations, solutions to Keras sequential exercise, and lessons on movie review classification using Tensorflow and Google Colab. Also featured are notebooks on classifying handwritten digits, feature engineering techniques like outliers removal using percentile, standard deviation, Z-score applications, and usage of Interquartile Range (IQR).
[7. Subplot.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/7. Subplot.ipynb) The code displays a basic tutorial for creating subplots using Matplotlib, specifically creating a bar chart of income over four years. It introduces the concept of a subplot: a secondary section of the main plot. The dataset includes details for years 2014-2017 and their respective earnings and expenses. However, only the income bar chart is displayed in the output.
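
A minimal sketch of the subplot idea with made-up income and expense figures:

```python
import matplotlib.pyplot as plt

years = ["2014", "2015", "2016", "2017"]
income = [55000, 62000, 71000, 80000]   # illustrative figures
expense = [30000, 34000, 40000, 45000]

# One figure, two subplots stacked vertically.
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.bar(years, income, color="green")
ax1.set_ylabel("Income")
ax2.bar(years, expense, color="red")
ax2.set_ylabel("Expense")
ax2.set_xlabel("Year")
plt.show()
```
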
[2. Axes Labels, Legend, Grid.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Matplotlib/2. Axes Labels, Legend, Grid.ipynb) The code outlines a directory tree for a Data Science project. It includes a document with the explanation of 200 solved Machine Learning projects and notebooks with different Deep Learning and Feature Engineering exercises and their solutions. Deep Learning exercises cover Keras, TensorFlow, activation functions, and digit recognition, while Feature Engineering folders include techniques to remove outliers and standard deviation, Z-score functionalities.
Feature engineering
File Summary
[2.0 Standard Deviation, Z-score.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Feature Engineering/2.0 Standard Deviation, Z-score.ipynb) The code represents a directory structure for a data science learning and experimenting project. It includes machine learning project solutions, deep learning notebooks for different exercises including Keras, Tensorflow and Handwritten digits recognition. Also exercises on feature engineering using techniques like removing outliers, standard deviation, Z-score, IQR, and data preprocessing. There's also JSON, XML, Dictionary, File notebooks. The ML A-Z folder contains scripts for data preprocessing.
[1.0 Removing Outlier using Percentile.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Feature Engineering/1.0 Removing Outlier using Percentile.ipynb) The provided directory tree represents a collection of data science learning and experimenting projects, segregated into categories of machine learning, deep learning, feature engineering, and handling different file formats. It contains Python programs and Jupyter notebooks exhibiting implementations of diverse machine learning algorithms, data preprocessing techniques, outlier handling methods, and neural network modeling. It includes projects dealing with regression, classification, XGBoost models, and various data handling techniques.
[1.1 Removing Outlier using Percentile Exercise Solution.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Feature Engineering/1.1 Removing Outlier using Percentile Exercise Solution.ipynb) The provided directory contains resources for learning and experimenting with Data Science. It includes projects and exercises on Machine Learning, Deep Learning, Feature Engineering, and data preprocessing techniques. It also contains various Python scripts for regression, classification, clustering, and XGBoost models. The exercises employ libraries like Keras and Tensorflow for deep learning and provide solutions for removing outliers, using IQR, standard deviation and Z-score.
[2.1 Standard Deviation, Z-score Exercise Solution.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Feature Engineering/2.1 Standard Deviation, Z-score Exercise Solution.ipynb) The directory tree details a collection of Data Science and Machine Learning projects. The code includes various project files dealing with deep learning methods with Keras and Tensorflow, feature engineering techniques, and Data Preprocessing. It also contains a tutorial and solution of Machine Learning Python projects on XGBoost, and data handling in different formats like JSON, XML, and Dictionary.
[3.0 Using IQR.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Feature Engineering/3.0 Using IQR.ipynb) This Python code uses pandas to remove outliers from a dataset of heights using the Interquartile Range (IQR) method. The script calculates the first (Q1) and third (Q3) quartiles to establish the IQR. Outliers are then defined as values that fall below Q1 - 1.5*IQR or above Q3 + 1.5*IQR, and they are dropped to produce a cleaned dataset.
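
A minimal sketch of the IQR filter with made-up heights:

```python
import pandas as pd

# Hypothetical heights; the notebook uses a heights dataset.
df = pd.DataFrame({"height": [4.9, 5.1, 5.4, 5.6, 5.8, 6.0, 6.2, 9.5, 1.2]})

q1 = df["height"].quantile(0.25)
q3 = df["height"].quantile(0.75)
iqr = q3 - q1

# Keep only rows within [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
cleaned = df[(df["height"] >= lower) & (df["height"] <= upper)]
print(cleaned)
```
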
Pandas
File Summary
[16. Shifting in Pandas.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/16. Shifting in Pandas.ipynb) The code lists files in a Data Science project directory. The main functionalities covered are Machine Learning, Deep Learning, and Feature Engineering. This includes solved and explained projects, exercises in Keras & TensorFlow, outlier removal, standard deviation calculation techniques, and various methods like multiple, simple linear, polynomial, and random forest regression. It also consists of classification algorithms such as decision tree, kernel SVM, and KNN. The repository also contains preprocessed & XGBoost datasets.
[8. Pivot, Pivot table, Melt.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/8. Pivot, Pivot table, Melt.ipynb) The given directory tree outlines a collection of machine learning and data science projects. It includes topics like deep learning with exercises and solutions, feature engineering techniques for outlier removal and standard deviation calculation. It also houses pre-processing and classification scripts including various regression methods and XGBoost, along with corresponding datasets. Additional topics include handling of JSON, XML, dictionary files, and a document summarizing 200 solved machine learning projects.
[17. Timeseries- Handling Timezone.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/17. Timeseries- Handling Timezone.ipynb) The code represents a directory tree for a Learning and Experimenting Data Science project. It contains Machine Learning projects, Deep Learning exercises with Keras and TensorFlow, Feature Engineering techniques for outlier removal and normalization, and data preprocessing scripts. It also features scripts for various machine learning algorithms such as Regression, XGBoost, and Classification techniques, including Decision Trees and Support Vector Machines.
[4. Handling missing Data.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/4. Handling missing Data.ipynb) The given directory tree provides an overview of a dataset for learning and experimenting with data science, containing notebooks and Python scripts for various topics like Machine Learning, Deep Learning, Feature Engineering, and different kinds of regressions, classifications, and preprocessing techniques. Also included are specific exercises focused on prepared datasets and methods for outlier removal, data normalization, and activation functions in deep learning.
[10. Crosstab or Contingency table.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/10. Crosstab or Contingency table.ipynb) The code comprises directories for machine learning and data science projects. The "Deep Learning" directory is dedicated to methods like Keras and TensorFlow exercises. The "Feature Engineering" directory has methods for handling outliers and standard deviations. The "ML A-Z" directory contains pre-processing data, regression models, XGBoost, and various classification methods. Also, the directory includes datasets, and python scripts for different machine learning techniques.
[14. TimeSeries- To_DateTime.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/14. TimeSeries- To_DateTime.ipynb) The Python code in this Jupyter notebook performs a variety of time series manipulation tasks using pandas library. It converts different date-time formats to a unified format, customizes date-time formats, handles invalid date values, and works with both US and European date styles. Additionally, the code converts between epochs (specific unix time) and human-readable date-time formats.
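
A few representative to_datetime calls of the kind the notebook covers:

```python
import pandas as pd

# to_datetime unifies many common date strings.
print(pd.to_datetime("Jan 5, 2021"))
print(pd.to_datetime("2021-01-05 14:30:00"))

# US (month-first) vs. European (day-first) styles.
print(pd.to_datetime("5/1/2021"))                 # 2021-05-01
print(pd.to_datetime("5/1/2021", dayfirst=True))  # 2021-01-05

# Custom format strings and graceful handling of invalid values.
print(pd.to_datetime("5$1$2021", format="%d$%m$%Y"))
print(pd.to_datetime(["2021-01-05", "not a date"], errors="coerce"))

# Unix epoch seconds -> human-readable timestamp.
print(pd.to_datetime(1609804800, unit="s"))
```
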
output_min.html The code directory structure represents a collection of data science resources and exercises. It includes solved machine learning projects, deep learning case studies using Keras and Tensorflow like movie review classification, and digit recognition. There's also material focused on feature engineering techniques, JSON and XML handling, as well as a comprehensive suite of scripts showcasing ML techniques such as preprocessing, regression, classification, and XGBoost, each with accompanying datasets.
[19. Pandas Profiling.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/19. Pandas Profiling.ipynb) The directory tree presents a collection of data science projects and tutorials, including machine learning tasks, deep learning exercises like movie review classification and digit recognition using Keras and TensorFlow, feature engineering using different techniques like Z-score and IQR, and handling different file types. It also contains code for data preprocessing, various regression techniques, and utilizing XGBoost, all part of an extensive "ML A-Z" series.
[11. TimeSeries- DateTimeIndex, Resample.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/11. TimeSeries- DateTimeIndex, Resample.ipynb) This directory tree represents a collection of data science and machine learning projects. It includes solved projects explained in a document, deep learning exercises involving Keras and Tensorflow, feature engineering techniques such as outlier removal and standard deviation calculations. It also contains Python scripts for data preprocessing tasks, as well as other miscellaneous data science related files.
BangaloreEDA.html The code is a directory structure for a data science learning project. It includes various machine learning techniques like Keras and Tensorflow for deep learning, methods to remove outliers using percentile, standard deviation, and IQR for feature engineering, and XML and JSON file handling. The tree includes an 'ML A-Z' directory which covers topics from data preprocessing to regression and XGBoost. The projects include exercises and solutions.
[3. Read, Write csv, excel files [replacing na values].ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/3. Read, Write csv, excel files [replacing na values].ipynb) This directory tree outlines a collection of data science and machine learning resources. It comprises projects on deep learning using Keras and Tensorflow, feature engineering techniques with outlier removal and standardization, data preprocessing scripts, and regression and classification algorithms including XGBoost, Multiple Linear Regression, Decision Tree, and more. It also includes a .mht file detailing 200 machine learning projects and an .ipynb file related to JSON, XML, and file handling.
[15. Period, PeriodIndex, TimeStamp.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/15. Period, PeriodIndex, TimeStamp.ipynb) The directory tree contains various machine learning and deep learning projects including tutorials, exercise solutions, and datasets. Areas covered include Keras, Tensorflow, outlier removal techniques, standard deviations and Z-scores, and Python scripts for various classification, regression, and data preprocessing methods. Highlights include a machine learning project compilation article, movie review classification, handwriting digit recognition, and implementations of XGBoost, decision trees, and other ML models.
[12. TimeSeries- Date_range.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/12. TimeSeries- Date_range.ipynb) The code describes the file and directory structure of a data science project. It involves machine learning projects, exercises and solutions on Deep Learning using Keras and TensorFlow, and Feature Engineering methods like outlier removal and standard deviation. There's reference material on handling data types (JSON, XML, Dictionary, File) and a specific'ML A-Z' section for preprocessing and implementing XGBoost algorithm.
[6. Concat Dataframe.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/6. Concat Dataframe.ipynb) The code presents a directory tree for a project titled "Learning-and-Experimenting_Data-Science", containing notebooks and python scripts for machine learning and deep learning projects. Subdivided into various categories like Keras, TensorFlow, Feature Engineering, and more, along with dataset folders. The projects cover topics such as outlier removal, Z-Scores, data preprocessing, regression, classification, clustering, and techniques like XGBoost and decision trees.
[7. Merge Dataframe.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/7. Merge Dataframe.ipynb) This code shows a directory tree for a Data Science learning and experimentation project. It includes machine learning projects with exercises and explanations, deep learning exercises focused on Keras, movie review classification, activation functions, and digit recognition. Also included are notebooks for feature engineering, handling JSON and XML files. Another section is dedicated for A-Z machine learning tutorials which cover data preprocessing, XGBoost, multiple types of regression and classification models, and clustering.
[1. Different Ways of Creating DataFrame.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/1. Different Ways of Creating DataFrame.ipynb) The given Python code, organized under the Pandas folder of a data science learning and experimentation directory, illustrates different methods of creating a DataFrame in pandas. It demonstrates how to generate DataFrame from a CSV file, an Excel sheet, a dictionary, a list of tuples, and a list of dictionaries. Each method creates a DataFrame consisting of daily weather data (day, temperature, windspeed, and event).
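
A minimal sketch of the creation methods, using the notebook's weather-style columns (file paths are hypothetical):

```python
import pandas as pd

# From a dictionary of columns.
df1 = pd.DataFrame({
    "day": ["1/1/2017", "1/2/2017"],
    "temperature": [32, 35],
    "windspeed": [6, 7],
    "event": ["Rain", "Sunny"],
})

# From a list of tuples (column names supplied separately).
df2 = pd.DataFrame(
    [("1/1/2017", 32, 6, "Rain"), ("1/2/2017", 35, 7, "Sunny")],
    columns=["day", "temperature", "windspeed", "event"],
)

# From a list of dictionaries (one dict per row).
df3 = pd.DataFrame([
    {"day": "1/1/2017", "temperature": 32, "windspeed": 6, "event": "Rain"},
    {"day": "1/2/2017", "temperature": 35, "windspeed": 7, "event": "Sunny"},
])

# From files (paths are hypothetical).
# df4 = pd.read_csv("weather.csv")
# df5 = pd.read_excel("weather.xlsx", sheet_name="Sheet1")
```
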
[13. TimeSeries- Holidays or Custom Business Days.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/13. TimeSeries- Holidays or Custom Business Days.ipynb) The code provides exercises and solutions for managing custom business days and holidays in a time series data analysis. Functionalities include creating a custom business day range that accounts for US federal holidays, creating a holiday calendar frequency using AbstractHolidayCalendar, and adding custom holidays to existing business days. It uses several Pandas' methods, including CustomBusinessDay object, pandas date_range function and setting index for data frames.
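
A minimal sketch of both approaches; the dates and the ad-hoc holiday are illustrative:

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

# Business-day range that skips US federal holidays (July 4th here).
us_bday = CustomBusinessDay(calendar=USFederalHolidayCalendar())
print(pd.date_range("7/1/2021", "7/10/2021", freq=us_bday))

# Fully custom: a Fri/Sat weekend plus an ad-hoc holiday.
custom = CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu",
                           holidays=["2021-07-05"])
print(pd.date_range("7/1/2021", "7/10/2021", freq=custom))
```
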
[9. Stack, Unstack DataFrame.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/9. Stack, Unstack DataFrame.ipynb) The directory tree outlines the structure of a Learning and Experimenting Data Science project. Main sections include Machine Learning article review, deep learning exercises (building models with TensorFlow & Keras, demonstrating various activation functions, etc), feature engineering (outlier removal, standard deviation, etc), and work with JSON & XML files. It also features a comprehensive ML A-Z section encapsulating data preprocessing, regression, and several key classification techniques using various machine learning algorithms.
[5. Group By DataFrame.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/5. Group By DataFrame.ipynb) The code represents a directory tree for a data science learning and experimenting project. It includes ML projects explained by Aman Kharwal, deep learning projects covering topics like Keras, TensorFlow, and activation functions. The directory also contains notebooks on feature engineering techniques like outlier removal, and standard deviation calculations. Additionally, JSON, XML file handling, and a separate section titled "ML A-Z" are present.
[2. Basic Functions of Dataframe.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/2. Basic Functions of Dataframe.ipynb) The code is a hierarchy of data science learning and experimenting projects. It contains solved machine learning projects and deep learning explorations in Keras and TensorFlow, e.g., movie review classification, activation functions, and digit recognition. In addition, it features feature engineering practices, e.g., outlier removal and standardization, as well as various ML techniques such as data preprocessing, regression, classification, and gradient boosting in Python. Moreover, it includes a JSON, XML, dictionary, and file handling notebook.
[18. DataFrame Styling.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Pandas/18. DataFrame Styling.ipynb) The code shows a directory structure of a data science learning and experimenting project. It includes resources for machine learning projects, deep learning problems using Keras and TensorFlow, and feature engineering techniques with Python notebooks. Also included are Python scripts for data pre-processing, methods such as XGBoost, several forms of regression, and classification, providing an extensive range of tools for data science exploration and learning.
10. xgboost
File Summary
[xgboost.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/10. XGBoost/xgboost.py) The Python script implements the XGBoost model for classification. It first preprocesses the data, handling categorical features and splitting the dataset into training and test sets, and scales the features. The model is fitted on the training data and predictions are made on the test data. The model's accuracy is then assessed using a confusion matrix, and K-fold cross validation is used to estimate its performance.
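
A rough sketch of the same pipeline with current APIs; the file path and the Exited target column are assumptions standing in for the script's actual dataset:

```python
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import confusion_matrix
from xgboost import XGBClassifier  # third-party: pip install xgboost

# Hypothetical stand-in for the script's churn-style dataset:
# X holds the features, y the binary target.
df = pd.read_csv("Dataset/churn.csv")            # path is illustrative
X = pd.get_dummies(df.drop(columns=["Exited"]))  # encode categoricals
y = df["Exited"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = XGBClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print(cross_val_score(clf, X_train, y_train, cv=10).mean())
```
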
6. natural language processing
File Summary
[nlp.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/6. Natural Language Processing/nlp.py) The code implements a Natural Language Processing (NLP) solution for a restaurant review dataset. It begins by preprocessing the reviews, including text cleaning, stemming, and removing stop words. A Bag of Words is created through CountVectorizer. The prepared data is then split into training and testing sets. StandardScaler is used for feature scaling. Finally, a Naive Bayes classifier is trained and used to predict test data. The effectiveness of the model is measured using a confusion matrix.
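
A rough sketch of that flow (the feature-scaling step mentioned above is omitted here); the dataset path and columns are assumptions, and nltk's stopwords corpus must be downloaded first:

```python
import re
import pandas as pd
from nltk.corpus import stopwords            # requires nltk.download('stopwords')
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

# Hypothetical path/columns for a TSV of restaurant reviews.
dataset = pd.read_csv("Dataset/Restaurant_Reviews.tsv", delimiter="\t", quoting=3)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))
corpus = []
for review in dataset["Review"]:
    # Keep letters only, lowercase, stem, drop stop words.
    words = re.sub("[^a-zA-Z]", " ", review).lower().split()
    corpus.append(" ".join(stemmer.stem(w) for w in words if w not in stop_words))

# Bag of Words, then a Naive Bayes classifier.
X = CountVectorizer(max_features=1500).fit_transform(corpus).toarray()
y = dataset["Liked"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = GaussianNB().fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))
```
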
1. data preprocessing
File Summary
[categorical_data.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/1. Data Preprocessing/categorical_data.py) The provided code belongs to a data preprocessing script that includes various steps such as importing libraries, importing the dataset, handling missing data, and encoding categorical data. It uses the pandas, numpy, matplotlib and scikit-learn libraries to read the dataset, fill missing values using the mean strategy, and convert categorical data into numerical format using Label Encoding and One Hot Encoding methods.
[data_preprocessing.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/1. Data Preprocessing/data_preprocessing.py) This Python code is a data pre-processing pipeline. It reads a CSV dataset, handles missing values by imputing with the mean, encodes categorical features using One Hot Encoding and Label Encoding. The dataset is then split into training and testing sets. Finally, feature scaling is applied to normalize the feature sets. The code uses libraries like pandas, numpy, matplotlib, and scikit-learn.
[missing_data.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/1. Data Preprocessing/missing_data.py) The provided script is Python data preprocessing code using the pandas and sklearn libraries. It imports a dataset from a CSV file, splits it into features 'X' and target 'y', and deals with missing data. The script uses sklearn's Imputer to replace 'NaN' values in the feature matrix 'X' with the mean of the respective columns.
[data_preprocessing_template.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/1. Data Preprocessing/data_preprocessing_template.py) The code is a Python template for data preprocessing. It starts by importing essential libraries such as numpy, matplotlib, and pandas, then imports a dataset from a CSV file and splits it into features and the target variable. The dataset is divided into a training set and a test set, and finally the features are standardized with sklearn's StandardScaler to improve the machine learning model's performance.
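
A condensed sketch of the template using current scikit-learn APIs (the originals use the older Imputer/LabelEncoder classes); the Data.csv layout is an assumption:

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical Data.csv layout: Country, Age, Salary, Purchased.
dataset = pd.read_csv("Data.csv")
X = dataset.iloc[:, :-1].copy()
y = dataset.iloc[:, -1]

# Fill missing numeric values with the column mean
# (SimpleImputer replaces the older sklearn Imputer used in the scripts).
X[["Age", "Salary"]] = SimpleImputer(strategy="mean").fit_transform(X[["Age", "Salary"]])

# One-hot encode the categorical column.
X = pd.get_dummies(X, columns=["Country"])

# Split, then scale (fit the scaler on the training set only).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```
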
8. dimensionality reduction
File Summary
[lda.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/8. Dimensionality Reduction/lda.py) This Python code performs Linear Discriminant Analysis (LDA), a dimensionality reduction technique, on a dataset containing wine details. It starts by importing the necessary libraries and loading the dataset. After dividing the data into training and testing sets and applying feature scaling, the LDA transformation is conducted. A logistic regression model is then trained on the transformed training set, used to predict the test set results, and evaluated with a confusion matrix. Lastly, the classification boundaries of the trained model are visualized on both the training and testing sets.
[pca.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/8. Dimensionality Reduction/pca.py) The code is an implementation of Principal Component Analysis (PCA) for dimensionality reduction in Machine Learning. It begins by importing necessary libraries and the dataset, then standardizes the dataset features. The PCA technique is applied and the amount of variance each PCA holds is calculated. Then, a Logistic Regression model is fitted on transformed dataset and tested. Predictions are compared with actual results using a confusion matrix. Lastly, results are visualized for both the training and test dataset.
[KernelPca.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/8. Dimensionality Reduction/KernelPca.py) The script performs Kernel Principal Component Analysis (Kernel PCA) for dimensionality reduction and applies Logistic Regression for classification. It first imports and prepares the datasets, applies feature scaling, then implements Kernel PCA. It uses the transformed dataset to fit the logistic model, makes predictions, creates a confusion matrix, and visualizes the training and test set results.
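All three scripts follow the same shape: scale, project, classify, evaluate. A minimal sketch of the PCA variant, assuming the Wine.csv layout with features first and the class label in the last column:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

dataset = pd.read_csv('Wine.csv')            # assumed layout: features, then class label
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Keep the two components that explain the most variance
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
print('explained variance ratio:', pca.explained_variance_ratio_)

classifier = LogisticRegression(random_state=0).fit(X_train, y_train)
print(confusion_matrix(y_test, classifier.predict(X_test)))
```

Swapping PCA for LinearDiscriminantAnalysis(n_components=2) (fitted with y_train as well, since LDA is supervised) or KernelPCA(n_components=2, kernel='rbf') reproduces the other two scripts.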
3. Classification
[Decision_Tree_Classification.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Decision_Tree_Classification.py) The Python code performs decision tree classification on a dataset. It starts by importing the necessary libraries, followed by loading and preparing the dataset. This includes scale standardization and splitting it into a training and test set. The DecisionTreeClassifier model is then fit with the training data. Predictions are made on the test set and then evaluated using a confusion matrix. The results of the classification are visualized for both the training and test sets.
[Logistic_Regression.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Logistic_Regression.py) The code executes logistic regression on the 'Social_Network_Ads.csv' dataset. After importing the dataset, it splits the data into training and test sets and scales the feature variables. The logistic regression model is then fitted to the training set. The prediction results are evaluated using a confusion matrix, and both the training and test set results are visually represented with scatter plots.
[Naive_bayes.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Naive_bayes.py) The code implements a Naive Bayes classifier for the social network ads data. It starts by importing the required libraries and data, scales the features, splits the data into training and test sets, fits the Naive Bayes model on the training set, and predicts outcomes on the test set. A confusion matrix is used for model evaluation. Finally, it visualizes the classifier's performance on both the training and test sets.
[Classification_Template.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Classification_Template.py) The code imports a dataset of social network ads, splits it into training and test sets, and scales the features. It also contains an outline for fitting a classifier to the training set, predicting test set results, creating a confusion matrix for evaluating the accuracy, and visualizing the predictions on both the training and the test sets. The visualizations use age and estimated salary as input features. A sketch of this shared template appears after this file list.
[Kernel_SVM.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Kernel_SVM.py) The code implements a Kernel Support Vector Machine (SVM) for classification tasks. It begins by loading a dataset and splitting it into training and test portions. The features are then scaled to maintain uniformity. The classifier with an 'rbf' kernel is trained on the training set, and predictions are made on the test set. Results are evaluated using a confusion matrix. Finally, the code visualizes the decision boundaries for the training and test sets.
[Random_Forest_Classification.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Random_Forest_Classification.py) The code is a Random Forest Classifier implementation for the classification of social network ads. It imports the dataset and pre-processes it through feature scaling, then splits the data into training and test sets for fitting and evaluation. Performance is measured with a confusion matrix, and the results for both the training and test sets are visualized using scatter plots.
[Svm.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Svm.py) This Python script is a machine learning application using the Support Vector Machine (SVM) algorithm for classification. It first imports and preprocesses the data, splitting it into training and testing sets and performing feature scaling. The SVM model is trained on the training set, then used to make predictions on the test set. The script also produces a confusion matrix and visualizes the results in the feature space for both the training and test sets.
[Knn.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/3. Classification/Knn.py) The code is a K-Nearest Neighbour classification model implementation. It loads a social network ad dataset, splits it into training and testing sets, applies feature scaling, and builds a KNN model. After training, it predicts results for the test set and creates a confusion matrix to evaluate model performance. Lastly, it visualizes the training and test set results with respect to age and estimated salary.
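All of these scripts instantiate the shared template with a different estimator. A minimal sketch, assuming the Social_Network_Ads.csv layout with Age, EstimatedSalary, and a binary Purchased label:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

dataset = pd.read_csv('Social_Network_Ads.csv')   # assumed column names
X = dataset[['Age', 'EstimatedSalary']].values
y = dataset['Purchased'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale both features so distance-based models are not dominated by salary
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Any of the classifiers above can be dropped in here, e.g.:
#   SVC(kernel='rbf'), GaussianNB(), DecisionTreeClassifier(criterion='entropy'),
#   RandomForestClassifier(n_estimators=10), LogisticRegression()
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)
print(confusion_matrix(y_test, classifier.predict(X_test)))
```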
4. Clustering
[hierarchical_clustering.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/4. Clustering/hierarchical_clustering.py) The provided Python script implements Hierarchical Clustering on a dataset of mall customers. The script begins by importing the necessary packages and loading the customer data. Using dendrograms, it finds the optimal number of clusters. It then fits the Hierarchical Clustering model to the dataset and visualizes the formed clusters, categorizing customers into groups like 'Careful', 'Standard', 'Target', 'Careless', and 'Sensible' based on their income and spending score.
[k_means.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/4. Clustering/k_means.py) The Python code performs K-means clustering on a dataset of mall customers. It starts by importing the relevant libraries and loading the data. It then uses the elbow method to determine the optimal number of clusters, which turns out to be 5, and applies the K-means algorithm to the dataset. Lastly, it visualizes the clusters and their centroids on a scatter plot, with income on the x-axis and spending score on the y-axis.
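A minimal sketch of the K-means script, assuming the Mall_Customers.csv layout with annual income and spending score in columns 3 and 4:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

dataset = pd.read_csv('Mall_Customers.csv')   # assumed layout
X = dataset.iloc[:, [3, 4]].values            # annual income, spending score

# Within-cluster sum of squares (inertia) for k = 1..10; the "elbow" marks a good k
wcss = [KMeans(n_clusters=k, init='k-means++', random_state=42).fit(X).inertia_
        for k in range(1, 11)]
plt.plot(range(1, 11), wcss)
plt.xlabel('number of clusters')
plt.ylabel('WCSS')
plt.show()

# Fit the final model with the elbow's suggestion (5 in this dataset)
kmeans = KMeans(n_clusters=5, init='k-means++', random_state=42)
labels = kmeans.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.scatter(*kmeans.cluster_centers_.T, s=200, c='red')   # centroids
plt.xlabel('annual income')
plt.ylabel('spending score')
plt.show()
```

In hierarchical_clustering.py the elbow loop's role is played by a dendrogram (scipy.cluster.hierarchy with 'ward' linkage) instead.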
5. Association Rule Learning
[apyori.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/5. Association Rule Learning/apyori.py) The code implements the Apriori algorithm in Python, an unsupervised machine learning model for mining frequent itemsets and relevant association rules from a transaction database. The coded functionality includes transaction management, candidate generation, support calculation, ordered statistic generation, filtering, and serialization methods. It also handles command-line arguments and input-output file transactions, allowing this implementation to be used as a standalone script or as a module in a larger application.
[Apriori.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/5. Association Rule Learning/Apriori.py) The Python script implements the Apriori algorithm, which is used for Association Rule Learning in data mining. It first imports necessary libraries and reads a dataset. It then prepares the data into transactions format. The script further applies the Apriori algorithm on the dataset with a certain minimum support, confidence, lift, and length for generating rules. Finally, it stores all the generated rules.
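A minimal sketch of driving the algorithm through the apyori module, assuming a market-basket-style CSV with one transaction per row; the file name and threshold values are illustrative, not tuned:

```python
import pandas as pd
from apyori import apriori

dataset = pd.read_csv('Market_Basket_Optimisation.csv', header=None)  # assumed file
# Build the list-of-lists transaction format that apriori expects, dropping NaNs
transactions = [
    [str(item) for item in row if pd.notna(item)]
    for row in dataset.values
]

rules = apriori(transactions,
                min_support=0.003,    # itemset appears in at least 0.3% of baskets
                min_confidence=0.2,
                min_lift=3,
                max_length=2)         # restrict rules to item pairs
results = list(rules)                 # apriori returns a generator
for r in results[:5]:
    print(list(r.items), r.ordered_statistics[0].lift)
```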
7. Neural Network
[ann.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/7. Neural Network/ann.py) The provided Python code performs data preprocessing on a dataset (Churn_Modelling.csv) and subsequently creates an Artificial Neural Network using the Keras library (sketched below). The model consists of an input layer, two hidden layers, and an output layer. After training the model and evaluating its accuracy, dropout regularization is employed to reduce overfitting. The code further evaluates the model's performance using cross-validation and refines it through hyperparameter tuning with a grid search.
[cnn.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/7. Neural Network/cnn.py) The code implements a Convolutional Neural Network (CNN) for image classification using the Keras library. It has four kinds of layers: a convolutional layer to process input images, a pooling layer to reduce the spatial dimensions, a flattening layer to transform the data, and a fully connected layer for classification. The CNN model is compiled and fitted to the training set, with data augmentation applied. Finally, the model's performance is evaluated using a validation set.
[rnn.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/7. Neural Network/rnn.py) The code implements a recurrent neural network (RNN) model for predicting Google's stock prices. It first preprocesses the training data by normalizing it, restructuring it into 60 timesteps, and reshaping it. The RNN, built using Keras with LSTM layers and dropout for regularization, is trained on this data. It then predicts stock prices for a test set, inverse-transforms the scaled predictions, and visually compares them against the real stock prices.
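A minimal sketch of ann.py's core network, using random stand-in data in place of the preprocessed Churn_Modelling features; the layer sizes, dropout rates, and epoch count are illustrative assumptions:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

X_train = np.random.rand(100, 11)            # stand-in for the scaled churn features
y_train = np.random.randint(0, 2, size=100)  # stand-in for the binary churn label

model = Sequential([
    Input(shape=(11,)),
    Dense(6, activation='relu'),     # first hidden layer
    Dropout(0.1),                    # dropout regularization against overfitting
    Dense(6, activation='relu'),     # second hidden layer
    Dropout(0.1),
    Dense(1, activation='sigmoid'),  # output layer for the binary churn prediction
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=10, epochs=10, verbose=0)
print(model.evaluate(X_train, y_train, verbose=0))   # [loss, accuracy]
```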
9. Model Selection
[GridSearchCV.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/9. Model Selection/GridSearchCV.py) The code employs a Kernel SVM classifier to make predictions on a dataset, with visualizations of the classification regions. It first splits the data into training and testing sets, scales the features, and trains the classifier. Predictions on the test set are checked against a confusion matrix, and k-fold cross-validation measures the model's performance. GridSearchCV then optimizes the model parameters to improve accuracy, and the tuned model is visualized on both the training and test sets.
[k-cross_validation.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/9. Model Selection/k-cross_validation.py) The Python script demonstrates the use of k-fold cross-validation in machine learning, using the Kernel SVM model. It imports a dataset, applies data pre-processing steps (like feature scaling), and splits it into a training set and a test set. The script then fits the model on the training set, makes predictions on the test set, and evaluates the model by creating a confusion matrix and calculating cross-validation scores. The results for both training and test sets are visualized in two-dimensional plots.
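A minimal sketch of both evaluation steps, using synthetic stand-in data; the parameter grid mirrors the usual C/kernel/gamma search, but the exact values are assumptions:

```python
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Stand-in for the scaled training data used by the scripts
X_train, y_train = make_classification(n_samples=300, n_features=2,
                                       n_informative=2, n_redundant=0,
                                       random_state=0)

classifier = SVC(kernel='rbf', random_state=0)

# 10-fold cross-validation gives a mean accuracy and its spread
scores = cross_val_score(classifier, X_train, y_train, cv=10)
print(scores.mean(), scores.std())

# Grid search over C, kernel, and gamma picks the best-scoring combination
param_grid = [
    {'C': [1, 10, 100], 'kernel': ['linear']},
    {'C': [1, 10, 100], 'kernel': ['rbf'], 'gamma': [0.5, 0.1, 0.01]},
]
grid = GridSearchCV(classifier, param_grid, scoring='accuracy', cv=10)
grid.fit(X_train, y_train)
print(grid.best_score_, grid.best_params_)
```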
2. Regression
[decision_tree.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/decision_tree.py) The Python script implements a Decision Tree Regressor model using the scikit-learn library. The model is fitted with data from 'Position_Salaries.csv' to predict a new result. The code also includes commented-out sections for data splitting and feature scaling. The prediction results are then visually represented using a scatter plot and a regression curve, showcasing the relationship between position level and salary.
[Polynomial_Regression.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/Polynomial_Regression.py) The Python script is a comparison study of Linear and Polynomial Regression models (sketched after this file list). Initial data is read from a CSV file into "x" and "y" arrays. Two models are constructed: linear regression and polynomial regression (with degree 4). The models are fitted to the data, and the linear and polynomial regressions are then visualized by plotting the data against the regression predictions. Additionally, the script predicts salary values with both models for a specified position level (6.5).
[regression_template.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/regression_template.py) The Python code provides a template for regression analysis. It loads a dataset, splits it into dependent and independent variables, and uses a regression model (not defined in the code) to make predictions. It also includes commented instructions for splitting data into training and test sets and for feature scaling. Finally, it produces two plots, visually representing the regression results and their predictions against the real data; for higher-resolution visualization, the code also includes a line for generating a smoother curve.
[Random_Forest.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/Random_Forest.py) The given Python code performs Random Forest Regression on a dataset named 'Position_Salaries.csv'. The model is instantiated with 300 estimators and trained on the entire dataset without any feature scaling. It then predicts a new result for a specific input (6.5). The trained model's predictions are visualized against the actual data points on a scatter plot, illustrating the quality of the Random Forest Regression model's fit.
[svr.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/svr.py) The provided Python code implements the Support Vector Regression (SVR) model. After loading a dataset, the features and targets are scaled using StandardScaler for optimal performance. The SVR model is then fit to the transformed data. A new prediction is made for a given input, and the result is inverse-transformed to original scale. Finally, the SVR predictions and actual target values are plotted for visual comparison.
[Simple_Linear_Regression.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/Simple_Linear_Regression.py) The code implements a simple linear regression model for predicting salaries based on years of experience. It starts by importing required libraries, reading and splitting the dataset into training and testing data. The linear regression model is trained using the training data. It then uses this model to predict salaries in the test data, and visualizes these predictions against the actual values for both training and testing sets.
[Multiple_Linear_Regression.py](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/ML A-Z/2. Regression/Multiple_Linear_Regression.py) This is a multiple linear regression script in Python. It starts by importing the required libraries and the data from a CSV file. Then, it encodes categorical features and splits the data into training and testing sets. The script fits the training data to a linear regression model and makes predictions for the test set. Finally, it applies backward elimination to optimize the model, removing statistically insignificant predictors and re-fitting the model after each removal.
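A minimal sketch of the Polynomial_Regression.py workflow referenced above, assuming the Position_Salaries.csv layout with the position level in column 1 and the salary in column 2:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

dataset = pd.read_csv('Position_Salaries.csv')   # assumed layout
X = dataset.iloc[:, 1:2].values                  # keep X two-dimensional
y = dataset.iloc[:, 2].values

# Plain linear fit for comparison
lin_reg = LinearRegression().fit(X, y)

# Degree-4 polynomial fit: expand X into [1, x, x^2, x^3, x^4], then fit linearly
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)
poly_reg = LinearRegression().fit(X_poly, y)

# Predict the salary for position level 6.5 with both models
print(lin_reg.predict([[6.5]]))
print(poly_reg.predict(poly.transform([[6.5]])))

plt.scatter(X, y, color='red')
plt.plot(X, poly_reg.predict(X_poly), color='blue')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
```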
Deep Learning
[Intro.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/Intro.ipynb) The code is a Jupyter notebook introducing Deep Learning. It includes images illustrating important DL concepts and provides instructions to install 'tensorflow' and 'keras', the libraries needed for the Deep Learning notebooks. The notebook resides in a directory housing multiple ML, DL, and Data Science related projects and tutorials, which include exercises, solutions, and source code.
[4. Handwritten Digits recognization.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/4. Handwritten Digits recognization.ipynb) The code describes a directory structure for a Data Science learning and experimenting project, which includes machine learning projects, deep learning concepts with exercises, feature engineering techniques with relevant exercises, a notebook on handling various data types, and a section on data preprocessing in machine learning. It contains Jupyter notebooks (.ipynb) for interactive learning and Python scripts (.py).
[3. Activation Functions.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/3. Activation Functions.ipynb) The provided code is a Jupyter notebook that implements various activation functions used in neural networks, including Sigmoid, Hyperbolic Tangent (tanh), Rectified Linear Unit (ReLU), and Leaky ReLU (sketched below). Activation functions determine the output of each neuron and introduce non-linearity into the network. Each function is tested with various inputs to illustrate its behavior.
[2. Movie_Review_Classification_using_Tensorflow_&Google_Colab.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/2. Movie_Review_Classification_using_Tensorflow&_Google_Colab.ipynb) The directory represents a collection of data science and machine learning resources. It includes Python scripts and Jupyter notebooks detailing exercises and solutions related to Neural Networks, feature engineering, regression, classification, clustering, association rule learning, NLP, and dimensionality reduction. There are also Jupyter notebooks focused on deep learning using Keras and Tensorflow with topics such as activation functions and digit recognition. Outlier removal methods and file handling scripts are also present.
[1. Keras Sequential.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/1. Keras Sequential.ipynb) The directory structure presents resources for learning and experimenting with Data Science. It includes solved projects, tutorials, and implementation scripts for deep learning building blocks like the Keras Sequential model and activation functions. There are guides for feature engineering techniques like outlier removal using the standard deviation and the interquartile range. Additionally, it covers data preprocessing in the 'ML A-Z' subdirectory. The structure also shows modules dealing with data in formats like JSON, XML, Dictionary, and Files.
[1. Keras Sequential Exercise Solution.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Deep Learning/1. Keras Sequential Exercise Solution.ipynb) The code represents a directory structure for a data science learning and experimentation project. It contains various Jupyter notebook exercises and solutions about deep learning with Keras and TensorFlow, feature engineering techniques such as outlier removal and standard deviation, and basics of ML like data preprocessing and regression. There's also a focus on XGBoost, a Machine Learning algorithm. The structure includes datasets for practice and Machine Learning project files explained by Aman Kharwal.
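A minimal numpy rendering of the activation functions covered in notebook 3 (the alpha value for Leaky ReLU is an illustrative choice):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))           # squashes input into (0, 1)

def tanh(x):
    return np.tanh(x)                     # squashes input into (-1, 1)

def relu(x):
    return np.maximum(0, x)               # passes positives, zeroes out negatives

def leaky_relu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)  # small negative slope avoids "dead" units

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(x))
```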
Numpy
[2. Numpy array Operations.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Numpy/2. Numpy array Operations.ipynb) This Python code demonstrates numpy operations on arrays (a few are condensed in the sketch below). It includes creating one-dimensional and two-dimensional arrays, understanding their properties (like dimension, datatype, size, shape), performing arithmetic operations, creating arrays with specified values (zeros, ones), creating a range of numbers using arange or linspace, reshaping arrays, and array flattening. It further explores matrix operations like addition, multiplication, and the dot product.
[4. Iterate using nditer.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Numpy/4. Iterate using nditer.ipynb) The code is a Jupyter notebook demonstrating various ways to traverse a Numpy array using the nditer function. It includes operations like reshaping, printing rows or columns, modifying original elements, and simultaneously iterating through two arrays. The traversal methods also explore different ordering: C-style (row-major) and Fortran-style (column-major).
[3. Indexing, Slicing & Boolean arrays.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Numpy/3. Indexing, Slicing & Boolean arrays.ipynb) The code demonstrates various techniques for handling one- and multi-dimensional NumPy arrays in a Python environment. It covers methods for array creation and reshaping, accessing elements of an array through indexing and slicing, performing operations on arrays, and iterating through arrays. It also includes procedures for splitting arrays along the horizontal and vertical axes, and using boolean operations to manipulate array data.
[1. Introduction to Numpy.ipynb](https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science/blob/main/Numpy/1. Introduction to Numpy.ipynb) The code demonstrates the advantages of using NumPy over standard Python lists in terms of memory efficiency and execution speed. It covers basic operations such as array creation and element access. Furthermore, it measures the memory usage and execution time differences between Python lists and NumPy arrays for large-scale computations. Lastly, simple mathematical operations are performed on NumPy arrays to showcase their convenience.
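A few of the operations these notebooks walk through, condensed into one runnable snippet:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
print(a.ndim, a.dtype, a.size, a.shape)   # e.g. 2 int64 6 (2, 3)

print(np.zeros((2, 2)), np.ones((2, 2)))  # arrays of fixed values
print(np.arange(0, 10, 2))                # evenly stepped range
print(np.linspace(0, 1, 5))               # fixed number of points

b = a.reshape(3, 2)                       # same data, new shape
print(b.flatten())                        # back to one dimension

m = np.array([[1, 2], [3, 4]])
print(m + m)                              # element-wise addition
print(m * m)                              # element-wise multiplication
print(m.dot(m))                           # matrix (dot) product

# Boolean indexing, as in the slicing notebook
print(a[a > 3])                           # -> [4 5 6]
```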

🚀 Getting Started

Dependencies

Please ensure the dependencies listed in requirements.txt are installed on your system.

🔧 Installation

  1. Clone the Learning-and-Experimenting_Data-Science repository:
git clone https://github.com/Shaon2221/Learning-and-Experimenting_Data-Science
  2. Change to the project directory:
cd Learning-and-Experimenting_Data-Science
  3. Install the dependencies:
pip install -r requirements.txt

🤖 Running Learning-and-Experimenting_Data-Science

Execute any notebook from the command line with nbconvert, replacing notebook.ipynb with the notebook you want to run:

jupyter nbconvert --execute notebook.ipynb

🤝 Contributing

Contributions are welcome! Here are several ways you can contribute:

Contributing Guidelines

  1. Fork the Repository: Start by forking the project repository to your GitHub account.
  2. Clone Locally: Clone the forked repository to your local machine using a Git client.
    git clone <your-forked-repo-url>
  3. Create a New Branch: Always work on a new branch, giving it a descriptive name.
    git checkout -b new-feature-x
  4. Make Your Changes: Develop and test your changes locally.
  5. Commit Your Changes: Commit with a clear and concise message describing your updates.
    git commit -m 'Implemented new feature x.'
  6. Push to GitHub: Push the changes to your forked repository.
    git push origin new-feature-x
  7. Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.

Once your PR is reviewed and approved, it will be merged into the main branch.



Where to start 🤔

  1. Pandas
  2. NumPy
  3. Matplotlib & Seaborn
  4. Feature Engineering
  5. Machine Learning
  6. Projects
  7. Deep Learning
  8. ML A-Z
  9. SQL

Set-up Environment đŸ’ģ

Most of the code is meant to be executed in Jupyter Notebook. You can download Anaconda, which comes with all the necessary packages. Download Anaconda here.
If you're using Jupyter Notebook without the Anaconda distribution, execute the following command. It will install all the packages you need right now. 👇đŸŧ

pip install numpy pandas seaborn matplotlib scikit-learn

That's it! Now open your Jupyter Notebook and start playing with the code. Good luck! 🤗

What is next đŸŽ¯

After working through everything in this repo, you will be confident enough to do great things. Some recommendations, based on what I am doing next:

  • Doing projects
  • Data Science micro course Kaggle
  • Intro to Inferential Statistics
  • Intro to Descriptive Statistics
  • Data Wrangling
  • SQL for Data Analysis
  • Business Intelligence
  • Tableau/Power BI

Some Great Resources

Youtube:

Websites:

Please keep in mind that data science is a never-ending learning process. Prepare yourself to be challenged and to learn new things regularly.


For Bengali community:
Data Science using Python
āĻ¯āĻ–āĻ¨āĻŋ āĻ¨āĻ¤ā§āĻ¨ āĻ•āĻŋāĻ›ā§ āĻļāĻŋāĻ–ā§‡āĻ›āĻŋ, āĻ¨āĻŋāĻœā§‡āĻ° āĻŽāĻ¤ā§‹ āĻ•āĻ°ā§‡ āĻ¨ā§‹āĻŸ āĻ•āĻ°ā§‡ āĻ°ā§‡āĻ–ā§‡āĻ›āĻŋāĨ¤ āĻŽā§‚āĻ˛āĻ¤, āĻ¸ā§‡āĻ­āĻžāĻŦā§‡āĻ‡ āĻ¤ā§ˆāĻ°āĻŋ āĻšāĻ¯āĻŧā§‡āĻ›ā§‡ āĻāĻ‡ āĻ—āĻŋāĻŸāĻšāĻžāĻŦ āĻ°āĻŋāĻĒā§‹āĻ¸āĻŋāĻŸāĻ°āĻŋāĨ¤ āĻ†āĻŽāĻžāĻ° āĻŽāĻ¤ā§‹ āĻ¯āĻžāĻ°āĻž āĻĄāĻžāĻŸāĻž āĻ¨āĻŋāĻ¯āĻŧā§‡ āĻ•āĻžāĻœ āĻ•āĻ°āĻ¤ā§‡ āĻ†āĻ—ā§āĻ°āĻšā§€, āĻ¤āĻžāĻĻā§‡āĻ° āĻ•ā§‹āĻ¨ āĻ•āĻžāĻœ āĻ āĻ˛āĻžāĻ—āĻ¤ā§‡ āĻĒāĻžāĻ°ā§‡āĨ¤ āĻ¨ā§‹āĻŸāĻŦā§āĻ•ā§‡ āĻĨāĻŋāĻ‰āĻ°āĻŋ āĻŦā§āĻ¯āĻžāĻ–ā§āĻ¯āĻž āĻ•āĻ°āĻž āĻ†āĻ›ā§‡, āĻĒāĻžāĻļāĻžāĻĒāĻžāĻļāĻŋ āĻ•āĻŽā§‡āĻ¨ā§āĻŸ āĻ°ā§‡āĻ–ā§‡ āĻ•ā§‹āĻĄ āĻ°āĻŋāĻĄā§‡āĻŦāĻ˛ āĻ°āĻžāĻ–āĻžāĻ° āĻšā§‡āĻˇā§āĻŸāĻž āĻ•āĻ°ā§‡āĻ›āĻŋāĨ¤ āĻ•ā§‹āĻ¨āĻŸāĻž āĻļā§āĻ°ā§ āĻ•āĻ°āĻŦ, āĻ•ā§‹āĻĨāĻžāĻ¯āĻŧ āĻļā§āĻ°ā§ āĻ•āĻ°āĻŦ, āĻ°āĻŋāĻ¸ā§‹āĻ°ā§āĻ¸ āĻ•ā§‹āĻĨāĻžāĻ¯āĻŧ āĻĒāĻžāĻŦ āĻāĻ—ā§āĻ˛ā§‹āĻ¤ā§‡ āĻ¸āĻŽāĻ¯āĻŧ āĻ¨āĻˇā§āĻŸ āĻ•āĻŽāĻŋāĻ¯āĻŧā§‡ āĻļā§‡āĻ–āĻž āĻļā§āĻ°ā§ āĻ•āĻ°āĻ¤ā§‡ āĻĒāĻžāĻ°āĻŦā§‡ āĻ¯ā§‡ āĻ•ā§‡āĻ‰āĨ¤ āĻ‡-āĻŦā§āĻ• āĻāĻ° āĻŽāĻ¤ā§‹ āĻŦā§āĻ¯āĻŦāĻšāĻžāĻ° āĻ•āĻ°āĻ¤ā§‡ āĻĒāĻžāĻ°ā§‡āĻ¨āĨ¤ āĻ†āĻŽāĻžāĻ° āĻ‰āĻĻā§āĻĻā§‡āĻļā§āĻ¯ āĻ›āĻŋāĻ˛ā§‹ ā§¨āĻŸāĻžāĨ¤
ā§§| āĻ­āĻŦāĻŋāĻˇā§āĻ¯āĻ¤ā§‡ āĻ¨āĻŋāĻœā§‡ āĻ•ā§‹āĻĨāĻžāĻ“ āĻ†āĻŸāĻ•ā§‡ āĻ—ā§‡āĻ˛ā§‡ āĻ¯ā§‡āĻ¨ āĻāĻ–āĻžāĻ¨ āĻĨā§‡āĻ•ā§‡ āĻ¸āĻžāĻšāĻžāĻ¯ā§āĻ¯ āĻ¨āĻŋāĻ¤ā§‡ āĻĒāĻžāĻ°āĻŋāĨ¤
ā§¨| āĻ†āĻŽāĻŋ āĻ“āĻĒā§‡āĻ¨ āĻ¸ā§‹āĻ°ā§āĻ¸ āĻ•āĻŽāĻŋāĻ‰āĻ¨āĻŋāĻŸāĻŋ āĻĨā§‡āĻ•ā§‡ āĻļāĻŋāĻ–āĻ¤ā§‡āĻ›āĻŋāĨ¤ āĻāĻ‡ āĻ•āĻŽāĻŋāĻ‰āĻ¨āĻŋāĻŸāĻŋāĻ¤ā§‡ āĻ¯āĻ¤āĻŸāĻž āĻ¸āĻŽā§āĻ­āĻŦ āĻ•āĻ¨ā§āĻŸā§āĻ°āĻŋāĻŦāĻŋāĻ‰āĻļāĻ¨ āĻ•āĻ°āĻžāĨ¤
āĻ…āĻ¨ā§āĻ—ā§āĻ°āĻš āĻ•āĻ°ā§‡ āĻ°āĻŋāĻ­āĻŋāĻ‰/āĻĢāĻŋāĻĄāĻŦā§āĻ¯āĻžāĻ• āĻĻāĻŋāĻŦā§‡āĻ¨ 🙏
āĻ§āĻ¨ā§āĻ¯āĻŦāĻžāĻĻ 🖤