Data Structures, Algorithms and Machine Learning Optimization

This repo is home to the code that accompanies the Data Structures, Algorithms, and Machine Learning Optimization curriculum. The curriculum provides a comprehensive overview of the data structures, algorithms, and optimization techniques that underlie contemporary machine learning approaches, including deep learning and other artificial intelligence techniques:

Algorithms & Data Structures Curriculum

  • Segment 1: Introduction to Data Structures and Algorithms

    • A Brief History of Data
    • A Brief History of Algorithms
    • “Big O” Notation for Time and Space Complexity
  • Segment 2: Lists and Dictionaries

    • List-Based Data Structures: Arrays, Linked Lists, Stacks, Queues, and Deques
    • Searching and Sorting: Binary, Bubble, Merge, and Quick (see the binary search sketch after this outline)
    • Set-Based Data Structures: Maps and Dictionaries
    • Hashing: Hash Tables, Load Factors, and Hash Maps
  • Segment 3: Trees and Graphs

    • Trees: Decision Trees, Random Forests, and Gradient Boosting (XGBoost)
    • Graphs: Terminology, Directed Acyclic Graphs (DAGs)
    • Resources for Further Study of Data Structures & Algorithms
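
The repo's notebooks walk through each of these topics in depth. As a quick, illustrative taste of the Segment 2 searching material (a standard textbook sketch, not code from this repo), here is a minimal binary search, the O(log n) workhorse for sorted, list-based data structures:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    The search interval halves on every iteration, which is what
    gives binary search its O(log n) time complexity.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1


print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1 (not present)
```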

Machine Learning Optimization Curriculum

  • Segment 1: The Machine Learning Approach to Optimization

    • The Statistical Approach to Regression: Ordinary Least Squares
    • When Statistical Approaches to Optimization Break Down
    • The Machine Learning Solution
  • Segment 2: Gradient Descent

    • Objective Functions
    • Cost / Loss / Error Functions
    • Minimizing Cost with Gradient Descent
    • Learning Rate
    • Critical Points, Including Saddle Points
    • Gradient Descent from Scratch with PyTorch (a minimal sketch follows this outline)
    • The Global Minimum and Local Minima
    • Mini-Batches and Stochastic Gradient Descent (SGD)
    • Learning Rate Scheduling
    • Maximizing Reward with Gradient Ascent
  • Segment 3: Fancy Deep Learning Optimizers

    • A Layer of Artificial Neurons in PyTorch
    • Jacobian Matrices
    • Hessian Matrices and Second-Order Optimization
    • Momentum
    • Nesterov Momentum
    • AdaGrad
    • AdaDelta
    • RMSProp
    • Adam (see the torch.optim sketch at the end of this outline)
    • Nadam
    • Training a Deep Neural Net
    • Resources for Further Study
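
The "Gradient Descent from Scratch with PyTorch" topic in Segment 2 is worth a concrete sketch. The example below is an illustrative reconstruction, not the curriculum's own notebook code: it fits a line y = mx + b to noisy toy data by computing a mean-squared-error cost, letting autograd supply the gradients, and stepping the parameters downhill by hand. The toy data, learning rate, and epoch count are arbitrary choices for the demo:

```python
import torch

# Toy data: y = 2x + 1 plus a little noise.
torch.manual_seed(42)
x = torch.linspace(0, 1, 50)
y_true = 2 * x + 1 + 0.05 * torch.randn(50)

# Parameters to learn, tracked by autograd.
m = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

lr = 0.1  # learning rate
for epoch in range(1000):
    y_hat = m * x + b
    cost = torch.mean((y_hat - y_true) ** 2)  # mean squared error
    cost.backward()                           # autograd computes dC/dm, dC/db
    with torch.no_grad():                     # the gradient descent step itself
        m -= lr * m.grad
        b -= lr * b.grad
        m.grad.zero_()                        # reset gradients for the next epoch
        b.grad.zero_()

print(f"m = {m.item():.2f}, b = {b.item():.2f}")  # roughly 2.00 and 1.00
```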
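Segment 3's optimizers (see the Adam entry above) are available off the shelf via torch.optim, so the same toy regression can be trained without hand-written update rules. This is again a minimal sketch under the same assumptions, not repo code; swapping Adam for torch.optim.SGD (with momentum or Nesterov momentum), RMSprop, Adagrad, Adadelta, or NAdam is a one-line change:

```python
import torch

torch.manual_seed(42)
x = torch.linspace(0, 1, 50).unsqueeze(1)       # shape (50, 1)
y_true = 2 * x + 1 + 0.05 * torch.randn(50, 1)

model = torch.nn.Linear(1, 1)                   # a single artificial neuron
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()             # clear stale gradients
    loss = loss_fn(model(x), y_true)
    loss.backward()                   # autograd fills each parameter's .grad
    optimizer.step()                  # Adam's adaptive update rule

print(f"weight = {model.weight.item():.2f}, bias = {model.bias.item():.2f}")
# expected roughly: weight = 2.00, bias = 1.00
```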