Introduction to Computer Vision

A Python version

In this project, you will find notes, screenshots, and Python implementations of all the concepts discussed in the lovely free Udacity course "Introduction to Computer Vision", kindly produced by the Georgia Institute of Technology and presented by Prof. Aaron Bobick, Irfan Essa, and Arpan Chakraborty. The course introduces computer vision fundamentals, along with methods for practical applications and machine learning-based classification.

Project Instructions

  1. The code in the notebooks is written for Python 3; Python 2 will fail in several cases.
  2. The notebooks depend mainly on the following libraries: numpy, scipy, cv2 (OpenCV), and PIL.
  3. You will likely need to install additional pip packages to run some notebooks; we assume you are comfortable installing them as needed. A quick environment check is sketched below.
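If you want to confirm the core dependencies are installed before opening the notebooks, a minimal sanity-check sketch is shown here. It is not part of the course material; it simply imports the libraries listed above and runs a small Gaussian blur (the file name `blur_check.png` is just an illustrative choice), using a synthetic image so no particular path is assumed.

```python
# Environment sanity check for the notebooks.
# Assumes the packages listed above are installed, e.g.:
#   pip install numpy scipy opencv-python pillow
import numpy as np
import scipy
import scipy.ndimage as ndi
import cv2
from PIL import Image

print("numpy :", np.__version__)
print("scipy :", scipy.__version__)
print("OpenCV:", cv2.__version__)

# Build a small synthetic grayscale image (white square on black)
# instead of reading a file, so the check is self-contained.
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 255

# The same blur done two ways: OpenCV and SciPy.
blur_cv = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
blur_sp = ndi.gaussian_filter(img, sigma=1.0)

# Round-trip through PIL to confirm it is importable and working.
Image.fromarray(blur_cv).save("blur_check.png")
print("Wrote blur_check.png with shape", blur_cv.shape)
```

If all three version lines print and `blur_check.png` appears, the notebooks' core dependencies are in place.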

Content

  • 1A-L1: Introduction
  • Images
    • 2A-L1 Images as functions
    • 2A-L2 Filtering
    • 2A-L3 Linearity and convolution
    • 2A-L4 Filters as templates
    • 2A-L5 Edge detection: Gradients
    • 2A-L6 Edge detection: 2D operators
    • 2B-L1 Hough transform: Lines
    • 2B-L2 Hough transform: Circles
    • 2B-L3 Generalized Hough transform
    • 2C-L1 Fourier transform
    • 2C-L2 Convolution in frequency domain
    • 2C-L3 Aliasing
  • Camera & Calibration
    • 3A-L1 Cameras and images
    • 3A-L2 Perspective imaging
    • 3B-L1 Stereo geometry
    • 3B-L2 Epipolar geometry
    • 3B-L3 Stereo correspondence
    • 3C-L1 Extrinsic camera parameters
    • 3C-L2 Intrinsic camera parameters
    • 3C-L3 Calibrating cameras
    • 3D-L1 Image to image projections
    • 3D-L2 Homographies and mosaics
    • 3D-L3 Projective geometry
    • 3D-L4 Essential matrix
    • 3D-L5 Fundamental matrix
  • Visual Features
    • 4A-L1 Introduction to "features"
    • 4A-L2 Finding corners
    • 4A-L3 Scale invariance
    • 4B-L1 SIFT descriptor
    • 4B-L2 Matching feature points (a little)
    • 4C-L1 Robust error functions
    • 4C-L2 RANSAC
  • Photometry
    • 5A-L1 Photometry
    • 5B-L1 Lightness
    • 5C-L1 Shape from shading
  • Motion
    • 6A-L1 Introduction to motion
    • 6B-L1 Dense flow: Brightness constraint
    • 6B-L2 Dense flow: Lucas and Kanade
    • 6B-L3 Hierarchical LK
    • 6B-L4 Motion models
  • Tracking
    • 7A-L1 Introduction to tracking
    • 7B-L1 Tracking as inference
    • 7B-L2 The Kalman filter
    • 7C-L1 Bayes filters
    • 7C-L2 Particle filters
    • 7C-L3 Particle filters for localization
    • 7C-L4 Particle filters for real
    • 7D-L1 Tracking considerations
  • Recognition
    • 8A-L1 Introduction to recognition
    • 8B-L1 Classification: Generative models
    • 8B-L2 Principal Component Analysis
    • 8B-L3 Appearance-based tracking
    • 8C-L1 Discriminative classifiers
    • 8C-L2 Boosting and face detection
    • 8C-L3 Support Vector Machines
    • 8C-L4 Bag of visual words
    • 8D-L1 Introduction to video analysis
    • 8D-L2 Activity recognition
    • 8D-L3 Hidden Markov Models
  • Colors
    • 9A-L1 Color spaces
    • 9A-L2 Segmentation
    • 9A-L3 Mean shift segmentation
    • 9A-L4 Segmentation by graph partitioning
    • 9B-L1 Binary morphology
    • 9C-L1 3D perception
  • Human Vision System
    • 10A-L1 The retina
    • 10B-L1 Vision in the brain
  • We're Done!
