Python code for the Coursera Algorithms course


Coursera Algorithms

Python 3 version

Karatsuba Multiplication

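The slides for this section are images; as a reference, here is a minimal self-contained sketch of the recursion for non-negative integers (the function name `karatsuba` is illustrative, not necessarily the one used in this repo):

```python
def karatsuba(x, y):
    """Recursively multiply two non-negative integers with Karatsuba's method."""
    if x < 10 or y < 10:              # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)      # x = a * 10^half + b
    c, d = divmod(y, 10 ** half)      # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a + b)(c + d) - ac - bd = ad + bc, saving one recursive multiplication
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + ad_plus_bc * 10 ** half + bd


print(karatsuba(1234, 5678))  # 7006652
```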

Merge Sort

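A minimal sketch of top-down merge sort (illustrative, not the repo's exact implementation):

```python
def merge_sort(arr):
    """Return a sorted copy of arr using top-down merge sort."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 4, 1, 8, 7, 2, 6, 3]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```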

Big-O notation

Formal definition: $T(n) = O(f(n))$ if and only if there exist $c, n_0 > 0$, such that $T(n) \leq cf(n)$ for all $n \geq n_0$.
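
For example, $3n^2 + 10n = O(n^2)$: take $c = 4$ and $n_0 = 10$; for every $n \geq 10$ we have $10n \leq n^2$, so $3n^2 + 10n \leq 4n^2 = c\,n^2$.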

Omega notation

Formal definition: $T(n) = \Omega(f(n))$ if and only if there exist $c, n_0 > 0$, such that $T(n) \geq cf(n)$ for all $n \geq n_0$.

Theta notation

Formal definition: $T(n) = \Theta(f(n))$ if and only if $T(n) = O(f(n))$ and $T(n) = \Omega(f(n))$; equivalently, there exist $c_1, c_2, n_0 > 0$ such that $c_1 f(n) \leq T(n) \leq c_2 f(n)$ for all $n \geq n_0$.

Counting inversions

Counting inversions means the following:

A pair of indices $i < j$ forms an inversion if $a_i > a_j$, that is, if the two elements $a_i$ and $a_j$ are "out of order".

Finding "similarity" between two rankings. Given a sequence of n numbers 1..n (assume all numbers are distinct). Define a measure that tells us how far this list is from being in ascending order. The value should be 0 if $a_1 < a_2 < ... < a_n$ and should be higher as the list is more "out of order".

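A hedged sketch of the $O(n \log n)$ divide-and-conquer approach, which piggybacks on merge sort and counts the "split" inversions during the merge (names are illustrative):

```python
def sort_and_count(arr):
    """Return (sorted copy of arr, number of inversions) in O(n log n)."""
    if len(arr) <= 1:
        return list(arr), 0
    mid = len(arr) // 2
    left, inv_left = sort_and_count(arr[:mid])
    right, inv_right = sort_and_count(arr[mid:])
    merged, i, j, split_inv = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # every element still remaining in `left` is greater than right[j]
            merged.append(right[j])
            j += 1
            split_inv += len(left) - i
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv_left + inv_right + split_inv


print(sort_and_count([1, 3, 5, 2, 4, 6])[1])  # 3 inversions: (3,2), (5,2), (5,4)
```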

Strassen's Subcubic Matrix Multiplication Algorithm

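A minimal sketch of Strassen's recursion, assuming square matrices whose dimension is a power of two; numpy is used only for block slicing and the small base-case product:

```python
import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) using Strassen's 7 recursive products."""
    n = A.shape[0]
    if n <= 2:                       # small base case: fall back to the classic product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P1 = strassen(A11, B12 - B22)
    P2 = strassen(A11 + A12, B22)
    P3 = strassen(A21 + A22, B11)
    P4 = strassen(A22, B21 - B11)
    P5 = strassen(A11 + A22, B11 + B22)
    P6 = strassen(A12 - A22, B21 + B22)
    P7 = strassen(A11 - A21, B11 + B12)
    C11 = P5 + P4 - P2 + P6
    C12 = P1 + P2
    C21 = P3 + P4
    C22 = P5 + P1 - P3 - P7
    return np.vstack([np.hstack([C11, C12]), np.hstack([C21, C22])])


A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))  # True
```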

The master method for asymptotic analysis of divide and conquer algorithms

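For reference, the standard statement in the form used in the course: if $T(n) \leq a\,T(n/b) + O(n^d)$ with $a \geq 1$, $b > 1$, $d \geq 0$, then $T(n) = O(n^d \log n)$ when $a = b^d$, $T(n) = O(n^d)$ when $a < b^d$, and $T(n) = O(n^{\log_b a})$ when $a > b^d$. For example, merge sort ($a = 2, b = 2, d = 1$) falls in the first case, giving $O(n \log n)$; Karatsuba with three recursive calls ($a = 3, b = 2, d = 1$) falls in the third, giving $O(n^{\log_2 3}) \approx O(n^{1.59})$; Strassen ($a = 7, b = 2, d = 2$) likewise gives $O(n^{\log_2 7}) \approx O(n^{2.81})$.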

QuickSort

Implementation

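A hedged sketch of in-place quicksort with a uniformly random pivot (expected running time $O(n \log n)$); the partition routine follows the single-scan scheme used in the course:

```python
import random

def quicksort(arr, lo=0, hi=None):
    """Sort arr in place with randomized quicksort."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot_idx = random.randint(lo, hi)
    arr[lo], arr[pivot_idx] = arr[pivot_idx], arr[lo]   # move the pivot to the front
    pivot, i = arr[lo], lo + 1
    for j in range(lo + 1, hi + 1):                     # partition around the pivot
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[lo], arr[i - 1] = arr[i - 1], arr[lo]           # put the pivot in its final slot
    quicksort(arr, lo, i - 2)
    quicksort(arr, i, hi)
```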

Example

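For instance, running the sketch above on a small array:

```python
data = [3, 8, 2, 5, 1, 4, 7, 6]
quicksort(data)       # uses the quicksort sketch defined above
print(data)           # [1, 2, 3, 4, 5, 6, 7, 8]
```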

Background knowledge

The analysis uses linearity of expectation and the decomposition principle.

Rule of thumb for checking the independence of random variables:

For those without much practice with independence, a useful rule of thumb is the following. If two random variables are independent by construction, for example because the algorithm explicitly defines them independently, then you can proceed with the analysis under the assumption that they are independent. If there is any doubt, that is, if it is not obvious that the two quantities are independent, assume they are dependent until proven otherwise.

A good exercise is the birthday problem; a solution can be found in Introduction to Algorithms.
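
As a quick numeric check of that exercise, the exact probability can be computed directly (365 equally likely birthdays assumed):

```python
def birthday_collision_prob(k, days=365):
    """Exact probability that at least two of k people share a birthday."""
    p_all_distinct = 1.0
    for i in range(k):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct


print(round(birthday_collision_prob(23), 3))  # ~0.507, already above one half
```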

Asymptotic analysis and proof

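In outline, the argument is: fix the $i$-th and $j$-th smallest elements $z_i$ and $z_j$ and define the indicator $X_{ij} = 1$ if they are ever compared. They are compared exactly when one of them is the first element of $\{z_i, \dots, z_j\}$ chosen as a pivot, so $\Pr[X_{ij} = 1] = \frac{2}{j - i + 1}$. By linearity of expectation, the expected number of comparisons is $\sum_{i < j} \frac{2}{j - i + 1} \leq 2n \sum_{k=2}^{n} \frac{1}{k} = O(n \log n)$.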

Karger Min-cut algorithm

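A minimal sketch of one run of the random contraction algorithm, assuming a connected multigraph without self-loops given as an adjacency-list dict (names are illustrative); repeating the run many times makes it likely that the true minimum cut is found:

```python
import random
from copy import deepcopy

def karger_min_cut(adj):
    """One run of random contraction; adj maps vertex -> list of neighbors."""
    graph = deepcopy(adj)
    while len(graph) > 2:
        u = random.choice(list(graph))
        v = random.choice(graph[u])
        # contract edge (u, v): merge v into u
        graph[u].extend(graph[v])
        for w in graph[v]:
            graph[w] = [u if x == v else x for x in graph[w]]
        del graph[v]
        graph[u] = [x for x in graph[u] if x != u]   # remove self-loops
    return len(next(iter(graph.values())))           # edges crossing the resulting 2-cut


# a square with one diagonal: the minimum cut has 2 crossing edges
G = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
print(min(karger_min_cut(G) for _ in range(50)))     # 2 (with high probability)
```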

Graph Search

BFS

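A minimal BFS sketch that computes hop distances layer by layer (illustrative names, adjacency-list dict assumed):

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search: hop distance from source to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


G = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
print(bfs_distances(G, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2, 'd': 3}
```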

DFS

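A minimal iterative DFS sketch (an explicit stack avoids Python's recursion limit on large graphs):

```python
def dfs(adj, source):
    """Depth-first search; returns vertices in the order they are first visited."""
    visited, order, stack = set(), [], [source]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        stack.extend(adj.get(u, []))
    return order


G = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': ['d'], 'd': []}
print(dfs(G, 's'))  # ['s', 'b', 'c', 'd', 'a']
```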

Strongly Connected Components of a directed graph

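A hedged sketch of Kosaraju's two-pass algorithm as presented in the course: one DFS pass over the reversed graph to compute a finishing order, then a second pass over the original graph in decreasing finishing order, where each DFS tree is one SCC (names are illustrative):

```python
from collections import defaultdict

def kosaraju_scc(edges):
    """Return the strongly connected components of a directed graph given as edge pairs."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)
        nodes.update((u, v))

    def dfs(adj, start, visited, out):
        stack = [(start, iter(adj[start]))]
        visited.add(start)
        while stack:                       # iterative DFS that records finishing order
            node, it = stack[-1]
            for nxt in it:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:
                stack.pop()
                out.append(node)

    finish, visited = [], set()
    for node in nodes:                     # first pass: reversed graph
        if node not in visited:
            dfs(rev, node, visited, finish)

    sccs, visited = [], set()
    for node in reversed(finish):          # second pass: original graph
        if node not in visited:
            comp = []
            dfs(graph, node, visited, comp)
            sccs.append(comp)
    return sccs


# two 3-cycles joined by the edge 3 -> 4: the SCCs are {1, 2, 3} and {4, 5, 6}
edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)]
print(kosaraju_scc(edges))  # e.g. [[6, 5, 4], [3, 2, 1]]
```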

Single source shortest path

Dijkstra algorithm

Heap data structure to speed up the search for the next closest vertex

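A minimal heap-based Dijkstra sketch; since `heapq` has no decrease-key operation, it pushes a fresh entry on every improvement and skips stale entries when they are popped (edge lengths assumed non-negative):

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra's algorithm; adj maps u -> list of (v, edge_length)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                          # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


G = {'s': [('v', 1), ('w', 4)], 'v': [('w', 2), ('t', 6)], 'w': [('t', 3)], 't': []}
print(dijkstra(G, 's'))  # {'s': 0, 'v': 1, 'w': 3, 't': 6}
```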

Heaps and their use in median maintenance

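A sketch of the two-heap trick: keep the smaller half of the stream in a max-heap (stored negated, since `heapq` is a min-heap) and the larger half in a min-heap, rebalancing so the median is always the top of the low heap:

```python
import heapq

class MedianMaintainer:
    """Maintain the running median of a stream with two heaps."""
    def __init__(self):
        self.low, self.high = [], []          # low: max-heap (negated), high: min-heap

    def add(self, x):
        if not self.low or x <= -self.low[0]:
            heapq.heappush(self.low, -x)
        else:
            heapq.heappush(self.high, x)
        # rebalance so that len(low) is len(high) or len(high) + 1
        if len(self.low) > len(self.high) + 1:
            heapq.heappush(self.high, -heapq.heappop(self.low))
        elif len(self.high) > len(self.low):
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def median(self):
        return -self.low[0]                   # the ((k+1)//2)-th smallest of k elements


mm = MedianMaintainer()
for x in [5, 1, 9, 3, 7]:
    mm.add(x)
    print(mm.median(), end=' ')               # 5 1 5 3 5
```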

Hash tables and the 2-sum problem

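A minimal sketch of the hash-table idea: one pass, checking for each value whether its complement has already been seen:

```python
def two_sum_exists(nums, target):
    """Return True if two entries of nums (at different positions) sum to target."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False


print(two_sum_exists([3, 8, -1, 12, 5], 4))   # True  (-1 + 5)
print(two_sum_exists([3, 8, -1, 12, 5], 6))   # False
```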

Greedy method

Scheduling problem

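A sketch of the greedy rule from the course for minimizing the weighted sum of completion times: schedule jobs in decreasing order of the ratio weight/length (names are illustrative):

```python
def schedule_by_ratio(jobs):
    """Order (weight, length) jobs by decreasing weight/length ratio."""
    return sorted(jobs, key=lambda job: job[0] / job[1], reverse=True)

def weighted_completion_time(order):
    """Weighted sum of completion times for jobs processed in the given order."""
    total, elapsed = 0, 0
    for weight, length in order:
        elapsed += length
        total += weight * elapsed
    return total


jobs = [(3, 5), (1, 2), (4, 1)]                 # (weight, length)
order = schedule_by_ratio(jobs)                 # [(4, 1), (3, 5), (1, 2)]
print(order, weighted_completion_time(order))   # 4*1 + 3*6 + 1*8 = 30
```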

MST problem

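A minimal sketch of Prim's algorithm with a heap, assuming a connected undirected graph given as an adjacency-list dict; like the Dijkstra sketch above, it uses lazy deletion instead of decrease-key:

```python
import heapq

def prim_mst(adj):
    """Prim's MST; adj maps u -> list of (v, cost). Returns (total cost, tree edges)."""
    start = next(iter(adj))
    in_tree, total, tree_edges = {start}, 0, []
    heap = [(cost, start, v) for v, cost in adj[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        cost, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                          # edge no longer crosses the cut
        in_tree.add(v)
        total += cost
        tree_edges.append((u, v, cost))
        for w, c in adj[v]:
            if w not in in_tree:
                heapq.heappush(heap, (c, v, w))
    return total, tree_edges


G = {
    'a': [('b', 1), ('c', 4), ('d', 3)],
    'b': [('a', 1), ('d', 2)],
    'c': [('a', 4), ('d', 5)],
    'd': [('a', 3), ('b', 2), ('c', 5)],
}
print(prim_mst(G))  # (7, [('a', 'b', 1), ('b', 'd', 2), ('a', 'c', 4)])
```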

Clustering and UnionFind data structure

Kruskal MST

UnionFind data structure

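A combined sketch covering both headings above: a union-find (disjoint-set) structure with union by rank and path compression, and the Kruskal scan that uses it to keep exactly the edges joining two different components (names are illustrative):

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.rank = {x: 0 for x in elements}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True


def kruskal_mst(nodes, edges):
    """Kruskal: scan edges by increasing cost, keeping those that merge two components."""
    uf = UnionFind(nodes)
    mst = [e for e in sorted(edges, key=lambda e: e[2]) if uf.union(e[0], e[1])]
    return sum(c for _, _, c in mst), mst


nodes = ['a', 'b', 'c', 'd']
edges = [('a', 'b', 1), ('a', 'c', 4), ('a', 'd', 3), ('b', 'd', 2), ('c', 'd', 5)]
print(kruskal_mst(nodes, edges))  # (7, [('a', 'b', 1), ('b', 'd', 2), ('a', 'c', 4)])
```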

Greedy algorithm for clustering

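Max-spacing k-clustering is essentially Kruskal's scan stopped early: merge the closest pairs until k clusters remain, and report the cost of the next un-merged edge as the spacing. A short sketch, reusing the UnionFind class from the previous snippet:

```python
def max_spacing_k_clustering(nodes, edges, k):
    """Single-link clustering; returns the maximum spacing achievable with k clusters."""
    uf = UnionFind(nodes)
    clusters = len(nodes)
    for u, v, cost in sorted(edges, key=lambda e: e[2]):
        if uf.find(u) != uf.find(v):
            if clusters == k:
                return cost            # spacing: closest pair in different clusters
            uf.union(u, v)
            clusters -= 1


points = ['a', 'b', 'c', 'd']
dists = [('a', 'b', 1), ('c', 'd', 2), ('b', 'c', 6), ('a', 'c', 7), ('a', 'd', 9), ('b', 'd', 5)]
print(max_spacing_k_clustering(points, dists, 2))  # 5: clusters {a, b} and {c, d}
```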

Huffman coding and Dynamic Programming

Huffman coding

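A compact sketch of the greedy construction: repeatedly merge the two least frequent subtrees, prefixing '0' to the codes in one and '1' to the codes in the other (the representation of a subtree as a partial code dict is just for brevity):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code from {symbol: frequency}; returns {symbol: bit string}."""
    # each heap entry: (total frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ''}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)   # the two least frequent subtrees
        f2, _, codes2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in codes1.items()}
        merged.update({s: '1' + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]


print(huffman_codes({'a': 0.45, 'b': 0.25, 'c': 0.20, 'd': 0.10}))
# code lengths: a -> 1, b -> 2, c -> 3, d -> 3 bits (average 1.85 bits per symbol)
```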

Maximum weight independent set problem

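A hedged sketch of the path-graph WIS dynamic program, assuming a non-empty list of positive vertex weights: $A[i] = \max(A[i-1],\; A[i-2] + w_i)$, followed by a backward pass to recover the chosen vertices:

```python
def max_weight_independent_set(weights):
    """Max-weight independent set of a path graph; returns (optimal value, chosen indices)."""
    n = len(weights)
    A = [0] * (n + 1)
    A[1] = weights[0]
    for i in range(2, n + 1):
        A[i] = max(A[i - 1], A[i - 2] + weights[i - 1])
    # trace back through the table to recover which vertices were chosen
    chosen, i = [], n
    while i >= 1:
        if i >= 2 and A[i - 2] + weights[i - 1] >= A[i - 1]:
            chosen.append(i - 1)      # vertex i (0-indexed as i - 1) is in the set
            i -= 2
        elif i == 1:
            chosen.append(0)
            i -= 1
        else:
            i -= 1
    return A[n], sorted(chosen)


print(max_weight_independent_set([1, 4, 5, 4]))  # (8, [1, 3]): pick the two 4s
```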

Dynamic programming

Knapsack

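A minimal sketch of the classic 0/1 knapsack table, assuming integer weights and capacity ($O(n \cdot W)$ time and space):

```python
def knapsack(capacity, items):
    """0/1 knapsack: items is a list of (value, weight); returns the best total value."""
    n = len(items)
    # table[i][c] = best value using the first i items with residual capacity c
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        value, weight = items[i - 1]
        for c in range(capacity + 1):
            table[i][c] = table[i - 1][c]                        # skip item i
            if weight <= c:                                      # or take it
                table[i][c] = max(table[i][c], table[i - 1][c - weight] + value)
    return table[n][capacity]


print(knapsack(6, [(3, 4), (2, 3), (4, 2), (4, 3)]))  # 8: take the (4, 2) and (4, 3) items
```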

Optimal binary search tree

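A hedged sketch of the optimal BST dynamic program in the form used in the course (successful-search probabilities only, no dummy keys): $C[i][j] = \sum_{k=i}^{j} p_k + \min_{i \leq r \leq j} \big(C[i][r-1] + C[r+1][j]\big)$, with empty ranges costing 0:

```python
def optimal_bst(p):
    """Optimal BST over keys 1..n with search probabilities p[0..n-1]; returns expected cost."""
    n = len(p)
    C = [[0.0] * (n + 2) for _ in range(n + 2)]       # C[i][j] = 0 whenever i > j
    for size in range(1, n + 1):                      # subproblem size
        for i in range(1, n - size + 2):
            j = i + size - 1
            prob_sum = sum(p[i - 1:j])
            best = min(
                (C[i][r - 1] if r > i else 0) + (C[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)              # r = root of the subtree on keys i..j
            )
            C[i][j] = prob_sum + best
    return C[1][n]


# three keys with search probabilities 0.8, 0.1, 0.1: putting key 1 at the root is optimal
print(optimal_bst([0.8, 0.1, 0.1]))  # ~1.3 expected comparisons
```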
