Reinforcement learning

Referred material

Basic Matrix[T] implementation.
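
For reference, a minimal sketch of what a generic Matrix[T] might look like (illustrative names only, not the repo's actual API):

```scala
// Illustrative sketch only: a tiny immutable Matrix[T] backed by a Vector of rows.
final case class Matrix[T](rows: Vector[Vector[T]]) {
  def apply(row: Int, col: Int): T = rows(row)(col)

  def updated(row: Int, col: Int, value: T): Matrix[T] =
    Matrix(rows.updated(row, rows(row).updated(col, value)))

  def map[B](f: T => B): Matrix[B] =
    Matrix(rows.map(_.map(f)))
}

object Matrix {
  def fill[T](rowCount: Int, colCount: Int)(value: T): Matrix[T] =
    Matrix(Vector.fill(rowCount, colCount)(value))
}
```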

Tic-tac-toe game. Uses a basic probability matrix for each game state to make decisions. TODO: needs better prediction.
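
The rough idea, as a hedged sketch rather than the repo's actual code: keep a win-probability estimate per board state, greedily pick the move leading to the state with the highest estimate, and nudge the estimates of visited states toward the game's outcome:

```scala
import scala.collection.mutable

// Hedged sketch, not the repo's code: a state-value table for tic-tac-toe.
object TicTacToeSketch {
  type Board = Vector[Char] // 9 cells: 'X', 'O' or ' '

  // Estimated probability of winning from each seen state; 0.5 for unseen states.
  val value: mutable.Map[Board, Double] =
    mutable.Map.empty[Board, Double].withDefaultValue(0.5)

  // Greedy move selection: pick the successor state with the highest estimate.
  // `nextStates` (all boards reachable in one move) is an assumed helper.
  def chooseMove(board: Board, nextStates: Board => Seq[Board]): Board =
    nextStates(board).maxBy(value)

  // After a game, nudge every visited state's estimate toward the outcome (1.0 win, 0.0 loss).
  def learn(statesPlayed: Seq[Board], outcome: Double, alpha: Double = 0.1): Unit =
    statesPlayed.foreach(s => value(s) += alpha * (outcome - value(s)))
}
```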

A simple example that returns the head (first) integer from an input Array[Int] by learning from a training dataset alone, without explicitly defining the rule "return the head integer".
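
One way such a learner could look, purely as an illustrative sketch (the scoring rule below is an assumption, not the repo's implementation): score each index by how often its element matches the expected output in the training data, then always return the element at the best index:

```scala
// Hedged sketch of the idea, not the repo's implementation: learn which position
// of the input to return purely from (input, expectedOutput) training pairs.
object LearnHeadSketch extends App {
  val trainingData: Seq[(Array[Int], Int)] =
    Seq(
      (Array(3, 7, 1), 3),
      (Array(9, 2, 5), 9),
      (Array(4, 8, 6), 4)
    )

  // Score each index by how often the element at that index equals the expected output,
  // then keep the best index as the learnt "rule".
  val learntIndex: Int =
    trainingData.head._1.indices.maxBy { index =>
      trainingData.count { case (input, expected) => input(index) == expected }
    }

  def predict(input: Array[Int]): Int = input(learntIndex)

  println(predict(Array(42, 1, 2))) // 42 - the head, without "return the head" ever being coded
}
```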

Bandit problem from Chapter 2. Uses the incremental implementation for updating action-value estimates.

10 levers with probabilities [0.5, 0.10, 0.20, 0.25, 0.30, 0.50, 0.60, 0.65, 0.80, 0.90], one per lever in that order.

[image: direction]

The last lever has the highest probability (0.90) and therefore has the highest chance of getting pulled.
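
The incremental implementation keeps a running action-value estimate per lever and moves it toward each observed reward by 1/n of the error. A minimal sketch, assuming 0/1 rewards drawn with the probabilities above (not the repo's exact code):

```scala
import scala.util.Random

// Hedged sketch of an incremental epsilon-greedy 10-armed bandit; not the repo's exact code.
object BanditSketch extends App {
  val probabilities = Array(0.5, 0.10, 0.20, 0.25, 0.30, 0.50, 0.60, 0.65, 0.80, 0.90)

  val estimates = Array.fill(probabilities.length)(0.0) // Q(a): estimated value per lever
  val pulls     = Array.fill(probabilities.length)(0)   // n(a): times each lever was pulled
  val epsilon   = 0.1                                   // explore 10% of the time

  for (_ <- 1 to 10000) {
    val lever =
      if (Random.nextDouble() < epsilon) Random.nextInt(probabilities.length)
      else estimates.indices.maxBy(i => estimates(i))
    val reward = if (Random.nextDouble() < probabilities(lever)) 1.0 else 0.0

    pulls(lever) += 1
    // Incremental update: Q(a) = Q(a) + (1/n) * (reward - Q(a))
    estimates(lever) += (reward - estimates(lever)) / pulls(lever)
  }

  // Typically prints the last lever (0.90), matching the note above.
  println(s"Most pulled lever: ${pulls.indices.maxBy(i => pulls(i))}")
}
```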

Implements the Student MDP from David Silver's lecture 2 (24:56 timestamp). There are tests in StudentSpec showing, via the Bellman equation, that no other state returns the same optimal value as the optimal state.

Value: -2.25      Sample: List(Class1, Class2, Class3, Pass, Sleep)
Value: -3.125     Sample: List(Class1, Facebook, Facebook, Class1, Class2, Sleep)
Value: -3.65625   Sample: List(Class1, Class2, Class3, Pub, Class2, Class3, Pass, Sleep)
Value: -2.21875   Sample: List(Facebook, Facebook, Facebook, Class1, Class2, Class3, Pub, Class2, Sleep)
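
These values are the discounted returns of each sample episode with γ = 0.5. A minimal sketch that reproduces them; the per-state rewards below are inferred from the samples rather than copied from the repo:

```scala
// Hedged sketch: discounted return G = R1 + γ·R2 + γ²·R3 + ... with γ = 0.5.
// The per-state rewards below are inferred so that the sample values above are reproduced.
object StudentReturnSketch extends App {
  val reward: Map[String, Double] =
    Map(
      "Class1"   -> -2.0,
      "Class2"   -> -2.0,
      "Class3"   -> -2.0,
      "Facebook" -> -1.0,
      "Pub"      -> -1.0,
      "Pass"     -> 10.0,
      "Sleep"    -> 0.0
    )

  def value(sample: List[String], gamma: Double = 0.5): Double =
    sample.zipWithIndex.map { case (state, step) => math.pow(gamma, step) * reward(state) }.sum

  println(value(List("Class1", "Class2", "Class3", "Pass", "Sleep")))                 // -2.25
  println(value(List("Class1", "Facebook", "Facebook", "Class1", "Class2", "Sleep"))) // -3.125
}
```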

Grid World

Implements the Bellman equation (value iteration) to find the shortest path to targets within a grid.

The following shows the result for an 11x11 grid with 3 goal targets ⌂ (circled green). The arrows indicate the optimal direction to take at each cell to reach the nearest target.

[image: direction]

Value function after 100 value iterations.

[image: values]
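
Roughly, each sweep of value iteration replaces a cell's value with the best achievable one-step reward plus the value of the neighbour moved to; the arrows then simply point to the neighbour with the highest value. A minimal sketch under assumed goal positions and a step cost of -1 (not the repo's code):

```scala
// Hedged sketch of value iteration on a grid; goal cells and step cost are illustrative.
object GridWorldSketch extends App {
  val size     = 11
  val goals    = Set((0, 0), (5, 5), (10, 10)) // illustrative goal cells
  val stepCost = -1.0
  val moves    = Seq((-1, 0), (1, 0), (0, -1), (0, 1)) // up, down, left, right

  def neighbours(cell: (Int, Int)): Seq[(Int, Int)] =
    moves.map { case (dr, dc) => (cell._1 + dr, cell._2 + dc) }
      .filter { case (r, c) => r >= 0 && r < size && c >= 0 && c < size }

  var values = Map.empty[(Int, Int), Double].withDefaultValue(0.0)

  // 100 sweeps of value iteration: V(s) = max over neighbours s' of (stepCost + V(s'))
  for (_ <- 1 to 100)
    values = {
      val cells = for (r <- 0 until size; c <- 0 until size) yield (r, c)
      cells.map { cell =>
        val newValue =
          if (goals.contains(cell)) 0.0
          else neighbours(cell).map(n => stepCost + values(n)).max
        cell -> newValue
      }.toMap.withDefaultValue(0.0)
    }

  // Optimal direction at a cell: move to the neighbour with the highest value.
  def bestMove(cell: (Int, Int)): (Int, Int) = neighbours(cell).maxBy(values)

  println(bestMove((8, 8))) // (9, 8): a step towards the goal at (10, 10)
}
```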

Monte Carlo Agents have no knowledge of the Environment. The Agent only knows the Actions it can perform and has to learn by executing random Actions on the Environment, with the goal of finding an optimal Policy, i.e. the best Action to take in each State.

The following image shows the beginning (1st iteration), where the Agent randomly walks the Grid starting from the top-left to reach the green circled cell marked ⌂.

[image: direction]

After 1000 iterations the Agent finds a better policy.

[image: values]
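
The core update behind this is first-visit Monte Carlo evaluation: run a whole episode with random Actions, then average the return observed after the first visit to each State. A minimal sketch of that update, assuming an episode is a list of (state, reward) pairs (not the repo's code):

```scala
// Hedged sketch of first-visit Monte Carlo value estimation; not the repo's code.
object MonteCarloSketch {
  type State = (Int, Int) // a grid cell

  // Running averages of returns per state.
  final case class Estimates(totals: Map[State, Double], visits: Map[State, Int]) {
    def value(state: State): Double =
      visits.get(state).fold(0.0)(n => totals(state) / n)
  }

  // One episode = the (state, reward) pairs produced by a random walk through the grid.
  def update(estimates: Estimates, episode: List[(State, Double)], gamma: Double = 1.0): Estimates = {
    // Discounted return from each step onwards, computed backwards: G = reward + γ·G(next)
    val returns =
      episode.map(_._2).scanRight(0.0)((reward, g) => reward + gamma * g).init

    // First-visit: only the first occurrence of each state in the episode counts.
    val firstVisits =
      episode.map(_._1).zip(returns).groupBy(_._1).map { case (state, xs) => state -> xs.head._2 }

    // Accumulate totals and visit counts so value(state) is the average observed return.
    firstVisits.foldLeft(estimates) { case (acc, (state, g)) =>
      Estimates(
        totals = acc.totals.updated(state, acc.totals.getOrElse(state, 0.0) + g),
        visits = acc.visits.updated(state, acc.visits.getOrElse(state, 0) + 1)
      )
    }
  }
}
```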