Implementation of the paper "A neural active inference model of perceptual-motor learning," published in Frontiers in Computational Neuroscience (2023).


Neural Active Inference

This repository holds the code for the paper A Neural Active Inference Model of Perceptual-Motor Learning, published in Frontiers in Computational Neuroscience (2023). In this work, we propose an active inference (AIF) agent based on an artificial neural network trained via backpropagation of errors (backprop). The model embodies the following key assumptions:

  1. A simplified model is sufficient for reasonably-sized state spaces (like Mountain Car, Cartpole, etc.) -- thus the model jointly adapts only a transition model and an expected free energy (EFE) model at each time step.
  2. Only the simple Q-learning bootstrap principle is needed to train this system (as opposed to policy gradients).
  3. We simplify Bayesian inference by assuming a uniform (uninformative) prior on the parameters of our model.
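Assumption 2 amounts to a temporal-difference-style bootstrap on EFE values rather than a policy-gradient update. The following is a minimal illustrative sketch of that principle in NumPy; the tabular form, learning rate, and update rule here are simplifying assumptions for exposition, not the repository's actual neural-network code:

```python
import numpy as np

n_states, n_actions = 4, 2
gamma, lr = 0.99, 0.1

# Tabular EFE estimates G(s, a); the paper uses a neural network instead.
G = np.zeros((n_states, n_actions))

def efe_bootstrap_update(s, a, signal, s_next, done):
    """Q-learning-style bootstrap on expected free energy.

    `signal` stands in for the per-step EFE terms (instrumental + epistemic);
    lower EFE is preferred, so the bootstrap uses the minimum over actions.
    """
    target = signal + (0.0 if done else gamma * G[s_next].min())
    G[s, a] += lr * (target - G[s, a])

# One illustrative transition: state 0, action 1, signal -1.0, next state 2.
efe_bootstrap_update(0, 1, -1.0, 2, False)
print(G[0, 1])  # nudged toward the bootstrap target of -1.0
```

As in ordinary Q-learning, each update moves the stored estimate a fraction `lr` of the way toward the one-step bootstrap target.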

To run the code, use the following Bash commands.

To fit a local prior model to expert data (imitation learning), to be used later by the active inference agent (if a local prior is desired), run the following command (after setting the desired values in fit_interception_prior.cfg):

$ python train_prior.py --cfg_fname=fit_interception_prior.cfg --gpu_id=0 

To train the final AIF agent according to whatever is configured inside the run_interception_ai.cfg file, run the following command:

$ python train_simple_agent.py --cfg_fname=run_interception_ai.cfg --gpu_id=0
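Both scripts read their settings from .cfg files of key = value pairs. The sketch below shows one way such a file could be parsed with Python's standard configparser; the section name and the exact parsing scheme are assumptions for illustration, since only the instru_term and prior_model_save_path keys appear in this README:

```python
from configparser import ConfigParser

# Hypothetical excerpt of run_interception_ai.cfg; only instru_term and
# prior_model_save_path are documented here, the rest is illustrative.
cfg_text = """
[agent]
instru_term = prior_reward
prior_model_save_path = model/prior/
"""

parser = ConfigParser()
parser.read_string(cfg_text)
instru_term = parser.get("agent", "instru_term")
print(instru_term)  # -> prior_reward
```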

Inside the training configuration file, you can choose to use an alternative prior preference as follows:

  1. To use a local prior (model), set
    instru_term = prior_local
    which requires that you first run train_prior.py and store the prior model in the correct folder. Make sure prior_model_save_path in the config file points to wherever you dump/save the prior model on disk.
  2. To use a global, hand-coded prior, set
    instru_term = prior_global
    which requires changing the tensor variable self.global_mu inside the QAIModel (in src/model/qai_model.py) to a vector of encoded mean values (the default is None).
  3. To use the reward as the global prior, set
    instru_term = prior_reward
    where we justify this by appealing to the Complete Class Theorem.
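The three instru_term settings select how the instrumental signal that shapes the EFE is computed. A hedged sketch of the dispatch logic follows; the function name, signature, and the squared-distance surrogate for the global prior are assumptions for illustration, not the repository's exact implementation in qai_model.py:

```python
import numpy as np

def instrumental_signal(instru_term, obs, reward=None,
                        global_mu=None, local_prior=None):
    """Illustrative dispatch over the three prior-preference options."""
    if instru_term == "prior_local":
        # A learned prior model scores the observation (assumed interface).
        return local_prior(obs)
    if instru_term == "prior_global":
        # Hand-coded preferred mean: negative squared distance as a
        # simple surrogate for closeness to the preferred observation.
        return -float(np.sum((obs - global_mu) ** 2))
    if instru_term == "prior_reward":
        # Reward used as the global prior (Complete Class Theorem argument).
        return float(reward)
    raise ValueError(f"unknown instru_term: {instru_term}")

print(instrumental_signal("prior_reward", obs=np.zeros(2), reward=1.5))  # 1.5
```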

Please cite our article if you find our code useful, using the following BibTeX:

@ARTICLE{yang2023neural,
    AUTHOR={Yang, Zhizhuo and Diaz, Gabriel J. and Fajen, Brett R. and Bailey, Reynold and Ororbia, Alexander G.},   
    TITLE={A neural active inference model of perceptual-motor learning},      
    JOURNAL={Frontiers in Computational Neuroscience},      
    VOLUME={17},           
    YEAR={2023},      
    URL={https://www.frontiersin.org/articles/10.3389/fncom.2023.1099593},       
    DOI={10.3389/fncom.2023.1099593},      
    ISSN={1662-5188}
}
