
multi-armed-bandit

Here are 115 public repositories matching this topic...

In probability theory, the multi-armed bandit problem is a problem in which a fixed limited set of resources must be allocated between competing (alternative) choices in a way that maximizes their expected gain, when each choice's properties are only partially known at the time of allocation, and may become better understood as time passes or by…
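The allocation trade-off described above (exploiting the choice that currently looks best versus exploring to learn more about the others) can be sketched with an epsilon-greedy strategy on a Bernoulli bandit. This is a minimal illustrative sketch, not code from any listed repository; the function name, parameters, and arm means are all assumptions chosen for the example.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Play a Bernoulli bandit: with probability epsilon explore a random
    arm, otherwise exploit the arm with the best running reward estimate."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:            # explore
            arm = rng.randrange(n_arms)
        else:                                 # exploit current best estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: properties become better understood over time
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, counts, total_reward

estimates, counts, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(counts.index(max(counts)))  # the highest-mean arm should attract most pulls
```

With enough pulls the estimate for each arm converges toward its true mean, so the exploit branch increasingly selects the genuinely best arm while the epsilon fraction of exploratory pulls keeps refining the others.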

  • Updated Jun 1, 2018
  • Jupyter Notebook
