RewardReports/reward-reports

Welcome to Reward Reports!

We are working to publicly document state-of-the-art reinforcement learning systems. Building on the documentation frameworks for model cards and datasheets proposed by Mitchell et al. and Gebru et al., we argue for the need for Reward Reports for AI systems. In a whitepaper recently published by the Center for Long-Term Cybersecurity, we introduced Reward Reports as living documents that demarcate the design choices of proposed RL deployments. However, many questions remain about the applicability of this framework to different RL applications, the roadblocks to system interpretability, and the resonances between deployed supervised machine learning systems and the sequential decision-making used in RL. At a minimum, Reward Reports are an opportunity for RL practitioners to deliberate on these questions and begin the work of deciding how to resolve them in practice.
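As an illustration, a Reward Report can be thought of as a structured, evolving record rather than a static document. The sketch below models that idea as a small Python data structure; the section names here are paraphrased for illustration and are not the official template from the whitepaper.

```python
# A minimal sketch of a Reward Report as a "living document".
# Section names are illustrative paraphrases, not the official template.
from dataclasses import dataclass, field


@dataclass
class RewardReport:
    """A structured, updatable record for a deployed RL system."""
    system_details: str = ""          # who built the system and where it runs
    optimization_intent: str = ""     # what the reward function is meant to capture
    implementation: str = ""          # reward function, algorithm, environment
    evaluation: str = ""              # metrics and monitoring plans
    changelog: list = field(default_factory=list)  # history of design changes

    def record_change(self, note: str) -> None:
        # Reward Reports are living documents: revisions accumulate over time.
        self.changelog.append(note)


report = RewardReport(optimization_intent="Minimize average commute time")
report.record_change("Initial deployment report")
```

The changelog is the key design point: because reward functions and their consequences are revisited after deployment, each revision is appended rather than overwriting the report.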

For more information, visit our website or read the Reward Reports paper.

Contributing

Given the multi-stakeholder nature of deployed machine learning systems, we are calling on the community to help collect the required information to understand these systems. Please open an issue if you would like to contribute.

Citation

@article{gilbert2022reward,
  title={Reward reports for reinforcement learning},
  author={Gilbert, Thomas Krendl and Dean, Sarah and Lambert, Nathan and Zick, Tom and Snoswell, Aaron},
  journal={arXiv preprint arXiv:2204.10817},
  year={2022}
}

Running

To run the web tool, run the following commands (on the first run, `npm install` is needed beforehand to fetch dependencies, as with any standard npm project):

cd builder
npm install
npm start

Then visit http://localhost:8080/ in your browser.