
Chain-of-ThoughtsPapers

A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
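
A minimal sketch of what few-shot chain-of-thought prompting looks like (the worked exemplar is adapted from Wei et al., 2022; the test question and function name are illustrative, not taken from any specific paper below):

```python
# Few-shot chain-of-thought prompting: prepend a worked, step-by-step exemplar
# so the model continues in the same reasoning style before stating its answer.
# The resulting prompt string can be sent to any large language model completion API.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Build a chain-of-thought prompt by prepending the worked exemplar."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

if __name__ == "__main__":
    # Illustrative question; the model's continuation should reason step by step.
    print(build_cot_prompt("A farmer had 15 apples and gave away 6. How many are left?"))
```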

Papers

  1. Chain of Thought Prompting Elicits Reasoning in Large Language Models.

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou [pdf] 2022.1

  2. Self-Consistency Improves Chain of Thought Reasoning in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [pdf] 2022.3

  3. STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning.

    Eric Zelikman, Yuhuai Wu, Noah D. Goodman [pdf] 2022.3

  4. PaLM: Scaling Language Modeling with Pathways.

    Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel [pdf] 2022.4

  5. Can language models learn from explanations in context?.

    Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill [pdf] 2022.4

  6. Inferring Implicit Relations with Language Models.

    Uri Katz, Mor Geva, Jonathan Berant [pdf] 2022.4

  7. The Unreliability of Explanations in Few-Shot In-Context Learning.

    Xi Ye, Greg Durrett [pdf] 2022.5

  8. Large Language Models are Zero-Shot Reasoners.

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa [pdf] 2022.5

  9. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi [pdf] 2022.5

  10. On the Advance of Making Language Models Better Reasoners.

    Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen [pdf] 2022.6

  11. Emergent Abilities of Large Language Models.

    Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus [pdf] 2022.6

  12. Minerva: Solving Quantitative Reasoning Problems with Language Models.

    Ethan Dyer, Guy Gur-Ari (Google Research, Blueshift Team) [blog] 2022.6

  13. JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding.

    Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen [pdf] 2022.6

  14. A Dataset and Benchmark for Automatically Answering and Generating Machine Learning Final Exams.

    Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, Iddo Drori [pdf] 2022.6

  15. Rationale-Augmented Ensembles in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [pdf] 2022.7

  16. Language Model Cascades.

    David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton [pdf] 2022.7

  17. Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango.

    Aman Madaan, Amir Yazdanbakhsh [pdf] 2022.9

  18. Compositional Semantic Parsing with Large Language Models.

    Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou [pdf] 2022.9

  19. Language Models are Multilingual Chain-of-Thought Reasoners.

    Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei [pdf] 2022.10

  20. Automatic Chain of Thought Prompting in Large Language Models.

    Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola [pdf] 2022.10

  21. Binding Language Models in Symbolic Languages.

    Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu [pdf] 2022.10

  22. ReAct: Synergizing Reasoning and Acting in Language Models.

    Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao [pdf] 2022.10

  23. Ask Me Anything: A simple strategy for prompting language models.

    Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré [pdf], [code] 2022.10

  24. Language Models of Code are Few-Shot Commonsense Learners.

    Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig [pdf], [code] 2022.10

  25. Large Language Models Can Self-Improve.

    Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han [pdf] 2022.10
