
Questions about initial generation #15

Open
wj210 opened this issue Sep 24, 2023 · 1 comment
Comments


wj210 commented Sep 24, 2023

I want to use Self-Refine for a reasoning task, such as open-book QA.
For the few-shot examples used in the initial generation, do the examples have to be bad ones?
If I have good examples, could I use them for the initial stage and hope that, through the iterations, the outputs get even better?
However, if I start from already good examples, it might be tough to come up with even better ones for the few-shot examples in the refine stage.


madaan commented Oct 11, 2023

Good question!

I think it depends on the task, and how much "prior" knowledge you expect the users to have.

For example, for tasks like dialogue response generation and code optimization, we provide a prompt that has all the information. Some other tasks, like code readability, use only an instruction.

In general, it is possible that a better-engineered prompt will lead to a better initial output. But realistically, who wants to do endless prompt engineering? Isn't it better to start with something lightweight/minimal and let the model refine the outputs? 😄 That is the key idea behind Self-Refine.
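To make the "start minimal, then refine" loop concrete, here is a rough sketch of the generate–feedback–refine cycle. All names here (`self_refine`, `call_model`, the prompt wording, the `DONE` stop token) are illustrative assumptions, not the repository's actual API; `call_model` stands in for whatever LLM call you use, and a toy stand-in is included so the sketch runs on its own.

```python
# Hypothetical sketch of the Self-Refine loop; `call_model` is any
# text-in/text-out LLM function you supply. Prompt formats and the
# "DONE" stop signal are illustrative, not the repo's actual prompts.

def self_refine(task_prompt, call_model, max_iters=3):
    """Generate an initial answer, then iteratively refine it with
    model-generated feedback until the feedback says it is done."""
    # Initial generation: a lightweight prompt, no feedback yet.
    output = call_model(f"Task:\n{task_prompt}\nAnswer:")
    for _ in range(max_iters):
        # Feedback step: critique the current answer.
        feedback = call_model(
            f"Task:\n{task_prompt}\nAnswer:\n{output}\n"
            "Give feedback, or say 'DONE' if the answer is good."
        )
        if "DONE" in feedback:
            break
        # Refine step: produce an improved answer from the feedback.
        output = call_model(
            f"Task:\n{task_prompt}\nAnswer:\n{output}\n"
            f"Feedback:\n{feedback}\nImproved answer:"
        )
    return output


# Toy stand-in model so the sketch is runnable: it refines the answer
# once, then signals DONE on the next feedback round.
def toy_model(prompt):
    if "Give feedback" in prompt:
        return "DONE" if "refined" in prompt else "Add more detail."
    if "Improved answer" in prompt:
        return "refined answer"
    return "initial answer"


print(self_refine("an open-book QA question", toy_model))  # -> refined answer
```

The loop itself is agnostic to whether the few-shot examples in `task_prompt` are good or bad; the feedback prompt is what drives improvement beyond the initial quality.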
