Question: the response I got by using terminal is way better than using ollama.generate #117

Open
wangyeye66 opened this issue Apr 13, 2024 · 1 comment

Comments

wangyeye66 commented Apr 13, 2024

I use llama2 7b for text generation. The prompt I attempted:
"""Task: Turn the input into (subject, predicate, object).
Input: Sam Johnson is eating breakfast.
Output: (Dolores Murphy, eat, breakfast)
Input: Joon Park is brewing coffee.
Output: (Joon Park, brew, coffee)
Input: Jane Cook is sleeping.
Output: (Jane Cook, is, sleep)
Input: Michael Bernstein is writing email on a computer.
Output: (Michael Bernstein, write, email)
Input: Percy Liang is teaching students in a classroom.
Output: (Percy Liang, teach, students)
Input: Merrie Morris is running on a treadmill.
Output: (Merrie Morris, run, treadmill)
Input: John Doe is drinking coffee.
Output: (John Doe,"""

Using ollama.generate, the model replies as a chat instead of continuing the text.
In the terminal, it seems to understand what I want to do. Did I call the wrong function in Python? How can I let the model know I don't need a chat-like response?
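For reference, this is roughly how I'm calling it (a minimal sketch; the `llama2:7b` tag is just how I pulled the model):

```python
import ollama

prompt = "..."  # the few-shot prompt quoted above

# Completion-style call from the Python library; this is the one that
# comes back as a chat-style reply instead of continuing the text
response = ollama.generate(model="llama2:7b", prompt=prompt)
print(response["response"])
```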

@93andresen
I've never tried this library, but maybe "ollama.chat" works like the terminal and "ollama.generate" is like autocomplete?
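Something like the sketch below is what I mean (untested; the model tag and prompt are just placeholders, and the `raw=True` flag is from the Ollama API docs, where it's described as sending the prompt without the model's template):

```python
import ollama

# chat(): messages are wrapped in the model's chat template,
# which is roughly what an `ollama run` terminal session does
chat = ollama.chat(
    model="llama2:7b",
    messages=[{"role": "user", "content": "Input: John Doe is drinking coffee.\nOutput:"}],
)
print(chat["message"]["content"])

# generate(): closer to plain completion; with raw=True the prompt is sent
# as-is, with no template applied, so the model should just keep writing the text
completion = ollama.generate(
    model="llama2:7b",
    prompt="Input: John Doe is drinking coffee.\nOutput: (John Doe,",
    raw=True,
)
print(completion["response"])
```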
