
Need enhancement for handling LLM function-calling responses embedded in markdown #238

Open
1WorldCapture opened this issue May 13, 2024 · 2 comments

Comments


1WorldCapture commented May 13, 2024

Scenario: function calling
Steps:

  1. run the cookbook at "cookbook\llms\ollama\tools\app.py" using "streamlit run app.py"
  2. select the llama3 model
  3. ask a question about the stock price of some company, like Apple or Google

Problem:
Sometimes the LLM responds with raw JSON text, and sometimes it embeds the response in a markdown code fence, like

{
....
}

or

```json
{
.....
}
```

Neither case is handled correctly by the current code.

So is it possible or necessary to enhance the cookbook code to handle such cases? Thanks.
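For reference, a minimal sketch of the kind of normalization that would handle both shapes. The helper name and the fence-matching regex are my own, not from the cookbook; it assumes the model's reply arrives as a single string:

```python
import json
import re
from typing import Any, Dict, Optional

# Matches an optional markdown fence (``` or ```json) wrapped around a JSON payload.
_FENCE_RE = re.compile(r"^```(?:json)?\s*(.*?)\s*```$", re.DOTALL)

def parse_tool_call(text: str) -> Optional[Dict[str, Any]]:
    """Parse a tool-call payload that may be raw JSON or JSON wrapped
    in a markdown code fence. Returns None if no JSON object is found."""
    candidate = text.strip()
    match = _FENCE_RE.match(candidate)
    if match:
        candidate = match.group(1)
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Fall back to extracting the first {...} span in the text.
        start, end = candidate.find("{"), candidate.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(candidate[start : end + 1])
            except json.JSONDecodeError:
                return None
        return None
```

With this, both `parse_tool_call('{"name": "get_price"}')` and `parse_tool_call('```json\n{"name": "get_price"}\n```')` return the same dict.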

@ashpreetbedi
Contributor

yup working on this :)

@ju1987yetchung

I have the same problem. I think it is because one parameter should be chosen: `tool_choice` (`Union[str, Dict[str, Any]]`). The documentation says that if tools are defined, it defaults to "auto", which means the model can choose between answering with a plain message or calling a tool. So, under this condition, it seems this parameter should be specified explicitly, but its expected format is only vaguely documented, which makes it hard to write correct code.
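For what it's worth, with an OpenAI-style chat completions API the ambiguity can be reduced by setting `tool_choice` explicitly instead of relying on "auto". A sketch under that assumption (the model name and `get_stock_price` tool are placeholders; whether the Ollama backend used here accepts the same shape would need checking):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": "What is Apple's stock price?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_stock_price",  # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string"}},
                "required": ["symbol"],
            },
        },
    }],
    # Force the model to call this tool rather than reply in free text,
    # sidestepping the raw-JSON-vs-markdown ambiguity described above.
    tool_choice={"type": "function", "function": {"name": "get_stock_price"}},
)
```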
