LLM Tools #2164
Conversation
docs/llm_tools_invocation.md
Outdated
```python
        function_response = await llm.async_call_tool(
            self.hass, tool_input
        )
    except Exception as e:  # pylint: disable=broad-exception-caught
```
Limit the set of exceptions caught to just those thrown by well-behaved tools. While in theory it is good to be defensive here, the reason for being more specific is to avoid masking bugs.

The two types of errors I can foresee here are:
- `HomeAssistantError`, raised when a known problem happens (e.g. a transient error from a backend, etc.)
- An input validation exception (e.g. `vol.Invalid` or some equivalent validation exception).

Also, returning an error as a dictionary is not a common pattern in the Home Assistant code base. Given that LLM calls happen from conversation agents, perhaps an example with an `IntentResponse` would work? e.g. https://github.com/home-assistant/core/blob/8168aff25370e5a7afcb989d83f77815cbbf6903/homeassistant/components/openai_conversation/conversation.py#L120
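A minimal sketch of the narrowed exception handling being suggested. The exception classes below are stand-ins so the sketch runs without Home Assistant installed; the real code would catch `homeassistant.exceptions.HomeAssistantError` and voluptuous's `vol.Invalid`:

```python
import asyncio


# Stand-ins for homeassistant.exceptions.HomeAssistantError and vol.Invalid,
# so this sketch is self-contained.
class HomeAssistantError(Exception):
    """A known, expected failure (e.g. a transient backend error)."""


class Invalid(Exception):
    """Tool input failed schema validation."""


async def call_tool(tool, tool_input):
    """Invoke a tool, converting only well-known errors into a response."""
    try:
        return await tool(tool_input)
    except (HomeAssistantError, Invalid) as err:
        # Expected failures become a structured error for the LLM;
        # any other exception propagates, so real bugs are not masked.
        return {"error": type(err).__name__, "message": str(err)}


async def flaky_tool(tool_input):
    """A hypothetical tool that fails in a well-behaved way."""
    raise HomeAssistantError("backend temporarily unavailable")


result = asyncio.run(call_tool(flaky_tool, {}))
# → {'error': 'HomeAssistantError', 'message': 'backend temporarily unavailable'}
```

A `KeyError` or `TypeError` raised by a buggy tool would escape this `except` clause and surface as a traceback, which is the point of narrowing it.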
docs/llm_tools_invocation.md
Outdated
```python
from homeassistant.helpers import llm
...
```
Perhaps this example should be inline in a https://developers.home-assistant.io/docs/core/entity/conversation?_highlight=conversation#process example?
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Co-authored-by: Allen Porter <allen.porter@gmail.com>
Looks good!
docs/llm_tools_invocation.md
Outdated
```python
) -> conversation.ConversationResult:
    """Process a sentence."""

    while True:
```
check indent
Perhaps the example should just try at most two LLM calls, though I realize it will get verbose.
You mean remove the loop?
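One middle ground between an unbounded `while True:` and fully unrolling two calls is a capped loop. A sketch of that shape, where `query_llm` and `run_tool` are hypothetical stand-ins for the real LLM client and `llm.async_call_tool`:

```python
import asyncio

MAX_TOOL_CALLS = 2  # try at most two LLM round-trips, as suggested


async def query_llm(messages):
    """Hypothetical stand-in for the real LLM client call."""
    # Pretend the first reply requests a tool and the second is final text.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_time", "args": {}}}
    return {"content": "It is noon."}


async def run_tool(tool_call):
    """Hypothetical stand-in for llm.async_call_tool."""
    return "12:00"


async def process(text):
    messages = [{"role": "user", "content": text}]
    for _ in range(MAX_TOOL_CALLS):
        reply = await query_llm(messages)
        if "tool_call" not in reply:
            return reply["content"]
        # Feed the tool result back into the conversation and ask again.
        tool_result = await run_tool(reply["tool_call"])
        messages.append({"role": "tool", "content": tool_result})
    # Budget exhausted: give up gracefully instead of looping forever.
    return "Sorry, I could not complete the request."


answer = asyncio.run(process("What time is it?"))  # → "It is noon."
```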
docs/llm_tools_invocation.md
Outdated
```python
            self.hass, tool_input
        )
    except (HomeAssistantError, vol.Invalid) as e:
        tool_response = {"error": type(e).__name__}
```
Make a `ConversationResult` here?
Too early. Need to feed the tool result back to LLM and wait for its response.
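The shape of that argument, as a self-contained sketch: even an error from the tool is appended to the chat history and the LLM is queried again, and only the model's final reply is turned into a spoken result. `query_llm` here is a hypothetical stand-in for the real model call:

```python
import asyncio
import json


async def query_llm(messages):
    """Hypothetical stand-in: the model sees the tool error and reacts."""
    last = messages[-1]
    if last["role"] == "tool":
        return f"Sorry, the tool failed: {json.loads(last['content'])['error']}"
    return ""


async def respond_to_tool_error(messages, tool_response):
    # Too early to build a ConversationResult at the except clause: the tool
    # result (even an error) must go back to the LLM first, and the final
    # result is built only from the model's reply.
    messages.append({"role": "tool", "content": json.dumps(tool_response)})
    return await query_llm(messages)


messages = [{"role": "user", "content": "Turn on the lights"}]
final_text = asyncio.run(
    respond_to_tool_error(messages, {"error": "HomeAssistantError"})
)
# → "Sorry, the tool failed: HomeAssistantError"
```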
Apply suggestions from code review
Superseded by #2178
Proposed change
Add developer documentation on how to add a new LLM tool.
Type of change
Additional information