POC: Explore the Vercel AI SDK #3353
Comments
Thanks for the issue, our team will look into it as soon as possible! If you would like to work on this issue, please wait for us to decide if it's ready. The issue will be ready to work on once we remove the "needs triage" label. To claim an issue that does not have the "needs triage" label, please leave a comment that says ".take". If you have any questions, please reach out to us on Discord or follow up on the issue itself. For full info on how to contribute, please check out our contributors guide.
I believe we'll need to upgrade to Next.js 14 (#2021), since the AI SDK requires the App Router. So if we go ahead with this, we could start by moving StarSearch to the App Router.
After a quick look, they use the tools/functions pattern to know when to render a component instead of text. It's pretty elegant and makes a ton of sense. We already have this pattern for the responses; the difference would be mirroring it for rendered components. https://github.com/open-sauced/api/blob/beta/src/star-search/star-search-tools.service.ts
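As a rough illustration of that pattern (the names below are hypothetical, not the actual star-search-tools.service.ts implementation), a tool could tag its result as either text or a component for the client to render:

```typescript
// Hypothetical sketch: each tool result declares whether the client
// should render markdown text or a named component with props.
// This is NOT the actual service code, just the shape of the idea.
type ToolResult =
  | { kind: "text"; markdown: string }
  | { kind: "component"; component: string; props: Record<string, unknown> };

// Hypothetical "who is username?" tool that asks the client to render a dev card.
function whoIsUser(username: string): ToolResult {
  return { kind: "component", component: "DevCard", props: { username } };
}

console.log(whoIsUser("bdougie"));
```

The client can then switch on `kind`: text results go through the existing markdown path, while component results get mapped to real React components.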
Proposal - send
I am all for the metadata approach. We could do a POC using metadata to render one component alongside the markdown, perhaps for the "who is username?" question, rendering a dev card. @isabensusan had a similar design in the StarSearch Figma exploration.
I liked the idea of sending metadata. We can definitely POC this; I already know that if we get that JSON, it's totally doable. I just wonder how it gets surfaced. The thing to test is that we can definitely render components with that metadata, but for parts of a response that are all text, do we return those as just text? Since we're streaming it, we'd need to stream valid JSON in chunks. So maybe something like this coming from the streamed response:
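Purely as a sketch (the chunk shape and field names here are assumptions, not an agreed contract):

```typescript
// Hypothetical streamed-chunk shape: plain text stays as markdown, and a
// component chunk carries a name plus the data that becomes its props.
type StreamChunk =
  | { type: "markdown"; content: string }
  | { type: "component"; name: string; data: Record<string, unknown> };

// Example of what one streamed response could decode to.
const streamed: StreamChunk[] = [
  { type: "markdown", content: "Here is the lottery factor for that repo: " },
  { type: "component", name: "LotteryFactorChart", data: { repository: "open-sauced/app" } },
  { type: "markdown", content: "Let me know if you want a different time range." },
];

console.log(JSON.stringify(streamed[1]));
```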
We could group the markdown together when rendering so we don't end up with weird HTML. The data would be considered the props for the component; it could be simple data or JSON. Or would you consider showing the metadata components only at the end of the prompt response, @jpmcb? Just spitballing. I'm still going to take a peek at the Vercel AI SDK.
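That grouping could be a small pass over the streamed chunks; a sketch assuming a hypothetical chunk shape like the one discussed above:

```typescript
// Hypothetical chunk shape (an assumption, not an agreed contract).
type Chunk =
  | { type: "markdown"; content: string }
  | { type: "component"; name: string; data: Record<string, unknown> };

// Merge consecutive markdown chunks so each run renders as one block of
// markdown instead of several fragments that produce weird HTML.
function groupMarkdown(chunks: Chunk[]): Chunk[] {
  const grouped: Chunk[] = [];
  for (const chunk of chunks) {
    const last = grouped[grouped.length - 1];
    if (chunk.type === "markdown" && last && last.type === "markdown") {
      last.content += chunk.content; // extend the current markdown run
    } else {
      grouped.push({ ...chunk }); // copy so we never mutate the input
    }
  }
  return grouped;
}
```

Each grouped markdown run would then go through the existing markdown renderer as a single string, with component chunks interleaved between runs.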
From the application side of things, this work is complete, and we have our implementation in #3394. If we want to return a lotto factor chart in the scenario proposed above, we'd need to look at it from the API side of things.
We're currently streaming responses for StarSearch that return markdown, which we transform with React Markdown. This works fine, but it limits us to plain HTML tags.
We should explore the Vercel AI SDK to bring richer UI experiences for StarSearch.
This video is a great watch: https://www.youtube.com/watch?v=br2d_ha7alw
Also, @bdougie shared this AI Chatbot example: https://chat.vercel.ai/
The source code is available at https://github.com/vercel/ai-chatbot.