BIDARA : Bio-Inspired Design and Research Assistant

BIDARA is a GPT-4 chatbot instructed to help scientists and engineers understand, learn from, and emulate the strategies used by living things to create sustainable designs and technologies.

BIDARA can guide users through the Biomimicry Institute’s Design Process, a step-by-step method to propose biomimetic solutions to challenges. This process includes defining the problem, biologizing the challenge, discovering natural models, abstracting design strategies, and emulating nature's lessons.

🔥 Features enabled

     ✅ Multiple chats
     ✅ Code Interpreter (more info, filetypes supported)
     ✅ Knowledge Retrieval (more info, filetypes supported)
     ✅ Function Calling
         ☑️ Retrieve academic literature with Semantic Scholar
         ☑️ Generate images with DALL-E
         ☑️ Analyze images with GPT-4V(ision)
         ☑️ Detect pABC patterns in images with GPT-4V(ision)
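
The Semantic Scholar integration above works through the Assistants API's function calling: the assistant is given a function schema, and the client executes the real search when the model invokes it. The sketch below is illustrative only — the tool name, parameters, and helper are ours, not the repository's actual schema — showing a plausible function declaration plus a helper that builds the Semantic Scholar Graph API search URL:

```javascript
// Hypothetical function schema registered with the OpenAI Assistants API.
// Name and parameters are illustrative; BIDARA's real schema may differ.
const paperSearchTool = {
  type: "function",
  function: {
    name: "get_relevant_papers", // hypothetical name
    description: "Search academic literature on Semantic Scholar.",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search keywords" },
      },
      required: ["query"],
    },
  },
};

// Build a Semantic Scholar Graph API paper-search URL for a query.
function buildSearchUrl(query, limit = 5) {
  const params = new URLSearchParams({
    query,
    limit: String(limit),
    fields: "title,abstract,url,openAccessPdf",
  });
  return `https://api.semanticscholar.org/graph/v1/paper/search?${params}`;
}
```

When a run pauses with `requires_action`, the client would fetch this URL and hand the resulting JSON back to the run via the Assistants API's tool-output submission, letting the model cite the papers in its reply.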

💻 For developers

bidara-deep-chat uses Svelte and the deep-chat web component to connect to BIDARA over the OpenAI Assistants API. The project template is based on https://github.com/sveltejs/template
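
For orientation, a chat turn over the Assistants API is roughly: create a thread, append the user message, then start a run against the assistant. The following is a minimal sketch of that flow with `fetch` — the helper names are ours, not the app's actual code, and it assumes an API key and assistant ID:

```javascript
// Build a request descriptor for an OpenAI Assistants API call.
// Illustrative sketch of the flow, not the repository's implementation.
function assistantsRequest(path, body, apiKey) {
  return {
    url: `https://api.openai.com/v1${path}`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
        "OpenAI-Beta": "assistants=v1", // beta header required at time of writing
      },
      body: JSON.stringify(body),
    },
  };
}

// One chat turn: thread -> user message -> run.
async function sendTurn(apiKey, assistantId, userText) {
  const call = async (path, body) => {
    const { url, options } = assistantsRequest(path, body, apiKey);
    const res = await fetch(url, options);
    return res.json();
  };
  const thread = await call("/threads", {});
  await call(`/threads/${thread.id}/messages`, { role: "user", content: userText });
  return call(`/threads/${thread.id}/runs`, { assistant_id: assistantId });
}
```

In practice the deep-chat component drives this exchange; the sketch just makes the underlying request sequence explicit.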

Run locally

npm install
npm run dev

Known issues

  • deep-chat speechToText: the spoken submit command word is included in the sent message on Safari and Chrome on iOS.

  • deep-chat textToSpeech: messages are not read aloud on Safari or Chrome on iOS.

Nice to haves

  • Save chat logs.

  • Ability to rate responses and add feedback.

  • Ability to send ratings, feedback, and chat logs to us.

  • Don't do text-to-speech (TtS) unless speech-to-text (StT) has been used.

  • Proxy requests to OpenAI through an authenticated API. Users can request access and, once authenticated, generate API keys; authorized API keys are required to communicate with the API.

  • Show the quote from the file used to generate the response when BIDARA uses knowledge retrieval. https://platform.openai.com/docs/assistants/how-it-works/message-annotations

  • GPT-4 vision support so BIDARA can 'see' images, including ones uploaded by users.

  • Functions:

      • Patent search - https://developer.uspto.gov/api-catalog/bulk-search-and-download, https://patentsview.org/apis/api-faqs, https://www.npmjs.com/package/uspto-patents-view-api

  • Get the PDF of a paper from a Semantic Scholar link and upload it to assistant.thread.messages.files for retrieval. As a temporary workaround, use openAccessPdf links to download PDFs on the client, then upload them from the client directly to the OpenAI assistant.

  • Get all messages using the OpenAI API. BIDARA can then use the list of messages to summarize the conversation or save the conversation history to PDF.

  • Get all Code Interpreter code. Useful to check its work.
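
For the "get all messages" item above: the Assistants API returns a thread's messages as a list whose text sits inside nested content parts, newest first. A small sketch (assuming that response shape; the helper name is ours) of flattening it into chronological role/text pairs for summarization or PDF export:

```javascript
// Flatten an Assistants API "list messages" response into {role, text} pairs.
// The API lists newest-first, so reverse for chronological order.
function flattenMessages(listResponse) {
  return listResponse.data
    .slice()
    .reverse()
    .map((msg) => ({
      role: msg.role,
      text: msg.content
        .filter((part) => part.type === "text")
        .map((part) => part.text.value)
        .join("\n"),
    }));
}
```

Joining the resulting turns into a single transcript and sending it back to the model is one straightforward way to produce a conversation summary.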