
adding pinecone test #6

Open · jekalmin wants to merge 1 commit into main
Conversation

jekalmin (Owner)

Add Pinecone to perform a similarity search before calling chat completion.

@Teagan42 (Contributor)

I had this exact idea: store the vectors of entities/attributes in a vector database, tokenize the user's input, and use the embeddings to augment the prompt to include only the relevant data. Glad I found this before I started that project.
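
A minimal sketch of that flow, assuming the `openai` and `pinecone` Python clients (the index name, entity data, and helper below are illustrative, not code from this repo):

```python
# Sketch: embed HA entities into Pinecone, then pull only the relevant
# ones into the prompt based on the user's input. Illustrative only.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                       # reads OPENAI_API_KEY
pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("ha-entities")                # hypothetical index name

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(
        model="text-embedding-ada-002", input=text
    )
    return resp.data[0].embedding

# 1) Store one vector per entity, keeping its state as metadata.
entities = {
    "switch.livingroom_light": "on",
    "sensor.livingroom_temperature": "23",
}
index.upsert(vectors=[
    (entity_id, embed(entity_id), {"state": state})
    for entity_id, state in entities.items()
])

# 2) At query time, embed the user's input and fetch the closest entities.
matches = index.query(
    vector=embed("Is livingroom light on?"),
    top_k=5,
    include_metadata=True,
).matches

# 3) Augment the prompt with only the retrieved entities.
context = "\n".join(f"{m.id},{m.metadata['state']}" for m in matches)
system_prompt = f"Context:\n```csv\nentity_id,state\n{context}\n```"
```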

@jekalmin (Owner, Author)

Glad to hear that!
It makes me more confident that this is the feature the component should provide.

I tried hooking a vector DB into this component, but it blurred the line between simply using OpenAI and using OpenAI combined with a vector DB, which made the component hard to understand.

So I stopped there, and will probably try a different approach.

@Danm72 commented Jan 4, 2024

Can you add a little more detail about where you parked this, @jekalmin, and what other options you see?

@jekalmin (Owner, Author) commented Jan 4, 2024

Of course.

Work done

In this PR, I tried using Pinecone to store entities and retrieve only the relevant ones via similarity search for use in the system prompt. I added an optional "Pinecone API Key" field to the setup flow, along with a few Pinecone-related options.

Problem

  1. Since the system prompt is computed only once, at the start of a conversation, only the few entities returned by the Pinecone query for the first message are used throughout the rest of the conversation. For example,

    Data in Pinecone

    • switch.livingroom_light
    • sensor.livingroom_temperature
    • switch.bedroom_light
    • sensor.bedroom_temperature

    Conversation

    [system]
        Context:
        ```csv
        entity_id,state
        switch.livingroom_light,on
        sensor.livingroom_temperature,23
        ```
    [user] Is livingroom light on?
    [assistant] Yes, livingroom light is on.
    [user] How about bedroom?
    [assistant] There's no information.
    

    (Since the living room is asked about first, only living room entities are added to the system prompt, and the prompt doesn't change for the second question, so the bedroom entities are never included.)

  2. After this work, I realized that the options are not straightforward. Some options are for users who just want to use OpenAI as in the current version, while the others apply only to users who want to combine OpenAI with Pinecone.

Next step

What I want to try next is to make a separate Pinecone integration and hook it in via a function call, as shown below.

System Prompt

  • Removed the "Available Devices" context.
  • Told the LLM to retrieve entities via a function call ("get_entities").
```
I want you to act as smart home manager of Home Assistant.
I will provide information of smart home along with a question, you will truthfully make correction or answer using information provided in one sentence in everyday language.

Current Time: {{now()}}

# "Available Devices" context removed

The available devices can be retrieved via the "get_entities" function  # tell the LLM to fetch entities via function call
Use execute_services function only for requested action, not for current states.
Do not execute service without user's confirmation.
Do not restate or appreciate what user says, rather make a quick inquiry.
```

Functions

```yaml
- spec:
    name: get_entities
    description: Get all entities of HA
    parameters:
      type: object
      properties:
        query:
          type: string
          description: User requested query
      required:
      - query
  function:
    type: script
    sequence:
    - service: pinecone.ask_database
      data:
        prompt: "{{ query }}"
        top_k: 5
        score_threshold: 0.7
      response_variable: _function_result
```
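
For illustration, a hypothetical handler behind `pinecone.ask_database` could look roughly like this (a sketch only; the embedding model, index name, and metadata layout are assumptions, not code from this PR):

```python
# Sketch of a pinecone.ask_database service handler: embed the prompt,
# query the index, and return entities whose score clears the threshold.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key="PINECONE_API_KEY").Index("ha-entities")  # assumed index

def ask_database(prompt: str, top_k: int = 5, score_threshold: float = 0.7) -> list[dict]:
    embedding = openai_client.embeddings.create(
        model="text-embedding-ada-002", input=prompt
    ).data[0].embedding

    result = index.query(vector=embedding, top_k=top_k, include_metadata=True)

    # Mirror the score_threshold option from the service call above.
    return [
        {"entity_id": m.id, "state": m.metadata.get("state")}
        for m in result.matches
        if m.score >= score_threshold
    ]
```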

By creating a Pinecone integration separate from extended_openai_integration:

Advantages

  • Entities can be retrieved on each conversation turn (which solves the problem of baking the Pinecone response into the system prompt).
  • Which embedding API to use can be handled by the Pinecone integration (no need for extended_openai_integration to grow big).

Disadvantages

  • Since information about entities is retrieved via function calling, the number of function calls increases, resulting in slower response times than the current version (see the sketch below).
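
To make that latency cost concrete: with OpenAI function calling, each turn that needs entity data takes two chat completion requests instead of one. A rough sketch (model name and wiring are illustrative; `ask_database` is the hypothetical handler sketched above):

```python
# Sketch of the extra round trip that function calling adds per turn.
import json
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "...the system prompt above..."},
    {"role": "user", "content": "How about bedroom?"},
]
tools = [{
    "type": "function",
    "function": {
        "name": "get_entities",
        "description": "Get all entities of HA",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "User requested query"}},
            "required": ["query"],
        },
    },
}]

# Request 1: the model decides to call get_entities instead of answering.
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]
entities = ask_database(json.loads(call.function.arguments)["query"])

# Request 2: return the function result so the model can actually answer.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(entities)})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, tools=tools)
print(second.choices[0].message.content)
```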

@pajeronda (Contributor) commented Feb 22, 2024

deleted
