Unable to use SQLLite on AutoRag Example #224

Open · ctur opened this issue May 7, 2024 · 3 comments

ctur commented May 7, 2024

Hello guys,

I tried using SQLite as the storage backend for the Auto RAG example, but I am receiving this error:
Failed to create new assistant run in storage

I enabled SQLAlchemy engine logging and it looks like there is a problem with the upsert operation.

import logging

logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

INFO:sqlalchemy.engine.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.Engine:INSERT INTO test (run_id, name, run_name, user_id, llm, memory, assistant_data, run_data, user_data, task_data, created_at) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ON CONFLICT (run_id) DO UPDATE SET name = ?, run_name = ?, user_id = ?, llm = ?, memory = ?, assistant_data = ?, run_data = ?, user_data = ?, task_data = ?
INFO:sqlalchemy.engine.Engine:[no key 0.00019s] ('run_id', 'auto_rag_assistant', None, 'user', '{"model": "gpt-4-turbo", "name": "OpenAIChat", "metrics": {}}', '{"chat_history": [], "llm_messages": [], "references": []}', 'null', 'null', 'null', 'null', '2024-05-07 00:49:56.389927', 'auto_rag_assistant', None, 'user', '{"model": "gpt-4-turbo", "name": "OpenAIChat", "metrics": {}}', '{"chat_history": [], "llm_messages": [], "references": []}', 'null', 'null', 'null', 'null')
INFO:sqlalchemy.engine.Engine:ROLLBACK
INFO:sqlalchemy.engine.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.Engine:SELECT test.run_id, test.name, test.run_name, test.user_id, test.llm, test.memory, test.assistant_data, test.run_data, test.user_data, test.task_data, test.created_at, test.updated_at
FROM test
WHERE test.run_id = ?
INFO:sqlalchemy.engine.Engine:[generated in 0.00007s] ('run_id',)
INFO:sqlalchemy.engine.Engine:ROLLBACK
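
For reference, the ON CONFLICT ... DO UPDATE statement in the log is SQLite's UPSERT syntax, which requires SQLite 3.24.0 or newer. A minimal sketch (not part of the original report) to check which SQLite library the Python runtime is linked against:

import sqlite3

# The UPSERT syntax (ON CONFLICT ... DO UPDATE) needs SQLite >= 3.24.0
print(sqlite3.sqlite_version)  # version of the SQLite C library linked into Python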

ysolanky (Contributor) commented May 7, 2024

@ctur, can you also share how you have defined the SQLite storage?

ctur (Author) commented May 7, 2024

@ysolanky thank you for the reply. It is the same as in rag_with_lance_and_sqllite, and I passed it to the assistant:

storage = SqlAssistantStorage(table_name='test', db_file='data.db')
storage.create()  # Create the storage if it doesn't exist
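
To double-check that storage.create() actually created the table, a quick standalone check with the sqlite3 module (a sketch assuming the db_file path above, not part of the original report):

import sqlite3

# List the tables in data.db to confirm the "test" table exists
with sqlite3.connect("data.db") as conn:
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)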

ctur (Author) commented May 7, 2024

I forgot to mention that LanceDB is also used as the vector DB.

Full assistant.py example:

from typing import Optional

from phi.assistant import Assistant
from phi.knowledge import AssistantKnowledge
from phi.llm.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo
from phi.embedder.openai import OpenAIEmbedder
from phi.storage.assistant.sqllite import SqlAssistantStorage
from phi.vectordb.lancedb.lancedb import LanceDb

db_url = "./lancedb"


def get_auto_rag_assistant(
    llm_model: str = "gpt-4-turbo",
    user_id: Optional[str] = None,
    run_id: Optional[str] = None,
    debug_mode: bool = True,
) -> Assistant:
    """Get an Auto RAG Assistant."""

    storage = SqlAssistantStorage(table_name="test", db_file="data.db")
    storage.create()  # Create the storage if it doesn't exist

    return Assistant(
        name="auto_rag_assistant",
        run_id=run_id,
        user_id=user_id,
        llm=OpenAIChat(model=llm_model),
        storage=storage,
        knowledge_base=AssistantKnowledge(
            vector_db=LanceDb(
                table_name="auto_rag_assistant_openai",  # Table name in the vectore database
                uri=db_url,  # Location to initiate/create the vector database
                embedder=OpenAIEmbedder(
                    model="text-embedding-3-small", dimensions=1536
                ),
            ),
            # 3 references are added to the prompt
            num_documents=3,
        ),
        description="You are a helpful Assistant called 'AutoRAG' and your goal is to assist the user in the best way possible.",
        instructions=[
            "Given a user query, first ALWAYS search your knowledge base using the `search_knowledge_base` tool to see if you have relevant information.",
            "If you dont find relevant information in your knowledge base, use the `duckduckgo_search` tool to search the internet.",
            "If you need to reference the chat history, use the `get_chat_history` tool.",
            "If the users question is unclear, ask clarifying questions to get more information.",
            "Carefully read the information you have gathered and provide a clear and concise answer to the user.",
            "Do not use phrases like 'based on my knowledge' or 'depending on the information'.",
        ],
        # Show tool calls in the chat
        show_tool_calls=True,
        # This setting gives the LLM a tool to search the knowledge base for information
        search_knowledge=True,
        # This setting gives the LLM a tool to get chat history
        read_chat_history=True,
        tools=[DuckDuckGo()],
        # This setting tells the LLM to format messages in markdown
        markdown=True,
        # Adds chat history to messages
        add_chat_history_to_messages=True,
        add_datetime_to_instructions=True,
        debug_mode=debug_mode,
    )

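
For completeness, a minimal way to exercise the function above (a sketch assuming phidata's Assistant.print_response API; not part of the original report):

from assistant import get_auto_rag_assistant  # hypothetical module name for the file above

# Building the assistant is where SqlAssistantStorage.create() runs;
# the failing upsert happens once the assistant writes its run row to storage.
auto_rag = get_auto_rag_assistant(llm_model="gpt-4-turbo", user_id="user")

# Ask a question; the response is streamed to the terminal.
auto_rag.print_response("What is Auto RAG?", stream=True)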
