# Multi-modal AI Assistant

VT.ai is a multi-modal AI chatbot assistant offering a chat interface to interact with Large Language Models (LLMs) from various providers, either via remote APIs or locally with Ollama.
The application supports multi-modal conversations, seamlessly integrating text, images, and vision processing with LLMs.
## Features

- [Beta] Assistant support: get help from a multi-modal AI assistant through OpenAI's Assistant API function calling. It can write and run code to answer math questions.
- Multi-Provider Support: Choose from a variety of LLM providers including OpenAI, Anthropic, and Google, with more to come.
- Multi-Modal Conversations: Experience rich, multi-modal interactions by uploading text and image files. You can even drag and drop images for the model to analyze.
- Real-time Responses: Stream responses from the LLM as they are generated.
- Dynamic Settings: Customize model parameters such as temperature and top-p during your chat session.
- Clean and Fast Interface: Built using Chainlit, ensuring a smooth and intuitive user experience.
- Advanced Conversation Routing: Utilizes SemanticRouter for accurate and efficient modality selection.
## Prerequisites

- Python 3.7 or higher
- (Optional) `rye` as the Python dependency manager (installation guide below)
## Installation

- Clone the repository:

  ```shell
  git clone https://github.com/vinhnx/VT.ai.git vtai
  ```

  (the trailing `vtai` optionally renames the cloned directory)

- Navigate to the project directory:

  ```shell
  cd vtai
  ```

- You can use plain `pip` to install dependencies without installing `rye`. If so, skip the remaining steps and proceed to the Usage section below.
- If you want to use `rye` and already installed it in the Prerequisites step, you can likewise skip ahead to the Usage section. Otherwise, install `rye` by following these steps:

  a. Install `rye` (Python package manager):

  ```shell
  curl -sSf https://rye-up.com/get | bash
  ```

  b. Source the Rye env file to update `PATH` (add this to your shell configuration file, e.g., `.zprofile` or `.zshrc`):

  ```shell
  source "$HOME/.rye/env"
  ```
## Usage

- Rename the `.env.example` file to `.env` and configure your private LLM provider API keys.
- Install dependencies:
  - Using `pip`: `pip install -r requirements.txt`
  - Using `rye`: `rye sync`
- Activate the Python virtual environment:

  ```shell
  source .venv/bin/activate
  ```
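As a rough sketch of what the `.env` step accomplishes, the snippet below parses `KEY=VALUE` lines into environment variables. This is an illustrative stand-in, not VT.ai's actual loading code, and the real variable names are defined in `.env.example`:

```python
import os

def load_env(path=".env"):
    """Minimal .env reader: put KEY=VALUE pairs into os.environ,
    skipping blank lines and '#' comments. Illustrative only."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so already-set environment variables take precedence
            os.environ.setdefault(key.strip(), value.strip())
```

Libraries such as `python-dotenv` provide the same behavior with more edge-case handling.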
- Run the app with optional hot reload:

  ```shell
  chainlit run src/app.py -w
  ```

- Open the provided URL in your web browser (e.g., `localhost:8000`).
- Select an LLM model and start chatting, or upload files for multi-modal processing.
## Technologies

- Chainlit: A powerful library for building chat applications with LLMs, providing a clean and fast front-end.
- LiteLLM: A versatile library for interacting with LLMs, abstracting away the complexities of different providers.
- SemanticRouter: A high-performance library for accurate conversation routing, enabling dynamic modality selection.
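To make the routing idea concrete, here is a toy, keyword-based stand-in for modality selection. SemanticRouter actually matches utterances by embedding similarity; the route names and keywords below are made up for illustration:

```python
# Toy modality router: maps a user message to a processing route.
# SemanticRouter does this with embeddings; plain keyword matching
# is used here only to show the control flow.
ROUTES = {
    "vision": {"image", "photo", "picture"},
    "code": {"function", "bug", "script"},
}

def select_route(message: str, default: str = "text") -> str:
    """Return the first route whose keywords appear in the message."""
    lowered = message.lower()
    for name, keywords in ROUTES.items():
        if any(word in lowered for word in keywords):
            return name
    return default
```

The idea is that the selected route determines how a message is handled, e.g., whether it goes to a vision-capable model or a plain text completion.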
## Contributing

Contributions are welcome! Here's how you can contribute:
- Fork the repository
- Create a new branch: `git checkout -b my-new-feature`
- Make your changes and commit them: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request
## License

This project is licensed under the MIT License.
## Contact

For questions, suggestions, or feedback, feel free to reach out:
- Twitter: @vinhnx