Python 3.x


This image was created with DALL-E 2. You can use it as your Slack bot profile image.

LLM API with FastAPI

This repository connects LLM APIs to Slack. It currently supports OpenAI's ChatGPT and Google's Gemini models. The basic structure is straightforward: when a message arrives through Slack, the application generates a response using the LLM's API. The integration is multimodal, so images can be processed and analyzed as well.
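To make that flow concrete, here is a minimal sketch of a Slack-to-LLM bridge in FastAPI. The route path (/slack/events), the app_mention handling, and the environment variable names are illustrative assumptions based on the configuration table below; the repository's actual handlers may differ.

```python
# Minimal sketch, not the repository's actual code: the route path, event
# type, and environment variable names are assumptions for illustration.
import os

from fastapi import FastAPI, Request
from openai import OpenAI          # assumes the openai>=1.0 client
from slack_sdk import WebClient    # assumes slack_sdk is installed

app = FastAPI()
openai_client = OpenAI(api_key=os.environ["openai_token"])
slack_client = WebClient(token=os.environ["slack_token"])


@app.post("/slack/events")  # hypothetical path
async def slack_events(request: Request):
    payload = await request.json()

    # Slack verifies the endpoint once with a url_verification handshake.
    if payload.get("type") == "url_verification":
        return {"challenge": payload["challenge"]}

    event = payload.get("event", {})
    if event.get("type") == "app_mention":
        # Generate a reply with the LLM and post it back to the channel.
        completion = openai_client.chat.completions.create(
            model=os.getenv("gpt_model", "gpt-3.5-turbo"),
            messages=[{"role": "user", "content": event.get("text", "")}],
        )
        slack_client.chat_postMessage(
            channel=event["channel"],
            text=completion.choices[0].message.content,
        )
    return {"ok": True}
```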

All settings are configured via environment variables, described in the table below.

| environment | description | values |
| --- | --- | --- |
| slack_token | A Slack token that begins with xoxb (for the ChatGPT bot) | required |
| gemini_slack_token | A Slack token that begins with xoxb (for the Gemini bot) | required |
| openai_token | An OpenAI token that begins with sk | required |
| number_of_messages_to_keep | How many conversation histories to keep | 5 (default) |
| system_content | The system content (prompt) for ChatGPT | |
| gpt_model | The GPT model to use | gpt-3.5-turbo (default) |
| gemini_model | The Gemini model to use | gemini-1.5-pro-preview-0409 (default) |

Prerequisite

  • Docker

Before running the application, make sure that Docker is installed and running on your system.

Important: all of the environment variables above are set and used in app/config/constants.py.
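As a hedged sketch of what app/config/constants.py might look like, assuming it simply reads the variables from the table above (the real file may structure this differently):

```python
# Hypothetical sketch of app/config/constants.py: variable names follow the
# configuration table above, but the repository's actual file may differ.
import os

slack_token = os.environ["slack_token"]
gemini_slack_token = os.environ["gemini_slack_token"]
openai_token = os.environ["openai_token"]
number_of_messages_to_keep = int(os.getenv("number_of_messages_to_keep", "5"))
system_content = os.getenv("system_content", "")
gpt_model = os.getenv("gpt_model", "gpt-3.5-turbo")
gemini_model = os.getenv("gemini_model", "gemini-1.5-pro-preview-0409")
```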

Local Execution Guide

  1. To run this application in your local environment, first install the required libraries:

     ```
     pip install -r requirements.txt
     ```

  2. Once the necessary libraries are installed, start the application:

     ```
     uvicorn app.main:app --reload
     ```

This command will run the application based on the app object in the main module of the app package. You can use the --reload option to automatically reload the application when file changes are detected.
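For reference, app.main:app simply means a FastAPI instance named app defined in app/main.py. A minimal sketch of such a module follows; the health endpoint is a hypothetical example for illustration, not necessarily part of this repository.

```python
# Minimal sketch of app/main.py: "uvicorn app.main:app" looks for this
# module-level FastAPI instance named "app".
from fastapi import FastAPI

app = FastAPI(title="LLM API with FastAPI")


@app.get("/health")  # hypothetical endpoint for illustration
async def health() -> dict:
    return {"status": "ok"}
```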


Installation

  1. Clone the repository:

     ```
     git clone https://github.com/jybaek/llm-with-slack.git
     cd llm-with-slack
     ```

  2. Build the Docker image:

     ```
     docker build -t llm-api .
     ```

  3. Run the Docker container:

     ```
     docker run --rm -it -p 8000:8000 llm-api
     ```

  4. Open your web browser and go to http://localhost:8000/docs to access the Swagger UI and test the API (or verify from code, as sketched below).
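If you prefer checking from code instead of a browser, a quick probe like this works once the container is running (the requests package is an assumption here, not a stated project dependency):

```python
# Quick liveness check against the running container.
import requests

response = requests.get("http://localhost:8000/docs")
print(response.status_code)  # 200 means the Swagger UI is being served
```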

Sample

[Screenshots: sample conversations with Gemini and with GPT]

API Documentation

The API documentation can be found at http://localhost:8000/docs once the Docker container is running.

License

This project is licensed under the terms of the Apache license. See LICENSE for more information.