idiom-bytes/flaskGPT

High Level

This project was created to be thin so you can build on top of it.
It uses LangChain, OpenAI, Flask, and Vercel to KISS and deploy a one-shot AI server in seconds.

Doesn't have what you're looking for?
Go build it!

Deploy on Vercel

Use this flow to yolo deploy to Vercel.

Deploy directly from github

  1. Fork the repository.
  2. Go to Vercel and choose this repository to deploy.
  3. After the deployment completes, open the project's page & settings on Vercel and configure the Environment Variables by setting:
OPEN_API_KEY=your-api-key
  4. Curl your endpoint and test your server!
curl -X POST https://flaskgpt-your-account-vercel.app/api/prompt -H "Content-Type: application/json" -d "{\"prompt\": \"What is the funniest joke you've ever heard?\"}"
  5. Have fun!

Deploy locally

Use this flow to run the server locally and test it.

Terminal #1 - Setup your server

  1. Clone the repository onto your local filesystem.
  2. Configure your .env by setting:
OPEN_API_KEY=your-api-key
  3. Set up your venv and activate it:
python3 -m venv venv
source venv/bin/activate
  4. Install the dependencies:
pip install -r requirements.txt
  5. Start your server:
python3 flaskGPT.py
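For reference, the KEY=value format used in the .env step can be loaded with a few lines of stdlib Python. This is only an illustration of the file format; the real project may use python-dotenv or read the variable some other way.

```python
import os
from pathlib import Path


def load_env(path=".env"):
    """Minimal .env loader: KEY=value lines, blank lines and '#' comments ignored."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault so variables already in the environment win.
        os.environ.setdefault(key.strip(), value.strip())
```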

Terminal #2 - Test your server

  1. Start a new terminal.
  2. In your 2nd terminal, curl your local server (Flask serves on port 5000 by default):
curl -X POST http://127.0.0.1:5000/api/prompt -H "Content-Type: application/json" -d "{\"prompt\": \"What is the funniest joke you've ever heard?\"}"
  3. Have fun!
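If you prefer Python over curl, a stdlib client for the same endpoint might look like this. The URL and the `{"prompt": ...}` request shape are taken from the curl example; the shape of the response body is an assumption.

```python
import json
import urllib.request


def build_request(url, prompt):
    """Build a JSON POST request for the /api/prompt endpoint."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(url, prompt):
    """Send the prompt and return the raw response body as text."""
    with urllib.request.urlopen(build_request(url, prompt)) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    print(ask("http://127.0.0.1:5000/api/prompt", "Tell me a joke."))
```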

About

Wafer-thin FlaskGPT on Vercel.
