
Auto-local-GPT: An Autonomous LLM Experiment

This project uses Auto-GPT to experiment with the possibility of running a local LLM. It references gpt-llama.cpp to build a custom API.

Run

To run Auto-local-GPT:

  1. Put the models you'd like to try in a directory of your choice.
For example:
wget https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/resolve/main/ggml-vicuna-13b-1.1-q4_1.bin
  2. Set the environment variables in .env (see the example file after this list):
EMBED_DIM=5120
OPENAI_API_BASE_URL=localhost:8000/v1
OPENAI_API_KEY=<if using a custom URL, replace this with the model path>
  3. Run the Docker command, which automatically starts the API endpoint on port 8000 (a quick sanity check is sketched after this list):
docker run -it -d -v <your models directory>:/llama.cpp/models -p 8000:8000 buckylee/auto-local-gpt:latest
  4. Run Auto-GPT. For Linux:
./run.sh
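
For reference, a minimal .env might look like the following. This is a sketch based on the variables in step 2: the model path used as OPENAI_API_KEY is an assumption, and should point at the file downloaded in step 1 as seen from inside the container (the Docker command in step 3 mounts your models directory at /llama.cpp/models).

# Example .env (sketch; adjust the model path to your own download)
EMBED_DIM=5120
OPENAI_API_BASE_URL=localhost:8000/v1
# The API-key field is repurposed to carry the model path (assumed path)
OPENAI_API_KEY=/llama.cpp/models/ggml-vicuna-13b-1.1-q4_1.bin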
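
Once the container is up, you can check that the endpoint answers before launching Auto-GPT. This sketch assumes the server exposes an OpenAI-compatible /v1/chat/completions route (as gpt-llama.cpp does) and that the Bearer token carries the model path, per the example .env above:

# Sanity check against the local API endpoint (assumes an
# OpenAI-compatible route; the Bearer token carries the model path)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer /llama.cpp/models/ggml-vicuna-13b-1.1-q4_1.bin" \
  -d '{"messages": [{"role": "user", "content": "Say hello."}]}'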
