
gochat

Description

A simple AI chat shell (similar to ChatGPT) for conversing with one of the supported large language models, with the option to record your chat transcript in a local file so you can easily reference your conversations later.

Currently Available!

Model Support

This app is designed to support access to multiple underlying chat models, including:

  • OpenAI ChatGPT
  • Google Gemini

If you're not yet signed up to access either of these popular online language models, what are you waiting for? You'll find clear instructions for obtaining the SDK key needed to access each model on its provider's site!

Configuration

Copy the file config/config.yaml to config/config-local.yaml to create your own private configuration.
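For example:

$ cp config/config.yaml config/config-local.yaml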

In your private configuration (i.e., config-local.yaml), select the model you wish to use, and enter your credentials for accessing that model before running.

If you wish to create transcripts of your chat sessions, set the Scriber adapter value to Template and, if needed, customize its formatting in the TemplateScribe sections, taking guidance from the Go templating language documentation.
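For illustration only, a config-local.yaml might look something like the sketch below. The exact keys and structure come from the config/config.yaml you copied; every field name shown here is a hypothetical placeholder:

model: openai                         # hypothetical key: which supported model to use
openai:
  apiKey: <your-openai-sdk-key>       # hypothetical key: credentials for the selected model
scriber:
  adapter: Template                   # per this README: enables transcript recording
templateScribe:
  entry: "{{.Role}}: {{.Content}}\n"  # hypothetical Go text/template for one transcript line

The entry value above uses Go's text/template syntax; any fields (such as .Role and .Content) must match whatever the TemplateScribe actually exposes.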

Running

You will need to have Go installed in order to run gochat in your terminal. If you don't already have it, see the official installation guide at https://go.dev/doc/install.

In a Go (version 1.21 or later) environment, run the app using the Makefile target, e.g.:

$ make run-chat
go run cmd/chat/main.go
Using model: gpt-4-1106-preview
Type 'exit' to quit
Ask me anything: 
> Tell me in three words who you are.
Artificial Intelligence Assistant
> exit
Goodbye!
Bye bye!
$ 

Future Roadmap

Several enhancements envisioned in the gochat Roadmap include:

Retrieval Augmented Generation (RAG)

Support for Fully Local LLMs

  • Add support for chats over confidential information, and/or without the need for an internet connection, as is possible with ollama.

Multimodal Support

  • Now that many LLMs (including Gemini) support media files (both as prompts and as responses), support for this paradigm could be really useful, and not so hard to implement.
  • Envisioned is some sort of "meta-command" syntax to support access to local or remote media (details TBD).

Speech-to-text / Text-to-speech

  • Use something like Google's STT and TTS APIs to allow users to interact with the LLM using speech / audio.

Want to Contribute? Have Ideas?

  • Please get in touch!
  • Please submit a PR!