ralf


ralf is a Python library intended to assist developers in creating applications that involve calls to Large Language Models (LLMs). A core concept in ralf is composability, which allows chaining together LLM calls such that the output of one call can be used to form the prompt of another. ralf makes it easy to chain together both LLM-based and Python-based actions, enabling developers to construct complex information processing pipelines from simpler building blocks. Using LLMs in this way can lead to more capable, robust, steerable, and inspectable applications.

Currently, the ralf base library offers generic functionality for action chaining (through the Dispatcher and Action classes) as well as text classification (through the ZeroShotClassifier class). Check out the other projects within the RALF ecosystem for more specialized functionality, like dialogue management and information extraction.
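Conceptually, action chaining works like the plain-Python sketch below: each action consumes the previous action's output. The function names here are illustrative placeholders, not ralf's actual Dispatcher/Action API.

```python
# Illustrative sketch of action chaining, in the spirit of ralf's
# Dispatcher/Action classes. These names are hypothetical, not ralf's API.

def summarize(text: str) -> str:
    # Stand-in for an LLM-based action that condenses its input.
    return text.split(".")[0] + "."

def to_prompt(summary: str) -> str:
    # A Python-based action: format the previous output into a new prompt.
    return f"Classify the topic of: {summary}"

def run_chain(text, actions):
    # Feed each action's output into the next, forming a pipeline.
    result = text
    for action in actions:
        result = action(result)
    return result

print(run_chain("ralf chains LLM calls. More text.", [summarize, to_prompt]))
# -> Classify the topic of: ralf chains LLM calls.
```

In ralf itself, the LLM-based steps would issue real model calls rather than the string manipulation used here.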

Quickstart Guide

This quickstart guide is intended to get you up and running with ralf within a few minutes.

Installation

We recommend creating a Conda environment before installing the package:

conda create -n ralf python=3.10
conda activate ralf

Install from PyPI

You may install ralf from PyPI using pip:

pip install ralf-jhuapl

Install from Source

Alternatively, you can build the package from source. First, clone the GitHub repository:

git clone https://github.com/jhuapl-fomo/ralf.git

Next, install the requirements using pip:

cd ralf
pip install -r requirements.txt

Then, build the package using flit and install it using pip:

flit build
pip install .

Or if you would like an editable installation, you can instead use:

pip install -e .

OpenAI Configuration

ralf currently relies on language models provided by OpenAI, either directly via the OpenAI API or through Microsoft Azure. In either case, you must save your API key as an environment variable by executing the following in bash:

echo "export OPENAI_API_KEY='your_key'" >> ~/.bashrc
source ~/.bashrc
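If you want to confirm the variable is visible to Python before running ralf, a small check like the one below works; the `require_env` helper is just an illustrative snippet, not part of ralf.

```python
import os

def require_env(name: str, env=os.environ) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = env.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; see the OpenAI Configuration section")
    return value

# Passing an explicit mapping makes the check easy to demonstrate:
print(require_env("OPENAI_API_KEY", {"OPENAI_API_KEY": "sk-example"}))
```

Running `require_env("OPENAI_API_KEY")` with no second argument checks your actual environment.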

OpenAI Configuration (Azure)

If you are accessing OpenAI models through Azure, you must additionally provide the URL for your Azure endpoint by setting the OPENAI_API_BASE environment variable (note: the endpoint URL goes in this separate variable, not in OPENAI_API_KEY):

echo "export OPENAI_API_BASE='https://yourendpoint.openai.azure.com/'" >> ~/.bashrc
source ~/.bashrc

Running the Demos

To test whether the installation was successful, try running the demo scripts:

cd demos
python dispatcher_demo.py
python classifier_demo.py

If the scripts execute successfully, you are good to go! You may want to look through the demo scripts to learn about some of the things ralf can do, or follow the more detailed tutorials.

Documentation & Tutorials

The best way to get started with ralf is to follow the tutorials, which can be found in the full documentation.

License

This project is released under the MIT License.

Copyright © 2023 The Johns Hopkins University Applied Physics Laboratory

Contact: ralf@jhuapl.edu