🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials

Hands-on LLMs Course

Learn to Train and Deploy a Real-Time Financial Advisor

by Paul Iusztin, Pau Labarta Bajo and Alexandru Razvant

Table of Contents

  1. Building Blocks
  2. Setup External Services
  3. Install & Usage
  4. Video lectures
  5. Articles
  6. License
  7. Contributors & Teachers

1. Building Blocks

Using the 3-pipeline design, this is what you will learn to build within this course ↓

1.1. Training Pipeline

Training pipeline that:

  • loads a proprietary Q&A dataset
  • fine-tunes an open-source LLM using QLoRA
  • logs the training experiments on Comet ML's experiment tracker & the inference results on Comet ML's LLMOps dashboard
  • stores the best model on Comet ML's model registry

The training pipeline is deployed using Beam as serverless GPU infrastructure.

→ Found under the modules/training_pipeline directory.
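
To give you a feel for what this looks like in code, here is a minimal QLoRA sketch using Hugging Face transformers and peft. It is an illustration only, not the course's exact configuration: the base model, LoRA hyperparameters, and target modules below are assumptions.

 import torch
 from peft import LoraConfig, get_peft_model
 from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

 # Load the base model with 4-bit NF4 quantization -- the "Q" in QLoRA.
 bnb_config = BitsAndBytesConfig(
     load_in_4bit=True,
     bnb_4bit_quant_type="nf4",
     bnb_4bit_compute_dtype=torch.bfloat16,
 )
 model = AutoModelForCausalLM.from_pretrained(
     "tiiuae/falcon-7b",  # illustrative base model
     quantization_config=bnb_config,
     device_map="auto",
 )
 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

 # Attach small trainable LoRA adapters on top of the frozen 4-bit weights.
 lora_config = LoraConfig(
     r=16,                                # illustrative adapter rank
     lora_alpha=32,
     lora_dropout=0.05,
     target_modules=["query_key_value"],  # Falcon's fused attention projection
     task_type="CAUSAL_LM",
 )
 model = get_peft_model(model, lora_config)
 model.print_trainable_parameters()  # only a tiny fraction of weights train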

💻 Minimum Hardware Requirements

  • CPU: 4 Cores
  • RAM: 14 GiB
  • VRAM: 10 GiB (mandatory CUDA-enabled Nvidia GPU)

Note: Do not worry if you don't have the minimum hardware requirements. We will show you how to deploy the training pipeline to Beam's serverless infrastructure and train the LLM there.

1.2. Streaming Real-time Pipeline

Real-time feature pipeline that:

  • ingests financial news from Alpaca
  • cleans & transforms the news documents into embeddings in real-time using Bytewax
  • stores the embeddings into the Qdrant Vector DB

The streaming pipeline is automatically deployed on an AWS EC2 machine using a CI/CD pipeline built in GitHub Actions.

→ Found under the modules/streaming_pipeline directory.
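
The actual pipeline is a Bytewax dataflow, but stripped of the streaming machinery, the per-document transformation boils down to the sketch below. The embedding model, collection name, and payload schema are illustrative assumptions, not the module's exact choices.

 import os
 import uuid

 from qdrant_client import QdrantClient
 from qdrant_client.models import PointStruct
 from sentence_transformers import SentenceTransformer

 client = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
 encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

 def ingest_news_document(headline: str, body: str) -> None:
     """Clean a news document, embed it, and upsert it into the vector DB."""
     text = f"{headline}\n{body}".strip()  # minimal cleaning, for illustration
     client.upsert(
         collection_name="financial_news",  # assumed collection name
         points=[
             PointStruct(
                 id=str(uuid.uuid4()),
                 vector=encoder.encode(text).tolist(),
                 payload={"text": text},
             )
         ],
     )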

💻 Minimum Hardware Requirements

  • CPU: 1 Core
  • RAM: 2 GiB
  • VRAM: -

1.3. Inference Pipeline

Inference pipeline that uses LangChain to create a chain that:

  • downloads the fine-tuned model from Comet's model registry
  • takes user questions as input
  • queries the Qdrant Vector DB and enhances the prompt with related financial news
  • calls the fine-tuned LLM for financial advice using the initial query, the context from the vector DB, and the chat history
  • persists the chat history into memory
  • logs the prompt & answer into Comet ML's LLMOps monitoring feature

The inference pipeline is deployed as a RESTful API using Beam as serverless GPU infrastructure. It is also wrapped in a Gradio UI for demo purposes.

→ Found under the modules/financial_bot directory.
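
Conceptually, the retrieval-augmented step of that chain reduces to: embed the question, fetch the most similar news chunks from Qdrant, and splice them into the prompt. A minimal sketch, assuming the same illustrative collection and embedding model as above (the real prompt template lives in modules/financial_bot):

 import os

 from qdrant_client import QdrantClient
 from sentence_transformers import SentenceTransformer

 client = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
 encoder = SentenceTransformer("all-MiniLM-L6-v2")

 def build_prompt(question: str, chat_history: str) -> str:
     """Build an augmented prompt from the question, retrieved news, and history."""
     hits = client.search(
         collection_name="financial_news",
         query_vector=encoder.encode(question).tolist(),
         limit=3,  # keep the top 3 most relevant news chunks
     )
     context = "\n".join(hit.payload["text"] for hit in hits)
     return (
         "You are a financial advisor. Answer using the news context below.\n"
         f"News context:\n{context}\n\n"
         f"Chat history:\n{chat_history}\n\n"
         f"Question: {question}\nAnswer:"
     )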

💻 Minimum Hardware Requirements

  • CPU: 4 Cores
  • RAM: 14 GiB
  • VRAM: 8 GiB (mandatory CUDA-enabled Nvidia GPU)

Note: Do not worry if you don't have the minimum hardware requirements. We will show you how to deploy the inference pipeline to Beam's serverless infrastructure and call the LLM from there.


[architecture diagram]

1.4. Financial Q&A Dataset

We used GPT-3.5 to generate a financial Q&A dataset to fine-tune our open-source LLM to specialize in using financial terms and answering financial questions. Using a large LLM, such as GPT-3.5, to generate a dataset that trains a smaller LLM (e.g., Falcon 7B) is known as fine-tuning with distillation.
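
In essence, the distillation step repeatedly prompts the teacher model for Q&A pairs and stores them as training examples. Here is a minimal sketch using the openai client; the actual prompts live in the q_and_a_dataset_generator module, so the one below is an illustrative stand-in.

 import json

 from openai import OpenAI

 client = OpenAI()  # reads OPENAI_API_KEY from the environment

 def generate_qa_pair(topic: str) -> dict:
     """Ask the teacher model for one financial Q&A training example."""
     response = client.chat.completions.create(
         model="gpt-3.5-turbo",
         messages=[{
             "role": "user",
             "content": (
                 f"Write one question a retail investor might ask about {topic}, "
                 "then answer it concisely. Respond with JSON of the form "
                 '{"question": "...", "answer": "..."}.'
             ),
         }],
     )
     return json.loads(response.choices[0].message.content)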

→ To understand how we generated the financial Q&A dataset, check out this article written by Pau Labarta.

→ To see a complete analysis of the financial Q&A dataset, check out the dataset_analysis subsection of the course written by Alexandru Razvant.

[EDA plots of the financial Q&A dataset]

2. Setup External Services

Before diving into the modules, you have to set up a couple of additional external tools for the course.

NOTE: You can also set them up as you go, module by module; each module's instructions point out exactly what you need.

2.1. Alpaca

financial news data source

Follow this document to create a FREE account and generate the API keys you will need within this course.

Note: 1x Alpaca data connection is FREE.
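
Once you have the keys, you can sanity-check them against Alpaca's historical news endpoint. A quick sketch with requests, using Alpaca's documented header names (the environment variable names are our own convention):

 import os

 import requests

 response = requests.get(
     "https://data.alpaca.markets/v1beta1/news",
     headers={
         "Apca-Api-Key-Id": os.environ["ALPACA_API_KEY"],
         "Apca-Api-Secret-Key": os.environ["ALPACA_API_SECRET"],
     },
     params={"limit": 5},  # fetch the 5 latest news articles
 )
 response.raise_for_status()
 for article in response.json()["news"]:
     print(article["headline"])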

2.2. Qdrant

serverless vector DB

Go to Qdrant and create a FREE account.

Then, follow this document to generate the API keys you will need within this course.

Note: We will use only Qdrant's freemium plan.
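
With the URL and API key in hand, connecting and creating a collection takes a few lines with the qdrant-client package. The collection name is an assumption, and the vector size must match your embedding model (384 fits the MiniLM model used in the sketches above):

 import os

 from qdrant_client import QdrantClient
 from qdrant_client.models import Distance, VectorParams

 client = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])

 # Create (or reset) the collection that the streaming pipeline writes into.
 client.recreate_collection(
     collection_name="financial_news",  # assumed collection name
     vectors_config=VectorParams(size=384, distance=Distance.COSINE),
 )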

2.3. Comet ML

serverless ML platform

Go to Comet ML and create a FREE account.

Then, follow this guide to generate an API key and create a new project, both of which you will need within the course.

Note: We will use only Comet ML's freemium plan.
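
The training pipeline talks to Comet ML roughly like this; the project and workspace names below are placeholders for the ones you create:

 import os

 from comet_ml import Experiment

 experiment = Experiment(
     api_key=os.environ["COMET_API_KEY"],
     project_name="hands-on-llms",  # placeholder project name
     workspace=os.environ["COMET_WORKSPACE"],
 )
 experiment.log_parameter("base_model", "tiiuae/falcon-7b")
 experiment.log_metric("train/loss", 1.23, step=1)
 experiment.end()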

2.4. Beam

serverless GPU compute | training & inference pipelines

Go to Beam and create a FREE account.

Then, follow their installation guide to install the Beam CLI and configure it with your Beam credentials.

To read more about Beam, here is an introduction guide.

Note: You get ~10 free compute hours; afterward, you pay only for what you use. If you have an Nvidia GPU with more than 8 GB of VRAM and don't want to deploy the training & inference pipelines, using Beam is optional.

Troubleshooting

When using Poetry, we had issues locating the Beam CLI inside the Poetry virtual environment. To fix this, after installing Beam, create a symlink inside the virtual environment's bin directory that points to the Beam binary, as follows:

 # Point to the module you are working on, e.g., modules/training_pipeline
 export COURSE_MODULE_PATH=<your-course-module-path>
 cd $COURSE_MODULE_PATH

 # Resolve the root directory of the module's Poetry virtual environment
 export POETRY_ENV_PATH=$(dirname $(dirname $(poetry run which python)))

 # Expose the globally installed Beam CLI inside the virtual environment
 ln -s /usr/local/bin/beam ${POETRY_ENV_PATH}/bin/beam

2.5. AWS

cloud compute | feature pipeline

Go to AWS, create an account, and generate a pair of credentials.

Then, download and install the AWS CLI v2.11.22 and configure it with your credentials.

Note: You pay only for what you use. You will deploy only a t2.small EC2 VM, which costs ~$0.023/hour. If you don't want to deploy the feature pipeline, using AWS is optional.

3. Install & Usage

Every module has its own dependencies and scripts. In a production setup, each module would have its own repository, but for learning purposes, we put everything in one place.

Thus, check out the README of every module individually to see how to install & use it:

  1. q_and_a_dataset_generator
  2. training_pipeline
  3. streaming_pipeline
  4. inference_pipeline

4. Video lectures

4.0 Intro to the course

4.1 Fine-tuning our open-source LLM (overview)

4.2 Fine-tuning our open-source LLM (Hands-on!)

4.3 Real-time text embedding pipeline

4.4 Inference pipeline

5. Articles

To understand the entire code step-by-step, check out our articles ↓

System design

Feature pipeline

Training pipeline

Inference pipeline

6. License

This course is an open-source project released under the MIT license. Thus, as long as you distribute our LICENSE and acknowledge our work, you can safely clone or fork this project and use it as a source of inspiration for whatever you want (e.g., university projects, college degree projects, etc.).

7. Contributors & Teachers

Pau Labarta Bajo | Senior ML & MLOps Engineer
Main teacher. The guy from the video lessons.

LinkedIn
Twitter/X
Youtube
Real-World ML Newsletter
Real-World ML Site

Alexandru Razvant | Senior ML Engineer
Second chef. The engineer behind the scenes.

LinkedIn
Neura Leaps

Paul Iusztin | Senior ML & MLOps Engineer
Main chef. The guy who randomly pops into the video lessons.

LinkedIn
Twitter/X
Decoding ML Newsletter
Personal Site | ML & MLOps Hub
