This project is a collection of lab exercises demonstrating how to use large language models (LLMs) for generative AI tasks such as text summarization, parameter-efficient fine-tuning (PEFT), reinforcement learning (RL), and reinforcement learning from human feedback (RLHF).

Lab-work-of-Generative-AI-with-Large-Language-Models

The project is based on the course Generative AI with Large Language Models, offered on Coursera in collaboration with AWS and DeepLearning.AI. It provides hands-on experience with training and fine-tuning LLMs using state-of-the-art tools and techniques, and with evaluating and deploying them in real-world applications. The project is organized into three folders:

  • Week 1: Lab work covering generative AI use cases, the project lifecycle, model pre-training, and scaling laws. Contains Lab_1_summarize_dialogue.ipynb.
  • Week 2: Lab work covering fine-tuning and evaluating LLMs using prompt datasets and parameter-efficient fine-tuning (PEFT). Contains Lab_2_fine_tune_generative_ai_model.ipynb.
  • Week 3: Lab work covering applying LLMs to specific tasks such as natural language understanding (NLU), natural language generation (NLG), reinforcement learning (RL), and reinforcement learning from human feedback (RLHF). Contains Lab_3_fine_tune_model_to_detoxify_summaries.ipynb.
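The Week 1 lab explores summarizing dialogues with zero-shot and few-shot prompting. As a minimal sketch of how such prompts are assembled (the prompt wording and helper name are illustrative assumptions, not the lab's exact code):

```python
def build_summary_prompt(dialogue, examples=None):
    """Build a zero- or few-shot prompt asking an LLM to summarize a dialogue.

    `examples` is an optional list of (dialogue, summary) pairs used as
    in-context demonstrations (few-shot prompting).
    """
    parts = []
    for demo_dialogue, demo_summary in (examples or []):
        parts.append(f"Dialogue:\n{demo_dialogue}\n\nSummary:\n{demo_summary}\n")
    # The target dialogue ends with an empty "Summary:" slot for the model to fill.
    parts.append(f"Dialogue:\n{dialogue}\n\nSummary:\n")
    return "\n".join(parts)


# Zero-shot: the model sees only the task framing and the target dialogue.
zero_shot = build_summary_prompt("A: Did you ship the build?\nB: Yes, last night.")

# One-shot: prepend a worked example so the model can imitate the format.
one_shot = build_summary_prompt(
    "A: Did you ship the build?\nB: Yes, last night.",
    examples=[("A: Lunch?\nB: Sure, at noon.", "They agree to meet for lunch at noon.")],
)
```

The lab compares how the generated summary improves as such in-context examples are added.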
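The Week 2 lab's key idea is that PEFT methods such as LoRA train only small low-rank adapter matrices while the base weights stay frozen. A back-of-the-envelope sketch of why this shrinks the trainable parameter count (the dimensions below are illustrative assumptions, not the lab's actual model configuration):

```python
def lora_trainable_params(d_model, rank, n_adapted_matrices):
    """Trainable parameters when each adapted d_model x d_model weight W is
    frozen and replaced by W + B @ A, where A (rank x d_model) and
    B (d_model x rank) are the only trainable matrices."""
    per_matrix = 2 * d_model * rank  # A and B together
    return per_matrix * n_adapted_matrices


# Illustrative numbers: 12 layers, adapting the query and value projections.
d_model, rank, layers = 512, 8, 12
full = d_model * d_model * 2 * layers            # fully fine-tuning those matrices
lora = lora_trainable_params(d_model, rank, 2 * layers)
print(f"full fine-tune: {full:,} params, LoRA: {lora:,} params "
      f"({100 * lora / full:.1f}% of full)")
```

With these numbers LoRA trains roughly 3% of the parameters that full fine-tuning of the same matrices would, which is what makes the lab feasible on modest hardware.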
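In the Week 3 lab, detoxification follows the general RLHF recipe: a toxicity classifier acts as the reward model, and the policy is optimized (e.g. with PPO) to maximize that reward. A toy sketch of one plausible reward mapping, turning two classifier logits into a scalar reward (the two-logit setup and function names are assumptions for illustration):

```python
import math


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


def detox_reward(not_toxic_logit, toxic_logit):
    """Reward for RL fine-tuning: the probability the classifier assigns
    to 'not toxic'. Higher reward steers the policy toward less toxic text."""
    return softmax([not_toxic_logit, toxic_logit])[0]


# A benign completion earns a reward near 1, a toxic one near 0 (toy logits).
print(detox_reward(3.0, -2.0))  # high reward
print(detox_reward(-2.0, 3.0))  # low reward
```

During RL training this reward is typically combined with a KL penalty against the original model so the policy does not drift into degenerate text just to please the classifier.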
