Autodistill LLaMA 2 Module

Tip

Autodistill is a framework for using large, foundation vision models to train smaller, faster models. Autodistill does not officially support distilling LLMs. This repository is an experiment.

This repository contains the code supporting the LLaMA 2 base model for use with Autodistill.

LLaMA 2 is an open-source Large Language Model (LLM) developed by Meta AI. You can use LLaMA 2 to label data for fine-tuning an LLM, or to generate data for training another type of language model (e.g., a text classifier).

Installation

To use LLaMA 2 with Autodistill, you need to install the following dependency:

```bash
pip3 install autodistill-llama-2
```

Quickstart

```python
from autodistill_llama_2 import LLaMA

base_model = LLaMA()

result = base_model.predict("What is a cookie?")
print(result)

# data must be in a JSONL file where each line has the structure
# {"data": "[question] [answer]"}
base_model.label("./data.jsonl")
```
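As a minimal sketch of preparing the labeling input, the snippet below writes a JSONL file in the `{"data": "[question] [answer]"}` structure described above. The example question/answer pairs are placeholders, not data from this repository:

```python
import json

# Hypothetical prompt/response pairs; each JSONL line is an object
# of the form {"data": "[question] [answer]"}.
pairs = [
    ("What is a cookie?", "A cookie is a small, typically sweet, baked treat."),
    ("What is a scone?", "A scone is a lightly sweetened baked good."),
]

with open("data.jsonl", "w") as f:
    for question, answer in pairs:
        f.write(json.dumps({"data": f"{question} {answer}"}) + "\n")
```

The resulting `data.jsonl` can then be passed to `base_model.label("./data.jsonl")` as shown in the quickstart.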

License

The source code for this project is licensed under an MIT license. To use LLaMA, you must agree to Meta AI's LLaMA 2 terms and conditions.

🏆 Contributing

This module is currently experimental and not ready for external contributions. When this changes, we will update this section.
