Tip
Autodistill is a framework for using large, foundation vision models to train smaller, faster models. Autodistill does not officially support distilling LLMs. This repository is an experiment.
This repository contains the code supporting the LLaMA 2 base model for use with Autodistill.
LLaMA 2 is an open-source Large Language Model (LLM) developed by Meta AI. You can use LLaMA 2 to label data for use in fine-tuning an LLM, or to generate data for training another type of language model (e.g. a text classifier).
To use LLaMA 2 with Autodistill, you need to install the following dependency:
pip3 install autodistill-llama-2
from autodistill_llama_2 import LLaMA

# load the LLaMA 2 base model
base_model = LLaMA()

# run inference on a single prompt
result = base_model.predict("What is a cookie?")
print(result)

# label a dataset; data must be in a JSONL file where each
# line has the structure {"data": "[question] [answer]"}
base_model.label("./data.jsonl")
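Before calling `label()`, you need a JSONL file in the expected format. The sketch below, which is an illustration rather than part of the library, writes a `data.jsonl` file using only the Python standard library; the example prompts and the exact `"data"` key format follow the comment above and are assumptions about what `label()` expects.

```python
import json

# Hypothetical example data: each entry becomes one JSON object per line,
# with a single "data" key holding "[question] [answer]" text.
examples = [
    {"data": "What is a cookie? A cookie is a small piece of data a website stores in your browser."},
    {"data": "What is a cache? A cache is temporary storage used to serve repeated requests faster."},
]

# write one JSON object per line (the JSONL convention)
with open("data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting `data.jsonl` can then be passed to `base_model.label("./data.jsonl")`.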
The source code for this project is licensed under an MIT license. To use LLaMA 2, you must agree to Meta AI's LLaMA 2 terms and conditions.
This module is currently experimental and not ready for external contributions. When this changes, we will update this section.