Training Experiences

The original SELF-ALIGN process involves four distinct stages. In our paper, we provide a detailed description of each of these stages.

Update (Dromedary-2)

The new SELF-ALIGN process in Dromedary-2 involves only two stages. We replace the first stage with diverse user prompts from ShareGPT, Dolly-15k, OpenAssistant, and OpenOrca, and create an improved prompt with one additional exemplar that encourages the LLM AI assistant to generate responses in a general-specific-general style, i.e., starting with an overview, delving into specifics, and wrapping up with a summary. Specifically, we directly take the one-shot exemplar from FastChat as this additional exemplar.

By utilizing the new principle-driven self-alignment prompt, we found that the LLaMA-2 base model with the improved ICL exemplars achieves enhanced performance even without the verbose cloning phase or inference-time few-shot examples. Therefore, we also drop the last stage of the original SELF-ALIGN process.
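For illustration, here is a minimal sketch of how the additional exemplar might be appended to the in-context prompt. The exemplar text and all names below are placeholders; the actual exemplar is the one-shot example shipped with FastChat.

# Sketch only: the real exemplar is FastChat's one-shot example, and these
# names are hypothetical placeholders.
SELF_ALIGN_PROMPT = "..."  # the principle-driven prompt with its original exemplars

GENERAL_SPECIFIC_GENERAL_EXEMPLAR = (
    "User: <question>\n"
    "Assistant: <one-sentence overview>\n"
    "1. <specific point>\n"
    "2. <specific point>\n"
    "<one-sentence summary>\n"
)

def build_icl_prompt(user_prompt: str) -> str:
    # Principles and exemplars first, then the new query to be answered.
    return (SELF_ALIGN_PROMPT + "\n" + GENERAL_SPECIFIC_GENERAL_EXEMPLAR
            + "\nUser: " + user_prompt + "\nAssistant:")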

Prerequisites

For efficiency reasons, we utilize the model parallel scheme from llama when generating synthetic instructions and self-aligned responses. To prepare the sharded model checkpoints of LLaMA and Dromedary on your own machine/cluster, please refer to our inference guide.

Stage 1: Topic-Guided Red-Teaming Self-Instruct (no longer needed in Dromedary-2)

The first stage, Topic-Guided Red-Teaming Self-Instruct, employs the language model itself to generate synthetic instructions, using a topic-guided red-teaming approach to enhance their diversity. This stage is no longer used in Dromedary-2. Please check the dromedary_v1 branch for more details.
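For reference, a minimal sketch of the topic-guided generation loop, assuming a hypothetical generate(prompt) helper that samples from the base LLM; the actual implementation lives in the dromedary_v1 branch.

def topic_guided_self_instruct(generate, question_types, n_topics=10):
    """Sketch: the model proposes topics per question type, then instructions."""
    instructions = []
    for qtype in question_types:
        # Step 1: the model itself brainstorms topics for this (adversarial) type.
        topics = generate(f"List {n_topics} diverse topics for: {qtype}").splitlines()
        # Step 2: one synthetic instruction per (question type, topic) pair.
        for topic in topics:
            instructions.append(
                generate(f"Write one instruction of type '{qtype}' about: {topic}")
            )
    return instructions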

Update (Dromedary-2)

The new first stage merely subsamples and cleans prompts from ShareGPT and OpenOrca.

Running the code
cd step1_prompt_cleaning

python subsample_openorca_prompts.py \
    --train_data_path "/path/to/your/1M-GPT4-Augmented.parquet (obtained from OpenOrca)" \
    --output_path "/path/to/your/openorca_prompts.json"

python aggregate_sharegpt_prompts.py \
    --data_files=zetavg/ShareGPT-Processed,path/to/sg_90k_part1.json,path/to/sg_90k_part2.json (obtained from ShareGPT_Vicuna_unfiltered) \
    --output_path "/path/to/sharegpt_prompts.json"

python clean_and_merge_prompts.py \
    --sharegpt_prompt_path "/path/to/sharegpt_prompts.json" \
    --openorca_prompt_path "/path/to/openorca_prompts.json" \
    --output_file "/path/to/your/merged_prompts.json"

Stage 2: Principle-Driven Self-Alignment

The second stage, Principle-Driven Self-Alignment, establishes a set of principles that the AI model must adhere to and provides in-context learning demonstrations for constructing helpful, ethical, and reliable responses. The prompt we used can be found here.
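Conceptually, each cleaned prompt is wrapped in this principle-driven prompt and completed by the base model. A minimal sketch, assuming a hypothetical generate(prompt) helper (the real run is sharded across nodes via the script below) and a SELF_ALIGN_PROMPT placeholder standing in for the linked prompt:

import json

# Placeholder: the real prompt (principles + ICL exemplars) is linked above.
SELF_ALIGN_PROMPT = "...principles and exemplars...\nUser: {user_prompt}\nAssistant:"

def self_align(generate, prompts_path, output_path):
    with open(prompts_path) as f:
        prompts = json.load(f)
    with open(output_path, "w") as out:
        for record in prompts:
            full_prompt = SELF_ALIGN_PROMPT.format(user_prompt=record["prompt"])
            response = generate(full_prompt)  # base model follows the principles in context
            out.write(json.dumps({"instruction": record["prompt"],
                                  "output": response}) + "\n")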

Running the code
cd step2_principle_driven_self_alignment

salloc --nodes 64 --time 6:00:00 --gres=gpu:32g:6 srun bash scripts/self_align_generate_70b_base.sh

python merge_and_filter_self_align_with_dummy.py \
    --data_file_pattern "/path/to/your/llama2_70b_self_align_32shards_*.jsonl" \
    --dummy_data_file "../dummy_data/vicuna_dummy_data.json" \
    --output_file "/path/to/your/llama2_70b_self_align_merged.json"

Stage 3: Principle Engraving

The third stage, Principle Engraving, fine-tunes the base language model on the self-aligned responses while pruning the principles and demonstrations from the prompts, empowering the model to directly generate appropriate responses.
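Put concretely, each fine-tuning example keeps only the bare user prompt and the self-aligned response, so the model learns to answer without ever seeing the principles or exemplars at inference time. A minimal sketch, with assumed field names:

def to_sft_example(record):
    # The principles and ICL exemplars are deliberately absent here:
    # the model must learn to produce the aligned response directly.
    return {
        "prompt": record["instruction"],   # bare user prompt
        "completion": record["output"],    # self-aligned response
    }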

Running the code
cd step3_principle_engraving

salloc --nodes 1 --time 24:00:00 --gres=gpu:80g:8 srun bash scripts/finetune_dromedary2_70b_sft.sh

Stage 4: Verbose Cloning (no longer needed in Dromedary-2)

Finally, the fourth stage, Verbose Cloning, serves as a complementary step to address challenges arising from overly brief or indirect responses, refining the model to produce detailed and comprehensive answers to user queries. The prompt we used can be found here. This stage is no longer used in Dromedary-2. Please check the dromedary_v1 branch for more details.
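For reference, a minimal sketch of the idea, assuming a hypothetical generate(prompt) helper over the principle-engraved model and a VERBOSE_PROMPT placeholder that asks for maximally detailed answers; the verbose outputs then serve as targets for one more round of fine-tuning.

VERBOSE_PROMPT = "..."  # placeholder: instructs the model to answer as thoroughly as possible

def verbose_cloning_targets(generate, prompts):
    # The engraved model re-answers each prompt verbosely; the resulting
    # (prompt, verbose response) pairs are used for a final fine-tune.
    return [
        {"prompt": p,
         "completion": generate(VERBOSE_PROMPT + "\n\nUser: " + p + "\nAssistant:")}
        for p in prompts
    ]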