PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits


This repo contains the official code for running LLM persona experiments and subsequent analyses in the PersonaLLM paper.

Simulate LLM personas

We first create 10 personas for each of the 32 personality types, i.e., every high/low combination of the Big Five traits (see the illustrative sketch after the commands below).

conda activate audiencenlp
python3.9 run_bfi.py --model "GPT-3.5-turbo-0613"
python3.9 run_bfi.py --model "GPT-4-0613"
python3.9 run_bfi.py --model "llama-2"

Generate stories with LLM personas

python3.9 run_creative_writing.py --model "GPT-3.5-turbo-0613"
python3.9 run_creative_writing.py --model "GPT-4-0613"
python3.9 run_creative_writing.py --model "llama-2"

References

If you use this repository in your research, please cite our paper:

@inproceedings{jiang2023personallm,
  title     = {PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits},
  author    = {Hang Jiang and Xiajie Zhang and Xubo Cao and Cynthia Breazeal and Deb Roy and Jad Kabbara},
  booktitle = {Findings of the Association for Computational Linguistics: NAACL 2024},
  year      = {2024}
}

Acknowledgement

PersonaLLM is a research program from the MIT Center for Constructive Communication (@mit-ccc), the MIT Media Lab, and Stanford University. We are interested in drawing on the social and cognitive sciences to understand the behaviors of foundation models.
