
Natural Portuguese Language Benchmark (Napolab)

Napolab is your go-to collection of Portuguese datasets with the following characteristics:

  • 🌿 Natural: As much as possible, datasets consist of natural Portuguese text or professionally translated text.
  • 💯 Reliable: Metrics correlate reliably with human judgments (accuracy, F1 score, Pearson correlation, etc.).
  • 🌐 Public: Every dataset is available through a public link.
  • 👩‍🔧 Human: Expert human annotations only. No automatic or unreliable annotations.
  • 🎓 General: No domain-specific knowledge or advanced preparation is needed to solve dataset tasks.

Napolab currently includes the following datasets:

  • assin
  • assin2
  • rerelem
  • hatebr
  • reli-sa
  • faquad-nli
  • porsimplessent

💡 Contribute: We're open to expanding Napolab! Suggest additions in the issues. Plus, if you've evaluated models on this benchmark, we'd love to hear about it, especially results from recent LLMs. For more information, read our CONTRIBUTING.md.

🌍 For broader accessibility, all datasets have translations in Catalan, English, Galician and Spanish using the facebook/nllb-200-1.3B model via Easy-Translate.

Quick Start 🚀

The simplest way to use the Napolab benchmark is to run the commands:

pip install napolab
python -m napolab

This fetches every dataset from the Hugging Face Hub and saves it as a CSV file in your current working directory.

To load the benchmark in the Hugging Face datasets library format:

from napolab import load_napolab_benchmark

# Download the benchmark from the Hugging Face Hub, including the
# Catalan, English, Galician, and Spanish translations
napolab = load_napolab_benchmark(include_translations=True)

benchmark = napolab["datasets"]                 # original Portuguese datasets
translated_benchmark = napolab["translations"]  # translated versions

Napolab is structured similarly to benchmarks like GLUE and PLUE. Every dataset has either three fields ('sentence1', 'sentence2', 'label') or two ('sentence1', 'label'). To evaluate LLMs on Napolab, you only need to design prompts that elicit label predictions from the model, as sketched below.
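For illustration, here is a minimal sketch of that workflow, using the benchmark object loaded above. The dataset key ("assin2-rte"), the "test" split name, the prompt wording, and the query_llm function are hypothetical placeholders, not part of the napolab API:

def build_prompt(example):
    # Two-sentence tasks (e.g. entailment) vs. single-sentence tasks
    if "sentence2" in example:
        return (f"Sentence 1: {example['sentence1']}\n"
                f"Sentence 2: {example['sentence2']}\n"
                "Label:")
    return f"Sentence: {example['sentence1']}\nLabel:"

correct, total = 0, 0
for example in benchmark["assin2-rte"]["test"]:    # hypothetical key and split
    prediction = query_llm(build_prompt(example))  # query_llm: your model call
    correct += int(prediction == example["label"])
    total += 1
print(f"Accuracy: {correct / total:.3f}")

Because every Napolab dataset shares the same field layout, the same loop works across the whole benchmark; only the prompt wording needs to change per task.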

Leaderboard

The Open PT LLM Leaderboard incorporates datasets from Napolab.

🤖 Models

We've made several models fine-tuned on this benchmark available on the Hugging Face Hub:

Dataset          mDeBERTa v3   BERT Large   BERT Base
ASSIN 2 - STS    Link          Link         Link
ASSIN 2 - RTE    Link          Link         Link
ASSIN - STS      Link          Link         Link
ASSIN - RTE      Link          Link         Link
HateBR           Link          Link         Link
FaQUaD-NLI       Link          Link         Link
PorSimplesSent   Link          Link         Link

For model fine-tuning details and benchmark results, visit EVALUATION.md.

🎮 Demos

Experience our fine-tuned models on Hugging Face Spaces.

Citation

Our research is ongoing, and we are currently describing our experiments in a paper to be published soon. In the meantime, if you would like to cite our work or models before the paper appears, please use the following BibTeX citation for this repository:

@software{Chaves_Rodrigues_napolab_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Natural Portuguese Language Benchmark (Napolab)}},
url = {https://github.com/ruanchaves/napolab},
version = {1.0.0},
year = {2023}
}

Disclaimer

The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the HateBR dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of SINCH.