
Add scores_precision parameter to FeatureAttributionOutput.save #202

Open
gsarti opened this issue Jul 12, 2023 · 0 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed)
Milestone: v0.5
gsarti (Member) commented Jul 12, 2023

Description

This issue addresses the high space requirements of large attribution score tensors by adding a scores_precision parameter to the FeatureAttributionOutput.save method.

Proposed by: @g8a9

Motivation

Currently, tensors in FeatureAttributionOutput objects (attributions and step scores) are serialized in float32 precision by default when using out.save(). While their representation can be compressed with ndarray_compact=True, the resulting JSON files are still quite large. Using more parsimonious data types would reduce the size of saved objects and facilitate systematic analyses over large amounts of data.
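
For reference, a minimal usage sketch of the current behavior alongside the proposed parameter (the model and attribution method are just examples, and the `scores_precision` argument does not exist yet):

```python
import inseq

# Example model/method, chosen only for illustration.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "integrated_gradients")
out = model.attribute("Hello world, how are you?")

# Current behavior: attribution and step score tensors are serialized as float32.
out.save("attribution_fp32.json", ndarray_compact=True)

# Proposed behavior: opt into a lower-precision representation to shrink the file.
out.save("attribution_fp8.json", ndarray_compact=True, scores_precision="float8")
```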

Proposal

float32 should remain the default precision, since saving should not cause any information loss unless the user explicitly opts in.

float16 and float8 should also be supported, in both signed and unsigned variants, since leveraging the strictly positive nature of some score types allows preserving greater precision while halving space requirements. The unsigned variant should be used by default whenever a tensor contains no negative scores.

float16 can be obtained simply by casting tensors to the native torch.float16 data type, which preserves roughly 4 decimal places for scores normalized in the [-1, 1] interval (8 for unsigned tensors); float8 corresponds to roughly 2 or 4 decimal places. However, float8 is not natively supported in PyTorch, so tensors should instead be converted to torch.int8 or torch.uint8 and cast back to floats when the object is reloaded.
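
A minimal sketch of the 8-bit conversion described above, assuming a simple affine (zero-point) scheme; `quantize_scores` and `dequantize_scores` are illustrative helpers, not part of the library:

```python
import torch

def quantize_scores(t: torch.Tensor) -> tuple[torch.Tensor, float, int]:
    """Quantize a float32 score tensor to 8-bit integers (illustrative helper).

    torch.uint8 is used when all scores are non-negative, otherwise torch.int8,
    so that strictly positive score types get the full 8-bit range.
    """
    unsigned = bool((t >= 0).all())
    qmin, qmax = (0, 255) if unsigned else (-128, 127)
    scale = (t.max() - t.min()).item() / (qmax - qmin) or 1.0  # avoid a zero scale
    zero_point = round(qmin - t.min().item() / scale)
    q = torch.clamp(torch.round(t / scale + zero_point), qmin, qmax)
    return q.to(torch.uint8 if unsigned else torch.int8), scale, zero_point

def dequantize_scores(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Recover an approximate float32 tensor when reloading the saved object."""
    return (q.to(torch.float32) - zero_point) * scale

scores = torch.rand(4, 5)  # e.g. a strictly positive step score tensor
q, scale, zp = quantize_scores(scores)
restored = dequantize_scores(q, scale, zp)
print(f"max reconstruction error: {(scores - restored).abs().max().item():.4f}")
```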

@gsarti gsarti added enhancement New feature or request help wanted Extra attention is needed good first issue Good for newcomers labels Jul 12, 2023
@gsarti gsarti added this to the v0.5 milestone Jul 28, 2023
LuukSuurmeijer added a commit to LuukSuurmeijer/inseq that referenced this issue May 10, 2024
inseq-team#202
Adds functionality for saving feature attribution objects and tensors in float16 or float8 format, depending on the `scores_precision` parameter.
Tensors are saved in the Hugging Face safetensors format and quantized using zero-point quantization. Because safetensors are bytes objects, they are base64-encoded to be stored in the output JSON and decoded upon reloading.
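
A rough sketch of the serialization round-trip the commit describes, assuming the `safetensors` library; the helper names are illustrative and not taken from the actual commit:

```python
import base64

import torch
from safetensors.torch import load as st_load, save as st_save

def tensor_to_json_field(t: torch.Tensor) -> str:
    """Serialize a tensor to a base64 string embeddable in the output JSON."""
    raw = st_save({"scores": t})  # safetensors serialization -> bytes
    return base64.b64encode(raw).decode("ascii")

def tensor_from_json_field(s: str) -> torch.Tensor:
    """Decode the base64 string and recover the tensor upon reloading."""
    raw = base64.b64decode(s.encode("ascii"))
    return st_load(raw)["scores"]

t = torch.rand(3, 4).to(torch.float16)  # e.g. an attribution tensor at reduced precision
encoded = tensor_to_json_field(t)
assert torch.equal(tensor_from_json_field(encoded), t)
```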