
new_instances_benchmark problems #1608

Open
AlbinSou opened this issue Feb 28, 2024 · 2 comments
Labels: Feature - Medium Priority (New feature or request, medium priority)

@AlbinSou (Collaborator)

🐛 Describe the bug

One of these problems is an actual bug, which produces the following error when calling classes_in_this_experience on an experience. The same bug already happened for class_incremental_benchmark in combination with benchmark_with_validation_stream (here I am also using benchmark_with_validation_stream).

AttributeError: 'DatasetExperience' object has no attribute 'classes_in_this_experience'

Apart from that, there are some general usability issues with this method that make it hard to use. Ideally, I would like to be able to write something like this:


# Imports assumed for a recent Avalanche version; exact module paths may vary.
from avalanche.benchmarks.datasets import TinyImagenet
from avalanche.benchmarks import (
    benchmark_with_validation_stream,
    new_instances_benchmark,
)

train_dataset = TinyImagenet(
    root=dataset_root, train=True, transform=train_transform
)
test_dataset = TinyImagenet(
    root=dataset_root, train=False, transform=eval_transform
)

benchmark = new_instances_benchmark(
    train_dataset,
    test_dataset,
    balance_experiences=True,
    shuffle=shuffle,
    num_experiences=n_experiences,
    seed=seed,
)

benchmark = benchmark_with_validation_stream(
    benchmark, validation_size=val_size, shuffle=True
)
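
Calling classes_in_this_experience on any experience of the resulting benchmark then fails with the error above. A minimal sketch of the failing access:

# Sketch of the failing access on the benchmark built above.
exp = benchmark.train_stream[0]
print(exp.classes_in_this_experience)
# AttributeError: 'DatasetExperience' object has no attribute 'classes_in_this_experience'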

However, to get this to work, I currently have to do something like the following:

# Additional imports assumed; exact module paths may vary with the Avalanche version.
from avalanche.benchmarks.utils import ClassificationDataset, DataAttribute

train_dataset = TinyImagenet(
    root=dataset_root, train=True, transform=train_transform
)
test_dataset = TinyImagenet(
    root=dataset_root, train=False, transform=eval_transform
)

# Manually attach "targets" and "targets_task_labels" so that the
# experiences expose the class timeline attributes.
train_dataset = ClassificationDataset(
    train_dataset,
    data_attributes=[
        DataAttribute(train_dataset.targets, "targets"),
        DataAttribute(
            [0] * len(train_dataset), "targets_task_labels", use_in_getitem=True
        ),
    ],
)

test_dataset = ClassificationDataset(
    test_dataset,
    data_attributes=[
        DataAttribute(test_dataset.targets, "targets"),
        DataAttribute(
            [0] * len(test_dataset), "targets_task_labels", use_in_getitem=True
        ),
    ],
)

benchmark = new_instances_benchmark(
    train_dataset,
    test_dataset,
    balance_experiences=True,
    shuffle=shuffle,
    num_experiences=n_experiences,
    seed=seed,
)

benchmark = benchmark_with_validation_stream(
    benchmark, validation_size=val_size, shuffle=True
)
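
The ClassificationDataset wrapping is identical for the train and test sets. A hypothetical helper (not part of the Avalanche API, just to illustrate how much of this is boilerplate) would reduce it to one call per dataset:

# Hypothetical helper, not part of the Avalanche API: wraps a
# torchvision-style dataset with the attributes the benchmark expects.
def as_classification_dataset(dataset, task_label=0):
    return ClassificationDataset(
        dataset,
        data_attributes=[
            DataAttribute(dataset.targets, "targets"),
            DataAttribute(
                [task_label] * len(dataset), "targets_task_labels", use_in_getitem=True
            ),
        ],
    )

train_dataset = as_classification_dataset(train_dataset)
test_dataset = as_classification_dataset(test_dataset)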

@AlbinSou added the bug (Something isn't working) label on Feb 28, 2024
@AntonioCarta (Collaborator)

You can add the class timeline after creating the benchmark. new_instances_benchmark needs to work for methods that don't have class labels, and this is why it doesn't add them.
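
A minimal sketch of that, assuming the with_classes_timeline helper from the new benchmarks API (the exact import path may differ across Avalanche versions):

# Sketch assuming the with_classes_timeline helper of the new
# benchmarks API; the exact import path may differ across versions.
from avalanche.benchmarks.scenarios.supervised import with_classes_timeline

benchmark = with_classes_timeline(benchmark)
exp = benchmark.train_stream[0]
print(exp.classes_in_this_experience)  # available after adding the timeline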

I agree about the verbosity of the dataset API, and we can work on that. However, that is a separate issue.

Did you check the FZTH notebook? It describes the updated API. If you have some doubts, it would help to expand it to clarify the API.

@AlbinSou (Collaborator, Author)

OK, thanks for the clarifications. No, I didn't know it existed. I will check it out.

@AlbinSou added the Feature - Medium Priority (New feature or request, medium priority) label and removed the bug (Something isn't working) label on Feb 29, 2024