WIP: Documentation & Typos #1629
base: master
Conversation
Thank you! Ping me once you are ready for a review.
@AntonioCarta Thanks for the quick reply! I have a question not directly related to documentation: in a codebase using a past Avalanche version there is this import

What's the reason that only

```python
as_classification_dataset(
    dataset,
    transform_groups=init_transform_groups(
        target_transform=lambda x: x,  # or whatever
        ...
    ),
)
```

This could also be achieved via a class method (the constructor seems a bit complex already), e.g.

```python
as_classification_dataset(
    dataset,
    transform_groups=TransformGroups.create(
        target_transform=lambda x: x,  # or whatever
        ...
    ),
)
```

Kind regards 🙂

PS: Providing a
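For illustration, the factory classmethod suggested above could look roughly like this. This is a hypothetical sketch: `TransformGroups.create` and its arguments are assumptions made for the example, not the actual Avalanche API.

```python
# Hypothetical sketch of the suggested factory classmethod.
# `TransformGroups.create` and its signature are assumptions for
# illustration; the real Avalanche constructor differs.
class TransformGroups:
    def __init__(self, groups, current_group="train"):
        # the "complex" constructor: mapping of group name -> callables
        self.groups = groups
        self.current_group = current_group

    @classmethod
    def create(cls, transform=None, target_transform=None):
        # convenience path: build the full mapping from two callables,
        # defaulting each missing one to the identity function
        identity = lambda x: x
        group = (transform or identity, target_transform or identity)
        return cls({"train": group, "eval": group})


tg = TransformGroups.create(target_transform=lambda y: y + 1)
x_tf, y_tf = tg.groups["train"]
print(x_tf("sample"), y_tf(1))  # -> sample 2
```

The point of the pattern is that the convenience path lives next to the class without complicating the constructor itself.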
Thank you for highlighting these issues. The methods changed over time because at first
Therefore, the goal of
I think it makes sense to add the arguments to
I like this less. Classification transform groups are different from other transformations, and TransformGroups tries to be agnostic to these differences (as much as possible). Also, it would not solve the verbosity problem.
This is probably a mistake from the earlier API version.
Pull Request Test Coverage Report for Build 8382229381
💛 - Coveralls
Btw, if you have other comments about usability or pain points in the API/documentation, do let me know. Fixing these issues takes a lot of time, so we can't do everything quickly, but we try to keep track of them and improve over time.
…et, as_taskaware_classification_dataset
Hi @jneuendorf I see that the PR is still a draft, but if there are no blocking issues I think we could merge it. Doc improvements can easily be split into multiple PRs.
Hi, your GitHub comment email got lost between the other notification emails from GitHub. So feel free to review/cherry-pick the changes and decide which ones are ok for you. 😉
I currently have a problem with training a benchmark from a custom dataset: my custom dataset is a PyTorch

```python
class ClassificationMixin(torch.utils.data.Dataset, ISupportedClassificationDataset):
    @cached_property
    def targets(self):
        return []  # whatever...


class TensorDataset(torch.utils.data.TensorDataset, ClassificationMixin):
    ...
```

This way, it (and its modified splits) is compatible with

```python
benchmark = class_incremental_benchmark(
    {
        'train': as_classification_dataset(train_dataset),
        'test': as_classification_dataset(test_dataset),
    },
    num_experiences=1,
)
```

I get the following error:
For me, this is a hint that there is some protocol mismatch:
?
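As a side note, this kind of structural ("duck-typed") mismatch can be sanity-checked with `typing.Protocol`. A minimal sketch, assuming the requirement is a `targets` attribute: the `SupportsTargets` protocol below only imitates `ISupportedClassificationDataset`, whose real Avalanche definition may require more.

```python
from functools import cached_property
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsTargets(Protocol):
    # Minimal stand-in for ISupportedClassificationDataset: the only
    # structural requirement checked here is a `targets` attribute.
    @property
    def targets(self): ...


class ClassificationMixin:
    @cached_property
    def targets(self):
        return []  # whatever...


class PlainDataset:
    # no `targets` at all, so the structural check fails
    pass


print(isinstance(ClassificationMixin(), SupportsTargets))  # True
print(isinstance(PlainDataset(), SupportsTargets))         # False
```

An `isinstance` check against a `runtime_checkable` protocol only verifies attribute presence, not types or signatures, so it can locate which attribute is missing but not deeper mismatches.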
The PR is still in progress and not "clean for review"; it's just some things I stumbled across. I opened it already so that you know there is some progress on certain aspects and others don't have to worry about them 😉
Relates to #886