Errors when using the Naive strategy on a generated benchmark #1622

Open
YuanXun2024 opened this issue Mar 16, 2024 · 2 comments

Comments

@YuanXun2024

I generated a benchmark with "class_incremental_benchmark", but the 'Naive' strategy cannot be applied to it. Can you please give me some advice?

Code as follows:

# Imports reconstructed from the calls below; exact module paths
# may vary across Avalanche versions.
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torchvision.datasets import MNIST

from avalanche.benchmarks.datasets import default_dataset_location
from avalanche.benchmarks.utils import as_classification_dataset
from avalanche.benchmarks.scenarios.supervised import class_incremental_benchmark
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

datadir = default_dataset_location('mnist')
train_MNIST = as_classification_dataset(MNIST(datadir, train=True, download=True))
test_MNIST = as_classification_dataset(MNIST(datadir, train=False, download=True))
scenario = class_incremental_benchmark(
    {'train': train_MNIST, 'test': test_MNIST}, num_experiences=5
)

n_classes = 10
model = SimpleMLP(num_classes=n_classes)

cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
)

print('Starting experiment...')
results = []
for experience in scenario.train_stream:
    print("Start of experience: ", experience.current_experience)
    print("Current Classes: ", experience.classes_in_this_experience)

    # train returns a dictionary which contains all the metric values
    print(experience)
    res = cl_strategy.train(experience)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    # eval also returns a dictionary which contains all the metric values
    results.append(cl_strategy.eval(scenario.test_stream))

Error as follows:

Traceback (most recent call last):
  File "main.py", line 71, in <module>
    res = cl_strategy.train(experience)
  File "/home/yuanxun/CL/avalanche/avalanche/training/templates/base_sgd.py", line 211, in train
    super().train(experiences, eval_streams, **kwargs)
  File "/home/yuanxun/CL/avalanche/avalanche/training/templates/base.py", line 162, in train
    self._before_training_exp(**kwargs)
  File "/home/yuanxun/CL/avalanche/avalanche/training/templates/base_sgd.py", line 291, in _before_training_exp
    self.make_train_dataloader(**kwargs)
  File "/home/yuanxun/CL/avalanche/avalanche/training/templates/base_sgd.py", line 456, in make_train_dataloader
    self.dataloader = TaskBalancedDataLoader(
  File "/home/yuanxun/CL/avalanche/avalanche/benchmarks/utils/data_loader.py", line 404, in __init__
    task_labels_field = getattr(data, "targets_task_labels")
AttributeError: 'ClassificationDataset' object has no attribute 'targets_task_labels'
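
For context: the failure comes from TaskBalancedDataLoader, which groups minibatches by task label and therefore expects the experience dataset to expose a targets_task_labels field. Datasets wrapped with as_classification_dataset carry targets but no task labels, which a quick check confirms (a minimal sketch, assuming the scenario built above):

exp = scenario.train_stream[0]
# Datasets built via as_classification_dataset expose `targets` but not
# `targets_task_labels`, so this prints False and TaskBalancedDataLoader
# fails with the AttributeError shown in the traceback.
print(hasattr(exp.dataset, "targets_task_labels"))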

@YuanXun2024 (Author)

Same error with "new_instances_benchmark". I guess the Naive strategy only works with "nc_benchmark". Am I correct?

# Same imports as in the snippet above, plus:
from torchvision import transforms
from avalanche.benchmarks.scenarios.supervised import new_instances_benchmark

datadir = default_dataset_location('mnist')
train_MNIST = MNIST(datadir, train=True, download=True)
test_MNIST = MNIST(datadir, train=False, download=True)

train_transforms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
eval_transforms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_MNIST = as_classification_dataset(
    train_MNIST,
    transform_groups={
        'train': train_transforms,
        'eval': eval_transforms
    }
)
test_MNIST = as_classification_dataset(
    test_MNIST,
    transform_groups={
        'train': train_transforms,
        'eval': eval_transforms
    }
)

scenario_val = new_instances_benchmark(
    train_MNIST,
    test_MNIST,
    balance_experiences=True,
    shuffle=True,
    num_experiences=5,
    seed=0,
)

n_classes = 10
model = SimpleMLP(num_classes=n_classes)

cl_strategy = Naive(
    model, SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(), train_mb_size=500, train_epochs=1, eval_mb_size=100,
)

print('Starting experiment...')
results = []
for experience in scenario_val.train_stream:
    print("Start of experience: ", experience.current_experience)

    res = cl_strategy.train(experience)
    print('Training completed')

    print('Computing accuracy on the whole test set')
    results.append(cl_strategy.eval(scenario_val.test_stream))
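
For comparison, the classic generator API does attach task labels to every experience dataset, so the same training loop runs unchanged with it. A minimal sketch of that route (assuming the classic nc_benchmark generator, which the question above mentions; ni_benchmark is its new-instances counterpart; even with task_labels=False the generator still attaches a constant task id of 0):

from avalanche.benchmarks.generators import nc_benchmark

# Classic-API benchmark over the same MNIST datasets; each experience
# dataset carries targets_task_labels (all 0 here), so
# TaskBalancedDataLoader finds the field and Naive trains normally.
scenario = nc_benchmark(
    train_MNIST, test_MNIST,
    n_experiences=5, task_labels=False, seed=0,
)

for experience in scenario.train_stream:
    cl_strategy.train(experience)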

@wang-xulong

Yes, I came across similar issues. "class_incremental_benchmark" and "task_incremental_benchmark" only appear in the demo examples; when you feed their output into a training strategy, you may hit many errors.
