
index1 = lcnn_line_map[idx, 0] IndexError: index 0 is out of bounds for axis 1 with size 0 #46

aquexexl opened this issue Mar 13, 2023 · 3 comments

@aquexexl
Hi,
I've created my own dataset in the wireframe format and preprocessed it with wireframe.py from the L-CNN GitHub repository. There were no errors during preprocessing, but when I tried to train with HAWPv3 I got this error:

Traceback (most recent call last):
File "/opt/conda/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/project_ghent/HAWP/hawp/ssl/train.py", line 233, in
main()
File "/project_ghent/HAWP/hawp/ssl/train.py", line 145, in main
train(model, train_loader, optimizer, scheduler, loss_reducer, arguments, output_dir)
File "/project_ghent/HAWP/hawp/ssl/train.py", line 170, in train
for it, data in enumerate(train_loader):
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/opt/conda/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/project_ghent/HAWP/hawp/ssl/datasets/wireframe_dataset.py", line 1341, in getitem
data = self.train_preprocessing(data)
File "/project_ghent/HAWP/hawp/ssl/datasets/wireframe_dataset.py", line 871, in train_preprocessing
line_map_neg = self.convert_line_map(line_neg, num_junctions)
File "/project_ghent/HAWP/hawp/ssl/datasets/wireframe_dataset.py", line 667, in convert_line_map
index1 = lcnn_line_map[idx, 0]
IndexError: index 0 is out of bounds for axis 1 with size 0

I saw that Lneg was empty (maybe because some training images contain only one line), but I don't know whether the error comes from this variable. What do you suggest?

Thanks in advance!

@aquexexl (Author)

The problem comes from the shape of Lneg, but I don't understand why.
Example shapes from my training data:
lneg (120, 2, 3)
Lneg (120, 0, 3)
lpos (22, 2, 3)
Lpos (22, 2)

The code from wireframe.py (with my debug prints added):
for i0, i1 in combinations(range(len(junc)), 2):
    if frozenset([i0, i1]) not in lineset:
        v0, v1 = junc[i0], junc[i1]
        vint0, vint1 = to_int(v0[:2] / 2), to_int(v1[:2] / 2)
        rr, cc, value = skimage.draw.line_aa(*vint0, *vint1)
        lneg.append([v0, v1, i0, i1, np.average(np.minimum(value, llmap[rr, cc]))])
assert len(lneg) != 0
lneg.sort(key=lambda l: -l[-1])
lneg = np.array([l[:2] for l in lneg[:2000]], dtype=np.float32)
Lneg = np.array([l[2:4] for l in lneg][:4000], dtype=int)
print('lneg', np.shape(lneg))
print('Lneg', np.shape(Lneg))
junc = np.array(junc, dtype=np.float32)
Lpos = np.array(lnid, dtype=int)
print('lpos', np.shape(lpos))
print('Lpos', np.shape(Lpos))
lpos = np.array(lpos, dtype=np.float32)
Do you have a solution?

@cherubicXN (Owner)

Hi @aquexexl, the data format of HAWPv3 is different from that of HAWPv2 because I wanted to avoid any mistakes from using human-labeled annotations in self-supervised learning. So if you want to use the HAWPv3 model in the v2 pipeline, a better approach is to copy the model files into the v2 directory.

As for the negative examples from L-CNN: they were only used in our HAWPv1 (the CVPR version). HAWPv2 and HAWPv3 do not use them, so you can safely set an empty or dummy negative example just to satisfy the data format, for example along the lines of the sketch below.
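An illustrative sketch only (assuming the lneg/Lneg keys written by L-CNN's wireframe.py and the same shape convention as the positive arrays printed above):

import numpy as np

# Hypothetical empty placeholders for the negative examples, mirroring the
# shapes of the positive arrays (lpos: (N, 2, 3) endpoints, Lpos: (N, 2)
# junction-index pairs) but with zero entries, so nothing downstream can
# index into them.
lneg = np.zeros((0, 2, 3), dtype=np.float32)  # no negative line segments
Lneg = np.zeros((0, 2), dtype=int)            # no negative index pairs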

Sorry for the late reply; I am going through a difficult time in my life. I hope my reply provides some useful information.

@aquexexl (Author)

Thank you for your answer.
I solved the problem with Lneg, but your v3 training code expects a line_map key in the annotations, while I only have line_map_pos and line_map_neg. Should I just use line_map_pos as line_map?
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/project_ghent/HAWP/hawp/ssl/train.py", line 232, in
main()
File "/project_ghent/HAWP/hawp/ssl/train.py", line 145, in main
train(model, train_loader, optimizer, scheduler, loss_reducer, arguments, output_dir)
File "/project_ghent/HAWP/hawp/ssl/train.py", line 178, in train
loss_dict, extra_info = model(images,annotations)
File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/project_ghent/HAWP/hawp/ssl/models/detector.py", line 358, in forward
return self.forward_train(images, annotations=annotations)
File "/project_ghent/HAWP/hawp/ssl/models/detector.py", line 365, in forward_train
targets , metas = self.hafm_encoder(annotations)
File "/project_ghent/HAWP/hawp/ssl/models/hafm.py", line 24, in call
edge_indices = annotations['line_map'][batch_id].triu().nonzero()
KeyError: 'line_map'
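For reference, hafm.py reads annotations['line_map'][batch_id].triu().nonzero(), so I assume line_map is expected to be a dense num_junctions x num_junctions adjacency matrix. A minimal sketch of how such a matrix could be built from the positive index pairs (the helper name and arguments are mine, not from the repo):

import numpy as np
import torch

def build_line_map(Lpos, num_junctions):
    # Sketch: turn (M, 2) junction-index pairs into a symmetric
    # (num_junctions, num_junctions) matrix with 1s marking line segments,
    # which is the shape the triu().nonzero() call in hafm.py suggests.
    line_map = torch.zeros((num_junctions, num_junctions), dtype=torch.float32)
    for i, j in np.asarray(Lpos, dtype=int):
        line_map[i, j] = 1.0
        line_map[j, i] = 1.0
    return line_map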
