AttributeError: '_output_randomize' not found #5631

Open
Xue-JW opened this issue Jul 6, 2023 · 1 comment · May be fixed by #5752

Xue-JW commented Jul 6, 2023

Describe the bug:

When I try to execute:

ModelSpeedup(model, torch.rand(8, 3, 512, 512).to(device), masks).speedup_model()

I encountered two errors, shown below: first an AttributeError during the direct-sparsity update, then a TypeError during the indirect-sparsity update.

 File "/home/gavinx/Downloads/YOLOv5-Lite/test_nni_pruning2.py", line 143, in prune
    ModelSpeedup(model, torch.rand(1, 3, 512, 512).to(device), masks).speedup_model()
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 433, in speedup_model
    self.update_direct_sparsity()
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 286, in update_direct_sparsity
    self.node_infos[node].mask_updater.direct_update_process(self, node)
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 261, in direct_update_process
    del model_speedup.node_infos[to_delete]._output_randomize
AttributeError: _output_randomize
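
This AttributeError is just Python's behavior when del targets an attribute name the object does not define. A minimal standalone illustration (the class below is a hypothetical stand-in, assuming NNI's node-info object names the field output_randomize without the leading underscore):

class NodeInfo:
    """Hypothetical stand-in for NNI's node-info object."""
    def __init__(self):
        self.output_randomize = None  # defined without a leading underscore

info = NodeInfo()
try:
    del info._output_randomize  # name mismatch: no such attribute
except AttributeError as e:
    print(e)  # -> _output_randomize, as in the traceback above
del info.output_randomize  # succeeds under the actual attribute name

The second error, raised during the indirect-sparsity pass: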
Exception has occurred: TypeError
unsupported operand type(s) for *: 'NoneType' and 'Tensor'
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 401, in <lambda>
    input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t, \
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/utils.py", line 82, in <listcomp>
    return tree_unflatten([fn(*args) for args in zip(*flat_args_list)], spec_list[0])
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/utils.py", line 82, in tree_map_zip
    return tree_unflatten([fn(*args) for args in zip(*flat_args_list)], spec_list[0])
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 401, in indirect_getitem
    input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t, \
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 463, in indirect_update_process
    indirect_fn(model_speedup, node)
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 305, in update_indirect_sparsity
    self.node_infos[node].mask_updater.indirect_update_process(self, node)
  File "/home/gavinx/Downloads/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 434, in speedup_model
    self.update_indirect_sparsity()
  File "/home/gavinx/Downloads/YOLOv5-Lite/test_nni_pruning2.py", line 140, in prune
    ModelSpeedup(model, torch.rand(8, 3, 512, 512).to(device), masks).speedup_model()
  File "/home/gavinx/Downloads/YOLOv5-Lite/test_nni_pruning2.py", line 170, in <module>
    prune(opt)

Environment:

  • NNI version: 8dc1a83
  • Training service: local
  • Python version: 3.10.11
  • PyTorch version: 1.13.1
  • CPU or CUDA version: CPU

Reproduce the problem

  • Code | Example: a hypothetical setup is sketched below

  • How to reproduce:
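
For context, a minimal setup that reaches the failing call might look like the sketch below. The pruner choice, config list, and stand-in model are assumptions, not taken from the report, and the import paths are inferred from the traceback, so they may differ across NNI revisions.

import torch
import torchvision
from nni.compression.pytorch.pruning import L1NormPruner  # assumed pruner choice
# Import path inferred from the traceback in this report:
from nni.compression.pytorch.speedup.v2.model_speedup import ModelSpeedup

device = torch.device('cpu')
model = torchvision.models.resnet18().to(device)  # stand-in for YOLOv5-Lite

# Prune all Conv2d layers to 50% sparsity (assumed config) and collect masks.
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner._unwrap_model()  # restore the original modules before running speedup

# The call from the report; raises the AttributeError on the affected revision.
ModelSpeedup(model, torch.rand(8, 3, 512, 512).to(device), masks).speedup_model()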

I found a possible solution to address this issue:
change this line (mask_updater.py, line 261 in the traceback above) from

del model_speedup.node_infos[to_delete]._output_randomize

to

del model_speedup.node_infos[to_delete].output_randomize

and this line (mask_updater.py, line 401) from

input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t, \

to

input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) and t is not None else t, \
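
The second change guards against the plain "None * Tensor" failure: when a leaf in the gradient structure is None (e.g. a tensor that does not require grad), the unguarded lambda multiplies None by the mask. A standalone sketch of the guard, using a flat-list stand-in for NNI's tree_map_zip utility:

import torch

def tree_map_zip(fn, xs, ms):
    # Stand-in for NNI's tree_map_zip: applies fn over zipped leaves.
    return [fn(x, m) for x, m in zip(xs, ms)]

grads = [torch.ones(3), None]  # a None gradient leaf triggers the bug
masks = [torch.tensor([1., 0., 1.]), torch.ones(3)]

# Unguarded version, as in the current code: fails on the None leaf.
try:
    tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t,
                 grads, masks)
except TypeError as e:
    print(e)  # unsupported operand type(s) for *: 'NoneType' and 'Tensor'

# Guarded version from the proposed fix: leaves None grads untouched.
out = tree_map_zip(
    lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) and t is not None else t,
    grads, masks)
print(out)  # [tensor([1., 0., 1.]), None]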

lminer commented Aug 25, 2023

I'm getting this error as well

saravanabalagi linked a pull request Mar 7, 2024 that will close this issue