TypeError: ~ (operator.invert) is only implemented on integer and Boolean-type tensors #199

LiXiaoli921 opened this issue May 13, 2024 · 1 comment


@LiXiaoli921

While using the model.py file, I met the same error as in Issue #69 (https://github.com/bowang-lab/scGPT/issues/69):

AttributeError: 'FlashMHA' object has no attribute 'batch_first'

I modified the code following the Issue #69 solution, but then I met another error:

TypeError: ~ (operator.invert) is only implemented on integer and Boolean-type tensors
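If it helps narrow things down, the error itself seems to come from applying `~` to a non-Boolean tensor; a minimal sketch (the mask values here are made up):

```python
import torch

mask = torch.tensor([0.0, 0.0, 1.0, 1.0])  # float padding mask, 1.0 = pad
# ~mask  # raises: TypeError: ~ (operator.invert) is only implemented on integer and Boolean-type tensors
print(~mask.bool())  # casting to bool first works: tensor([ True,  True, False, False])
```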

So I modified this part of https://github.com/bowang-lab/scGPT/blob/main/scgpt/model/model.py:
```python
if not src_key_padding_mask.any().item():
    # no padding tokens in src
    src_key_padding_mask_ = None
else:
    if src_key_padding_mask.dtype != torch.bool:
        src_key_padding_mask = src_key_padding_mask.bool()
    # NOTE: the FlashMHA uses mask 0 for padding tokens, which is the opposite
    src_key_padding_mask_ = ~src_key_padding_mask
```

Line 703, `src_key_padding_mask_ = ~src_key_padding_mask`, was changed to `src_key_padding_mask_ = None`.
The training log is linked: https://wandb.ai/alto921/scGPT/runs/3o7yd13r/logs?nw=nwuserxiaolili921.

The file then runs, but the model's performance was really bad, so I don't think I can modify the code like that. How can I make it work?
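I suspect the reason the `None` change hurts performance is that without the mask, attention also covers the padding tokens. A rough standalone sketch (using PyTorch's built-in attention, not the scGPT code itself) showing that the output changes when the mask is dropped:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
q = k = v = torch.randn(1, 4, 8)                   # one sequence of 4 tokens
pad = torch.tensor([[False, False, True, True]])   # True = padding token

# Boolean attn_mask: True means "may attend"; shape (1, 1, 4) broadcasts to (1, 4, 4).
attn_mask = ~pad.unsqueeze(1)
masked = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)

# With the mask dropped (as in the None change), padding tokens contribute too.
unmasked = F.scaled_dot_product_attention(q, k, v)
print(torch.allclose(masked, unmasked))            # False: outputs differ
```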

Thanks so much for your help.

@LiXiaoli921 (Author)

I tried modifying the code as follows; it also works, but I am not sure whether it is correct.
```python
# src_key_padding_mask_ = ~src_key_padding_mask
src_key_padding_mask_ = ~Tensor.bool(src_key_padding_mask)
```
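For what it's worth, `Tensor.bool(...)` is just the unbound form of the instance method `.bool()`, so this should be an equivalent and slightly more conventional spelling that keeps the inversion FlashMHA expects while avoiding the TypeError:

```python
src_key_padding_mask_ = ~src_key_padding_mask.bool()
```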
