It would be great to have the type annotations on our forward functions mirrored in the type annotations for __call__. E.g., LSTM actually returns tuple[Tensor, tuple[Tensor, Tensor]] and our forward function says so, but that information is overridden by pytorch's implementation of nn.Module.__call__. So we should either override __call__ in our own modules and add the type annotation, or find a way to programmatically propagate the type annotation from forward to __call__.
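As a sketch of the first option (overriding __call__ in our own modules), a subclass can re-declare __call__ with the same return type as forward and simply delegate to the parent. The class name TypedLSTM below is hypothetical, and the __call__ signature is simplified to a single input tensor for illustration:

```python
import torch
from torch import Tensor, nn


class TypedLSTM(nn.LSTM):
    """Hypothetical wrapper that re-declares __call__ so type checkers
    see forward's return type instead of the untyped nn.Module.__call__."""

    def __call__(self, x: Tensor) -> tuple[Tensor, tuple[Tensor, Tensor]]:
        # nn.Module.__call__ dispatches to forward (plus hooks);
        # we only narrow the static type, runtime behavior is unchanged.
        return super().__call__(x)


lstm = TypedLSTM(input_size=4, hidden_size=8, batch_first=True)
out, (h, c) = lstm(torch.randn(2, 5, 4))
```

With this override, mypy/pyright infer `out` as Tensor and `(h, c)` as tuple[Tensor, Tensor] at call sites, instead of the Any coming from nn.Module.__call__.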
@glencoe I will have a look at it. Unfortunately, I have not yet started integrating our own tensors into our modules, which is why I haven't run into the problem. But that is the next step, and then I will try to get a fix in.
@julianhoever the issue arises from how pytorch handles its type annotations. It's not a bug ;) and not related to introducing our own tensor types. It's not urgent; I just filed it so we don't forget.