inference from a model with different types of layers based on forwardSequentialModuleWithPadMask #989
base: 0.3
Conversation
Hi @mmbejani! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations; afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Thanks for the PR! This approach unfortunately isn't general: it assumes that T for a given layer is in the 1st indexed dimension, which is architecture dependent. The idea with forwardWithPadMask is that the padding will be computed once for a given Sequential, but that the user will resize the mask for each layer depending on its input size and expected dimensions.

You could reimplement this function in your user-space code, or you might consider creating a module that does this resizing for you, given a wrapped module (e.g. Conv2D). If you're able to think up a clean way to do this, I'm happy to take a look at an updated PR.
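To sketch the wrapper idea concretely (a minimal sketch only, assuming flashlight 0.3 with the ArrayFire backend; `PadMaskResize` is a hypothetical name rather than a flashlight API, the mask is assumed to be a `[T, B]` float array, and the wrapped layer is assumed to keep time on dimension 1 of its output):

```cpp
#include "flashlight/fl/flashlight.h"

// Hypothetical wrapper, not a flashlight API: forwards {input, padMask},
// runs the wrapped module on the input alone, and resizes the mask's time
// axis to match the wrapped module's output.
class PadMaskResize : public fl::Container {
 public:
  explicit PadMaskResize(std::shared_ptr<fl::Module> inner) {
    add(inner);
  }

  std::vector<fl::Variable> forward(
      const std::vector<fl::Variable>& inputs) override {
    auto output = module(0)->forward({inputs[0]}).front();
    auto mask = inputs[1];
    int newT = output.dims(1); // assumption: this layer keeps time on dim 1
    // Nearest-neighbor resize along the time axis -- a crude but simple
    // way to downsample a 0/1 mask after a strided layer.
    auto resized = fl::Variable(
        af::resize(mask.array(), newT, mask.dims(1), AF_INTERP_NEAREST),
        /* calcGrad = */ false);
    return {output, resized};
  }

  std::string prettyString() const override {
    return "PadMaskResize";
  }
};
```

A Sequential built from such wrappers could then thread `{input, mask}` through every layer uniformly, with each wrapper responsible for keeping the mask consistent with its own output shape.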
```cpp
auto ntwrkSeq = std::dynamic_pointer_cast<fl::Sequential>(ntwrk);
auto output = input;
for (auto& module : ntwrkSeq->modules()) {
  auto tr = std::dynamic_pointer_cast<fl::Transformer>(module);
  auto cfr = std::dynamic_pointer_cast<fl::Conformer>(module);
  if (tr != nullptr || cfr != nullptr) {
    /* input dims of Transformer module should be CxTxBx1 */
    int T = output.dims(1);
```
As mentioned above, this is a big assumption that's architecture- and layer-dependent. Better to resize the padding mask per layer to fit whatever layers are in the architecture, or to add some abstraction that does this automatically.
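For reference, a per-layer version in user space might look like this (a sketch only, assuming flashlight 0.3 with the ArrayFire backend; the `{input, mask}` calling convention for Transformer/Conformer and the `[T, B]` mask layout are assumptions to verify against your flashlight version, and `forwardWithPerLayerMask` / `seqLengths` are hypothetical names):

```cpp
#include "flashlight/fl/flashlight.h"

// Sketch: rebuild the pad mask from each layer's *current* output length
// rather than the original input length. Clipping the per-batch lengths
// to the current T is itself an assumption about how lengths shrink.
fl::Variable forwardWithPerLayerMask(
    std::shared_ptr<fl::Sequential> net,
    const fl::Variable& input,
    const af::array& seqLengths /* per-batch valid lengths, shape [B] */) {
  auto output = input;
  for (auto& module : net->modules()) {
    auto tr = std::dynamic_pointer_cast<fl::Transformer>(module);
    auto cfr = std::dynamic_pointer_cast<fl::Conformer>(module);
    if (tr != nullptr || cfr != nullptr) {
      int T = output.dims(1); // still assumes time on dim 1 for this layer
      int B = output.dims(2);
      // mask[t, b] = 1 if t < min(seqLengths[b], T), else 0
      auto positions = af::iota(af::dim4(T, 1), af::dim4(1, B));
      auto lengths = af::tile(
          af::moddims(af::min(seqLengths, (double)T).as(f32), 1, B), T, 1);
      auto mask = fl::Variable((positions < lengths).as(f32), false);
      output = module->forward({output, mask}).front();
    } else {
      output = module->forward({output}).front();
    }
  }
  return output;
}
```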
Summary

The function forwardSequentialModuleWithPadMask works only for networks built purely from the Transformer family of layers. If the network has a non-Transformer layer, e.g. Conv2D, the length of the input sequence changes as it passes through that layer, but the padding mask is still computed from the initial length, which raises a runtime error. This PR instead computes the sequence length from the output itself after applying each layer.
Test Plan (required)

Create networks with different combinations of Transformer and non-Transformer modules and apply them to random inputs.