Bug Description

Implement dynamic batch and dynamic shape support for the layer norm converter. Add the following test case once it is implemented:
def test_layernorm_with_dynamic_shape(self):
    class LayerNorm(torch.nn.Module):
        def forward(self, x):
            return torch.ops.aten.layer_norm.default(
                x,
                torch.tensor([3, 224, 224]),
                torch.ones((3, 224, 224)),
                torch.zeros((3, 224, 224)),
                1e-05,
                True,
            )

    input_specs = [
        Input(
            shape=(-1, 3, 224, 224),
            dtype=torch.float32,
            shape_ranges=[((1, 3, 224, 224), (1, 3, 224, 224), (2, 3, 224, 224))],
        ),
    ]
    self.run_test_with_dynamic_shape(
        LayerNorm(),
        input_specs,
    )
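For reference, here is a rough user-facing sketch of what this converter change would enable, assuming the standard torch_tensorrt.compile workflow with a dynamic batch dimension; the module, shapes, and the ir="dynamo" choice below are illustrative assumptions, not taken from this issue:

import torch
import torch_tensorrt

# A module whose lowered graph contains aten.layer_norm.default.
model = torch.nn.LayerNorm((3, 224, 224)).eval().cuda()

# Dynamic batch dimension: only dim 0 varies between min/opt/max,
# mirroring the shape_ranges used in the test case above.
inputs = [
    torch_tensorrt.Input(
        min_shape=(1, 3, 224, 224),
        opt_shape=(1, 3, 224, 224),
        max_shape=(2, 3, 224, 224),
        dtype=torch.float32,
    )
]

# ir="dynamo" is assumed here so that the aten-level converters are exercised.
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)

# Once the converter supports dynamic shapes, any batch size within the
# declared range should work with the same compiled engine.
print(trt_model(torch.randn(1, 3, 224, 224, device="cuda")).shape)
print(trt_model(torch.randn(2, 3, 224, 224, device="cuda")).shape)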
To Reproduce

Steps to reproduce the behavior:
Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

How you installed PyTorch (conda, pip, libtorch, source):

Additional context