
Tensor shape inference #71

Open
lutzroeder opened this issue Feb 4, 2018 · 14 comments

Comments

@lutzroeder
Owner

No description provided.

@Flamefire

This does not even require much work for ONNX: you can run shape inference on the ONNX model, which populates model.graph.value_info so the shapes can be read afterwards.

If you don't want to do that in Netron, let the user do it first and have the shapes stored in the model:

```python
import onnx
from onnx import shape_inference

path = "..."
onnx.save(shape_inference.infer_shapes(onnx.load(path)), path)
```
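To illustrate what shape inference produces conceptually, here is a minimal, self-contained sketch with no onnx dependency. The function and op descriptors below are hypothetical, not part of the onnx API; the point is that inference walks the graph and records a shape for every intermediate tensor, analogous to what ends up in model.graph.value_info:

```python
# Toy shape inference: propagate a known input shape through a small
# graph and record every intermediate shape, the way ONNX shape
# inference populates model.graph.value_info.
# (Hypothetical helper and node format, not part of the onnx API.)

def infer_value_info(input_shapes, nodes):
    """nodes: list of (op, input_name, output_name, params) tuples."""
    shapes = dict(input_shapes)
    for op, inp, out, params in nodes:
        if op == "Relu":              # elementwise: shape unchanged
            shapes[out] = shapes[inp]
        elif op == "MatMul":          # (n, k) x (k, m) -> (n, m)
            n, k = shapes[inp]
            k2, m = params["weight_shape"]
            assert k == k2, "inner dimensions must match"
            shapes[out] = (n, m)
        else:
            raise NotImplementedError(op)
    return shapes

value_info = infer_value_info(
    {"x": (1, 784)},
    [
        ("MatMul", "x", "h", {"weight_shape": (784, 128)}),
        ("Relu", "h", "a", {}),
        ("MatMul", "a", "y", {"weight_shape": (128, 10)}),
    ],
)
print(value_info["h"], value_info["y"])  # (1, 128) (1, 10)
```

A viewer that has this mapping can label every edge in the graph, not just the model inputs.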

@ysh329

ysh329 commented Dec 3, 2018

Hope shape inference gets supported. 😆

@lovettchris

This would be very useful...

@suntao2012

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> ```python
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
> ```

It does work for ONNX, but is there any way to add shape inference for MXNet JSON models?

@lutzroeder
Owner Author

@suntao2012 Netron runs in the browser without any Python dependencies.

@lgeiger

lgeiger commented Nov 14, 2019

It would be excellent if Netron were able to show the shape of the activations between two layers, similar to the way it shows the shape of the input layer:

[screenshot: input layer shape shown in Netron]

This would be incredibly useful when developing models, and it seems quite viable to implement for the Keras backend, since all the tensor shape information should be available.
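To sketch how activation shapes between layers could be derived from layer configs alone, here is a stdlib-only toy. All helper names and the layer-description format are hypothetical (this is not the Keras API); shapes are channels-last (H, W, C):

```python
# Sketch: derive the activation shape after each layer from the layer
# configs alone, which is what a viewer could do for a Keras model.
# (Hypothetical helpers, not the Keras API; channels-last shapes.)

def conv2d_shape(shape, filters, kernel, stride=1, padding="valid"):
    h, w, _ = shape
    if padding == "same":
        h_out = -(-h // stride)        # ceil division
        w_out = -(-w // stride)
    else:                              # "valid"
        h_out = (h - kernel) // stride + 1
        w_out = (w - kernel) // stride + 1
    return (h_out, w_out, filters)

def propagate(input_shape, layers):
    """Yield (layer_name, output_shape) for each layer in order."""
    shape = input_shape
    for name, kind, cfg in layers:
        if kind == "Conv2D":
            shape = conv2d_shape(shape, **cfg)
        elif kind == "Flatten":
            n = 1
            for d in shape:
                n *= d
            shape = (n,)
        elif kind == "Dense":
            shape = (cfg["units"],)
        yield name, shape

model = [
    ("conv1", "Conv2D", {"filters": 32, "kernel": 3, "stride": 2, "padding": "same"}),
    ("conv2", "Conv2D", {"filters": 64, "kernel": 3}),
    ("flat",  "Flatten", {}),
    ("fc",    "Dense", {"units": 10}),
]
for name, shape in propagate((224, 224, 3), model):
    print(name, shape)
```

Each (name, shape) pair corresponds to one edge label between layers in the rendered graph.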

@dsplabs

dsplabs commented Mar 5, 2020

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> ```python
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
> ```

Very helpful, thank you @Flamefire.

Left is without shape inference. Right is with shape inference.

[screenshot: model graph without vs. with inferred shapes]

One important thing to note: currently, for the above to work, you must use opset version < 9. The above was generated with opset version 8; I checked opset 7 as well, and both worked fine.

At present, for opset >= 9, shapes will not be included and will not show, as pointed out here:

> Ah, you mean for opset 9 or newer. That change basically removed constants from inputs, since inputs are not constants. In older ONNX versions constants had to be part of the inputs; in opset 9 that changed. Possibly an ONNX issue.

onnx/tensorflow-onnx#674 (comment)

@lookup1980

@dsplabs thank you for the comment; it's very helpful!

My understanding is that we can't set the opset version when exporting PyTorch to ONNX, right?

@dsplabs

dsplabs commented Mar 11, 2020

@lookup1980 Yes, we can, by setting the opset_version argument, e.g.:

```python
torch.onnx.export(model, model_inputs, onnx_file, opset_version=8)
```

Works for me in PyTorch version 1.4.

If you need to convert an existing ONNX file, you can do so using onnx.version_converter.convert_version(...). I usually also throw in onnx.utils.polish_model(...), which (among other things) does shape inference via onnx.shape_inference.infer_shapes(...), e.g.:

```python
import onnx
import onnx.utils
import onnx.version_converter

model_file = 'model.onnx'
onnx_model = onnx.load(model_file)
onnx_model = onnx.version_converter.convert_version(onnx_model, target_version=8)
onnx_model = onnx.utils.polish_model(onnx_model)
onnx.save(onnx_model, model_file)
```

@fzyzcjy

fzyzcjy commented Mar 19, 2020

Does not work for me...

Model:

```python
model = keras.applications.mobilenet.MobileNet(
    # input_shape=(32, 32, 1),
    input_shape=(224, 224, 3),
    weights=None,
    include_top=False,
)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```

Code:

```python
keras_model = model

onnx_model = onnxmltools.convert_keras(keras_model)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)

p = dtemp / 'temp.onnx'
# onnxmltools.utils.save_model(onnx_model, str(p))
onnx.save(onnx_model, str(p))
```

Result:

[screenshot: graph rendered without inferred shapes]

Thanks for any suggestions!

@sizhky

sizhky commented Mar 29, 2020

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> ```python
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
> ```

@Flamefire Running this kills the Jupyter kernel. I was trying to load and save an Inception model in this case.

@soyebn

soyebn commented May 1, 2020

Nice feature addition. Some of the segmentation models need OpSet=11. Is there a way we can get this working for OpSet=11?

@kobygold

kobygold commented Feb 8, 2023

Is there any estimate of when this feature could be added? I see it was requested a long time ago, and it would be extremely useful!
I see there is a workaround above for ONNX, which creates a new version of the model with the layer sizes included.
Is there a similar workaround for Keras .h5 files?

@lutzroeder
Owner Author

lutzroeder commented Feb 9, 2023

@kobygold Keras files do not store inferred shapes. If you want to work on implementing this for Keras, acuity.js and darknet.js already have some support for reference.
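As a rough illustration of what such an implementation would have to do: a Keras file embeds the model configuration as JSON, but not the inferred shapes, so shapes must be recomputed from the config. The sketch below uses a simplified, hypothetical config structure (the real schema stored in an .h5 file's model_config attribute is richer and is read via h5py) and handles only a Dense-only sequential stack:

```python
import json

# Hypothetical, simplified stand-in for the JSON model config stored
# inside a Keras .h5 file. Real configs are read via h5py from the
# "model_config" attribute and have a much richer schema.
config_json = json.dumps({
    "input_shape": [784],
    "layers": [
        {"class_name": "Dense", "units": 128},
        {"class_name": "Dense", "units": 10},
    ],
})

def dense_stack_shapes(config_json):
    """Recompute activation shapes for a Dense-only sequential config."""
    cfg = json.loads(config_json)
    shape = tuple(cfg["input_shape"])
    shapes = [shape]
    for layer in cfg["layers"]:
        assert layer["class_name"] == "Dense"
        shape = (layer["units"],)
        shapes.append(shape)
    return shapes

print(dense_stack_shapes(config_json))  # [(784,), (128,), (10,)]
```

A real implementation (as in acuity.js or darknet.js) would need a shape rule per layer type, not just Dense.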
