Loop, If and Scan support #168

Open

dshirron opened this issue Oct 25, 2018 · 13 comments

@dshirron

Is there a roadmap for supporting loop-type operators? For example:
ONNX Loop
Caffe2 ONNXWhile
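
For context, both of these carry the loop body as a graph-typed attribute on the node, which is what a viewer would need to expand. A minimal sketch of an ONNX Loop node built with onnx.helper (all names are illustrative, not from a real model):

```python
# Sketch: an ONNX Loop node whose 'body' attribute is a full subgraph.
import onnx
from onnx import helper, TensorProto

# Body graph: (iteration_num, cond_in, x_in) -> (cond_out, x_out)
body = helper.make_graph(
    nodes=[
        helper.make_node('Add', ['x_in', 'one'], ['x_out']),
        helper.make_node('Identity', ['cond_in'], ['cond_out']),
    ],
    name='loop_body',
    inputs=[
        helper.make_tensor_value_info('iteration_num', TensorProto.INT64, []),
        helper.make_tensor_value_info('cond_in', TensorProto.BOOL, []),
        helper.make_tensor_value_info('x_in', TensorProto.FLOAT, [1]),
    ],
    outputs=[
        helper.make_tensor_value_info('cond_out', TensorProto.BOOL, []),
        helper.make_tensor_value_info('x_out', TensorProto.FLOAT, [1]),
    ],
    initializer=[helper.make_tensor('one', TensorProto.FLOAT, [1], [1.0])],
)

# The subgraph lives inside the node as an attribute, not as a separate model.
loop_node = helper.make_node(
    'Loop',
    inputs=['trip_count', 'cond', 'x_init'],
    outputs=['x_final'],
    body=body,
)
```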

@lutzroeder
Owner

Can you share a sample file for each and describe what support you are looking for?

@dshirron
Author

dshirron commented Oct 28, 2018

Currently Netron shows ONNXWhile as an op without an option to explore the inner network, which is passed as a parameter to the ONNXWhile op. It would be helpful to be able to double-click the op and see the inner network. The code below defines a simple Caffe2 network that uses ONNXWhile and converts it to ONNX (the conversion part currently doesn't work, since the Caffe2 ONNX exporter doesn't support this op yet).

```python
from caffe2.python import workspace, model_helper, core
import numpy as np
from caffe2.proto import caffe2_pb2

# Import the caffe2 mobile exporter
from caffe2.python.predictor import mobile_exporter

from caffe2.python.onnx import frontend
import onnx

# Create the initial input data
workspace.ResetWorkspace()
max_trip_count = np.full(1, 20).astype(np.int64)
condition = np.full(1, True).astype(np.bool_)
first_init = np.full((1), 1).astype(np.float32)
second_init = np.full((1), 1).astype(np.float32)
workspace.FeedBlob("max_trip_count", max_trip_count)
workspace.FeedBlob("condition", condition)
workspace.FeedBlob("first_init", first_init)
workspace.FeedBlob("second_init", second_init)

# Create the body net
body_net = caffe2_pb2.NetDef()

# Two loop-carried dependencies: first and second
body_net.external_input.extend(['i', 'cond', 'first', 'second'])
body_net.external_output.extend(['cond_new', 'second', 'third', 'third', 'cond', 'cond'])
add_op = core.CreateOperator(
    'Add',
    ['first', 'second'],
    ['third'],
)
print_cond = core.CreateOperator(
    'Print',
    ['cond'],
    [],
)
print3 = core.CreateOperator(
    'Print',
    ['third'],
    [],
)
limit_const = core.CreateOperator(
    'ConstantFill',
    [],
    ['limit_const'],
    shape=[1],
    dtype=caffe2_pb2.TensorProto.FLOAT,
    value=1000.0,
)
cond = core.CreateOperator(
    'LT',
    ['third', 'limit_const'],
    ['cond_new'],
    broadcast=1,
)
body_net.op.extend([add_op, print_cond, print3, limit_const, cond])

while_op = core.CreateOperator(
    'ONNXWhile',
    ['max_trip_count', 'condition', 'first_init', 'second_init'],
    ['first_b', 'second_a', 'kabiba', 'kusa', 'asd'],
    body=body_net,
    has_cond=True,
    has_trip_count=True,
    save_scopes=0,
)

main_net = caffe2_pb2.NetDef()
main_net.op.extend([while_op])
main_net.external_input.extend(['max_trip_count', 'condition', 'first_init', 'second_init'])
main_net.external_output.extend(['first_b', 'second_a', 'kabiba', 'kusa', 'asd'])

workspace_global_options = ['--caffe2_log_level=1']
workspace_global_options += ['--caffe2_print_blob_sizes_at_exit=0']
workspace.GlobalInit(['caffe2'] + workspace_global_options)

init_net, predict_net = mobile_exporter.Export(workspace, main_net, main_net.external_input)
# Let's also save the init_net and predict_net to files that we will later use
# for running them on mobile
print("Saving caffe2 predict and init pb files...")
with open('init_net.pb', "wb") as fopen:
    fopen.write(init_net.SerializeToString())
with open('predict_net.pb', "wb") as fopen:
    fopen.write(predict_net.SerializeToString())
#with open('init_net.pbtxt', "w") as fopen:
#    fopen.write(str(init_net))
#with open('predict_net.pbtxt', "w") as fopen:
#    fopen.write(str(predict_net))

# Convert to ONNX
onnx_value_info = {
    'first_init': (onnx.TensorProto.FLOAT, first_init.shape),
    'second_init': (onnx.TensorProto.FLOAT, second_init.shape),
    #'starts_tensor': (onnx.TensorProto.INT32, starts_tensor.shape),
    #'ends_tensor': (onnx.TensorProto.INT32, ends_tensor.shape),
    #'add_tensor': (onnx.TensorProto.INT32, add_tensor.shape),
    #'sqrt2_const': (onnx.TensorProto.FLOAT, sqrt2_const.shape),
    #'skip_add_out': (onnx.TensorProto.FLOAT, skip_add_out.shape),
}

onnx_model = frontend.caffe2_net_to_onnx_model(
    predict_net,
    init_net,
    onnx_value_info,
)

# Run the network
workspace.RunNetOnce(main_net)
print(workspace.FetchBlob('kabiba'))
print(workspace.FetchBlob('kusa'))
print(workspace.FetchBlob('asd'))
```

This sample is based on the Caffe2 test code for the ONNXWhile op, with the ONNX conversion code added.

@dshirron
Author

dshirron commented Oct 28, 2018

onnxwhile.zip

lutzroeder added a commit that referenced this issue Oct 28, 2018
@dshirron
Author

Since the Caffe2 export to ONNX doesn't support this op yet (I opened an issue in the pytorch/caffe2 repo), I don't have an ONNX file.

@demid5111

@lutzroeder is it now possible to review the contents of such while layers in Netron? Where can I find an example of such a file? I tried the attached file and could not open it.

@lutzroeder changed the title from "Support for loop type operators" to "Loop, If and Scan support" on Jan 4, 2019
@kfir-st

kfir-st commented Apr 28, 2019

Hi, are you planning to also support expanding subgraphs in the future?
See an example here (the If node at the end of the visible graph):
https://github.com/caffe2/models/tree/master/mask_rcnn_2go/model/fp32

@lutzroeder
Owner

lutzroeder commented May 31, 2019

issue_168.zip

@purshottamv

+1. It would be great to be able to visualize the sub-graphs in ONNX models. Looking forward to this feature.

@IgnacioJPickering

+1. The models I'm currently working with start off with If operators, and Netron can't visualize the model at all. Is there any current workaround for this?
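
One possible workaround until subgraph rendering lands: pull the If branches out as standalone models and open each one in Netron separately. A minimal sketch, assuming a hypothetical file name; the branch graphs may reference outer-scope tensors, so the extracted models may not pass the ONNX checker, but they are often still viewable:

```python
# Sketch: save each branch of every top-level If node as its own model file.
# 'model_with_if.onnx' is a hypothetical input file name.
import onnx
from onnx import helper

model = onnx.load('model_with_if.onnx')
for i, node in enumerate(model.graph.node):
    if node.op_type == 'If':
        for attr in node.attribute:  # 'then_branch' and 'else_branch'
            branch_model = helper.make_model(attr.g)
            onnx.save(branch_model, f'if{i}_{attr.name}.onnx')
```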

@lutzroeder
Owner

lutzroeder commented Aug 27, 2020

@mneilly-et

mneilly-et commented Oct 13, 2020

Here is an example of DLRM using loop operators. The symbols for the loop operators can be seen, and their attributes show the graph value, but none of the operators (Gather, Slice, etc.) inside the loop can be viewed (with Netron 4.5.5).

dlrm_s_pytorch.onnx.zip


The following shows the operators inside the top loop:

```
%42 : Float(3, 32, strides=[32, 1], requires_grad=1, device=cpu) = onnx::Loop(%40, %181) # .../lib/python3.7/site-packages/torch/nn/functional.py:1993:0
  block0(%43 : Long(device=cpu), %cond.1 : bool):
    %45 : Tensor = onnx::Gather[axis=0](%32, %43)
    %46 : Tensor = onnx::Gather[axis=0](%37, %43)
    %47 : Tensor = onnx::Unsqueeze[axes=[0]](%45)
    %48 : Tensor = onnx::Unsqueeze[axes=[0]](%46)
    %49 : Tensor = onnx::Slice(%indices_0, %47, %48, %27)
    %50 : Tensor = onnx::Gather[axis=0](%emb_l.0.weight, %49)
    %51 : Tensor = onnx::ReduceSum[axes=[0], keepdims=0](%50)
    %52 : bool = onnx::Cast[to=9](%26)
    -> (%52, %51)
```
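
In the meantime, the body subgraph can at least be inspected programmatically. A minimal sketch, assuming the file name from the attachment above:

```python
# Sketch: list the operators hidden inside each Loop node's body subgraph.
# 'dlrm_s_pytorch.onnx' is the file from the attachment above.
import onnx

model = onnx.load('dlrm_s_pytorch.onnx')
for node in model.graph.node:
    if node.op_type == 'Loop':
        # The body is stored as a graph-typed attribute on the node.
        body = next(a.g for a in node.attribute if a.name == 'body')
        print('Loop:', node.name or node.output[0])
        for inner in body.node:
            print('  ', inner.op_type, list(inner.input), '->', list(inner.output))
```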

@lutzroeder
Owner

torch_onnx_export.zip

@harishch4

Any update on this feature implementation?
