
DRAFT: Add a feature to infer output shape in the interpreter #12981

Closed

Conversation

shs-park (Contributor) commented May 10, 2024

This PR allows the interpreter to determine an output shape that is unknown in the model, whenever it can infer that shape at runtime.
As a result, the values of outputs whose shapes were previously unknown can also be produced.


for issue #12979

Signed-off-by: Seungho Henry Park <shs.park@samsung.com>
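
For context, here is a minimal sketch of how a caller typically drives luci-interpreter (this is not code from the PR; it assumes the public Interpreter API with writeInputTensor / interpret / readOutputTensor). The output buffer has to be sized before readOutputTensor is called, which is exactly where a statically unknown output shape becomes a problem:

// Sketch only: typical luci-interpreter usage; not code from this PR.
#include <luci_interpreter/Interpreter.h>

#include <cstddef>
#include <cstdint>
#include <vector>

void run_once(const luci::Module *module, const luci::CircleInput *input_node,
              const luci::CircleOutput *output_node, const std::vector<uint8_t> &input_data)
{
  luci_interpreter::Interpreter interpreter(module);

  interpreter.writeInputTensor(input_node, input_data.data(), input_data.size());
  interpreter.interpret();

  // The caller must know the output byte size to allocate this buffer. Deriving
  // it from the origin model fails when the output shape there is unknown; this
  // PR makes the runtime-computed shape usable for this purpose instead.
  size_t output_size = 0; // placeholder: the value this PR makes obtainable at runtime
  std::vector<uint8_t> output_data(output_size);
  interpreter.readOutputTensor(output_node, output_data.data(), output_data.size());
}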
@shs-park shs-park self-assigned this May 10, 2024
shs-park (Contributor Author) commented May 10, 2024

This draft changes the code to get the size of output_node from the runtime module, not from the shape in the origin model.

The runtime module actually computes the model's outputs from the given input values, so once interpretation finishes it also holds the shape of each result.
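
A minimal sketch of that idea, using only the names visible in the diff below (the lookup of `tensor`, the runtime tensor bound to the output node, is part of this draft and is only assumed here):

// Sketch only: the byte size is built from the runtime tensor's shape,
// which is known after interpret(), not from the origin model's output node.
size_t tensor_size = luci::size(output_node->dtype()); // bytes per element
for (int i = 0; i < tensor->shape().num_dims(); i++)
  tensor_size *= tensor->shape().dim(i); // runtime dimensions of the output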

Comment on compiler/luci-interpreter/src/Interpreter.cpp:
throw std::runtime_error("Cannot find tensor size for output node named \"" + name + "\".");
}

size_t tensor_size = luci::size(output_node->dtype());
jinevening (Contributor)

It would be better to use luci-interpreter's interface (luci_interpreter::size() is different from luci::size()).

Suggested change
- size_t tensor_size = luci::size(output_node->dtype());
+ size_t tensor_size = luci_interpreter::size(tensor->element_type());

shs-park (Contributor Author)

@jinevening,
Thanks for the suggestion!

shs-park (Contributor Author)

@jinevening,

luci_interpreter::size() doesn't seem to exist.
Should I add it myself, or am I missing something?

/git/ONE/compiler/luci-interpreter/src/Interpreter.cpp:137:42: error: ‘size’ is not a member of ‘luci_interpreter’
  137 |   size_t tensor_size = luci_interpreter::size(tensor->element_type());

jinevening (Contributor)

Ah, sorry for the confusion. I had seen a different interface in the luci_interpreter of onert-micro (it uses the same namespace).

luci_interpreter::getDataTypeSize() may be useful. :)
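
Applied to the line under discussion, that would read as follows (a sketch; it assumes the runtime tensor's element type matches the output node's dtype):

size_t tensor_size = luci_interpreter::getDataTypeSize(tensor->element_type());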

Comment on lines 138 to 139
for (int i = 0; i < tensor->shape().num_dims(); i++)
  tensor_size *= tensor->shape().dim(i);
Contributor

(optional) There is an interface for this purpose.

Suggested change
- for (int i = 0; i < tensor->shape().num_dims(); i++)
-   tensor_size *= tensor->shape().dim(i);
+ tensor_size *= tensor->shape().num_elements();

shs-park (Contributor Author)

Thank you!
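
With both review suggestions applied, the whole computation reduces to something like this (a sketch, not necessarily the exact code that was eventually merged):

// element byte size times element count, both taken from the runtime tensor
size_t tensor_size =
  luci_interpreter::getDataTypeSize(tensor->element_type()) * tensor->shape().num_elements();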

Signed-off-by: Seungho Henry Park <shs.park@samsung.com>
shs-park (Contributor Author)

Closing, as all related PRs have been merged into master.

@shs-park shs-park closed this May 14, 2024