
Proposal to Add TensorFlow Backend Support for WASI-NN #41

Open
q82419 opened this issue May 19, 2023 · 1 comment

Comments


q82419 commented May 19, 2023

Greetings from the WasmEdge runtime maintenance team,

The WASI-NN API currently supports the OpenVINO, PyTorch, and TensorFlow-Lite backends well. However, the design of the set_input and get_output APIs is not in sync with standard TensorFlow C API usage.

TensorFlow C API Inference

The TensorFlow backend, like others, employs the TF_SessionRun API for computation. However, in contrast to OpenVINO and PyTorch, this API requires the specification of output tensors during execution.

Whereas in PyTorch the sequence would look like this:

load(path)
ctx = init_execution_context()
set_input(ctx, index1, tensor1)
set_input(ctx, index2, tensor2)
compute(ctx)
out_tensor1 = get_output(ctx, index1)
out_tensor2 = get_output(ctx, index2)
out_tensor3 = get_output(ctx, index3)

In TensorFlow, it would need to be:

load(path)
ctx = init_execution_context()
set_input(ctx, index1, tensor1)
set_input(ctx, index2, tensor2)
set_output(ctx, index1, out_tensor1)
set_output(ctx, index2, out_tensor2)
set_output(ctx, index3, out_tensor3)
compute(ctx)
# Outputs are filled post-computation

Of course, the original invocation sequence still works. But at the implementation level it causes repeated computation: because output tensors must be specified when TF_SessionRun is invoked, the session would have to be re-run for each get_output call instead of running once during the compute phase.

Additionally, unlike OpenVINO, PyTorch, and TensorFlow-Lite, which support index-based input/output tensor selection, TensorFlow only offers the TF_GraphOperationByName API to obtain input and output operations. Hence, the sequence in TensorFlow would need to include names rather than indexes:

load(path)
ctx = init_execution_context()
set_input_by_name(ctx, name1, tensor1)
set_input_by_name(ctx, name2, tensor2)
set_output_by_name(ctx, name1, out_tensor1)
set_output_by_name(ctx, name2, out_tensor2)
set_output_by_name(ctx, name3, out_tensor3)
compute(ctx)
# Outputs are filled post-computation

Proposed Specification Changes

To incorporate the TensorFlow backend and balance developer/user experience with performance, we suggest considering the following functions:

set_input_by_name

Parameters:

  • ctx: handle
  • name: string
  • tensor: tensor-data

Expected result:

  • expected<(), error>

set_output_by_name

Parameters:

  • ctx: handle
  • name: string
  • tensor: tensor-data buffer for receiving output

Expected result:

  • expected<(), error>

unload

A function to release loaded resources. We have a FaaS use case that needs to register and de-register loaded models.

Parameters:

  • graph: handle

Expected result:

  • expected<(), error>

finalize_execution_context

A function to release execution contexts.

Parameters:

  • ctx: handle

Expected result:

  • expected<(), error>
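Taken together, the proposed functions could be exercised as follows. This is a minimal Python mock of the proposed surface (MockGraph, MockContext, and the doubling "computation" are all hypothetical stand-ins, not a real wasi-nn or TensorFlow binding); it only illustrates the intended call order, in which set_output_by_name registers buffers that a single compute fills:

```python
# Hypothetical in-memory mock of the proposed wasi-nn additions.

class MockGraph:
    def __init__(self, path):
        self.path = path
        self.loaded = True

class MockContext:
    def __init__(self, graph):
        self.graph = graph
        self.inputs = {}          # name -> input tensor
        self.output_buffers = {}  # name -> buffer registered before compute
        self.finalized = False

    def set_input_by_name(self, name, tensor):
        self.inputs[name] = tensor

    def set_output_by_name(self, name, buffer):
        # Registering buffers up front lets compute() run the
        # underlying session exactly once for all requested outputs.
        self.output_buffers[name] = buffer

    def compute(self):
        # Stand-in for a single TF_SessionRun call: fill every
        # registered output buffer (here: double each input).
        for name, buf in self.output_buffers.items():
            src = self.inputs.get(name, [])
            buf[:] = [x * 2 for x in src]

def unload(graph):
    graph.loaded = False

def finalize_execution_context(ctx):
    ctx.finalized = True

graph = MockGraph("model.pb")
ctx = MockContext(graph)
ctx.set_input_by_name("in", [1, 2, 3])
out = []
ctx.set_output_by_name("in", out)
ctx.compute()
print(out)  # [2, 4, 6] -- outputs are filled post-computation
unload(graph)
finalize_execution_context(ctx)
```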

Final Thoughts

While the existing WASI-NN API supports the backends mentioned above, we encountered challenges when implementing the TensorFlow backend, which is why we are advocating for these changes. We would appreciate any suggestions for refining these APIs, as our familiarity with TensorFlow may not be comprehensive. Thank you!


abrown commented May 22, 2023

set_input_by_name ... set_output_by_name

It is unfortunate that the TF API does not match the wasi-nn API here, but there is a workaround. @brianjjones, in his PR to add TF to Wasmtime, maps a given index to the sorted index of the key names (see here). In other words, for keys {a, b, c} one would use 0 = a, 1 = b, 2 = c. This is not ideal (the user has to be aware of this mapping), but it is better than the alternative: adding set_input_by_name and set_output_by_name would not make sense for other frameworks.
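The index-to-name mapping described above can be sketched in a few lines of Python (a generic illustration of the scheme, not the actual Wasmtime code; the example names are made up):

```python
def index_to_name(names, index):
    """Map a wasi-nn tensor index to a TensorFlow operation name by
    sorting the graph's key names: for keys {a, b, c}, 0 -> a, 1 -> b, 2 -> c."""
    ordered = sorted(names)
    if index >= len(ordered):
        raise IndexError(f"no tensor at index {index}")
    return ordered[index]

names = {"serving_default_input", "another_input", "bias"}
print(index_to_name(names, 0))  # "another_input" (first in sorted order)
```

The downside, as noted, is that the user must know the sorted order of the names to pick the right index.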

However, in contrast to OpenVINO and PyTorch, this API requires the specification of output tensors during execution.

I think this is also unfortunate, but this may be an artifact of the TF C API that can also be avoided. In @brianjjones' PR, the TensorFlow backend has to maintain more state between the wasi-nn API calls (see here) but is able to avoid recomputation. Take a look at that unfortunate "dance" and see what you think.
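The shape of that state-keeping "dance" can be sketched as follows. This is a hypothetical Python mock, not the Wasmtime backend: compute() requests every known output in a single session run and caches the results, so each get_output() is served from the cache rather than triggering another run (fake_session stands in for TF_SessionRun):

```python
class CachedBackend:
    """Hypothetical mock: one session run per compute(), cached results."""

    def __init__(self, input_names, output_names, run_session):
        self.input_names = sorted(input_names)    # index -> name mapping
        self.output_names = sorted(output_names)
        self.run_session = run_session            # stand-in for TF_SessionRun
        self.inputs = {}
        self.results = None
        self.runs = 0

    def set_input(self, index, tensor):
        self.inputs[self.input_names[index]] = tensor

    def compute(self):
        # Request every known output in a single session run and cache it.
        self.results = self.run_session(self.inputs, self.output_names)
        self.runs += 1

    def get_output(self, index):
        # Served from the cache: no extra session run per output.
        return self.results[self.output_names[index]]

# Fake "session" that sums all inputs for each requested output.
def fake_session(inputs, outputs):
    total = sum(sum(t) for t in inputs.values())
    return {name: total for name in outputs}

b = CachedBackend({"x"}, {"y", "z"}, fake_session)
b.set_input(0, [1, 2, 3])
b.compute()
print(b.get_output(0), b.get_output(1), b.runs)  # 6 6 1 -- one run, two outputs
```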

unload ... finalize_execution_context

It is not clear to me how these proposed methods differ. My impression was that resource deallocation would happen automatically as a part of the switch to WIT so I have just been waiting for that to happen instead of explicitly specifying those functions. It's been a while, but I think this issue answers that question: #22.
