
Adding some Boosted tree ops to the 'allowed' list #50801

Closed
Koushik667 opened this issue Jul 16, 2021 · 16 comments
Assignees
Labels
comp:lite (TF Lite related issues) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · TF 2.4 (for issues related to TF 2.4) · type:feature (Feature requests)

Comments

@Koushik667

Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template

System information

  • TensorFlow version (you are using): 2.4
  • Are you willing to contribute it (Yes/No): Yes, but I will need some help

Describe the feature and the current behavior/state.

Currently, we can't convert a TensorFlow boosted tree model to TensorFlow Lite using `tf.lite.TFLiteConverter.from_saved_model`, even with

```python
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
]
```

I am getting this error:

```
ConverterError: :0: error: loc("boosted_trees"): 'tf.BoostedTreesEnsembleResourceHandleOp' op is neither a custom op nor a flex op
:0: error: loc("boosted_trees/BoostedTreesPredict"): 'tf.BoostedTreesPredict' op is neither a custom op nor a flex op
:0: error: loc("boosted_trees/head/predictions/str_classes"): 'tf.AsString' op is neither a custom op nor a flex op
:0: error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.AsString {device = "", fill = "", precision = -1 : i64, scientific = false, shortest = false, width = -1 : i64}
tf.BoostedTreesEnsembleResourceHandleOp {container = "", device = "", shared_name = "boosted_trees/"}
tf.BoostedTreesPredict {device = "", logits_dimension = 7 : i64, num_bucketized_features = 18 : i64}
```
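For context, the converter lists every op it cannot place as a builtin, custom, or flex op. One way to enumerate the TF ops a graph actually contains, before attempting conversion, is to walk the concrete function's graph. A minimal sketch, where a tiny `AsString`-producing function stands in for the boosted tree model (for a real SavedModel you would take `loaded.signatures["serving_default"]` after `tf.saved_model.load`):

```python
import tensorflow as tf

# Hypothetical stand-in function; tf.strings.as_string lowers to the
# raw TF "AsString" op that the converter flagged above.
@tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
def to_str(x):
    return tf.strings.as_string(x)

cf = to_str.get_concrete_function()
# Collect the distinct op types present in the traced graph.
ops = sorted({op.type for op in cf.graph.get_operations()})
print(ops)
```

Any op in this list that is neither a TFLite builtin nor on the select-TF-ops allowlist will trigger the "neither a custom op nor a flex op" error during conversion.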
Will this change the current API? How?
Not sure.

Who will benefit from this feature?

Anyone who wants to use a boosted tree TensorFlow Lite model will benefit.

Any other info.
Thanks to @MeghnaNatraj and @abattery for responding to issue #50667. After referring to the op select allowlist guide (https://www.tensorflow.org/lite/guide/op_select_allowlist#add_tensorflow_core_operators_to_the_allowed_list), I have raised this feature request to add the unsupported ops.

@Koushik667 Koushik667 added the type:feature Feature requests label Jul 16, 2021
@UsharaniPagadala UsharaniPagadala added comp:lite TF Lite related issues TF 2.4 for issues related to TF 2.4 comp:ops OPs related issues labels Jul 16, 2021
@ymodak ymodak added stat:awaiting tensorflower Status - Awaiting response from tensorflower and removed comp:ops OPs related issues labels Jul 16, 2021
@ymodak ymodak assigned MeghnaNatraj and unassigned ymodak Jul 16, 2021
@Koushik667 (Author)

In order to make the code changes, I tried to build TensorFlow from source on my Mac, but it failed at the last step, where the TensorFlow wheel file is installed through pip; I have raised issue #50829 for this. Please let me know how I can proceed.

@Koushik667 (Author) commented Jul 28, 2021

Hello, I was able to build from source on a Linux machine (Ubuntu 18.04.5). However, I am getting this error after adding the necessary ops in the code.

```
Hello from TensorFlow C library version 2.6.0-rc1
The elapsed time is 0.000060 seconds
INFO: Created TensorFlow Lite delegate for select TF ops.
2021-07-28 11:52:42.296825: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO: TfLiteFlexDelegate delegate: 37 nodes delegated out of 117 nodes with 4 partitions.

2021-07-28 11:52:42.331907: W tensorflow/core/framework/op_kernel.cc:1692] OP_REQUIRES failed at quantile_ops.cc:461 : Not found: Container localhost does not exist. (Could not find resource: localhost/boosted_trees/QuantileAccumulator/)
ERROR: Container localhost does not exist. (Could not find resource: localhost/boosted_trees/QuantileAccumulator/)
(while executing 'BoostedTreesQuantileStreamResourceGetBucketBoundaries' via Eager)
ERROR: Node number 117 (TfLiteFlexDelegate) failed to invoke.
```

Attaching the diff file here: tf_diff.txt

PS: I am using the TensorFlow C API, and the code is attached here: boosted_tree_tflite.c
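The invoke sequence in the attached C code has the same shape as the Python `tf.lite.Interpreter` call pattern sketched below. A tiny doubling function stands in for the boosted tree model here, since the real model needs the select-TF-ops (flex) delegate and cannot be reproduced inline:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the real boosted tree model would instead be loaded
# with tf.lite.Interpreter(model_path=...) in a build that links the flex delegate.
@tf.function(input_signature=[tf.TensorSpec([1, 2], tf.float32)])
def double(x):
    return 2.0 * x

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()])
tflite_bytes = converter.convert()

# Same sequence as the C API: create interpreter, allocate, set input,
# invoke, read output.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.array([[1.0, 2.0]], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])  # [[2., 4.]]
```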

Thank you,
Koushik

@MeghnaNatraj (Member)

Here are a couple of suggested solutions:

  1. #28287 (comment)
  2. (related to 1.) StackOverflow: Solution A and StackOverflow: Solution B
  3. A comment that discusses a possible cause

Let me know if this works.

@Koushik667 (Author)

Hi Meghna, thanks for the response.
I don't understand how I can use points 1 and 2 you mentioned, since I am using the C API for inference whereas they use the Python API.
About the third link, I did not understand the following comment by @mrry:

> Thanks for tracking that down! I think this is a legitimate bug, introduced in 9f4118d. That change modifies most iterators to use the same Device, FunctionLibraryRuntime, and ResourceMgr as the op that created them, which enables the resource-capturing logic to be simplified, because handles are valid in both the caller and the callee.

Does it mean I essentially have to train the model and then load it on the same device for it to work?

@MeghnaNatraj (Member)

I think the environment is fine, but the boosted trees implementation may be using streaming mechanisms that aren't supported by TFLite on-device. It looks like #41226 faces a similar issue. I'll look into this further and get back to you.

@abattery (Contributor) commented Aug 3, 2021

@Koushik667 could you share reproducible steps for creating the above TensorFlow model so we can debug?

@Koushik667 (Author)

Sure,

I have created a sample program for the Titanic dataset which takes only 2 numerical input features for simplicity.
Attaching the code as .txt files, since GitHub does not allow .c or .py attachments.

Dataset csv files:
train.csv
eval.csv

Here is the Python program that creates the TensorFlow model using the above CSV files:
BoostedTree.txt

Here is the converter code which converts the generated TensorFlow model to TensorFlow Lite: converter_code.txt

Here is the diff file which I applied after building from source: tf_diff.txt

Here is the C code which loads and runs the TensorFlow Lite model:
test.txt

I run this code using this command:

```
gcc -I../tensorflow/ test.c -Wl,-rpath=/home/luser/tensorflow/bazel-bin/tensorflow/lite/c/ -L/home/luser/tensorflow/bazel-bin/tensorflow/lite/c/ -ltensorflowlite_c -o test.o
```

Also attaching the TensorFlow and TensorFlow Lite models in a zip file:
Archive.zip
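For readers without the attachments, the conversion step described above amounts to the sketch below. A trivial `tf.Module` stands in for the boosted tree model (the actual model comes from BoostedTree.txt), and the paths are placeholders:

```python
import tensorflow as tf

# Placeholder model; the issue's real model is a boosted trees estimator.
class TinyModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 2], tf.float32)])
    def __call__(self, x):
        return tf.reduce_sum(x, axis=1)

tf.saved_model.save(TinyModel(), "/tmp/tiny_saved_model")

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/tiny_saved_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # TFLite builtin kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels for allowlisted ops
]
tflite_model = converter.convert()

with open("/tmp/tiny_model.tflite", "wb") as f:
    f.write(tflite_model)
```

With the stand-in model this conversion succeeds; with the boosted tree model it fails as shown earlier, because the BoostedTrees ops are not on the select-TF-ops allowlist, which is what this feature request asks to change.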

@abattery (Contributor) commented Aug 4, 2021

Could you share the above script as a Colab notebook, if possible?

@Koushik667 (Author)

@Koushik667 (Author)

@abattery Please let me know if you need anything from my side.

@Koushik667 (Author)

Hi @abattery @MeghnaNatraj, can you please let me know whether this issue is solvable, and whether it is a work in progress?

@Koushik667 (Author)

@renjie-liu Any comments on this issue?

@renjie-liu (Member)

Adding Karim, who may have a better idea.

@Koushik667 (Author)

@karimnosseir Any updates on this issue?

@Koushik667 (Author)

Hi @MeghnaNatraj, can you please comment on what can be done about this issue?

@terryheo (Member)

These ops were removed from TF, so TFLite doesn't support them either.
