
how to use TensorArrayV3? #221

Open

mmetwalli96 opened this issue Nov 9, 2022 · 8 comments

@mmetwalli96 commented Nov 9, 2022

2022-11-09 11:29:11.466957: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: ./resources/models/dummy_model
2022-11-09 11:29:11.470546: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2022-11-09 11:29:11.470586: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: ./resources/models/dummy_model
2022-11-09 11:29:11.470803: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-09 11:29:11.487647: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2022-11-09 11:29:11.488828: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2022-11-09 11:29:11.547895: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: ./resources/models/dummy_model
2022-11-09 11:29:11.556822: I tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 89859 microseconds.
2022-11-09 11:29:11.561676: W tensorflow/core/framework/op_kernel.cc:1757] OP_REQUIRES failed at op_kernel.cc:148 : UNIMPLEMENTED: Op TensorArrayV2 is not available in GraphDef version 1205. It has been removed in version 26. Use TensorArrayV3.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Op TensorArrayV2 is not available in GraphDef version 1205. It has been removed in version 26. Use TensorArrayV3.
@mmetwalli96 (Author) commented:
I created that op in raw_ops.h. Here is the code I wrote to do this:

inline tensor tensor_array_v3(const tensor& size, datatype dtype,
                              const std::vector<int64_t>& element_shape,
                              bool dynamic_size=false, bool clear_after_read=true,
                              const std::string& tensor_array_name="") {

    // Define Op
    std::unique_ptr<TFE_Op, decltype(&TFE_DeleteOp)> op(TFE_NewOp(context::get_context(), "TensorArrayV3", context::get_status()), &TFE_DeleteOp);
    status_check(context::get_status());

    // Required input arguments
    TFE_OpAddInput(op.get(), size.tfe_handle.get(), context::get_status());
    status_check(context::get_status());

    // Attributes
    TFE_OpSetAttrType(op.get(), "dtype", dtype);
    TFE_OpSetAttrShape(op.get(), "element_shape", element_shape.data(), static_cast<int>(element_shape.size()), context::get_status());
    status_check(context::get_status());
    TFE_OpSetAttrBool(op.get(), "dynamic_size", (unsigned char)dynamic_size);
    TFE_OpSetAttrBool(op.get(), "clear_after_read", (unsigned char)clear_after_read);
    TFE_OpSetAttrString(op.get(), "tensor_array_name", (void*)tensor_array_name.c_str(), tensor_array_name.size());

    // Execute Op
    int num_outputs_op = 1;
    TFE_TensorHandle* res[1] = {nullptr};
    TFE_Execute(op.get(), res, &num_outputs_op, context::get_status());
    status_check(context::get_status());
    return tensor(res[0]);
}

I get this error:

2022-11-09 11:33:10.619252: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: ./resources/models/dummy_model
2022-11-09 11:33:10.625114: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2022-11-09 11:33:10.625179: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: ./resources/models/dummy_model
2022-11-09 11:33:10.625426: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-09 11:33:10.645336: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2022-11-09 11:33:10.646504: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2022-11-09 11:33:10.699616: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: ./resources/models/dummy_model
2022-11-09 11:33:10.708967: I tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 89693 microseconds.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Expecting 2 outputs, but *num_retvals is 1

@mmetwalli96 (Author) commented Nov 9, 2022

Here are the model details:

The given SavedModel SignatureDef contains the following input(s):
  inputs['dense_input'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 7)
      name: serving_default_dense_input:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['dense'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

This is the input I am using to score the model:

 // create a [7,1] tensor with the input values
auto input = tensor_array_v3({param1, param2, param3, param4, param5, param6, param7}, TF_FLOAT, {7, 1}, "input");

auto output = model({{"serving_default_dense_input:0", input}},{"StatefulPartitionedCall:0"});

@serizba (Owner) commented Nov 10, 2022

Hi,

From the error you are getting, what(): Expecting 2 outputs, but *num_retvals is 1, it looks like you should add one more output to the operation.
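For reference, here is a minimal sketch of that change. TensorArrayV3 produces two outputs (the TensorArray resource handle and a float "flow" value), so the execute block would request both; this assumes the same cppflow helpers used in your snippet:

    // Execute Op: TensorArrayV3 returns two outputs, the array's resource
    // handle and a float "flow" scalar used to chain TensorArray ops.
    int num_outputs_op = 2;
    TFE_TensorHandle* res[2] = {nullptr, nullptr};
    TFE_Execute(op.get(), res, &num_outputs_op, context::get_status());
    status_check(context::get_status());
    // res[0] is the handle; res[1] is the flow value.
    return tensor(res[0]);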

Hope this helps!

@mmetwalli96 (Author) commented:
Hi, it seems that didn't work. I think this is because of this code in the tensor_array_v3 definition:

    // Execute Op
    int num_outputs_op = 1;
    TFE_TensorHandle* res[1] = {nullptr};
    TFE_Execute(op.get(), res, &num_outputs_op, context::get_status());
    status_check(context::get_status());
    return tensor(res[0]);

Changing num_outputs_op from one to zero throws a different error, related to the data type. Here is the error message:

  what():  cannot compute TensorArrayV3 as input #0(zero-based) was expected to be a int32 tensor but is a float tensor

@serizba
Copy link
Owner

serizba commented Nov 14, 2022

@mmetwalli96

I guess you mean changing from 1 to 2. The error is saying you should change the datatype; have you tried that?
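
For example, the first input to TensorArrayV3 (the size input) must be an int32 scalar, not the float data itself. A minimal sketch, assuming cppflow's scalar tensor constructor deduces TF_INT32 from an int32_t value:

    // The "size" input of TensorArrayV3 is the number of elements in the
    // array, as an int32 scalar; it is not the tensor data itself.
    tensor size(static_cast<int32_t>(7));  // assumed scalar constructor
    auto ta = tensor_array_v3(size, TF_FLOAT, {7, 1});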

@mmetwalli96 (Author) commented Nov 14, 2022

@serizba

You are right. I tried that, but it didn't make it work.
I also tried using the fill function; this is the error I get, shown in the attached picture:
[screenshot: WhatsApp Image 2022-11-09 at 6 34 21 PM]

@serizba (Owner) commented Nov 14, 2022

All these errors you are posting don't look related to the library itself, but rather to bugs or incorrect usage in your code. Now it looks like you are providing tensors with the wrong shape. I recommend you read the errors carefully and try to debug this yourself.
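
For what it's worth, for a plain dense input like this one you should not need a TensorArray at all; a float tensor of shape {1, 7} matches the model's (-1, 7) input signature. A minimal sketch, assuming cppflow's vector-plus-shape tensor constructor:

    // Feed the (-1, 7) float input directly; no TensorArrayV3 is required.
    auto input = tensor(std::vector<float>{param1, param2, param3, param4,
                                           param5, param6, param7}, {1, 7});
    auto output = model({{"serving_default_dense_input:0", input}},
                        {"StatefulPartitionedCall:0"});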

Hope this helps

@mmetwalli96 (Author) commented:
No problem, I will try to look further into that. Thanks
