
TensorArray and ops::tensor_array_write* shorthand function are broken #358

Open
Corallus-Caninus opened this issue Apr 15, 2022 · 4 comments

Comments

@Corallus-Caninus
Contributor

Corallus-Caninus commented Apr 15, 2022

I will be referencing the following documentation:
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array-write

as well as the following source code:
src/graph.rs @ line 1195: the get_attr_metadata function and the get_attr_* functions that follow it
src/ops/ops_impl @ line 123054: the TensorArrayWriteV3 struct and its implementations

According to the C++ API, TensorArray is expected to return both a handle and a flow output.

Both of these Output values are required as inputs to tensor_array_write (tensor_array_write_v3 in my case).

However, going through the source in graph.rs, I can only read attributes back as native types (e.g. &str and f32), which does not recover these outputs. TensorArrayV3's constructor only returns one Output type.

tensor_array_write_v3() has all of its generics set as Into<Output>, which is correct, but I cannot retrieve these outputs from the TensorArrayV3::build() constructor, which only returns one Output type.
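
For comparison, this is roughly the shape of API I would expect (a hypothetical build() that surfaces both outputs, mirroring the C++ op; this is not the current crate API):

// Hypothetical sketch only, not the current tensorflow crate: a build()
// returning both outputs of the TensorArrayV3 node, handle (output 0) and
// flow (output 1), so they can be fed straight into tensor_array_write_v3.
let (handle, flow): (Output, Output) = TensorArrayV3::new()
    .dynamic_size(false)
    .dtype(DataType::Float)
    .build(size, scope)?;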

I have tried passing this Output type into tensor_array_write_v3 by cloning my TensorArray Operation for the handle and flow parameters, hoping TensorFlow would associate the handle and flow attributes with the correct Output types, but this throws the following internal error:

thread 'layers::test_concat_split' panicked at 'called `Result::unwrap()` on an `Err` value: {inner:0x20a622a30e0, InvalidArgument: Input 'flow_in' passed resource expected float while building NodeDef 'TensorArrayWriteV3' using Op<name=TensorArrayWriteV3; signature=handle:resource, index:int32, value:T, flow_in:float -> flow_out:float; attr=T:type; is_stateful=true>}', src\layers.rs:176:58
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test layers::test_concat_split ... FAILED

For posterity, this is the function being developed and tested:

/// Merge layers take in a vector of layers and concatenate them into a single layer.
fn merge<O1: Into<Output>>(
    inputs: Vec<O1>,
    scope: &mut Scope,
) -> Result<(Output, Operation), Status> {
    // Create an ops::constant for the number of inputs.
    let size = ops::constant(inputs.len() as i32, scope)?;
    let axis_zero = ops::constant(0, scope)?;

    // Create a TensorArray to hold the inputs; don't size it dynamically so we can
    // statically optimize the memory allocation shape a little.
    let input_tensor: TensorArrayV3 = TensorArrayV3::new()
        .dynamic_size(false)
        .dtype(DataType::Float);
    let input_tensor_op = input_tensor.build(size, scope)?;
    // Now insert each entry in inputs into the input_tensor.
    for (i, input) in inputs.into_iter().enumerate() {
        // Create a TensorArrayWrite op for each input.
        let index_constant = ops::constant(&[i as i32], scope)?;

        let writer: Output =
            ops::tensor_array_write_v3(
                // input_tensor_op.clone().get_attr_string("handle")?.into(),
                input_tensor_op.clone(),
                index_constant,
                input,
                // TODO: TensorArray doesn't return flow; hopefully the flow attribute
                //       of this op will be read when XLA builds the graph if I pass the TensorArray here.
                // input_tensor_op.clone().get_attr_float("flow")?.into(),
                input_tensor_op.clone(),
                scope,
            )?
            .into();
    }

    let group_op = ops::concat::<Operation, Operation>(axis_zero, input_tensor_op, scope)?;
    Ok((group_op.clone().into(), group_op.clone()))
}
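
My guess at the root cause (a minimal sketch, assuming the crate's Operation-to-Output conversion always picks output index 0): cloning the TensorArray Operation into the flow_in parameter hands it the resource handle rather than the float flow tensor, which matches the error above.

// Assumption: From<Operation> for Output selects output index 0, i.e. the
// TensorArray's resource handle, so flow_in never receives the float flow tensor.
let as_output: Output = input_tensor_op.clone().into();
assert_eq!(as_output.index, 0); // handle, not flow -> "passed resource expected float"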
@Corallus-Caninus
Contributor Author

Corallus-Caninus commented Apr 17, 2022

I was able to work around this by manually creating Outputs: after using some reflection on the tensorflow objects to find the outputs, I indexed the Outputs from the Operation (method table?).


// Verify the outputs exist via tensorflow reflection:
let output_list_lengths = input_tensor_op.clone().output_list_length("flow");
println!("flow output has length: {:?}", output_list_lengths);
let output_list_lengths = input_tensor_op.clone().output_list_length("handle");
println!("handle output has length: {:?}", output_list_lengths);

// I guessed indices until each Output had the correct type; the documentation
// seems to list them in index order.
// Create each Output manually by giving its operation and index:
let handle_output = tf_output {
    operation: input_tensor_op.clone(),
    index: 0,
};
let flow_output = tf_output {
    operation: input_tensor_op.clone(),
    index: 1,
};
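
With those in hand, the write op can presumably be wired with the properly indexed outputs (an untested sketch using the names above):

// Sketch: pass the manually indexed outputs instead of the cloned Operation,
// so flow_in receives the float flow tensor (output index 1).
let write_op = ops::tensor_array_write_v3(
    handle_output,
    index_constant,
    input,
    flow_output,
    scope,
)?;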

However, this does not close the issue.

@Corallus-Caninus Corallus-Caninus changed the title TensorArray and tensor_array_write* are broken TensorArray and tensor_array_write* shorthand function are broken Apr 17, 2022
@Corallus-Caninus Corallus-Caninus changed the title TensorArray and tensor_array_write* shorthand function are broken TensorArray and ops::tensor_array_write* shorthand function are broken Apr 17, 2022
@dskkato
Contributor

dskkato commented Apr 18, 2022

Duplicate of #357
Edited: this is not a duplicate of the above issue.

@Corallus-Caninus
Contributor Author

Is there any plan to make Operation Output returns a little more ergonomic? I currently have to count the position of an Operation's Outputs and then index them according to the C++ documentation. I think I could write some bindgen to generate a .get_output_* method for each output, but I am still learning the codebase.
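
For example, something like this extension trait would already help (a sketch only; nth_output is a hypothetical name, and it just wraps the crate's public Output { operation, index } fields):

use tensorflow::{Operation, Output};

// Hypothetical helper, not part of the crate: index an Operation's outputs
// by position instead of building Output structs by hand.
trait NthOutput {
    fn nth_output(&self, index: i32) -> Output;
}

impl NthOutput for Operation {
    fn nth_output(&self, index: i32) -> Output {
        Output {
            operation: self.clone(),
            index,
        }
    }
}

// Usage: let flow = input_tensor_op.nth_output(1);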

@Corallus-Caninus
Contributor Author

Can we close this with Operation instances?
