I am trying to backprop/minimize my network without feeding the input vector, since I already have the output and label tensors.

This is the relevant code I'm trying to implement:
```rust
// ...

// backprop
let mut run_args = SessionRunArgs::new();
run_args.add_target(&self.minimize);
let error_squared_fetch = run_args.request_fetch(&self.Error, 0);
// set the output feed manually instead of feeding the input
// TODO: runtime says we need to feed input
run_args.add_feed(&self.Output_op, 0, &output);
run_args.add_feed(&self.Label, 0, &labels);
self.session.run(&mut run_args)?;
let res: Tensor<f32> = run_args.fetch(error_squared_fetch)?;

// ...
```
where `Output_op` and `Label` are my output and label operations, respectively, and `output` and `labels` are my output and label tensors.

`self.minimize` is either the GradientDescent optimizer or the Adadelta optimizer, and the `Error` operation is defined as a function of the output and label exclusively. The network is very similar to the xor example in this repository and comes from my NormNet repo (it's very messy and early-stage, so beware).

Based on my understanding of backprop this should be possible. Is this feature missing, or did I make a mistake? Please let me know how I can clarify this further.
stdout log from the runtime:

```
thread 'tests::test_evaluate' panicked at 'called `Result::unwrap()` on an `Err` value: {inner:0x2147c7c9480, InvalidArgument: You must feed a value for placeholder tensor 'input' with dtype float and shape [1,2]
[[{{node input}}]]}', src\lib.rs:1197:94
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test tests::test_evaluate ... FAILED
```
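For intuition on why the question seems reasonable, here is a minimal sketch in plain Rust (no TensorFlow; all names are illustrative): for a squared-error loss `E = Σ (output_i - label_i)²`, the gradient of `E` with respect to the output really does depend only on the output and label vectors. (Gradients for earlier layers, however, are produced by the chain rule from this delta together with each layer's forward activations.)

```rust
// Squared error, computed from the output and label vectors alone.
fn squared_error(output: &[f32], label: &[f32]) -> f32 {
    output.iter().zip(label).map(|(o, l)| (o - l).powi(2)).sum()
}

// dE/d(output_i) = 2 * (output_i - label_i): the output-layer delta also
// needs only the output and label vectors, not the network input.
fn output_delta(output: &[f32], label: &[f32]) -> Vec<f32> {
    output.iter().zip(label).map(|(o, l)| 2.0 * (o - l)).collect()
}

fn main() {
    let output = [0.5f32, 0.25];
    let label = [1.0f32, 0.0];
    println!("E = {}", squared_error(&output, &label));      // prints "E = 0.3125"
    println!("delta = {:?}", output_delta(&output, &label)); // prints "delta = [-1.0, 0.5]"
}
```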
I am unable to "backfeed" the `Output_op`. It seems to be overridden by the `input` placeholder, since the network forward-propagates anyway after I feed the `Output_op` operation. I believe I have to rework my graph, but this seems like it should be possible; please advise on the order of operations in the graph (pun unintended). From what I have gathered, feeds are set in the graph but overridden if mutated during `session.run()` calls.
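To make the feed-override behavior I'm describing concrete, here is a toy model of it in plain Rust (this is an assumed mental model, not the tensorflow crate's API): evaluating a node checks the run's feeds first, so a fed intermediate node short-circuits everything upstream of it, but any requested target that still reaches an *unfed* placeholder through some other path fails, as in the error above.

```rust
use std::collections::HashMap;

// Toy dataflow-graph node: a placeholder that must be fed, or a node that
// sums the values of its named upstream dependencies.
enum Node {
    Placeholder,
    Sum(Vec<&'static str>),
}

// Evaluate one node for one "run". Feeds are consulted before the graph,
// so feeding an intermediate node cuts off everything upstream of it.
fn eval(
    name: &'static str,
    graph: &HashMap<&'static str, Node>,
    feeds: &HashMap<&'static str, f32>,
) -> Result<f32, String> {
    if let Some(v) = feeds.get(name) {
        return Ok(*v); // the feed overrides the node for this run
    }
    match &graph[name] {
        Node::Placeholder => Err(format!("must feed placeholder '{}'", name)),
        Node::Sum(deps) => deps
            .iter()
            .try_fold(0.0, |acc, d| Ok(acc + eval(*d, graph, feeds)?)),
    }
}

fn main() {
    // input -> output -> error, mirroring the shape of the network above.
    let mut graph = HashMap::new();
    graph.insert("input", Node::Placeholder);
    graph.insert("output", Node::Sum(vec!["input"]));
    graph.insert("error", Node::Sum(vec!["output"]));

    let mut feeds = HashMap::new();
    feeds.insert("output", 2.0);

    // "error" evaluates without "input": the fed "output" short-circuits it.
    println!("{:?}", eval("error", &graph, &feeds));
    // A target that still reaches the unfed placeholder fails instead.
    println!("{:?}", eval("input", &graph, &HashMap::new()));
}
```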