Container __per_step_0 does not exist #50631
Labels
- stale — This label marks the issue/PR stale, to be closed automatically if no activity
- stat:awaiting response — Status: awaiting response from author
- TF 2.3 — Issues related to TF 2.3
- type:others — Issues not falling under bug, performance, support, build and install, or feature
Please make sure that this is an issue related to performance of TensorFlow.
As per our
GitHub Policy,
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub.
System information
Describe the current behavior
While running multiple object detection inferences in parallel, the session crashed with the error below.
```
Traceback (most recent call last):
    output_dict = model(input_tensor)
  File "/home//.conda/envs//lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1605, in __call__
    return self._call_impl(args, kwargs)
  File "/home//.conda/envs//lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1645, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/home//.conda/envs//lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/home//.conda/envs//lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/home//.conda/envs//lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.NotFoundError: Container __per_step_0 does not exist. (Could not find resource: __per_step_0/_tensor_arraysBatchMultiClassNonMaxSuppression/map/TensorArray_11_950)
	 [[node BatchMultiClassNonMaxSuppression/map/while/TensorArrayWrite_5/TensorArrayWriteV3 (defined at /home/workspace//utils/field_detection.py:15) ]] [Op:__inference_pruned_41885]

Function call stack:
pruned
```
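This `NotFoundError` typically shows up when the same loaded concrete function is executed from several threads at once: ops such as `TensorArray` allocate resources in a per-step container (`__per_step_0`), and one in-flight execution can tear the container down while another is still using it. One common mitigation, sketched below, is to serialize calls into the model with a lock. `run_inference` here is a hypothetical stand-in for the loaded detection model (in the real code the call would go through the function returned by `tf.saved_model.load`):

```python
import threading

# Hypothetical stand-in for the loaded detection model; in the real code
# this would be the concrete function obtained from tf.saved_model.load(...).
def run_inference(input_tensor):
    return {"detections": [x * 2 for x in input_tensor]}

# One lock shared by all worker threads: only one graph execution runs at a
# time, so per-step resources (e.g. TensorArrays) are never used concurrently.
_model_lock = threading.Lock()

def run_inference_serialized(input_tensor):
    with _model_lock:
        return run_inference(input_tensor)
```

Worker threads would call `run_inference_serialized` instead of the model directly. This caps throughput at one graph execution at a time; loading a separate model copy per process is an alternative when parallel throughput matters.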
Describe the expected behavior
Inference should run to completion without the NotFoundError when multiple model calls are made in parallel.
Standalone code to reproduce the issue
It was not possible to reproduce the issue with standalone code.
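Since the failure seems timing-dependent, a stress harness that hammers the model from many threads may help trigger it. The sketch below is a generic harness with a hypothetical `stress` helper; `call` would be the loaded model's inference function in an actual reproduction attempt:

```python
import threading

def stress(call, n_threads=8, n_iters=50):
    """Invoke `call` concurrently from many threads and collect any errors."""
    errors = []

    def worker():
        for _ in range(n_iters):
            try:
                call([1, 2, 3])  # placeholder input; a real run would pass an image tensor
            except Exception as exc:  # record instead of crashing the thread
                errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors
```

If the bug is a per-step-container race, `errors` should occasionally contain the `NotFoundError` when `call` is the unserialized model.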