Segmentation fault with TensorRT create_inference_graph #27100
Comments
@isra60 I was not able to reproduce. Could you add more detail about how you set up the tensorflow/tensorrt repository locally? Thanks.
@aaroey I have the same problem. Have you solved it?
Here is my information. System information: TensorRT 5.0.2
@isra60 @huaifeng1993 could you try to use tf-nightly-gpu and see if it can reproduce?
@aaroey I have a similar issue with the same stack trace on TF 2.1, but it works with TF 2.2 dev builds. Do you know of a change that might have gone in to address that?
Hi there, we are checking to see if you still need help with this issue, as you are using an older version of TensorFlow (1.x), which is officially considered end of life. We recommend that you upgrade to version 2.4 or later and let us know if the issue still persists. This issue will be closed automatically 7 days from now. If you still need help, please open a new issue against 2.x and we will get you the right help.
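For anyone following the bot's advice to migrate, the TF 2.x TF-TRT path no longer uses `create_inference_graph`; it converts a SavedModel via `TrtGraphConverterV2`. The sketch below is a hedged starting point, not a definitive recipe: the directory paths and FP16 precision are placeholders, and keyword names have shifted slightly across TF 2.x releases, so check the API docs for your exact version.

```python
# Sketch of the TF 2.x TF-TRT conversion workflow (replaces the TF 1.x
# create_inference_graph call from this issue). Paths and precision mode
# are placeholders; the import is guarded because TensorFlow with TensorRT
# support may not be installed in every environment.
try:
    from tensorflow.python.compiler.tensorrt import trt_convert as trt
except ImportError:  # TensorFlow (or its TensorRT support) is unavailable
    trt = None

def convert_saved_model(input_dir="saved_model", output_dir="saved_model_trt"):
    """Convert a TF 2.x SavedModel with TF-TRT and write the result out."""
    if trt is None:
        raise RuntimeError("TensorFlow with TensorRT support is required")
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_dir,
        conversion_params=trt.TrtConversionParams(precision_mode="FP16"))
    converter.convert()          # replaces compatible subgraphs with TRT ops
    converter.save(output_dir)   # writes the converted SavedModel to disk
    return output_dir
```

Unlike the TF 1.x API, the V2 converter operates on SavedModels rather than frozen GraphDefs, which is why the frozen-graph workflow in this issue has no direct equivalent in 2.x.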
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution: Linux Ubuntu 18.04
TensorFlow installed from (source or binary): binary (tensorflow-gpu)
TensorFlow version (use command below): b'v1.13.1-0-g6612da8951' 1.13.1
Python version: Python 3.6.7
CUDA/cuDNN version: CUDA 10
GPU model and memory: NVIDIA GTX 1060
Describe the current behavior
I'm trying to optimize a TensorFlow model with TensorRT, using the object detection example from https://github.com/tensorflow/tensorrt/tree/master/tftrt/examples/object_detection. The TensorFlow model loads fine, but when I try to optimize it, a segmentation fault is raised.
Describe the expected behavior
Code to reproduce the issue
```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF-TRT module in TF 1.13

with tf.Graph().as_default() as tf_graph:
    with tf.Session(config=tf_config) as tf_sess:
        frozen_graph = trt.create_inference_graph(
            input_graph_def=frozen_graph,
            outputs=output_names,
            max_batch_size=max_batch_size,
            max_workspace_size_bytes=max_workspace_size_bytes,
            precision_mode=precision_mode,
            minimum_segment_size=minimum_segment_size,
            is_dynamic_op=True,
            maximum_cached_engines=maximum_cached_engines)
```
So the segmentation fault occurs inside trt.create_inference_graph.
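One thing worth ruling out before the conversion call (this is a hypothetical diagnostic, not part of the original report) is that the names in `output_names` actually exist as terminal nodes of the frozen graph; a mismatched output list is a plausible source of crashes during conversion. The helper below, with a placeholder `.pb` path and a guarded import, lists the nodes that no other node consumes, which are usually the graph's outputs.

```python
# Hypothetical diagnostic helper: list the terminal nodes of a frozen
# GraphDef so the `outputs=` argument passed to create_inference_graph
# can be double-checked. The import is guarded so the function can be
# defined even where TensorFlow is not installed.
try:
    import tensorflow as tf
except ImportError:
    tf = None

def list_terminal_nodes(pb_path):
    """Return names of nodes that no other node consumes (likely outputs)."""
    if tf is None:
        raise RuntimeError("TensorFlow is required to parse the GraphDef")
    graph_def = tf.compat.v1.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            # Strip the output index (":0") and control-dep marker ("^")
            consumed.add(inp.split(":")[0].lstrip("^"))
    return [n.name for n in graph_def.node if n.name not in consumed]
```

If the names returned here do not match the `outputs=` list being passed in, that mismatch is worth fixing before digging into the native stack trace.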
Other info / logs
This is the log from the Python output.
And this is the call stack from gdb.