
tensorrt_yolo/lidar_centerpoint - Build Failed - "Workspace is too small!" #4118

Open
harishkumarbalaji opened this issue Jan 24, 2024 · 4 comments
Labels
app:awsim AWSIM Autoware Simulator component:simulation Virtual environment setups and simulations. type:bug Software flaws or errors.

Comments

@harishkumarbalaji
Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

Hello Team, I am sure that my PC meets the basic requirements, but when I try to build the awsim-stable branch I get a "Workspace is too small!" error while building the tensorrt_yolo or lidar_centerpoint package.

My Laptop Specs,

  • 16GB RAM with 32GB Swap
  • i7 - 9th gen - 12 core
  • 500GB SSHD
  • Nvidia GeForce GTX 1660 Ti Mobile - 6GB
  • Ubuntu 22
  • Development Docker(latest-cuda) image
  • CUDA and nvidia-docker2 installed

Note: I tried the Troubleshooting steps for memory, but no luck.

Expected behavior

All packages should build successfully, as described in the documentation: https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/docker-installation-devel/

Actual behavior

/home/hkb/autoware/src/universe/autoware.universe/perception/tensorrt_yolo/lib/include/cuda_utils.hpp(110): error: namespace "cuda::std" has no member "runtime_error"
      throw std::runtime_error("Workspace is too small!");
                 ^

6 errors detected in the compilation of "/home/hkb/autoware/src/universe/autoware.universe/perception/tensorrt_yolo/lib/src/plugins/nms.cu".
CMake Error at nms_plugin_generated_nms.cu.o.Release.cmake:280 (message):
  Error generating file
  /home/hkb/autoware/build/tensorrt_yolo/CMakeFiles/nms_plugin.dir/lib/src/plugins/./nms_plugin_generated_nms.cu.o

gmake[2]: *** [CMakeFiles/nms_plugin.dir/build.make:1189: CMakeFiles/nms_plugin.dir/lib/src/plugins/nms_plugin_generated_nms.cu.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:198: CMakeFiles/nms_plugin.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
---
Failed   <<< tensorrt_yolo [5.26s, exited with code 2]

Steps to reproduce

colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release

inside latest-cuda docker container.

Versions

No response

Possible causes

No response

Additional context

No response

@xmfcx (Contributor) commented Jan 26, 2024

error: namespace "cuda::std" has no member "runtime_error"

The error here is a compilation failure (the compiler cannot resolve runtime_error), not the std::runtime_error("Workspace is too small!") exception actually being thrown at runtime.

Could you run nvcc -V in the docker container and share the results?

@harishkumarbalaji (Author) commented Jan 26, 2024

> Could you run nvcc -V in the docker container and share the results?

I am using ghcr.io/autowarefoundation/autoware-universe:latest-cuda, but the nvcc -V command does not work inside the docker container:

bash: nvcc: command not found

But after updating the PATH so that CUDA is found, using

echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc

I get this output,

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0

@xmfcx (Contributor) commented Jan 29, 2024

This issue was discussed on this Autoware Discord Thread

Problem Cause:

  • The awsim-stable branch is pinned to an old CUDA version of Autoware.
  • a08fc46
    • This is the commit that updated CUDA from 11.6 to 12.3.
    • The docker image generated right before that commit should hopefully work.

Actions proposed:

For a temporary solution,

@xmfcx added the type:bug (Software flaws or errors.), app:awsim (AWSIM Autoware Simulator), and component:simulation (Virtual environment setups and simulations.) labels Jan 29, 2024
@harishkumarbalaji (Author)

Thanks for the reply. I tried the ghcr.io/autowarefoundation/autoware-universe:humble-20231101-cuda docker image on the awsim-stable branch, and the tensorrt_yolo package now builds successfully. I am leaving this issue open since it looks like the Simulation WG will take care of it; feel free to close it if required. Thanks a lot for the help.
