While setting the acquired shared memory to zero a fatal SIGBUS signal appeared caused by memset. #2287

Closed
liaoyinyu opened this issue May 10, 2024 · 5 comments · Fixed by #2293

Comments

@liaoyinyu

Required information

Operating system:
Ubuntu 20.04 docker

Eclipse iceoryx version:
v2.0.6

Observed result or behaviour:
I compiled ros2_humble in Docker.
The Docker image is ubuntu:focal.
After compiling, I ran the following command:
/root/ros2_humble/install/iceoryx_posh/bin/iox-roudi --config-file /etc/iceoryx/roudi_config.toml

Expected result or behaviour:
Log level set to: [Warning]
Reserving 66761736 bytes in the shared memory [iceoryx_mgmt]
[ Reserving shared memory successful ]
Reserving 149264720 bytes in the shared memory [root]
While setting the acquired shared memory to zero a fatal SIGBUS signal appeared caused by memset. The shared memory object with the following properties [ name = root, sizeInBytes = 149264720, access mode = AccessMode::READ_WRITE, open mode = OpenMode::PURGE_AND_CREATE, baseAddressHint = (nil), permissions = 0 ] maybe requires more memory than it is currently available in the system.

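For context on why it is the memset that crashes, here is a minimal sketch (plain POSIX, not iceoryx code; the object name is made up and the size is taken from the log above). ftruncate() sets the size of a shared memory object without committing pages, so a too-small /dev/shm only becomes visible once the segment is actually written, i.e. when RouDi zeroes it:

// Minimal sketch: ftruncate() succeeds even if /dev/shm cannot back the
// requested size; the SIGBUS is raised when memset touches pages that the
// tmpfs cannot provide.
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    // Illustrative size, taken from the RouDi log above
    constexpr std::size_t SIZE = 149264720;

    int fd = shm_open("/sigbus_demo", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { std::perror("shm_open"); return 1; }

    // Succeeds regardless of how small the tmpfs backing /dev/shm is
    if (ftruncate(fd, SIZE) == -1) { std::perror("ftruncate"); return 1; }

    void* mem = mmap(nullptr, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }

    // Dies with SIGBUS when /dev/shm is too small (e.g. Docker's default --shm-size)
    std::memset(mem, 0, SIZE);

    std::puts("memset succeeded - enough shared memory available");
    munmap(mem, SIZE);
    shm_unlink("/sigbus_demo");
    return 0;
}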
Conditions where it occurred / Performed steps:
Describe how one can reproduce the bug.

Additional helpful information

If there is a core dump, please run the following command and add the output to the issue in a separate comment

/usr/local/bin/iox-roudi --config-file /etc/iceoryx/roudi_config.toml

roudi_config.toml

# Adapt this config to your needs and rename it to e.g. roudi_config.toml
[general]
version = 1

[[segment]]

[[segment.mempool]]
size = 128
count = 10000

[[segment.mempool]]
size = 1024
count = 5000

[[segment.mempool]]
size = 16384
count = 1000

[[segment.mempool]]
size = 131072
count = 200

[[segment.mempool]]
size = 524288
count = 50

[[segment.mempool]]
size = 1048576
count = 30

[[segment.mempool]]
size = 4194304
count = 10
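For reference, a quick back-of-the-envelope check of this config (chunk payload only): 128*10000 + 1024*5000 + 16384*1000 + 131072*200 + 524288*50 + 1048576*30 + 4194304*10 = 148,613,120 bytes, roughly 142 MiB. That lines up with the 149,264,720 bytes RouDi reserves for the user segment in the log above (the difference presumably being per-chunk management overhead), and together with the ~64 MiB iceoryx_mgmt segment it is well above the default /dev/shm size of a typical Docker container.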
@elfenpiff
Contributor

@liaoyinyu this should only happen when there is not enough memory available. Could you please check the output of cat /proc/meminfo to see whether there is enough available?
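Note that inside a container the relevant limit is usually not the total memory reported by /proc/meminfo but the size of the tmpfs mounted at /dev/shm, which is where the POSIX shared memory objects are created. That limit can be checked from inside the container with, for example:

df -h /dev/shm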

@wkaisertexas

I am having the same issue while running inside a Docker container. This is the output I get:

MemTotal:       65522540 kB
MemFree:        44017876 kB
MemAvailable:   57798540 kB
Buffers:          615656 kB
Cached:         14164872 kB
SwapCached:            0 kB
Active:          7129900 kB
Inactive:       11335056 kB
Active(anon):    5038324 kB
Inactive(anon):        0 kB
Active(file):    2091576 kB
Inactive(file): 11335056 kB
Unevictable:     1214692 kB
Mlocked:              64 kB
SwapTotal:       2097148 kB
SwapFree:        2097148 kB
Zswap:                 0 kB
Zswapped:              0 kB
Dirty:                72 kB
Writeback:             0 kB
AnonPages:       4899380 kB
Mapped:          1291320 kB
Shmem:           1353896 kB
KReclaimable:    1084116 kB
Slab:            1384876 kB
SReclaimable:    1084116 kB
SUnreclaim:       300760 kB
KernelStack:       32336 kB
PageTables:        71720 kB
SecPageTables:         0 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    34858416 kB
Committed_AS:   18142792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      136400 kB
VmallocChunk:          0 kB
Percpu:            15552 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
Unaccepted:            0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      901056 kB
DirectMap2M:    18706432 kB
DirectMap1G:    48234496 kB

@elBoberido
Member

@wkaisertexas can you try a memory config which requires less memory? Just to be sure that it's not related to not having enough memory available in Docker.

@wkaisertexas

Sorry @elBoberido, I was able to get this working.

In the version of Docker I was using, you need to explicitly specify the shared memory size with the '--shm-size' flag. The default was really small (basically zero).
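For anyone running into the same thing: the flag is passed to docker run, for example (image name and size are placeholders; pick a size comfortably larger than the segments RouDi tries to reserve):

docker run --shm-size=1g -it <your-image>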

@elBoberido
Member

@wkaisertexas great. I'll close this issue then.

elfenpiff added a commit to elfenpiff/iceoryx that referenced this issue May 16, 2024
elfenpiff added a commit to elfenpiff/iceoryx that referenced this issue May 16, 2024
elBoberido added a commit that referenced this issue May 16, 2024