Mongo 5.0.0 crashes but 4.4.6 works #485

Closed
roadsidev opened this issue Jul 16, 2021 · 26 comments · Fixed by #491
Comments

@roadsidev

roadsidev commented Jul 16, 2021

EDIT: conclusion here #485 (comment)


I've already tried too many things and 5.0.0 (or latest) won't work on my Debian distro, but for some reason it works fine on WSL2 (also Debian). If I specify the 4.4.6 version it works great.

The latest version won't start and goes into a restart loop. The Docker logs are empty as well, so I couldn't see what was happening.


My docker-compose.yml:

version: "3.8"
services:
 mongodb:
  image : mongo:latest
  container_name: mongodb
  environment:
  - PUID=1000
  - PGID=1000
  volumes:
  - /home/roadside/mongodb/database:/data/db
  ports:
  - 27017:27017
  restart: unless-stopped
@wglambert

Sounds like the same issue as #484

Though I'm not able to reproduce it, and without logs it's hard to narrow it down any further than that something in the host environment seems to be involved.

$ uname -mrv
4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

$ docker-compose up -d
Creating network "mongo_default" with the default driver
Pulling mongodb (mongo:latest)...
latest: Pulling from library/mongo
Digest: sha256:f4ff7bb4291eb5d3f530a726fc524ba8e4318d652e64f2d58560ff87d083a84c
Status: Downloaded newer image for mongo:latest
Creating mongodb ... done

$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                           NAMES
b44e7fa4219a   mongo:latest   "docker-entrypoint.s…"   11 minutes ago   Up 11 minutes   0.0.0.0:27017->27017/tcp, :::27017->27017/tcp   mongodb
$ docker logs mongodb
{"t":{"$date":"2021-07-16T15:28:46.345+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-07-16T15:28:46.346+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2021-07-16T15:28:46.352+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-07-16T15:28:46.352+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-07-16T15:28:46.356+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-07-16T15:28:46.356+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
{"t":{"$date":"2021-07-16T15:28:46.356+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2021-07-16T15:28:46.357+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"b44e7fa4219a"}}
{"t":{"$date":"2021-07-16T15:28:46.357+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.0","gitVersion":"1184f004a99660de6f5e745573419bda8a28c0e9","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-07-16T15:28:46.357+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2021-07-16T15:28:46.357+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2021-07-16T15:28:46.359+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2021-07-16T15:28:46.359+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=485M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2021-07-16T15:28:46.892+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1626449326:892958][1:0x7fc195dc4c80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:28:46.893+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1626449326:893279][1:0x7fc195dc4c80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:28:46.898+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":539}}
{"t":{"$date":"2021-07-16T15:28:46.898+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-07-16T15:28:46.905+00:00"},"s":"I",  "c":"STORAGE",  "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
{"t":{"$date":"2021-07-16T15:28:46.906+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2021-07-16T15:28:46.908+00:00"},"s":"W",  "c":"CONTROL",  "id":22120,   "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
{"t":{"$date":"2021-07-16T15:28:46.908+00:00"},"s":"W",  "c":"CONTROL",  "id":22178,   "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
{"t":{"$date":"2021-07-16T15:28:46.909+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"922d5d62-54fb-4f84-8e44-daaa4b09d46d"}},"options":{"uuid":{"$uuid":"922d5d62-54fb-4f84-8e44-daaa4b09d46d"}}}}
{"t":{"$date":"2021-07-16T15:28:46.925+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.version","index":"_id_","commitTimestamp":null}}
{"t":{"$date":"2021-07-16T15:28:46.926+00:00"},"s":"I",  "c":"REPL",     "id":20459,   "ctx":"initandlisten","msg":"Setting featureCompatibilityVersion","attr":{"newVersion":"5.0"}}
{"t":{"$date":"2021-07-16T15:28:46.926+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2021-07-16T15:28:46.927+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2021-07-16T15:28:46.927+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
{"t":{"$date":"2021-07-16T15:28:46.927+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2021-07-16T15:28:46.929+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2021-07-16T15:28:46.929+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"8f4a4648-e995-448f-bf46-93296146c08f"}},"options":{"capped":true,"size":10485760}}}
{"t":{"$date":"2021-07-16T15:28:46.935+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":null}}
{"t":{"$date":"2021-07-16T15:28:46.938+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"LogicalSessionCacheRefresh","msg":"createCollection","attr":{"namespace":"config.system.sessions","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"ecf1ee3c-304a-42aa-b1b0-8ded4f98ddf5"}},"options":{}}}
{"t":{"$date":"2021-07-16T15:28:46.940+00:00"},"s":"I",  "c":"CONTROL",  "id":20712,   "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}}
{"t":{"$date":"2021-07-16T15:28:46.940+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-07-16T15:28:46.940+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2021-07-16T15:28:46.941+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2021-07-16T15:28:46.947+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"_id_","commitTimestamp":null}}
{"t":{"$date":"2021-07-16T15:28:46.948+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"lsidTTLIndex","commitTimestamp":null}}
{"t":{"$date":"2021-07-16T15:29:46.915+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449386:914877][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 34, snapshot max: 34 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:30:46.924+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449446:924072][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 36, snapshot max: 36 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:31:46.930+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449506:929925][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:32:46.935+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449566:935232][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 38, snapshot max: 38 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:33:46.940+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449626:940551][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 39, snapshot max: 39 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:34:46.947+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449686:946981][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 40, snapshot max: 40 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:35:46.952+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449746:951982][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 41, snapshot max: 41 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:36:46.961+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449806:961640][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 42, snapshot max: 42 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:37:46.967+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449866:967245][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 43, snapshot max: 43 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-07-16T15:38:46.975+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1626449926:975698][1:0x7fc18d5b2700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 44, snapshot max: 44 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}

@roadsidev
Author

roadsidev commented Jul 16, 2021

One thing I didn't mention is that my Debian is running through Proxmox, so maybe the fact that it's a virtual machine is messing things up? The weird thing is that it works for 4.4.6 but not for 5.0. Maybe I should edit the main post and add this information.

I will try a fresh Debian install on my Proxmox server and see if this happens again. Will report later.

EDIT: Tried a fresh install of Debian (latest netinst image) and it's the same behavior (5.0 doesn't work and 4.4.6 does).


EDIT 2: I also want to add that I tried with a fresh install of Ubuntu Server on my Proxmox VE and it does the same.

@yosifkit
Member

It might be helpful to get the logs of the crashing container. Can you replicate the failure via a plain docker run -it --rm?


I'd just like to point out that the environment variables PUID and PGID have no effect on the mongo images. If you are trying to run as a specific user, then you need to set user: (docker run --user).
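
For illustration, a minimal sketch of what that looks like in a compose file (the 1000:1000 IDs just mirror the PUID/PGID values from the original post, and the bind-mounted data directory is assumed to already be owned by that UID/GID):

  mongodb:
    image: mongo:4.4
    user: "1000:1000"
    volumes:
      - /home/roadside/mongodb/database:/data/db

or equivalently with plain docker run:

$ docker run --user 1000:1000 -v /home/roadside/mongodb/database:/data/db mongo:4.4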

@roadsidev
Author

roadsidev commented Jul 16, 2021

It might be helpful to get the logs of the crashing container. Can you replicate the failure via a plain docker run -it --rm?

I'd just like to point out that the environment variables PUID and PGID have no effect on the mongo images. If you are trying to run as a specific user, then you need to set user: (docker run --user).

If I do docker run -it --rm mongo:4.4.6 it gives a full log.

But if I do docker run -it --rm mongo:latest it just disappears from existence without leaving any trace.

@yosifkit
Member

It seems to be related to running on a Debian Buster host but it works fine on Debian Bullseye. I tried --security-opt seccomp=unconfined with no effect. There was no change when updating libseccomp2 on Buster to the backports version (from 2.3.3-4 to 2.4.4-1~bpo10+1).

mongo and mongod crash in the same way.

I am out of my depth in figuring out where the issue lies. Maybe this debugging output will help someone dig deeper. Here is part of the strace:

...
mprotect(0x7f8e26431000, 4096, PROT_READ) = 0
munmap(0x7f8e26401000, 11584)           = 0
set_tid_address(0x7f8e24f76f50)         = 653
set_robust_list(0x7f8e24f76f60, 24)     = 0
rt_sigaction(SIGRTMIN, {sa_handler=0x7f8e25f91bf0, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO, sa_restorer=0x7f8e25f9f3c0}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {sa_handler=0x7f8e25f91c90, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0x7f8e25f9f3c0}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPN, si_addr=0x563861eb30da} ---
+++ killed by SIGILL (core dumped) +++
Illegal instruction

The backtrace from the core dump:

root@e701f3df7942:/# gdb -c core mongo
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04) 9.2
...
Core was generated by `/usr/bin/mongo'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x00005646a7f8a0da in tcmalloc::SizeMap::Init() ()
(gdb) bt
#0  0x00005646a7f8a0da in tcmalloc::SizeMap::Init() ()
#1  0x00005646a7f929f7 in tcmalloc::Static::InitStaticVars() ()
#2  0x00005646a7f94447 in tcmalloc::ThreadCache::InitModule() ()
#3  0x00005646a7f945dd in tcmalloc::ThreadCache::CreateCacheIfNecessary() ()
#4  0x00005646a803cf35 in tcmalloc::allocate_full_malloc_oom(unsigned long) ()
#5  0x00007f618d7e211a in __newlocale (category_mask=<optimized out>, 
    locale=<optimized out>, base=<optimized out>) at newlocale.c:200
#6  0x00007f618cf580dd in ?? () from /lib/x86_64-linux-gnu/libp11-kit.so.0
#7  0x00007f618df78b8a in ?? () from /lib64/ld-linux-x86-64.so.2
#8  0x00007f618df78c91 in ?? () from /lib64/ld-linux-x86-64.so.2
#9  0x00007f618df6813a in ?? () from /lib64/ld-linux-x86-64.so.2
#10 0x0000000000000001 in ?? ()
#11 0x00007fffd3f058d7 in ?? ()
#12 0x0000000000000000 in ?? ()

This mongo:5 image:

$ docker pull mongo:5
5: Pulling from library/mongo
Digest: sha256:f4ff7bb4291eb5d3f530a726fc524ba8e4318d652e64f2d58560ff87d083a84c
Status: Downloaded newer image for mongo:5
docker.io/library/mongo:5

$ docker run -it --rm mongo:5 bash
$ # try mongo or mongod
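
For anyone wanting to reproduce this kind of trace themselves, a rough sketch of the steps inside the container (strace and gdb are not preinstalled in the image, you may need --cap-add SYS_PTRACE on older Docker versions, and whether a core file lands in the working directory depends on the host's kernel.core_pattern):

$ docker run -it --rm --cap-add SYS_PTRACE mongo:5 bash
root@container:/# apt-get update && apt-get install -y strace gdb
root@container:/# strace -f mongo --version     # dies with SIGILL before printing anything on affected CPUs
root@container:/# ulimit -c unlimited; mongo --version; gdb -c core /usr/bin/mongo -ex bt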

@roadsidev
Author

roadsidev commented Jul 17, 2021

It seems to be related to running on a Debian Buster host but it works fine on Debian Bullseye.

As I mentioned before, it didn't work on a fresh install of Ubuntu Server, which is based on Bullseye (as far as I can tell from cat /etc/debian_version).


@Leandropintogit

Same here.
The only log I found is:
traps: mongod[2994] trap invalid opcode ip:56017a3a6cda sp:7ffda040e0c0 error:0 in mongod[560176496000+5022000]
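
(For reference, that line comes from the kernel log rather than from Docker; a quick way to look for it on your own host, as a sketch:)

$ dmesg -T | grep -i 'invalid opcode'
$ journalctl -k --no-pager | grep -i 'invalid opcode'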

@benny-conn

I'm on Pop!_OS, which is Debian-based, and 5.0.0 crashes on start as well. Had to revert to 4.4.6.

@matthiasradde

Running MongoDB within Docker on Ubuntu on an older CPU (Core2Duo P8600) crashes with mongod 5 (the container restarts with code 132) but runs fine with 4.4.6.
On newer CPUs (Core i5-4310M or AMD Ryzen Threadripper 1920X 12-Core), version 5 starts and runs fine.

What I tried was running it under gdb:

root@5f09ddb07e92:/# gdb /usr/bin/mongod
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
[...]
Reading symbols from /usr/bin/mongod...
(No debugging symbols found in /usr/bin/mongod)
(gdb) run
Starting program: /usr/bin/mongod
warning: Error disabling address space randomization: Operation not permitted
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGILL, Illegal instruction.
0x0000564f3ca21cda in tcmalloc::SizeMap::Init() ()
(gdb)

Here I found the information that a container restarting with code 132 is related to instructions not being available on the CPU:
https://stackoverflow.com/questions/60930359/docker-containers-exit-code-132
https://hub.mender.io/t/unable-to-start-on-premise-demo-server-per-getting-started-instructions/1414/23

In fact my Core2Duo is missing sse4_2 (and other flags, in comparison to my other CPUs). But now I don't know how to proceed further, i.e. how to check the parameters used to compile mongod within the Docker image, ...
But maybe this is something a more enlightened person is able to check?
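
One way to see at least some of the compile parameters baked into the server binaries is the buildInfo server command, which reports the compiler flags and target architecture. A sketch (it has to run somewhere the binary actually starts, so against 4.4 on the affected box, or against 5.0 on an AVX-capable one):

$ docker run -d --name buildinfo-test mongo:4.4
$ sleep 5    # give mongod a moment to come up
$ docker exec buildinfo-test mongo --quiet --eval 'printjson(db.adminCommand({buildInfo: 1}).buildEnvironment)'
$ docker rm -f buildinfo-test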

@Inglebard

Hi,
I also encountered this error.

I noticed it happened after an Ubuntu upgrade: containerd.io:amd64 (1.4.6-1, 1.4.8-1).
I tried downgrading to 1.4.6 without success.

Ubuntu 20.04 (happens on 5.4.0-77-generic):
Linux ns******.eu 5.4.0-80-generic #90-Ubuntu SMP Fri Jul 9 22:49:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Packages updated:

Upgrade: containerd.io:amd64 (1.4.6-1, 1.4.8-1), beamium:amd64 (2.0.7-focal, 2.0.8-bionic), libsystemd0:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), udev:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), libudev1:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), systemd-timesyncd:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), python3-distupgrade:amd64 (1:20.04.33, 1:20.04.35), ubuntu-release-upgrader-core:amd64 (1:20.04.33, 1:20.04.35), qemu-user-static:amd64 (1:4.2-3ubuntu6.16, 1:4.2-3ubuntu6.17), systemd-sysv:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), libpam-systemd:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), systemd:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), libnss-systemd:amd64 (245.4-4ubuntu3.7, 245.4-4ubuntu3.10), apache2-utils:amd64 (2.4.41-4ubuntu3.3, 2.4.41-4ubuntu3.4)

I can run bash but cannot exec the Docker entrypoint:

docker run --rm -it mongo:5.0 bash
root@6041c8b2a4f6:/# docker-entrypoint.sh mongod
Illegal instruction

@Puh00

Puh00 commented Jul 22, 2021

I can also attest that mongo:latest no longer works; I had to downgrade to 4.4.6 for the image to work on a Raspberry Pi 4 (running Ubuntu Server 21.04).

@hbh7

hbh7 commented Jul 22, 2021

Same thing here. It looks like my test instance server updated the MongoDB image to version 5, and then none of the mongo containers would start anymore: empty log output and endless restarting. Rolled back to the "4" tag and everything worked great again. I'm running up-to-date Ubuntu Server 20.04.2 LTS in a VM on an R620 running Proxmox 7, since it sounds like that's relevant.

@piotr-musialek-skyrise

piotr-musialek-skyrise commented Jul 29, 2021

I have the same problem. I'm using a virtual machine with Ubuntu and got this exit code 132. This SO thread and a few others suggest that the CPU is the problem. So I changed the virtual machine to one with a newer and better CPU and more RAM, still with Ubuntu. Fresh install of everything and ... same exit code 132.
I then downgraded to 4.4.7 and ... it works.

@STaRDoGG

STaRDoGG commented Aug 2, 2021

+1 here. Same as the others; using the :latest tag on my machine gives me exit code 132 as well now. I'm running Desktop + Ubuntu (WSL2).

@martadinata666

martadinata666 commented Aug 2, 2021

It seems the AVX CPU feature is needed to run MongoDB 5; can somebody confirm this? cat /proc/cpuinfo | grep --color avx

@wglambert

@martadinata666 interesting find; on my Debian Buster machine (which can run mongo:5) it does have that avx flag:

$ cat /proc/cpuinfo | grep -i avx
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq clwb avx512cd avx512bw avx512vl xsaveopt arat pku ospke
$ uname -mrv
4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

$ cat docker-compose.yml | grep image
  image : mongo:5

$ docker-compose up -d
Creating mongodb ... done

$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED              STATUS              PORTS                                           NAMES
36ec193f8fd6   mongo:5   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:27017->27017/tcp, :::27017->27017/tcp   mongodb
$ docker logs mongodb
{"t":{"$date":"2021-08-02T16:19:45.152+00:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2021-08-02T16:19:45.164+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-08-02T16:19:45.166+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-08-02T16:19:45.166+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-08-02T16:19:45.170+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-08-02T16:19:45.171+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
{"t":{"$date":"2021-08-02T16:19:45.171+00:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2021-08-02T16:19:45.171+00:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"36ec193f8fd6"}}
{"t":{"$date":"2021-08-02T16:19:45.171+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.0","gitVersion":"1184f004a99660de6f5e745573419bda8a28c0e9","openSSLVersion":"OpenSSL 1.1.1f  31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-08-02T16:19:45.172+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2021-08-02T16:19:45.172+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
{"t":{"$date":"2021-08-02T16:19:45.175+00:00"},"s":"W",  "c":"STORAGE",  "id":22271,   "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/data/db/mongod.lock"}}
{"t":{"$date":"2021-08-02T16:19:45.175+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2021-08-02T16:19:45.175+00:00"},"s":"W",  "c":"STORAGE",  "id":22302,   "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."}
{"t":{"$date":"2021-08-02T16:19:45.175+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2021-08-02T16:19:45.175+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=485M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2021-08-02T16:19:45.714+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921185:714759][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-08-02T16:19:45.925+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921185:925803][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-08-02T16:19:46.158+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:158221][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/7260928 to 2/256"}}
{"t":{"$date":"2021-08-02T16:19:46.159+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:159567][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-08-02T16:19:46.231+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:231973][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-08-02T16:19:46.287+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:287648][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2021-08-02T16:19:46.287+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:287891][1:0x7f146768cc80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2021-08-02T16:19:46.311+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1627921186:311087][1:0x7f146768cc80], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}
{"t":{"$date":"2021-08-02T16:19:46.315+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":1139}}
{"t":{"$date":"2021-08-02T16:19:46.315+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-08-02T16:19:46.319+00:00"},"s":"I",  "c":"STORAGE",  "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
{"t":{"$date":"2021-08-02T16:19:46.325+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2021-08-02T16:19:46.326+00:00"},"s":"W",  "c":"CONTROL",  "id":22120,   "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
{"t":{"$date":"2021-08-02T16:19:46.327+00:00"},"s":"W",  "c":"CONTROL",  "id":22178,   "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
{"t":{"$date":"2021-08-02T16:19:46.339+00:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2021-08-02T16:19:46.340+00:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
{"t":{"$date":"2021-08-02T16:19:46.340+00:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2021-08-02T16:19:46.344+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2021-08-02T16:19:46.344+00:00"},"s":"I",  "c":"STORAGE",  "id":20320,   "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"10795fe8-3471-4360-823c-287d43eed4fe"}},"options":{"capped":true,"size":10485760}}}
{"t":{"$date":"2021-08-02T16:19:46.352+00:00"},"s":"I",  "c":"INDEX",    "id":20345,   "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":null}}
{"t":{"$date":"2021-08-02T16:19:46.356+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-08-02T16:19:46.356+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2021-08-02T16:19:46.357+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
{"t":{"$date":"2021-08-02T16:19:47.024+00:00"},"s":"I",  "c":"FTDC",     "id":20631,   "ctx":"ftdc","msg":"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost","attr":{"error":{"code":0,"codeName":"OK"}}}
{"t":{"$date":"2021-08-02T16:20:46.330+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1627921246:330058][1:0x7f145ee7a700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 12, snapshot max: 12 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0)"}}

@martadinata666

martadinata666 commented Aug 2, 2021

Mine is an Intel G5400, which doesn't have the AVX instructions; MongoDB just exits with error 132, illegal instruction.

vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Pentium(R) Gold G5400 CPU @ 3.70GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms invpcid mpx rdseed smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips        : 7399.70
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

Also the same error on a Raspberry Pi 4:

Hardware        : BCM2835
Revision        : c03112
Serial          : 100000007dad98df
Model           : Raspberry Pi 4 Model B Rev 1.2

BogoMIPS        : 108.00
Features        : fp asimd evtstrm crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 3

@matthiasradde

@martadinata666 Thanks for this info - verified on my three boxes:

Ubuntu 20.04 on Intel P8600: no avx, mongo:5 is not able to run.
Windows 10 with Docker running in Hyper-V on Intel Core i5-4310M (checked from within a Docker container): avx2 available, mongo:5 is running.
Debian 10 on AMD Ryzen Threadripper 1920X: avx2 available, mongo:5 is running.

Here is the complete list of flags supported by my CPUs:

status: mongo:5 not working
model name : Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm ida

status: mongo:5 working
model name : AMD Ryzen Threadripper 1920X 12-Core Processor
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xsaves clzero arat overflow_recov succor

status: mongo:5 working
model name : Intel(R) Core(TM) i5-4310M CPU @ 2.70GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt flush_l1d arch_capabilities


@yosifkit
Member

yosifkit commented Aug 3, 2021

Thanks @martadinata666 for the link.

Summary:

For Intel x86_64, MongoDB requires Sandy Bridge or later.
For AMD x86_64, MongoDB requires Bulldozer or later.

Starting in MongoDB 5.0, mongod, mongos, and the legacy mongo shell no longer support x86_64 platforms which do not meet this minimum microarchitecture requirement.

- https://docs.mongodb.com/manual/administration/production-notes/#x86_64

the underlying requirement for the MongoDB 5.0 binary server packages is CPUs with AVX instructions. These are broadly Sandy Bridge or newer Intel CPUs, but there is a caveat:

Not all CPUs from the listed families support AVX. Generally, CPUs with the commercial denomination Core i3/i5/i7/i9 support them, whereas Pentium and Celeron CPUs do not.

- https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2


What does this mean for the mongo image? We currently consume the apt packages and windows downloads as provided by upstream and have no plans to compile them from source.
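
As a practical pre-flight check on x86_64 hosts (just a sketch, not an official support test), you can confirm whether the CPU advertises AVX before pulling 5.0:

$ grep -qw avx /proc/cpuinfo && echo "AVX present, mongo:5 should run" || echo "no AVX, stay on mongo:4.4"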

@yosifkit yosifkit pinned this issue Aug 3, 2021
@yosifkit yosifkit changed the title Mongo 5.0.0 crashes on Debian but 4.4.6 works fine Mongo 5.0.0 crashes but 4.4.6 works Aug 3, 2021
@tianon
Member

tianon commented Aug 3, 2021

(I've proposed #491 which introduces a warning during container startup for affected users.)
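
(For context, the kind of check such a warning needs is small; a hypothetical sketch, not the actual code merged in #491, might look like this in the entrypoint:)

# hypothetical sketch only, not the code from #491
if [ "$(uname -m)" = 'x86_64' ] && ! grep -qw avx /proc/cpuinfo; then
    echo >&2 'WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!'
fi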

nicolasburtey pushed a commit to GaloyMoney/blink that referenced this issue Feb 27, 2023
on mongo 5 I get this error on M1:

galoy-mongodb-1  |
galoy-mongodb-1  | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
galoy-mongodb-1  |   see https://jira.mongodb.org/browse/SERVER-54407
galoy-mongodb-1  |   see also https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
galoy-mongodb-1  |   see also docker-library/mongo#485 (comment)
galoy-mongodb-1  |

Not sure if there are easy ways to solve this; I haven't found any quickly.
@jmhunter

jmhunter commented Apr 26, 2023

For anyone else trying to get mongodb working on older CPUs, I went down this rabbit hole earlier today.

My approach was to build from source, as mentioned at https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2

I wasn't sure about creating a clean build environment, so I used the scripts and Docker image from https://github.com/meteor/mongodb-builder with a few small adjustments (see the sketch after this list):

  • Add 'CCFLAGS=-march=nehalem' to build.sh
  • Adjust the number of CPUs in build.sh to match my machine (I used '-j 6') and adjust the --memory value in run-builder.sh similarly (I used '28g')
  • Specify a version of MongoDB in run-builder.sh (I used '5.0.17'; values can be found at https://www.mongodb.com/try/download/community)
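
For reference, a rough sketch of what those adjustments boil down to if you drive MongoDB's SCons build directly; the requirements file, install targets, and the -march/-j values here are assumptions taken from the bullets above and the upstream build documentation, not the exact contents of the meteor/mongodb-builder scripts:

# inside an Ubuntu 20.04 build environment with the r5.0.17 source tree checked out
$ python3 -m pip install -r etc/pip/compile-requirements.txt
$ python3 buildscripts/scons.py install-mongod install-mongos CCFLAGS="-march=nehalem" -j6 --disable-warnings-as-errors DESTDIR=/opt/mongo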

My test run took about an hour and a half on my desktop machine, and I ended up with a .tgz file that looks promising:

drwxr-xr-x myuser/users         0 2023-04-26 22:52 mongodb-linux-x86_64-5.0.17/
drwxr-xr-x myuser/users         0 2023-04-26 22:58 mongodb-linux-x86_64-5.0.17/bin/
-rwxr-xr-x root/root     62211216 2023-04-26 22:52 mongodb-linux-x86_64-5.0.17/bin/mongos
-rwxr-xr-x root/root     86620528 2023-04-26 22:52 mongodb-linux-x86_64-5.0.17/bin/mongod

Edit: I added the two lines below to my docker-compose.yml file that I use for mongodb, to use my binaries rather than the ones in the docker image, and the image now seems to start up correctly (as far as I can tell):

  mongodb:
    image: "mongo:5.0"
    volumes:
      - "/data/docker/mongodb_data:/data/db"
      - /data/docker/mongodb-linux-x86_64-5.0.17/bin/mongos:/bin/mongos:ro
      - /data/docker/mongodb-linux-x86_64-5.0.17/bin/mongod:/bin/mongod:ro
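
A quick sanity check that the container is really using the bind-mounted binaries (assuming the same service name as above): on a non-AVX CPU the packaged binary would die with an illegal instruction here, so a clean version printout suggests the custom build is in use:

$ docker-compose exec mongodb mongod --version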

@sputnick-dev

sputnick-dev commented May 19, 2023

@jmhunter: did you finally get it to work?

What I've done to run on an old CPU:

apt-get install -y mongodb-org=4.4.16 mongodb-org-server=4.4.16 mongodb-org-shell=4.4.16 mongodb-org-mongos=4.4.16 mongodb-org-tools=4.4.16

See https://stackoverflow.com/a/68973140/465183
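
If you pin an older version like this, it may also be worth holding the packages so a later apt upgrade doesn't pull in 5.0 again; a sketch using standard Debian/Ubuntu tooling:

$ sudo apt-mark hold mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools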

# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   36 bits physical, 48 bits virtual
CPU(s):                          2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           77
Model name:                      Intel(R) Atom(TM) CPU  C2338  @ 1.74GHz
BogoMIPS:                        3500.14
Virtualization:                  VT-x
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
                                  dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch
                                 _perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclm
                                 ulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movb
                                 e popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch cpuid_fault epb pt
                                 i tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm arat

@jmhunter

@jmhunter: did you finally get it to work?

Hi @sputnick-dev - yes, my steps above worked perfectly for me; or at least MongoDB is now running in its container with no issues (I'm still working on the application that uses it...)

What I've done to run on old cpu:

apt-get install -y mongodb-org=4.4.16 mongodb-org-server=4.4.16 mongodb-org-shell=4.4.16 mongodb-org-mongos=4.4.16 mongodb-org-tools=4.4.16

That looks like an installation of a specific (older) MongoDB version from APT sources. My approach was to use version 5 via Docker, but replace the binary shipped in the Docker container with a version I compiled myself with the correct flags for my CPU, hence my use of https://github.com/meteor/mongodb-builder.

eliflores added a commit to serlo/api.serlo.org that referenced this issue Sep 8, 2023
To use the `4.4.6` of `mongo`.
See docker-library/mongo#485.