
Server is not responding on Mac M1 #13

Open
PandaGab opened this issue Oct 3, 2022 · 32 comments

Comments

@PandaGab

PandaGab commented Oct 3, 2022

Hello,
When I try to log in after a fresh installation, I get the following error:

Thank you for your help,
Gabriel

@joshmoore
Member

What does:

docker-compose ps

show you? If one of the hosts has shut down, we'll likely need to look into the log files. (One possibility is that there isn't enough memory.)

@PandaGab
Author

PandaGab commented Oct 3, 2022

Output from docker-compose ps:

NAME                                 COMMAND                  SERVICE             STATUS              PORTS
docker-example-omero_database_1      "docker-entrypoint.s…"   database            running             5432/tcp
docker-example-omero_omeroserver_1   "/usr/local/bin/entr…"   omeroserver         running             0.0.0.0:4063-4064->4063-4064/tcp
docker-example-omero_omeroweb_1      "/usr/local/bin/entr…"   omeroweb            running             0.0.0.0:4080->4080/tcp

Here are all the logs:

omero-server logs
Running /startup/50-config.py 
OpenSSL 1.0.2k-fips  26 Jan 2017
certificates created: /OMERO/certs/server.pem /OMERO/certs/server.p12
Running /startup/60-database.sh 
postgres connection established
Initialising database
5.11.2
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.DefaultDir=/OMERO/certs
INFO:omero_certificates.certificates:Setting omero.certificates.commonname=localhost
INFO:omero_certificates.certificates:Setting omero.certificates.owner=/L=OMERO/O=OMERO.server
INFO:omero_certificates.certificates:Setting omero.certificates.key=server.key
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.CertFile=server.p12
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.CAs=server.pem
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.Password=secret
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.Ciphers=HIGH
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.ProtocolVersionMax=TLS1_2
INFO:omero_certificates.certificates:Setting omero.glacier2.IceSSL.Protocols=TLS1_0,TLS1_1,TLS1_2
INFO:omero_certificates.certificates:Executing: openssl version
INFO:omero_certificates.certificates:Using existing key: /OMERO/certs/server.key
INFO:omero_certificates.certificates:Creating self-signed certificate: /OMERO/certs/server.pem
INFO:omero_certificates.certificates:Executing: openssl req -new -x509 -subj /L=OMERO/O=OMERO.server/CN=localhost -days 365 -key /OMERO/certs/server.key -out /OMERO/certs/server.pem -extensions v3_ca
INFO:omero_certificates.certificates:Creating PKCS12 bundle: /OMERO/certs/server.p12
INFO:omero_certificates.certificates:Executing: openssl pkcs12 -export -out /OMERO/certs/server.p12 -inkey /OMERO/certs/server.key -in /OMERO/certs/server.pem -name server -password pass:secret
2022-10-03 21:01:50,140 [omego.extern] INFO  Executing : /opt/omero/server/OMERO.server/bin/omero version
OMERO.py version:
OMERO.server version:
5.6.5-ice36-b233
2022-10-03 21:01:57,812 [omego.extern] INFO  Completed [7.665 s]
2022-10-03 21:01:57,818 [    omego.db] INFO  DbAdmin: DbAdmin OMERO.server ...
2022-10-03 21:01:57,819 [omego.extern] INFO  Running [current environment]: /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:01:57,820 [omego.extern] INFO  Executing : /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:02:04,420 [omego.extern] INFO  Completed [6.598 s]
2022-10-03 21:02:04,425 [omego.extern] INFO  Executing [custom environment]: psql -v ON_ERROR_STOP=on -d omero -h database -U omero -w -A -t --version
2022-10-03 21:02:04,465 [omego.extern] INFO  Completed [0.039 s]
2022-10-03 21:02:04,466 [    omego.db] INFO  psql version: psql (PostgreSQL) 11.16

2022-10-03 21:02:04,467 [omego.extern] INFO Running [current environment]: /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:02:04,467 [omego.extern] INFO Executing : /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:02:09,388 [omego.extern] INFO Completed [4.921 s]
2022-10-03 21:02:09,390 [omego.extern] INFO Executing [custom environment]: psql -v ON_ERROR_STOP=on -d omero -h database -U omero -w -A -t -c \conninfo
2022-10-03 21:02:09,439 [omego.extern] INFO Completed [0.049 s]
2022-10-03 21:02:09,442 [ omego.db] INFO Creating SQL: omero-20221003-210209-440521.sql
2022-10-03 21:02:09,443 [omego.extern] INFO Running [current environment]: /opt/omero/server/OMERO.server/bin/omero db script -f omero-20221003-210209-440521.sql omero
2022-10-03 21:02:09,443 [omego.extern] INFO Executing : /opt/omero/server/OMERO.server/bin/omero db script -f omero-20221003-210209-440521.sql omero
2022-10-03 21:02:22,751 [omego.extern] INFO Completed [13.306 s]
2022-10-03 21:02:22,755 [ omego.db] INFO Creating database using omero-20221003-210209-440521.sql
2022-10-03 21:02:22,756 [omego.extern] INFO Running [current environment]: /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:02:22,756 [omego.extern] INFO Executing : /opt/omero/server/OMERO.server/bin/omero config get --show-password
2022-10-03 21:02:27,247 [omego.extern] INFO Completed [4.490 s]
2022-10-03 21:02:27,249 [omego.extern] INFO Executing [custom environment]: psql -v ON_ERROR_STOP=on -d omero -h database -U omero -w -A -t -f omero-20221003-210209-440521.sql
2022-10-03 21:02:28,976 [omego.extern] INFO Completed [1.727 s]
2022-10-03 21:02:28,978 [ omego.db] WARNI stderr: b'psql:omero-20221003-210209-440521.sql:2842: NOTICE: identifier "fkcontraststretchingcontext_codomainmapcontext_id_codomainmapcontext" will be truncated to "fkcontraststretchingcontext_codomainmapcontext_id_codomainmapco"\npsql:omero-20221003-210209-440521.sql:4712: NOTICE: identifier "fklogicalchannel_photometricinterpretation_photometricinterpretation" will be truncated to "fklogicalchannel_photometricinterpretation_photometricinterpret"\npsql:omero-20221003-210209-440521.sql:5697: NOTICE: identifier "fkreverseintensitycontext_codomainmapcontext_id_codomainmapcontext" will be truncated to "fkreverseintensitycontext_codomainmapcontext_id_codomainmapcont"\n'
Running /startup/99-run.sh
Starting OMERO.server

omero-database logs
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....2022-10-03 21:01:23.136 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-10-03 21:01:23.145 UTC [49] LOG: database system was shut down at 2022-10-03 21:01:22 UTC
2022-10-03 21:01:23.150 UTC [48] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

2022-10-03 21:01:23.579 UTC [48] LOG: received fast shutdown request
waiting for server to shut down....2022-10-03 21:01:23.580 UTC [48] LOG: aborting any active transactions
2022-10-03 21:01:23.583 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
2022-10-03 21:01:23.583 UTC [50] LOG: shutting down
2022-10-03 21:01:23.595 UTC [48] LOG: database system is shut down
done
server stopped

PostgreSQL init process complete; ready for start up.

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
2022-10-03 21:01:23.687 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-10-03 21:01:23.687 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-10-03 21:01:23.689 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-10-03 21:01:23.701 UTC [76] LOG: database system was shut down at 2022-10-03 21:01:23 UTC
2022-10-03 21:01:23.705 UTC [1] LOG: database system is ready to accept connections
2022-10-03 21:01:48.522 UTC [84] ERROR: relation "dbpatch" does not exist at character 15
2022-10-03 21:01:48.522 UTC [84] STATEMENT: select * from dbpatch

omero-web logs
Running /startup/50-config.py 
Running /startup/60-default-web-config.sh 
Running /startup/98-cleanprevious.sh 
Running /startup/99-run.sh 
Starting OMERO.web

652 static files copied to '/opt/omero/web/OMERO.web/var/static', 2 post-processed.
Clearing expired sessions. This may take some time... [OK]
[2022-10-03 21:01:55 +0000] [84] [INFO] Starting gunicorn 20.1.0
[2022-10-03 21:01:55 +0000] [84] [INFO] Listening at: http://0.0.0.0:4080 (84)
[2022-10-03 21:01:55 +0000] [84] [INFO] Using worker: sync
[2022-10-03 21:01:55 +0000] [93] [INFO] Booting worker with pid: 93
[2022-10-03 21:01:55 +0000] [95] [INFO] Booting worker with pid: 95
[2022-10-03 21:01:55 +0000] [97] [INFO] Booting worker with pid: 97
[2022-10-03 21:01:56 +0000] [99] [INFO] Booting worker with pid: 99
[2022-10-03 21:01:56 +0000] [101] [INFO] Booting worker with pid: 101
[2022-10-03 21:10:32 +0000] [84] [CRITICAL] WORKER TIMEOUT (pid:97)
[2022-10-03 22:10:32 +0100] [97] [INFO] Worker exiting (pid: 97)
[2022-10-03 21:10:32 +0000] [138] [INFO] Booting worker with pid: 138

@joshmoore
Member

Hi @PandaGab,

thanks for the output. Those look fine. It would help to also take a look at the internal Blitz-0.log. Could you share the following with us?

docker-compose exec omeroserver cat /opt/omero/server/OMERO.server/var/log/Blitz-0.log

@PandaGab
Author

PandaGab commented Oct 4, 2022

Hey!

Interestingly, there are no logs!

sh-4.2$ pwd
/opt/omero/server/OMERO.server/var
sh-4.2$ ls -la
total 12
drwxr-xr-x 2 omero-server omero-server 4096 Jun 29 08:56 .
drwxr-xr-x 1 omero-server omero-server 4096 Jun 29 08:56 ..

@joshmoore
Member

Well that's odd! Can you check how much available memory there is in the omeroserver container?

@PandaGab
Author

PandaGab commented Oct 4, 2022

Sure!

sh-4.2$ whoami
omero-server
sh-4.2$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
overlay         252G  7.5G  232G   4% /

@joshmoore
Member

Thanks. And free -m?

@PandaGab
Author

PandaGab commented Oct 4, 2022

sh-4.2$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7951        1308        5635         352        1007        6120
Swap:          4095           0        4095

@joshmoore
Member

Hmm... that's a bit on the small side, but I would think it would at least produce an error log. For reference:

docker-compose exec omeroserver /opt/omero/server/OMERO.server/bin/omero admin jvmcfg
JVM Settings:
============
blitz=-Xmx7200m -XX:MaxPermSize=1g -XX:+IgnoreUnrecognizedVMOptions
indexer=-Xmx4800m -XX:MaxPermSize=1g -XX:+IgnoreUnrecognizedVMOptions
pixeldata=-Xmx7200m -XX:MaxPermSize=1g -XX:+IgnoreUnrecognizedVMOptions
repository=-Xmx4800m -XX:MaxPermSize=1g -XX:+IgnoreUnrecognizedVMOptions

@PandaGab
Author

PandaGab commented Oct 6, 2022

Thanks for the logs.

FYI: I tried on my old Mac (without the M1 chip) and got a working installation!

@joshmoore
Member

FYI: I tried on my old Mac (without the M1 chip) and got a working installation!

Glad to hear it, but really unfortunate that M1 is causing issues. If you (or anyone else) happens to find an error message or a log of some form, we'd love to see it.

@Helfrid

Helfrid commented May 13, 2023

Has there been any progress on this issue? When I run the docker-compose pull command I get: no matching manifest for linux/arm64/v8 in the manifest list entries. As with PandaGab, I got it to work without problems on an Intel Mac.

@joshmoore
Member

joshmoore commented May 15, 2023

Hi @Helfrid. The "no matching manifest" issue has to do with there not being an arm build on Docker Hub. If you try building the image locally, that should go away, but then I assume you will run into the same problem as @PandaGab.
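To make the failure mode concrete (a generic sketch, not OMERO-specific): a multi-arch image publishes a manifest list enumerating its platforms, and `docker pull` on Apple silicon fails with "no matching manifest" exactly when `arm64` is absent from that list. The JSON below is illustrative; the real output comes from `docker manifest inspect <image>`:

```shell
# Illustrative manifest list containing only an amd64 entry.
manifest='{"manifests":[{"platform":{"architecture":"amd64","os":"linux"}}]}'

if echo "$manifest" | grep -q '"architecture":"arm64"'; then
  echo "arm64 build available"
else
  echo "no arm64 entry: pull fails on Apple silicon with 'no matching manifest'"
fi
```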

Looking at this with @will-moore, I assume what we are running into is that there isn't an upstream build of the Ice packages. My assumption from looking briefly is that the dependencies look like this:

  • The docker build uses an ansible playbook.
  • The ansible playbook uses ansible-role-ice.
  • ansible-role-ice makes use of the redhat.yml tasks.
  • The redhat.yml tasks use the Ice repository but there's no arm build.

The solution, I believe, is to make a repository like zeroc-ice-ubuntu1804 (e.g. a "zeroc-ice-centos7-arm") which builds zeroc on arm and provides a downloadable package.

If someone would like to test this theory (I don't have an M1 to help out 😬), then try a local build as suggested by @khaledk2, who tested on a Raspberry Pi (also arm):

Download ice 3.6.5 using the following command to your home folder:

git clone -b v3.6.5 https://github.com/zeroc-ice/ice.git
cd ice/cpp
make

See further instructions below.


@khaledk2

khaledk2 commented May 16, 2023

Sorry, my fault; there is no need for this line to build Ice locally:

tar xvfz Ice-3.6.5.tar.gz

Also, after the build you should update the PATH and library path to be able to use the build. You can do that by editing the .bash_profile on the Mac machine and adding these two lines at the end of the file:

export PATH=~/ice/cpp/bin:$PATH
export LD_LIBRARY_PATH=~/ice/cpp/lib:$LD_LIBRARY_PATH

Then run this command and everything should be ready:

source .bash_profile
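One macOS-specific caveat (an assumption worth verifying on your machine, since the steps above were tested on Linux): Apple's dynamic linker reads `DYLD_LIBRARY_PATH` rather than `LD_LIBRARY_PATH`, so on the Mac itself the exports would typically look like:

```shell
# macOS: dyld consults DYLD_LIBRARY_PATH; LD_LIBRARY_PATH is a Linux-ism.
# (Note: System Integrity Protection strips DYLD_* vars for protected binaries.)
export PATH=~/ice/cpp/bin:$PATH
export DYLD_LIBRARY_PATH=~/ice/cpp/lib:$DYLD_LIBRARY_PATH
```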

@manics
Member

manics commented May 16, 2023

I've got some Ice packages compiled for Ubuntu arm64 in
https://github.com/manics/omero-ice-docker
https://forum.image.sc/t/omero-server-and-web-installation-on-linux-arm64/75628/3

The binaries are inside the container image because the idea was to use Docker's multi-arch support to compile Ice as part of a cross-architecture build, but (perhaps inevitably) it hit the 6-hour time limit, so I spun up a temporary arm64 GitHub runner on Oracle Cloud instead. Given this cross-build idea didn't work, you're better off following the standard zeroc-ice-ubuntuNNNN workflow linked above.

If you want a fully automated CI build you can sign up for a free Oracle Cloud account which includes "always* free" arm64 compute
https://www.oracle.com/uk/cloud/free/
and spin up a runner (this should be done in a private mirror of the repo for security):
https://github.com/manics/omero-ice-docker/blob/e2be1ddedf623286cbaa013d3569dc035f9ef0fd/.github/workflows/main.yml#L2-L4

(* interpretation of "always" is an exercise for the reader)

@will-moore
Member

I tried this on my Mac M1...

git clone -b v3.6.5 https://github.com/zeroc-ice/ice.git
cd ice/cpp
make

And got the following...

$ make
making all in config
make[1]: Nothing to be done for `all'.
making all in src
making all in IceUtil
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD ArgVector.cpp -MF .depend/ArgVector.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Cond.cpp -MF .depend/Cond.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD ConvertUTF.cpp -MF .depend/ConvertUTF.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD CountDownLatch.cpp -MF .depend/CountDownLatch.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD CtrlCHandler.cpp -MF .depend/CtrlCHandler.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Exception.cpp -MF .depend/Exception.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD FileUtil.cpp -MF .depend/FileUtil.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD InputUtil.cpp -MF .depend/InputUtil.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD MutexProtocol.cpp -MF .depend/MutexProtocol.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Options.cpp -MF .depend/Options.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD OutputUtil.cpp -MF .depend/OutputUtil.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Random.cpp -MF .depend/Random.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD RecMutex.cpp -MF .depend/RecMutex.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD SHA1.cpp -MF .depend/SHA1.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Shared.cpp -MF .depend/Shared.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD StringConverter.cpp -MF .depend/StringConverter.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD StringUtil.cpp -MF .depend/StringUtil.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Thread.cpp -MF .depend/Thread.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD ThreadException.cpp -MF .depend/ThreadException.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Time.cpp -MF .depend/Time.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Timer.cpp -MF .depend/Timer.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Unicode.cpp -MF .depend/Unicode.d
xcrun clang++ -c -pthread -fvisibility=hidden --std=c++11 -I../../include   -DICE_UTIL_API_EXPORTS -I.. -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD UUID.cpp -MF .depend/UUID.d
rm -f ../../lib/libIceUtil.3.6.5.dylib
xcrun clang++  -dynamiclib  -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -L../../lib -o ../../lib/libIceUtil.3.6.5.dylib -install_name @rpath/libIceUtil.36.dylib ArgVector.o Cond.o ConvertUTF.o CountDownLatch.o CtrlCHandler.o Exception.o FileUtil.o InputUtil.o MutexProtocol.o Options.o OutputUtil.o Random.o RecMutex.o SHA1.o Shared.o StringConverter.o StringUtil.o Thread.o ThreadException.o Time.o Timer.o Unicode.o UUID.o 
rm -f ../../lib/libIceUtil.36.dylib
ln -s libIceUtil.3.6.5.dylib ../../lib/libIceUtil.36.dylib
rm -f ../../lib/libIceUtil.dylib
ln -s libIceUtil.36.dylib ../../lib/libIceUtil.dylib
making all in Slice
xcrun clang++ -c -I.. -pthread -fvisibility=hidden --std=c++11 -I../../include -DSLICE_API_EXPORTS -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD Checksum.cpp -MF .depend/Checksum.d
xcrun clang++ -c -I.. -pthread -fvisibility=hidden --std=c++11 -I../../include -DSLICE_API_EXPORTS -arch x86_64 -g -Wall -Werror -mmacosx-version-min=10.9 -MMD CPlusPlusUtil.cpp -MF .depend/CPlusPlusUtil.d
CPlusPlusUtil.cpp:860:52: error: 'ptr_fun<const std::string &, std::string>' is deprecated [-Werror,-Wdeprecated-declarations]
    transform(ids.begin(), ids.end(), ids.begin(), ptr_fun(lookupKwd));
                                                   ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/__functional/pointer_to_unary_function.h:37:1: note: 'ptr_fun<const std::string &, std::string>' has been explicitly marked deprecated here
_LIBCPP_DEPRECATED_IN_CXX11 inline _LIBCPP_INLINE_VISIBILITY
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/__config:1054:39: note: expanded from macro '_LIBCPP_DEPRECATED_IN_CXX11'
#  define _LIBCPP_DEPRECATED_IN_CXX11 _LIBCPP_DEPRECATED
                                      ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/v1/__config:1043:48: note: expanded from macro '_LIBCPP_DEPRECATED'
#    define _LIBCPP_DEPRECATED __attribute__ ((deprecated))
                                               ^
1 error generated.
make[2]: *** [CPlusPlusUtil.o] Error 1
make[1]: *** [Slice] Error 2
make: *** [all] Error 1

@Helfrid

Helfrid commented May 17, 2023

Thanks to @manics and help from @alexherbert, I got this working now!
I pulled the following images:
sudo docker pull ghcr.io/manics/omero-server-docker:ubuntu
sudo docker pull ghcr.io/manics/omero-web-docker:ubuntu
sudo docker pull postgres:11

Then:
git clone https://github.com/aherbert/docker-example-omero -b multi-arch
cd docker-example-omero
sudo docker-compose up -d

Thanks, everyone!

@manics
Member

manics commented May 17, 2023

@Helfrid In case it's not clear, I built those images as a proof-of-concept and they're not maintained. I'm not using OMERO anymore, but now that you know it works it might be good if you (or someone else) made the changes as described in #13 (comment) to get a proper supported production image.

@Helfrid

Helfrid commented May 20, 2023

@joshmoore could you, or someone else from the OMERO team, follow up on @manics' suggestion and set up a supported production image?

@will-moore
Member

Thanks all - this worked for me too: I can log in and import images 👍

docker pull ghcr.io/manics/omero-server-docker:ubuntu
docker pull ghcr.io/manics/omero-web-docker:ubuntu
docker pull postgres:11

git clone https://github.com/aherbert/docker-example-omero -b multi-arch
cd docker-example-omero
docker-compose up -d
omero login

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/ome-types-rewritten-requesting-alpha-testers/83174/23

@will-moore
Member

@joshmoore @jburel - As mentioned in last Tuesday's meeting, and requested above: #13 (comment)
it would be great to get this workflow supported and documented by OME.

As I understand from @pwalczysko (who is also using it) we need OME to host the Ice packages built by @manics (if that's OK?), then use those as in https://github.com/aherbert/docker-example-omero/tree/multi-arch.
Maybe we don't actually want to merge those changes, but either provide the option to use those packages or simply document how to use them?

@jburel
Member

jburel commented Oct 30, 2023

The problem with this workflow is that it is based on a personal image, not something we can support in our documentation.

@manics
Member

manics commented Oct 30, 2023

I'm happy to transfer the repo to an org of your choice, or for someone to copy the code/artifacts and host them elsewhere. Just let me know 😄.

Note the main constraint for having a full CI/CD workflow for arm64 Ice builds is that the Ice artifacts take too long to cross-compile on Docker, so you need an arm64 GitHub runner. Free arm64 VMs are available from Oracle Cloud Infrastructure, but private runners should not be added to public repos: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#self-hosted-runner-security

In practice this probably means having a public and a private mirror of the same repo. Alternatively, since these artifacts are most likely a one-off, you could just copy them, or build them locally once on a Mac M1 in Docker arm64.
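The "build them locally once" option could look roughly like this (a sketch: it assumes Docker Desktop with buildx on an Apple-silicon Mac, and the `YOUR_ORG` registry path is an illustrative placeholder, not an existing repository). The command is composed into a variable first so it can be reviewed before running:

```shell
# One-off native arm64 build and push of the Ice image (illustrative org/tag).
BUILD_CMD='docker buildx build --platform linux/arm64 -t ghcr.io/YOUR_ORG/omero-ice:arm64 --push https://github.com/manics/omero-ice-docker.git'
echo "$BUILD_CMD"
# Uncomment to actually run (needs Docker Desktop + buildx on an M1):
# eval "$BUILD_CMD"
```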

@joshmoore
Member

I now have an M1 machine and can offer to build these artifacts and push them if we want to go that route.

@will-moore
Member

@joshmoore That would be great, thanks! (and thanks to @manics for the offer too).

@joshmoore
Member

So some thoughts/questions after chatting with @sbesson, @jburel, @pwalczysko, @khaledk2 and @dominikl:

  • @manics, are any of the other branches on your forks important? Or can I focus on just ubuntu?
  • Also, can you explain the release process that you use with the -private repos?
  • My plan will be to take the active branches, push them to the ome/ repos, and start experimenting with them. I'm likely to push builds to ghcr.io/ome now even before the GitHub actions are live.
  • These branches & containers should all be considered "integration" or "dev" in preparation for a rollover.

@manics
Member

manics commented Oct 31, 2023

Just ubuntu: https://github.com/manics/omero-server-docker/tree/ubuntu which depends on https://github.com/manics/omero-ice-docker

The private repo is identical to the public one; the only difference is it has a private arm64 runner attached to it. The GH workflow is conditional on
if: github.repository == 'manics/omero-ice-docker-private'
https://github.com/manics/omero-ice-docker/blob/e2be1ddedf623286cbaa013d3569dc035f9ef0fd/.github/workflows/main.yml#L58

Having said all that, the only reason I went down this route is that I didn't fancy forking/modifying the Ansible roles, so to make it faster to iterate I used a plain Dockerfile without Ansible. If you want to stick with the pattern used by the existing Docker images then I (optimistically?) assume you can ignore everything I've done, and instead:

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/omero-server-and-web-installation-on-linux-arm64/75628/6

@beniroquai

@khaledk2 I read that you were able to install an OMERO server on the Raspberry Pi? Great news! How was the performance? I was thinking about running this in combination with a Python-based image acquisition script. Would this already be too much? Which Pi did you use, and were you able to use the Docker-based installation? Thanks :)

@khaledk2

khaledk2 commented Jan 8, 2024

@beniroquai I have installed an OMERO server instance on my Raspberry Pi 4 (quad-core ARM Cortex-A72 processor with 8 GB RAM, OS: Debian 10). I built both OMERO.server and Ice locally. I used venv to create the Python virtual environment and installed omero-py. I configured the database that was already installed. I can start up the server and access it from Insight on my Windows machine. I have tested creating a project and dataset, uploading images, and adding key-value pairs. I think all of them are working fine. I did not perform in-depth testing to evaluate the performance.

@beniroquai

Sounds very cool indeed! Thanks for reporting. Did you also set up any webapp to screen images from another machine? Am I right that you followed this tutorial? https://docs.openmicroscopy.org/omero/5.6.3/sysadmins/unix/server-ubuntu2004-ice36.html
I guess I managed to get to the point where I had to install Ice, but if you say compiling is necessary, I'll try it again.
Docker should have similar performance, right? @manics
