
Installation error when adding user to vboxsf group #189

Open
cat7 opened this issue Nov 12, 2020 · 11 comments

@cat7

cat7 commented Nov 12, 2020

Hi Maarten,

I attempted installation of LaMachine on both Ubuntu 20 and Fedora 33.
Both fail with:

TASK [lamachine-core : Adding user to vboxsf group for shared folder access (if it exists)] ***************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"name": "xxx", (username anonymised)
"rc": 10
}

MSG:

usermod: Permission denied.
usermod: cannot lock /etc/group; try again later.

Anything I can do to prevent this?

@proycon
Owner

proycon commented Nov 12, 2020

Interesting, I haven't seen this before, but you did reproduce it on two distributions.
What flavour are you installing? A local native installation, or is this inside a virtual machine? And I assume you entered the right password for sudo permissions?

In any case, I implemented a (temporary) patch that discards this error, so it's no longer a show-stopper.
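
(For reference: making an Ansible task non-fatal typically amounts to an `ignore_errors` flag. A hypothetical sketch only; the actual task and variable names in lamachine-core may differ:

```yaml
# Sketch, not the actual LaMachine source: one way to let this task fail softly.
- name: Adding user to vboxsf group for shared folder access (if it exists)
  user:
    name: "{{ unix_user }}"  # hypothetical variable name
    groups: vboxsf
    append: yes
  ignore_errors: yes         # log the usermod failure but continue the play
```
)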

@cat7
Author

cat7 commented Nov 13, 2020

Hi Maarten,

Thanks for getting back to me. I'm sure I entered the correct sudo password ;-) There is only one regular user account on my installations, and it has sudo rights. I attempted the installation in VirtualBox virtual machines, selected 1) a local user environment, used my home folder for the installation, and chose the stable version.

With your patch I get past the vboxsf error.
On the Fedora 33 machine I then ran into a missing libtar.h, so I installed libtar-devel; next textcat.h was missing, so I installed libexttextcat-devel. Even so, I never got the installation to complete successfully. A fresh installation on Ubuntu 20 got me up and running.

btw: I see you install gensim and friends. Perhaps I asked this before, but will you be adding its LSA/LDA implementations to LaMachine?

Groet,
Howard

@cat7
Author

cat7 commented Nov 25, 2020

Hi,

I'm running into a new problem here ;-)
I created a local stable installation on Ubuntu 20 running in VirtualBox, with Alpino and tscan added. Installation went fine, but tscan fails when I try to let it use the woprs.

LaMachine is started with:
~/bin/lamachine-instancename-activate
/home/username/instancename/bin/lamachine-start-webserver

Alpino, both woprs, frog, foliadocserve etc. processes are all started.

Accessing the server through http://hostname, http://127.0.0.1 or http://localhost makes no difference.
http://hostname:7002 and 7020 briefly show the webservice, but then the connection is lost and the woprs get reloaded.
http://hostname/flat works
http://hostname/foliadocserve is not found

When I use the default tscan settings, it starts the analysis but shortly afterwards reports "error obtaining status".
When I exclude the woprs, the analysis completes OK.

Log from tscan says this:
[CLAM Dispatcher] Adding to PYTHONPATH: /home/hsp/OU/lib/python3.8/site-packages/tscanservice
[CLAM Dispatcher] Started CLAM Dispatcher v3.0.20 with tscanservice.tscan (2020-11-25 09:53:04)
[CLAM Dispatcher] Running /home/hsp/OU/src/tscan/webservice/tscanservice/tscanwrapper.py "/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/clam.xml" "/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/.status" "/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/input/" "/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/output/" "/home/hsp/OU/src/tscan" "/home/hsp/OU/opt/Alpino"
[CLAM Dispatcher] Running with pid 3826 (2020-11-25 09:53:04)
TScan 0.9.8
working dir /tmp/tscan-3830/
opened file input/test.txt
analyse tokenized sentence=Dit is een tekstregel ter analyse .
calling Alpino Server
calling Wopr
calling Wopr
done with Alpino Server
No usable FoLia data retrieved from Wopr. Got ''
done with Wopr
No usable FoLia data retrieved from Wopr. Got ''
done with Wopr
analyse tokenized sentence=En nog een zin op dezelfde regel .
calling Alpino Server
failed to open Alpino connection: 127.0.0.1:7003
Reason: invalid socket : ClientSocket: Connection on 127.0.0.1:6 failed (Connection refused)
calling Wopr
failed to open Wopr connection: 127.0.0.1:7020
Reason: invalid socket : ClientSocket: Connection on 127.0.0.1:5 failed (Connection refused)
mv: cannot stat '/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/input//*.csv': No such file or directory
cat: '/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/output//*.words.csv': No such file or directory
cat: '/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/output//*.paragraphs.csv': No such file or directory
cat: '/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/output//*.sentences.csv': No such file or directory
cat: '/home/hsp/OU/var/www-data/tscan.clam/projects/anonymous/test/output//*.document.csv': No such file or directory
Expected output file input/test.txt.tscan.xml not created, something went wrong earlier?
[CLAM Dispatcher] Process ended (2020-11-25 09:53:41, 37.376531s)
[CLAM Dispatcher] Updating project index
[CLAM Dispatcher] Removing temporary files
[CLAM Dispatcher] Finished (2020-11-25 09:53:41), exit code 0, dispatcher wait time 37.0s, duration 37.382804s
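
The "Connection refused" lines suggest the wopr and Alpino ports went away mid-run. A quick way to check whether those services are still listening (ports taken from the log above; this is just a diagnostic sketch, not part of tscan) is a small TCP probe:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

# Ports seen in the log above: woprs on 7002 and 7020, Alpino on 7003.
for port in (7002, 7003, 7020):
    status = "listening" if port_open("127.0.0.1", port) else "refused/unreachable"
    print(port, status)
```

Running this repeatedly while an analysis is in progress would show whether the woprs drop off (e.g. get killed and respawned) at the moment the "error obtaining status" appears.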

@proycon
Owner

proycon commented Nov 25, 2020

The woprs reloading continuously sounds very much like a memory issue. How much memory do you have available in the virtual machine? Wopr is very memory-hungry: you'll need something like 5 to 6GB per wopr, and then there are Frog and Alpino, which may grow in memory when used, so don't attempt it all on a VM with anything less than 16GB.
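
(A back-of-the-envelope budget under those rough estimates; the per-process numbers are guesses from the comment above, not measurements:

```python
# Rough memory budget for a full tscan stack; all figures are assumptions.
wopr_gb_each = 6     # "5 to 6GB per wopr", taking the upper bound
n_woprs = 2          # the two woprs in the log above
frog_alpino_gb = 3   # assumed headroom for Frog + Alpino growth
os_overhead_gb = 1   # assumed OS / webserver overhead

total_gb = n_woprs * wopr_gb_each + frog_alpino_gb + os_overhead_gb
print(total_gb)  # 16, matching the "nothing less than 16GB" advice
```
)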

@cat7
Author

cat7 commented Nov 25, 2020

Thanks, yes, I know these are memory-eaters. I tried with 16 and 24 GB.
But when I click "open foliadocserve in browser" in the portal, I get localhost/foliadocserve not available, while e.g. localhost/flat opens the correct page. Perhaps something was not installed correctly?

@proycon
Owner

proycon commented Nov 25, 2020

That could be an issue with the foliadocserve entry in the portal, yes; if FLAT works fine, then the document server is working as it should.

Regarding wopr, there may be some further clues in /usr/local/var/log/uwsgi/tscan.uwsgi.log. If you have enough memory, then I wouldn't expect the continuous restarts you're experiencing.

@cat7
Author

cat7 commented Nov 25, 2020

This is the tscan log after a clean boot and start of lamachine. Both woprs seem to be listening:

*** Starting uWSGI 2.0.19.1 (64bit) on [Wed Nov 25 13:18:51 2020] ***
compiled with version: 9.3.0 on 13 November 2020 09:35:38
os: Linux-5.4.0-54-generic #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020
nodename: hsp-lamachine
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/hsp/OU/etc/uwsgi-emperor/vassals
detected binary path: /home/hsp/OU/bin/uwsgi
chdir() to /home/hsp/OU/etc/
your processes number limit is 63824
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 127.0.0.1:9920 fd 6
Python version: 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
PEP 405 virtualenv detected: /home/hsp/OU
Set PythonHome to /home/hsp/OU
Python main interpreter initialized at 0x55d5f5f55240
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
mounting /home/hsp/OU/opt/tscanservice/tscan.wsgi on /tscan
WARNING: No MySQL support available in your version of Python! pip install mysqlclient if you plan on using MySQL for authentication
WARNING: No explicit SECRET_KEY set in service configuration, generating one at random! This may cause problems with session persistence in production environments!
WARNING: No user authentication enabled, this is not recommended for production environments!
WARNING: *** NO AUTHENTICATION ENABLED!!! This is strongly discouraged in production environments! ***
WSGI app 0 (mountpoint='/tscan') ready in 2 seconds on interpreter 0x55d5f5f55240 pid: 2118 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 2118)
spawned uWSGI worker 1 (pid: 2148, cores: 1)
[uwsgi-daemons] spawning "/home/hsp/OU/src/tscan/webservice/startfrog.sh" (uid: 1000 gid: 1000)
[uwsgi-daemons] spawning "/home/hsp/OU/src/tscan/webservice/startwopr20.sh" (uid: 1000 gid: 1000)
[uwsgi-daemons] spawning "/home/hsp/OU/src/tscan/webservice/startalpino.sh" (uid: 1000 gid: 1000)
++ hostname
+ '[' hsp-lamachine == mlp01 ']'
+ '[' '!' -z /home/hsp/OU ']'
+ FROGPATH=/home/hsp/OU
+ LOGFILE=/tmp/frog-tscan.log
+ PORT=7001
+ ID=tscan
+ mv /tmp/frog-tscan.log /tmp/frog-tscan.log.sav
[uwsgi-daemons] spawning "/home/hsp/OU/src/tscan/webservice/startwopr02.sh" (uid: 1000 gid: 1000)
mv: cannot stat '/tmp/frog-tscan.log': No such file or directory
+ frog -X --id=tscan --skip=mp -S7001
    13:18:53.78: Starting wopr 1.42
    13:18:53.78: Timbl support built in.
    13:18:53.78: Based on timbl 6.5
    13:18:53.78: Based on libfolia 2.6
    13:18:53.78: ICU support, version 66.1
    13:18:53.78: std::numeric_limits::max() = 2147483647
    13:18:53.78: std::numeric_limits::max() = 9223372036854775807
    13:18:53.78: PID: 2162 PPID: 2160
    13:18:53.78: Running: xmlserver
    13:18:53.78: Starting wopr 1.42
    13:18:53.78: Timbl support built in.
    13:18:53.78: Based on timbl 6.5
    13:18:53.78: Based on libfolia 2.6
    13:18:53.78: ICU support, version 66.1
    13:18:53.78: std::numeric_limits::max() = 2147483647
    13:18:53.78: std::numeric_limits::max() = 9223372036854775807
    13:18:53.78: PID: 2157 PPID: 2155
    13:18:53.78: Running: xmlserver
    13:18:53.78: xmlserver. Returns a FoLiA document over a sequence.
    13:18:53.78: ibasefile: ../data/sonar_newspapercorp_tokenized.3.txt.l0r2_-a4+D.ibase
    13:18:53.78: port: 7002
    13:18:53.78: keep: 1
    13:18:53.78: moses: 0
    13:18:53.78: lb: 1
    13:18:53.78: lc: 0
    13:18:53.78: rc: 2
    13:18:53.78: verbose: 2
    13:18:53.78: timbl:
    13:18:53.78: lexicon ../data/sonar_newspapercorp_tokenized.3.txt.lex
    13:18:53.78: hapax: 0
    13:18:53.78: skip_sm: false
    13:18:53.78: Reading lexicon.
    13:18:53.78: xmlserver. Returns a FoLiA document over a sequence.
    13:18:53.78: ibasefile: ../data/sonar_newspapercorp_tokenized.3.txt.l2r0_-a4+D.ibase
    13:18:53.78: port: 7020
    13:18:53.78: keep: 1
    13:18:53.78: moses: 0
    13:18:53.78: lb: 1
    13:18:53.78: lc: 2
    13:18:53.78: rc: 0
    13:18:53.78: verbose: 2
    13:18:53.78: timbl:
    13:18:53.78: lexicon ../data/sonar_newspapercorp_tokenized.3.txt.lex
    13:18:53.78: hapax: 0
    13:18:53.78: skip_sm: false
    13:18:53.78: Reading lexicon.
    13:19:02.37: Read lexicon, 652691/652691 (total_count=38223560).
    Reading Instance-Base from: ../data/sonar_newspapercorp_tokenized.3.txt.l0r2_-a4+D.ibase
    13:19:02.45: Read lexicon, 652691/652691 (total_count=38223560).
    Reading Instance-Base from: ../data/sonar_newspapercorp_tokenized.3.txt.l2r0_-a4+D.ibase

Size of InstanceBase = 13640943 Nodes, (545637720 bytes), 30.21 % compression
Feature Permutation based on Data File Ordering :
< 1, 2 >
13:21:10.75: Listening...

Size of InstanceBase = 14249860 Nodes, (569994400 bytes), 30.18 % compression
Feature Permutation based on Data File Ordering :
< 2, 1 >
13:21:11.99: Listening...

@cat7
Author

cat7 commented Nov 25, 2020

and the uwsgi log:
[uWSGI] getting INI configuration from /home/hsp/OU/etc/uwsgi-emperor/emperor.ini
Wed Nov 25 13:18:51 2020 - *** Starting uWSGI 2.0.19.1 (64bit) on [Wed Nov 25 13:18:51 2020] ***
Wed Nov 25 13:18:51 2020 - compiled with version: 9.3.0 on 13 November 2020 09:35:38
Wed Nov 25 13:18:51 2020 - os: Linux-5.4.0-54-generic #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020
Wed Nov 25 13:18:51 2020 - nodename: hsp-lamachine
Wed Nov 25 13:18:51 2020 - machine: x86_64
Wed Nov 25 13:18:51 2020 - clock source: unix
Wed Nov 25 13:18:51 2020 - pcre jit disabled
Wed Nov 25 13:18:51 2020 - detected number of CPU cores: 4
Wed Nov 25 13:18:51 2020 - current working directory: /home/hsp
Wed Nov 25 13:18:51 2020 - detected binary path: /home/hsp/OU/bin/uwsgi
Wed Nov 25 13:18:51 2020 - your processes number limit is 63824
Wed Nov 25 13:18:51 2020 - your memory page size is 4096 bytes
Wed Nov 25 13:18:51 2020 - detected max file descriptor number: 1024
Wed Nov 25 13:18:51 2020 - lock engine: pthread robust mutexes
Wed Nov 25 13:18:51 2020 - thunder lock: disabled (you can enable it with --thunder-lock)
Wed Nov 25 13:18:51 2020 - *** starting uWSGI Emperor ***
Wed Nov 25 13:18:51 2020 - Python version: 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
*** has_emperor mode detected (fd: 6) ***
*** has_emperor mode detected (fd: 8) ***
[uWSGI] getting INI configuration from flat.ini
[uWSGI] getting INI configuration from timbl.ini
*** has_emperor mode detected (fd: 7) ***
[uWSGI] getting INI configuration from ucto.ini
*** has_emperor mode detected (fd: 9) ***
[uWSGI] getting INI configuration from piereling.ini
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
*** has_emperor mode detected (fd: 13) ***
[uWSGI] getting INI configuration from alpino.ini
*** has_emperor mode detected (fd: 12) ***
[uWSGI] getting INI configuration from frog.ini
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
*** has_emperor mode detected (fd: 10) ***
[uWSGI] getting INI configuration from tscan.ini
*** has_emperor mode detected (fd: 14) ***
[uWSGI] getting INI configuration from babelente.ini
*** has_emperor mode detected (fd: 11) ***
[uWSGI] getting INI configuration from colibricore.ini
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
open("./python3_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./python3_plugin.so: cannot open shared object file: No such file or directory !!!
open("./logfile_plugin.so"): No such file or directory [core/utils.c line 3732]
!!! UNABLE to load uWSGI plugin: ./logfile_plugin.so: cannot open shared object file: No such file or directory !!!
Wed Nov 25 13:18:51 2020 - *** Python threads support is disabled. You can enable it with --enable-threads ***
Wed Nov 25 13:18:51 2020 - Python main interpreter initialized at 0x55a0433fa780
Wed Nov 25 13:18:51 2020 - your mercy for graceful operations on workers is 60 seconds
Wed Nov 25 13:18:51 2020 - *** Operational MODE: no-workers ***
Wed Nov 25 13:18:51 2020 - spawned uWSGI master process (pid: 2110)
Wed Nov 25 13:18:53 2020 - [emperor] vassal tscan.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal tscan.ini is ready to accept requests
Wed Nov 25 13:18:53 2020 - [emperor] vassal alpino.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal alpino.ini is ready to accept requests
Wed Nov 25 13:18:53 2020 - [emperor] vassal timbl.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal timbl.ini is ready to accept requests
Wed Nov 25 13:18:53 2020 - [emperor] vassal babelente.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal babelente.ini is ready to accept requests
Wed Nov 25 13:18:53 2020 - [emperor] vassal flat.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal flat.ini is ready to accept requests
Wed Nov 25 13:18:53 2020 - [emperor] vassal piereling.ini has been spawned
Wed Nov 25 13:18:53 2020 - [emperor] vassal piereling.ini is ready to accept requests
Wed Nov 25 13:18:54 2020 - [emperor] vassal colibricore.ini has been spawned
Wed Nov 25 13:18:54 2020 - [emperor] vassal colibricore.ini is ready to accept requests
Wed Nov 25 13:18:54 2020 - [emperor] vassal ucto.ini has been spawned
Wed Nov 25 13:18:54 2020 - [emperor] vassal frog.ini has been spawned
Wed Nov 25 13:18:54 2020 - [emperor] vassal frog.ini is ready to accept requests
Wed Nov 25 13:18:54 2020 - [emperor] vassal ucto.ini is ready to accept requests

@cat7
Author

cat7 commented Nov 25, 2020

OK, I reinstalled LaMachine and now run it with 24 GB. The woprs drove system memory use up to 20 GB, but they were stable when analysing a test file. So that was indeed a memory issue, but the foliadocserve issue is still present.

Best,
Howard

@proycon
Owner

proycon commented Nov 25, 2020

But the foliadocserve issue only shows when you access it through the portal, right? FLAT works fine? That may be a bug I'll have to look into then.

@cat7
Author

cat7 commented Nov 25, 2020

Hi, yes that is the case.
