Out of memory error #145
Can you send rats.log?
Sorry I missed your request. I can't find a rats.log file anywhere; do I need to enable debugging somewhere?
It should be in the same folder as the db and the other config files. What OS are you using?
Debian 11 (bullseye)
Anyway, even though increasing the memory size that way seems to help, after a few days of running I still get plenty of errors and have to restart the server.
This issue is still marked "waiting for reply"; do you need more information?
Yes. If possible, note when it happens and attach rats.log. It should be in the Rats on the Boat folder; on Unix systems, somewhere under the ~/.config or ~/.share directories.
There is no such file on my system, but I found the directory you mention:
In the directory where I cloned the repo and run the server (web version) I have some of the files you show here (for example version.vrs, sphinx.conf, query.log and searchd.log), but no rats.log.
BTW, I have no clue how to run the GUI version; it isn't explained in the README, and I'm interested.
For the web version it's in the same directory as the sources.
The GUI version is at https://github.com/DEgITx/rats-search/releases/tag/v1.7.1 (the AppImage, for example); it's a desktop application, not the web one.
This morning my instance was not working, and on the console I had millions of lines like this one. I had to kill -9 the server to make it stop; a plain kill signal didn't work.
If it's useful (I doubt it) I'll open a separate issue; please tell me.
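For reference, a plain `kill` sends SIGTERM, which a hung process can ignore; the usual escalation ends with SIGKILL (`kill -9`), which cannot be caught. A minimal sketch of that escalation, assuming the process can be found by the name pattern "rats" (adjust to your setup):

```shell
# Find the server PID (the "rats" pattern is an assumption).
PID=$(pgrep -f rats | head -n 1)

kill -TERM "$PID"                                  # polite request (default kill)
sleep 10
kill -0 "$PID" 2>/dev/null && kill -QUIT "$PID"    # SIGQUIT, the Ctrl-\ equivalent
sleep 5
kill -0 "$PID" 2>/dev/null && kill -KILL "$PID"    # last resort: SIGKILL (kill -9)
```

`kill -0` sends no signal; it only checks whether the process still exists before escalating.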
Please open a separate issue for each problem, and attach the log.
The problem, BTW, is that I still have no rats.log anywhere. Any hint?
I already told you: for the web version, rats.log is in the same folder as the sources.
I will recheck this |
I know, but the file doesn't exist anywhere and, believe me, I looked everywhere. So how can I check why the file is not being created?
This functionality was missing in the web version; I restored it in master.
Can I run master, or do I risk losing the db again?
Yes, you can.
So, I ran master with 16GB of RAM and everything went fine for a couple of days, but eventually I started to get these errors:
Even pressing Ctrl-C multiple times, the server would not stop. In the end I pressed Ctrl-\ to kill it and was able to re-run the server. I don't see anything interesting in the log file; do you want me to attach it?
Please also provide logs/screenshots of the memory usage of the rats processes after a few days of running; that will help too.
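A hypothetical way to collect that memory history without watching htop: sample the resident set size of the matching processes on an interval and append it to a file. The "rats" name pattern, the output file, and the 10-minute interval are all assumptions; adjust to the actual process names on your system.

```shell
# Append a timestamped RSS snapshot of any "rats" process every 10 minutes.
# Stop it with Ctrl-C once enough days of data have accumulated.
while true; do
    date >> rats-mem.log
    # pid, resident memory in KiB, command name
    ps -eo pid,rss,comm | grep -i '[r]ats' >> rats-mem.log
    sleep 600
done
```

The `[r]ats` bracket trick keeps the `grep` process itself out of the results.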
Master seems much more stable; it can now run for a couple of days without issues. After that, though, it starts to hang and I need to kill it. This is the memory situation:
Checking the log, I also see many of these errors:
If useful, I can attach the full logs.
Memory situation before today's crash:
Today it crashed by itself, and these are the latest rows on the console:
BTW, the issue is still marked "awaiting reply"; may I give you more information?
If possible, also take a screenshot of htop after a few days of usage and attach it here.
# [1.8.0](v1.7.1...v1.8.0) (2021-09-12)

### Bug Fixes

* **db:** converting db to version 8 ([c19a95d](c19a95d))
* **db:** moving content type to uint values ([f4b7a8d](f4b7a8d))
* **docker:** moved to 16 version ([1089fa3](1089fa3))
* **linux:** add execute right to searchd.v2 [#152](#152) ([0bc35c5](0bc35c5))
* **linux:** fix convertation of db under linux system [#152](#152) ([ea01858](ea01858))

### Features

* **log:** using tagslog ([750dbfd](750dbfd))
* **server:** missing rats.log functionality restored [#145](#145) ([d5243ff](d5243ff))

### Performance Improvements

* **db:** optimize some tables to stored_only_fields to recrudesce memory usage of big databases [#152](#152) ([762b0d1](762b0d1))
I will collect the relevant statistics of the process. In the meantime: version 1.8, after a few days of working well, crashed in this way:
This is part of the htop output after a couple of days:
htop dump a day later: as you can see, the used memory grows until it reaches the maximum allowed.
I leave my client running for days (I suppose this is the intended usage), but after a few hours I get out-of-heap-memory errors.
Can you provide a way to configure a bigger heap?
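Since rats-search is a Node.js application, the heap ceiling that triggers "out of heap memory" errors is usually V8's old-space limit, which can be raised with the standard `--max-old-space-size` flag (value in megabytes). A sketch, assuming a hypothetical entry point named `rats.js` (the actual entry-point filename may differ in this project):

```shell
# Raise the V8 old-space heap limit to 8 GB for this run.
# "rats.js" is an assumed entry point; substitute the real one.
node --max-old-space-size=8192 rats.js

# Alternatively, set it via NODE_OPTIONS so any child node
# processes spawned by the server inherit the same limit.
export NODE_OPTIONS="--max-old-space-size=8192"
node rats.js
```

Note that raising the limit only buys time if memory use grows without bound, as the htop dumps above suggest; it does not fix a leak.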