Automatically release memory #542

Open
ttsite opened this issue Aug 17, 2023 · 20 comments

Comments

ttsite commented Aug 17, 2023

Can you add an automatic memory release function directly to the software?

@SChernykh (Contributor)

Can you explain more? xmrig-proxy doesn't use much memory to begin with.

ttsite (Author) commented Aug 17, 2023

As time goes by, memory usage keeps increasing. What I mean is: could there be a setting that automatically releases memory once usage reaches a certain threshold?

@SChernykh (Contributor)

How much does it increase over time? Is it constantly "leaking"? Then this is a bug, it shouldn't increase.

ttsite (Author) commented Aug 17, 2023

> How much does it increase over time? Is it constantly "leaking"? Then this is a bug, it shouldn't increase.

Continuously increasing!

@SChernykh (Contributor)

I'll look into it next week then. It shouldn't constantly increase. Which xmrig-proxy version do you use? One of the release binaries, or do you compile it yourself?

@SChernykh (Contributor)

I did a quick test with https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer and didn't find memory leaks. Do you use release binaries or do you compile xmrig-proxy yourself?

ttsite (Author) commented Aug 18, 2023

> I did a quick test with https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer and didn't find memory leaks. Do you use release binaries or do you compile xmrig-proxy yourself?

I have tested both the release binaries and a self-compiled build; in both cases memory usage gradually increases the longer the proxy runs. The version is the latest one, and I have been using a third-party memory tool to clean it up. My suggestion is to add a memory cleanup function to the software: set a memory usage limit, and if usage exceeds it, the proxy automatically cleans up and releases memory.

@SChernykh (Contributor)

"Memory organizing function" is not how C++ programs work. If memory usage is growing, it's a memory leak and it's a bug. Your OS can already automatically reduce memory used by programs (it's called swapping), no support from xmrig-proxy is needed.

bwq90 commented Mar 27, 2024

@SChernykh

I am also facing a similar issue with xmrig-proxy. I am using the latest binary on a server with 128 GB of RAM, but xmrig-proxy gets killed after 2 to 3 days with the following errors in dmesg -T:

[Tue Mar 26 10:38:28 2024] Out of memory: Killed process 893099 (xmrig-proxy) total-vm:109881936kB, anon-rss:109167072kB, file-rss:924kB, shmem-rss:0kB, UID:0 pgtables:214536kB oom_score_adj:0
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:38:29 2024] TCP: out of memory -- consider tuning tcp_mem

Strangely, this happens only with the one xmrig-proxy process that has a lot of miners assigned. It does not occur with the other xmrig-proxy processes that have fewer miners connected; those have been running fine for a week since they started.

But the proxy with the high miner count keeps eating up RAM and gets killed every 2 days. I have implemented a shell function that restarts it as soon as it is killed, plus a cron job that drops the page cache once memory usage reaches 70% (see the sketch below).
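
Roughly, the workaround looks like this (a sketch only; the script names and paths are placeholders, not my real setup):

#!/bin/bash
# restart-proxy.sh -- restart the proxy whenever it exits (e.g. after an OOM kill)
while true; do
    /path/to/xmrig-proxy -c /path/to/config.json
    sleep 5   # brief pause before restarting
done

#!/bin/bash
# drop-cache.sh -- called from cron every few minutes (needs root)
used=$(free | awk '/^Mem:/ {printf "%d", $3 * 100 / $2}')
if [ "$used" -gt 70 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches   # frees page cache/dentries/inodes only, not the proxy's own heap
fi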

But I want to get to the bottom of this. I also tried adding 3 servers in parallel to load-balance the high miner count, but xmrig-proxy on those 3 servers also runs out of memory every few days.
I have tuned sysctl with optimal settings.

Can you please help with this?

SChernykh (Contributor) commented Mar 27, 2024

Did you try the values from this article: https://dzone.com/articles/tcp-out-of-memory-consider-tuning-tcp-mem ?

net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_max_syn_backlog=8192
net.ipv4.tcp_rmem=4096 87380 67108864
net.ipv4.tcp_wmem=4096 87380 67108864

And also net.ipv4.tcp_mem = 4096 87380 67108864

Can you check whether the crashing xmrig-proxy is leaking open TCP sockets? You can use the lsof -a -n -p PID | wc -l command to check it.
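
For reference, a quick way to run that check periodically (a sketch; replace PID with the proxy's process id):

# count the proxy's open file descriptors (mostly TCP sockets) every 10 seconds
watch -n 10 'lsof -a -n -p PID | wc -l'
# lighter-weight alternative if lsof is slow when many sockets are open
ls /proc/PID/fd | wc -l

A count that keeps growing points to leaked sockets rather than leaked heap memory.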

bwq90 commented Mar 27, 2024

Thanks for the prompt response, @SChernykh. I have already added these lines to my sysctl conf.

The lsof command just hangs on the server, probably due to the high number of open files? :)

bwq90 commented Mar 27, 2024

In the dmesg -T output I see many "too many orphaned sockets" messages after xmrig-proxy has crashed.

[Tue Mar 26 10:41:04 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:04 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:04 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:04 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:04 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:08 2024] net_ratelimit: 39 callbacks suppressed
[Tue Mar 26 10:41:08 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:41:08 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:08 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:08 2024] TCP: out of memory -- consider tuning tcp_mem
[Tue Mar 26 10:41:10 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:10 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:10 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:10 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:10 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:12 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:15 2024] net_ratelimit: 2 callbacks suppressed
[Tue Mar 26 10:41:15 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:15 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:17 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:21 2024] net_ratelimit: 1 callbacks suppressed
[Tue Mar 26 10:41:21 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:21 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:21 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:21 2024] TCP: too many orphaned sockets
[Tue Mar 26 10:41:21 2024] TCP: too many orphaned sockets
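
A quick way to see the live orphan count and the kernel limit, using standard Linux tools (a sketch):

cat /proc/net/sockstat            # the "orphan" field on the TCP line is the live count
ss -s                             # socket summary, including orphaned TCP sockets
sysctl net.ipv4.tcp_max_orphans   # the limit the "too many orphaned sockets" message refers to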

bwq90 commented Mar 27, 2024

Just to add: I already have workers set to false, but I have custom-diff set to 1000 and

"custom-diff": 1000,
"custom-diff-stats": true,

Could this have some impact, given the high miner count?

@SChernykh (Contributor)

Difficulty 1000 is too low; you'll be getting too many submitted shares and too high a network load. XMRig donation servers set difficulty to 1,000,000 for a reason.

bwq90 commented Mar 27, 2024

Thanks. What would be the optimal setting for custom-diff if I have a huge number of miners with different CPU hardware?

@SChernykh (Contributor)

It doesn't matter if a single miner has low hashrate and can't submit shares every 30 seconds. What matters is the overall load on the proxy, so you can just set difficulty = your total hashrate and get 1 incoming share/second on average.
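
For example, with a combined hashrate of roughly 2 MH/s (a hypothetical number), difficulty 2,000,000 gives 2,000,000 H/s ÷ 2,000,000 ≈ 1 share per second, while difficulty 1,000 gives 2,000,000 ÷ 1,000 = 2,000 shares per second, which is the kind of load that overwhelms the proxy.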

bwq90 commented Mar 27, 2024

Thanks for the insight. I have changed it to

"custom-diff": 0,
"custom-diff-stats": false,

I'll monitor the crash logs again. The proxy hardware is top notch.

Intel(R) Xeon(R) CPU E5-2620 v3
128G RAM
SSD Disk

sysctl is tuned, along with other kernel tunings.

bwq90 commented Mar 27, 2024

Sorry for being a pain, @SChernykh. One last concern: my only reason for lowering custom-diff to 1000 was that my miners often go offline and come back online. What if they are assigned a high-difficulty job and, while still working on the share, they go offline, shut down, or time out? In a scenario with a lot of such miners, shouldn't we lower the difficulty to the lowest value so that we get the maximum work out of the miners while they are online?

SChernykh (Contributor) commented Mar 27, 2024

No, this is not how mining works. Otherwise no one would be able to mine a block, because no one can submit a 300G-difficulty share within the few hours they're online. Finding a share is a random, memoryless process, and the law of large numbers applies here. Many miners = one big miner with the same total hashrate, for all practical purposes.
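
To illustrate with hypothetical numbers: a 1 kH/s miner on a 1,000,000-difficulty job finds a share after about 1,000,000 ÷ 1,000 = 1,000 seconds of hashing on average, and because every hash is an independent attempt, that average is the same whether the miner hashes for 1,000 seconds straight or in ten 100-second bursts. Nothing is "lost" when a job is abandoned, since there is no accumulated progress toward a share.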

bwq90 commented Mar 27, 2024

Thank you so much. I have made the changes as per your suggestion and will monitor it for a few days to see if the proxy still crashes from high memory use. I hope it will not crash now. Secondly, when should I decide that it is time to add a parallel load-balancing server with another proxy? What would be the maximum / optimal number of workers for a single Ubuntu server with these specs?

Intel(R) Xeon(R) CPU E5-2620 v3
128G RAM
SSD Disk
