transmission-daemon extremely high memory usage #313
Maybe it is a bug in Debian 9. Here is a temporary way to work around the problem.
Then run the command:
Recompiling transmission with static linking seems to have done the trick.
Recompiling transmission w/static linking will cause transmission to keep separate copies of system libraries in memory for itself -- using more memory. I only have about 350 torrents using 7.5G virtual, 2G resident; since each torrent is many GB long, I would hope it uses a lot of memory for buffering. According to top, I see transmission using about 2.092% of memory and about 1.3% of one CPU. The more memory I give it for buffering, the lower the CPU usage and the less disk activity. How big are the torrents you are serving? A single SUSE release comes out on DVD and could easily take >4G. You want to be able to specify memory usage, since if you set it too low, it will slow down torrent serving and raise unnecessary disk activity.
@messyidea, @Mechazawa, when you guys talk about "static linking", what exactly do you mean by that? What are the dependencies of transmission-daemon (as reported by ldd) for the binary provided by the distro versus the one you built? It seems unlikely to me that just rebuilding something with the same set of dependencies will produce different results, no matter how the linking was performed... E.g. could it be that when rebuilding you configured Transmission with uTP disabled while it was enabled in the distro package, or vice versa? Or used a different crypto backend (openssl, wolfssl, mbedtls)?
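For anyone wanting to answer that question, a minimal sketch of how the two binaries could be compared. The paths are assumptions: the distro binary is usually /usr/bin/transmission-daemon, and a local build typically lands in /usr/local/bin.

```
# Libraries the distro package links against
ldd /usr/bin/transmission-daemon | sort > distro-deps.txt

# Libraries the locally rebuilt binary links against
ldd /usr/local/bin/transmission-daemon | sort > rebuilt-deps.txt

# Differences (e.g. libcurl-gnutls vs libcurl, gnutls vs openssl) show up here
diff distro-deps.txt rebuilt-deps.txt
```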
@mikedld
I tried a regular rebuild from sources yesterday and that seems to be working fine as well for me.
I got hit by this issue as well. I set up a simple memory usage monitor which pulls the resident set size from /proc and asks the daemon for the current download/upload speeds, as reported to systemd. Now, I'm no Transmission expert by any means, but I guess this cannot be related to the actual transfer activity. Running Debian testing and Transmission 38d3d53.
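Not the exact monitor used above, but a minimal sketch of the same idea, assuming transmission-remote is installed and the daemon answers on localhost without authentication (add -n user:pass otherwise):

```
#!/bin/sh
# Log the daemon's resident set size next to its current transfer activity
PID=$(pidof transmission-daemon)
while sleep 60; do
    # VmRSS from /proc, in kB
    RSS=$(awk '/VmRSS/ {print $2}' /proc/"$PID"/status)
    # The "Sum:" line of the torrent list includes total up/down speed
    SUM=$(transmission-remote --list | tail -n 1)
    echo "$(date -Is) rss_kb=$RSS $SUM"
done
```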
I'm experiencing the same issue with memory leaks on 2.92 on Raspbian stretch (Raspberry Pi model B+ w/ 512MB RAM). The version in the Raspbian repository is unusable and immediately jumps to 50% memory usage after startup according to htop, and starts climbing. The system soon after starts swapping and becomes unresponsive. Building 2.92 from source works better, but memory usage is still high (around 30%) compared to 2.84. And memory leaks occur if I seed from a network-mounted drive (sshfs). I only upgraded to stretch a few weeks ago, and prior to this I had no such problems with 2.84 on Raspbian jessie. It was rock solid and transmission was running constantly for months at a time with zero issues. I built and installed 2.84 from source yesterday and everything seems back to normal, so it's unlikely to be an OS issue. Memory usage rarely exceeds 20% and at the moment is hovering under 15%. Other relevant info: I have cache-size-mb set to 8. I have around 700 torrents loaded, but about 250 are seeding at any given time.
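For reference, the cache setting mentioned there lives in settings.json; a minimal sketch of changing it, assuming the Debian/Raspbian package layout (adjust the path for other setups):

```
# Stop the daemon first -- it rewrites settings.json on exit
sudo systemctl stop transmission-daemon

# Set the block cache to 8 MiB, the value used in the comment above
sudo sed -i 's/"cache-size-mb": [0-9]*/"cache-size-mb": 8/' /etc/transmission-daemon/settings.json

sudo systemctl start transmission-daemon
```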
I'm looking at 2 of my torrents -- one 400MB torrent has a piece size of 256KB, another 6G torrent has a piece size of 8MB. Of the next 6, 3 have 8MB piece sizes w/the lowest size at 512KB and 4 have 4MB piece sizes. That's 1 piece of the torrent that has to take 4MB of memory while it is sending 1 piece to 1 client. If all of the torrents you serve have a piece size of 1MB each, then having 256 of them sending or receiving 1 piece to 1 client at a time would take 256MB of buffer memory. If 2.84 was working "rock solid" for you, I'd stick with it.
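To see how this plays out for your own torrents, a rough sketch of pulling piece sizes from the daemon. This assumes transmission-remote can reach the daemon with default credentials; the exact labels in the --info output may vary between versions.

```
# Piece size and piece count for every torrent, as reported by the daemon
transmission-remote --torrent all --info | grep -E 'Name:|Piece (Count|Size):'
```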
Once I was forced to change the announce URLs from http to https, transmission-daemon (2.92, trunk 14736), a couple of minutes after start, deadlocks Debian 9 testing (Buster) running on 1.5G of available RAM (without swap) for 30-60 minutes (no chance to kill it manually) until oom-killer does the job. I've figured out a workaround: now I must start such torrents in small batches, delayed by several minutes in between, since transmission-daemon doubles its memory consumption after announcing just about 50 torrents.
Having an announce URL be https would be very resource intensive. It's the setting up of the TCP connection that is most intense (which is why UDP was added for announce URLs). Changing it to HTTPS slows it down by several factors. I wouldn't think it would be reasonable to try to support announcing over https -- I just wouldn't think it would be technically feasible with today's hardware. Update: above, I was just thinking of the increased cost of the computations involved in setting up an encrypted session. I wasn't thinking about the network costs, which could exacerbate performance issues, as HTTPS connections may require contacting multiple 3rd-party servers to check keys against revocation lists and following chains of authority between a trusted root and the server you are talking to. All of those require round-trip network costs (and waits) as remote links in the authority chain are verified. Given that much of that verification involves waiting, I'd suspect it exacts a higher memory cost than CPU cost.
My workaround just delayed the issue a little bit. I've seen spikes in the memory consumption until it hanged again. It seems that if announcing fails for some reason, then more and more torrents get announced at the same time, which I have to avoid. So digging into the sources is unavoidable. @Astara, yes, I'm aware that TCP with SSL costs more than clear-text UDP, but I'm not speaking of millions of torrents. 1.5GB of RAM is not enough for 50 TCP connections with SSL? This must be a bug in transmission, or in the way transmission uses libcurl, I guess.
I refuse to believe that it has anything to do with SSL. It feels a lot more like a memory leak.
@Mechazawa, until I switched announcers to SSL, I never had problems. And a memory leak cannot fluctuate, it can only accumulate; instead I'm observing spikes in memory consumption, which, I guess, coincide with reannouncements.
@andreygursky I'm currently graphing the memory usage of one of the daemons. I'll post the results once it crashes due to lack of memory.
@Mechazawa, what flavour of curl development package was installed when you recompiled transmission (GnuTLS, OpenSSL, NSS, ...)?
@Mechazawa You've already said that recompiling with static linking fixed the problem. That proves it can't be a problem in the code and can't be a memory leak. Besides, what does a memory leak that goes away w/static linking feel like? If you believe it to be a memory leak -- show that it is so. Run it under Valgrind. It can track every allocation. Compile transmission w/symbols and it can point exactly to what memory is "lost" (no references to it) or what hasn't been freed at exit (but may still be referenced). Most important is memory that has no references to it at program exit. On top of that, Valgrind will show you exactly where the memory that was leaked was allocated.

I've had transmission 2.92 running for months and not had it change in its memory usage -- I don't use SSL, but I do allow encryption when requested. None of the sites I connect to use SSL trackers -- mostly http, some udp and some magnets. But no way can I believe transmission can run for months, have a memory leak problem and NOT crash or run out of memory. I've been restarting it more recently to free up network bandwidth, but my main daemon (I usually run 2 instances to get better priority grading) has been running since Sep 12... this is the 27th... that's 15 days so far -- not that I'd expect it to be any different, as the binary hasn't changed.

IF there's a memory leak, it has to be in code I don't use -- like the SSL announce code -- which is a really bad idea anyway. Looking around the web, I see figures of 200-400ms/connection -- that would be repeated every announce and with every torrent. 50 torrents might take as much as 25 seconds. Running xmission in a container might take double or triple that. That reminds me -- @andreygursky -- how many CPUs does your machine have, and are you running transmission in a container or VM? Networking would be one of the slower virtualization areas for connection creation/teardown. If you are doing SSL in a container, 50 torrents might tax things (I wouldn't think it likely), but costs of repeated SSL session creation/teardown times virtualization costs might add up...

BTW -- I'm not a transmission developer, but I do build my own and have used it for several years. I usually have it running 24x7 on my home machine, so I'd notice if it was misbehaving...

P.S. I'm running the daemon(s) on Linux and use the Windows Transmission GUI to interact with them (for the most part). They get started @ boot time via scripts and run as their own user. The one that's been running for 15 days shows 1266963K (~1.2G) usage, which is ballpark / normal for ~330 torrents in seeding state.
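A minimal sketch of the Valgrind run suggested above, running the daemon in the foreground. A build with debug symbols (e.g. CFLAGS=-g) makes the output far more useful, but Valgrind will run against the stock binary too.

```
# Run the daemon in the foreground under Valgrind for one announce cycle,
# then stop it (Ctrl-C / SIGINT) so leak results are printed at exit
valgrind --leak-check=full --show-leak-kinds=definite \
         --log-file=transmission-valgrind.log \
         transmission-daemon -f --log-error

# "definitely lost" blocks in the log are memory with no remaining references
grep -A5 "definitely lost" transmission-valgrind.log
```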
I've removed libcurl4-gnutls-dev and libssl-dev and installed libcurl4-openssl-dev and libssl1.0-dev instead. Then I rebuilt transmission. This seems to solve the issue. Searching reveals the following: https://stackoverflow.com/questions/45498537/https-request-memory-leak-with-curlasynchttpclient and curl/curl#1086. I don't know yet how related they are. If my issue is unrelated to the original one, let me know, so I can open a new one, or @mikedld could just add a new entry to the known issues list.

P.S. It's a little bit strange: Debian cherry-picked the OpenSSL 1.1 patch, but uses the GnuTLS flavour of libcurl. And even if you'd like to use the OpenSSL flavour of libcurl, it is only available against version 1.0, while mixing is not possible (and not a good thing in general), thus using OpenSSL 1.1 is not possible at the moment.
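A rough sketch of that rebuild on Debian 9, assuming deb-src entries are enabled in sources.list and the stock source package is used with only the curl/ssl development packages swapped (the unpacked directory name is an assumption; adjust to whatever apt-get source creates):

```
# Pull the sources and the declared build dependencies first
apt-get source transmission
sudo apt-get build-dep transmission

# Then swap the curl flavour (and matching OpenSSL headers)
sudo apt-get remove libcurl4-gnutls-dev libssl-dev
sudo apt-get install libcurl4-openssl-dev libssl1.0-dev

# Rebuild and install the package; -d skips the (now mismatched) build-dep check
cd transmission-2.92*/
dpkg-buildpackage -us -uc -b -d
sudo dpkg -i ../transmission-daemon_*.deb
```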
@Astara, it's a pretty old PC with a single-core 64-bit CPU.
2000 torrents each? I use between 1.5-3GB with 350 torrents with no SSL. Is that the memory usage for all 4 daemons? So about 8000 torrents? Have you ever had this working w/transmission or any other torrent program? Just seems like quite a few torrents. If it is a memory leak, you should be able to reproduce it with 1 instance and 500 torrents -- it might just take longer. Have you tried that, or are you just running 4 instances w/8000 torrents total?

Did you mention your HW? How much memory is on the machine? 8G? How many cores? What's the %CPU usage during this time? Are any of the torrents active, i.e. actually servicing clients w/traffic? Are they all waiting for connections, or are some number of them "queued"? FWIW, I don't use the 'queues' feature of transmission -- so when I have 350 torrents, they are all awaiting client connections and are all announcing every ~half hour (I think). Some of the terminology seems a bit confusing: when someone says they have 2000 torrents, I'm not sure if all are listening or only a small number are and the rest are queued. From what I can tell, some people's machines were getting overloaded with more than 10-20 active torrents -- so queuing was added to allow only some to actually be active at one time.

For myself, I split my torrents when I got to about 400, since I noticed my GUI response was getting too slow for my tastes -- but that's running the GUI over a local net via the RPC client. Just guessing, but it probably had to do with the RPC interface and too many requests having to be done to update each display. It might be helped if the RPC interface had bulk operations, but that presumes anyone can remake/recompile the GUI, as I'm told its author has stopped updating it (I tried to compile it, but couldn't get it to compile under the Pascal compiler they used).

Anyway, when you get time, you might try running 1 instance with 50 clients under Valgrind (it's open source at valgrind.org). I found it invaluable in tracking down memory problems (mine involved use-after-free pointers). You wouldn't want to run it till it ran out of memory -- that would generate way too much output. You only need to run it through 1 announce (via SSL) cycle, then stop the program -- it will show you any lost memory. Even 1 client is likely to show the problem if it is really a leak -- even pointer reuse, since it keeps track of everything.

@andreygursky -- BTW, you say 50 torrents with SSL will dup the bug... You might try running Valgrind as well, even with 1 torrent + SSL, and see what it comes up with. I believe it can find memory problems even with a normal binary -- but to have it mean anything and to track it down, you'd eventually want to recompile with symbols.

BTW -- RE: the problem on stackoverflow -- good find! Even there you can see that someone else changed the mix a bit using different backends and got widely different memory usages (even though they couldn't reproduce the problem). I'm glad you found a workaround for now...
I tried @andreygursky's solution and that seems to have fixed it. @Astara
I can also confirm that compiling transmission 2.92 with libcurl4-openssl-dev and libssl1.0-dev on Debian/Raspbian stable appears to fix the memory issues. Thanks @andreygursky!
Has someone reported this bug to the Debian maintainers?
@Mechazawa, exactly the spikes I spoke about. @Bisaloo, I found this issue here via Debian's bug report "transmission-daemon 2.92 high memory usage".
So, is issue #333 relevant?
I don't know if Debian is going to fix this downstream.
@Seeder101 The package maintainers should be notified: https://packages.debian.org/source/stretch/transmission
@Mechazawa https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865624
You mean to "make" the maintainers...??? Huh? How do you propose to "make" a volunteer to do what you want? Maybe try editing the product generation script (for rpm would be "spec" file.. not sure what it would be for debian)... and submitting the fixed version as a proposed fix? ahem... and people wonder why developers get burnt out.... |
This is what causes the trouble: transmission should be built with libcurl4-openssl-dev installed (instead of libcurl4-gnutls-dev).
@andreygursky I guess I've been lucky this hasn't affected me for very long, but for me this issue only started over the last 3-4 months. I have edited my post above, but on 2020-07-30 I updated from the version from 2020-02-26. I've discarded the 'half installed' part of the log above, but if I had to guess, the relevant entry is:

2020-07-30 06:13:32 status installed libcurl3-gnutls:amd64 7.52.1-5+deb9u11

Maybe that build merged new code, or maybe they upgraded the build OS or changed build options?
Downgrading is of course the simplest solution. But Debian oldstable packages are likely to be updated only because of security issues, so downgrading is only justifiable for a short time.
@andreygursky it's not crashing fast enough to make it unusable; it just took 1 hour to get killed. I added 'Restart=always' to the systemd config so I don't have to intervene. I'll wait 20 minutes, see how fast memory increases, then see if deleting some torrents helps.
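A minimal sketch of that systemd override, assuming the stock transmission-daemon unit name:

```
# Create a drop-in so the daemon is restarted automatically after an OOM kill
sudo mkdir -p /etc/systemd/system/transmission-daemon.service.d
sudo tee /etc/systemd/system/transmission-daemon.service.d/restart.conf > /dev/null <<'EOF'
[Service]
Restart=always
RestartSec=30
EOF

sudo systemctl daemon-reload
sudo systemctl restart transmission-daemon
```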
Instead, it is enough to ensure that not too many torrents with https trackers are announced almost simultaneously. You can try stopping them and then starting them one after another with some delay.
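A minimal sketch of that staggered start, assuming transmission-remote can reach the daemon with default credentials:

```
#!/bin/sh
# Stop everything, then start torrents one by one with a delay so the
# https announces are not all scheduled at the same time
transmission-remote --torrent all --stop
sleep 10

# The first column of `transmission-remote -l` is the torrent ID
for id in $(transmission-remote -l | awk 'NR>1 && $1 ~ /^[0-9]+\*?$/ {sub(/\*/,"",$1); print $1}'); do
    transmission-remote --torrent "$id" --start
    sleep 30   # delay between announces; tune to taste
done
```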
I leave my torrents paused when finished (for shame), but could transmission be connecting to the trackers even when the torrent is paused? I removed all torrents, and it seemed to stop growing, but it was stuck at 460MB (total used RAM on the system). I didn't wait for long, but restarting it dropped it back down to 109MB used by the system, and it doesn't seem to be growing, so yay, problem solved. I guess it's a deterrent to seeding if the smoothest option is deleting everything after 50 torrents :/ but if it makes crashes less frequent I'll take it.
No, it shouldn't.
Try this: raspberrypi/linux#3210 (comment). I did not try it on transmission, but in my case it has worked well so far.
Mine has been stable for 2+ days now. I think my issue is actually coming from transgui on my Windows machine. If I leave it open, it seems to chew into the memory on my Pi's daemon. I use SSL for transgui. Edit: by "coming from" I mean the SSL calls from Transmission Remote GUI to the daemon.
@sirskills Pretty sure I can confirm this sequence of events. Mine had been running smoothly, but a crash happened overnight. I wasn't downloading anything yesterday, but I did open up transmission-remote-gtk, and the connections to the server are using TLS (terminated by nginx though, not transmission-daemon). There was no traffic to the machine yesterday except from 9.11pm, and a little afterwards traffic started to rise. I don't think I did anything with the downloads around then.
I am also using an nginx reverse proxy, so that could be something.
I'll try to take it a step further and get memory usage of nginx + transmission-daemon over time, since we're just seeing kernel vs. userspace above.
Oh, but I've only seen oom-killer reap the transmission-daemon process -- not nginx. So we need to see what part of transmission-daemon's memory is growing. Going to log the PID's smaps file for a few hours and see what's happening: https://unix.stackexchange.com/questions/36450/how-can-i-find-a-memory-leak-of-a-running-process
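A minimal sketch of that logging, assuming a reasonably recent kernel (smaps_rollup needs 4.14+; plain smaps works everywhere but is much more verbose):

```
#!/bin/sh
# Periodically snapshot the daemon's memory map summary so growth can be
# attributed to a specific mapping (heap, anonymous region, library, ...)
PID=$(pidof transmission-daemon)
while sleep 300; do
    {
        date -Is
        cat /proc/"$PID"/smaps_rollup
    } >> transmission-smaps.log
done
```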
Yeah, I haven't seen memory increase on nginx, just transmission-daemon. Not sure what pmap is.
I might have to recompile transmission-daemon to enable all the debug symbols -- no line numbers currently -- but here's a report by the memleax tool after attaching to my running transmission-daemon (via an nginx reverse proxy performing TLS termination): https://gist.github.com/afk11/f97b25952016195f9944e8cee325a857
I ran strings on the memory region that grew (according to pmap). The start has torrent filenames, some tracker URLs, etc., but soon it moves on to really random strings. At the very end of the file, it's complete gibberish.
And if we scroll around a bit more, I noticed some batches of "UUUUU" were prefixed by "LAME3.9something".
Probably not super helpful; I will try building from source later.
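For anyone who wants to repeat that experiment, a rough sketch of the approach. The addresses are placeholders standing in for values read from the pmap output, not real ones.

```
PID=$(pidof transmission-daemon)

# Find the large/growing anonymous mapping and note its address range
pmap -x "$PID" | sort -k3 -n | tail

# Dump that range with gdb (placeholder addresses -- substitute your own),
# then look for recognisable strings in it
sudo gdb --batch --pid "$PID" \
    -ex 'dump memory region.bin 0x7f0000000000 0x7f0000a00000'
strings region.bin | less
```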
Mine has been stable for at least 10 days now. All I've had to do is make sure I don't leave my transmission-remote GUI open on my Windows machine.
I experienced this issue even with the "fixed" Debian version 2.94-2 and OpenSSL in use instead of GnuTLS. It wasn't as pronounced as some people here experienced it, but transmission-daemon still grew over 2-3 days from 45MB RSS (which is exactly the amount of memory transmission-daemon uses on Windows with the same # of torrents) to 180MB RSS. With transmission-daemon 3.0 backported I am no longer seeing it. Initial memory was again around 45MB RSS, and then it grew pretty quickly to 55MB as torrents were added and downloaded. But so far it's been relatively stable at 55MB RSS. This is with around 650 seeded torrents, btw. If memory use changes noticeably one way or the other, I will edit. I can only recommend you guys pull version 3.0 from testing. It's relatively easy since there are so few dependencies. I haven't seen any problems.
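A minimal sketch of pulling the newer package from testing on a stable system, assuming apt pinning is used so that only transmission comes from testing (suite names and the exact approach are illustrative, not the poster's actual steps):

```
# Add the testing suite as an extra source
echo 'deb http://deb.debian.org/debian testing main' | \
    sudo tee /etc/apt/sources.list.d/testing.list

# Pin testing low so nothing else gets upgraded from it by accident
sudo tee /etc/apt/preferences.d/testing > /dev/null <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: 100
EOF

sudo apt-get update
# Pull only transmission-daemon (and its few dependencies) from testing
sudo apt-get install -t testing transmission-daemon
```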
I'm closing this issue due to inactivity and due to the last report that this is resolved by using 3.0.
I think I have a similar problem on Ubuntu 22.04 with transmission-daemon. With just 6 torrent files running, it feels like it keeps all the torrents directly in memory in the process itself instead of using streaming or Linux's built-in caching:

```
$ ps -aux | grep transmission
debian-+ 1711 0.8 39.7 10632612 6461816 ? Ssl Apr30 133:43 /usr/bin/transmission-daemon -f --log-error

$ apt list --installed "transmission*"
Listing... Done
transmission-cli/jammy,now 3.00-2ubuntu2 amd64 [installed]
transmission-common/jammy,now 3.00-2ubuntu2 all [installed,automatic]
transmission-daemon/jammy,now 3.00-2ubuntu2 amd64 [installed]
```

Also, when I add another torrent (about ~2GB), memory usage increases by about another GB over the time the torrent is downloading.
I've also run into memory issues since upgrading to Ubuntu 22.04. Over about 3 days, the memory usage of transmission-daemon climbs until all memory is used and the entire system (a Raspberry Pi) freezes. This is with no active torrents. I did not have this issue on Ubuntu 20.04.
It looks like a different issue. In a few words, this issue is about very high memory consumption spikes when reannouncing torrents to trackers using https (instead of plain http), especially if a high number of such reannounces is scheduled simultaneously. The workaround is to build transmission with libcurl4-openssl-dev instead of libcurl4-gnutls-dev, which was applied in Debian/Ubuntu. @hasezoey, could you please open a new issue with the description you provided? Then @stevenengler could copy the comment there.
Created a new issue: #3077
I'm running [...]. Sure, it went up to 2MB/s during the day, but why would that affect the current state? After restarting the daemon and starting to seed the same torrents, RAM went down to 160MB, which is a lot nicer. Shouldn't we leave disk caching to the OS? Or what is the RAM used for? I think a garbage collection step is missing after spikes.
Also, there is no open ticket for RAM issues; the solution mentioned was to restart it periodically.
I currently have five sessions running and they take up all available memory. They take up all available RAM fairly quickly, sometimes within the hour.
The instances each have 250, 250, 1000, 1000 and 5200 torrents. They will all compete to use up all memory when given the chance (even with only that instance running), regardless of torrent count.
Running on Debian Jessie.