transmission-daemon extremely high memory usage #313

Closed
Mechazawa opened this issue Jun 21, 2017 · 125 comments

@Mechazawa

Mechazawa commented Jun 21, 2017

I currently have five sessions running and they take up all available memory. They consume all available RAM fairly quickly, sometimes within the hour.

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G         15G        157M         11M        127M         50M
Swap:          8.0G        4.8G        3.2G
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     31717  0.4  4.0 5475364 662052 ?      Ssl  Jun18  20:47 /usr/bin/transmission-daemon -f -g /path/to/config
root     31619  0.3  4.4 5017080 719940 ?      Ssl  Jun18  16:25 /usr/bin/transmission-daemon -f -g /path/to/config
root     31501  0.8  6.0 5278224 990924 ?      Ssl  Jun18  38:28 /usr/bin/transmission-daemon -f -g /path/to/config
root     31859  1.0  9.3 8096668 1530164 ?     Ssl  01:38   6:08 /usr/bin/transmission-daemon -f -g /path/to/config
root      4456  4.5 67.0 15763784 10955100 ?   Ssl  04:57  16:30 /usr/bin/transmission-daemon -f -g /path/to/config
$ smem -k
  PID User     Command                         Swap      USS      PSS      RSS 
31717 root     /usr/bin/transmission-daemo   611.9M   644.2M   644.7M   647.4M 
31619 root     /usr/bin/transmission-daemo    46.7M   701.0M   702.1M   705.5M 
31501 root     /usr/bin/transmission-daemo   181.6M   878.4M   878.8M   881.4M 
31859 root     /usr/bin/transmission-daemo     1.2G     2.5G     2.5G     2.5G 
 4456 root     /usr/bin/transmission-daemo   374.2M     9.4G     9.4G     9.4G

The instances have 250, 250, 1000, 1000 and 5200 torrents respectively. Each of them will use up all memory when given the chance (i.e. when it is the only instance running), regardless of its torrent count.

Running on Debian Jessie.

Linux 4.9.0-1-amd64 #1 SMP Debian 4.9.6-3 (2017-01-28) x86_64 GNU/Linux
transmission-daemon 2.92 (14714)
@messyidea

messyidea commented Jun 23, 2017

Maybe it is a bug in Debian 9.
I ran into this problem today too.
I have built a statically linked transmission-daemon, and it works fine.

Here is a temporary way to work around this problem.
Replace /usr/bin/transmission-daemon with the statically linked transmission.
Change /etc/systemd/system/multi-user.target.wants/transmission-daemon.service as follows:

[Unit]
Description=Transmission BitTorrent Daemon
After=network.target

[Service]
User=debian-transmission
Type=simple
ExecStart=/usr/bin/transmission-daemon -f --log-error
# TERM rather than STOP: SIGSTOP only suspends the process, it never shuts the daemon down
ExecStop=/bin/kill -s TERM $MAINPID
ExecReload=/bin/kill -s HUP $MAINPID


[Install]
WantedBy=multi-user.target

Then run the command:

systemctl daemon-reload
systemctl restart transmission-daemon

transmission-daemon.zip
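
For anyone trying this, one quick way to confirm the replacement binary really is statically linked (a sketch using standard tools; the path assumes the binary was dropped in place as described above):

# a static binary prints "not a dynamic executable"; a dynamic one lists its .so dependencies
ldd /usr/bin/transmission-daemon

# file should likewise report "statically linked" rather than "dynamically linked"
file /usr/bin/transmission-daemon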

@Mechazawa
Author

Recompiling transmission with static linking seems to have done the trick

@Astara

Astara commented Jul 12, 2017

Recompiling transmission w/static linking will cause transmission to keep separate copies of system libraries in memory for itself -- using more memory.

I only have about 350 torrents, using 7.5G virtual and 2G resident. Since each torrent is many GB in size, I would hope it uses a lot of memory for buffering. According to top, I see transmission using about 2.092% of memory and about 1.3% of one CPU. The more memory I give it for buffering, the lower the CPU usage and the less disk activity.

How big are the torrents you are serving? A single SUSE release comes out on DVD and can easily take >4G. You want to be able to specify memory usage, since if you set it too low, it will slow down torrent serving and cause unnecessary disk activity.

@mikedld
Member

mikedld commented Jul 28, 2017

@messyidea, @Mechazawa, when you guys talk about "static linking", what exactly do you mean by that? What are the dependencies of transmission-daemon (as reported by ldd) provided by the distro versus the one you built? It seems unlikely to me that just rebuilding something with the same set of dependencies will produce different results, no matter how the linking was performed... E.g. could it be that when rebuilding you configured Transmission with uTP disabled while it was enabled in the distro package, or vice versa? Or used a different crypto backend (openssl, wolfssl, mbedtls)?
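
One way to gather what is being asked for here (a sketch; the second path is an assumption for a self-built binary):

# distro binary: on Debian 9 this typically shows the GnuTLS flavour of libcurl
ldd /usr/bin/transmission-daemon | grep -Ei 'curl|ssl|gnutls|crypto'
# e.g. libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4

# self-built binary: compare against the distro output above
ldd /usr/local/bin/transmission-daemon | grep -Ei 'curl|ssl|gnutls|crypto'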

@messyidea

@mikedld
Here is my build script.
https://gist.github.com/messyidea/16f9c63d7b219d2a4755e30aedd8268e
Based on this repo.
https://github.com/lancethepants/transmission-mipsel-static/blob/master/transmission.sh

@Mechazawa
Author

I tried a regular rebuild from sources yesterday and that seems to be working fine as well for me.

@xavery
Contributor

xavery commented Aug 15, 2017

I got hit by this issue as well. I set up a simple memory usage monitor which pulls the resident set size from /proc and asks the daemon for the current download/upload speeds, as reported to systemd.
I was hoping that the download/upload speeds would correlate with the memory usage. However, it seems like the daemon uses a ton of memory at start (the machine was swapping a lot), and then - somehow - the memory usage stabilises at around 800MB and stays this way irrespective of the down/up speeds. I have about 200 running torrents in the session.
You can see the graph here. The x axis represents the number of seconds since the monitoring was started. The memory usage is in pages; multiply by 4 to get KB. I can provide the data file on request.
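
For reference, such a monitor can be sketched in a few lines of shell (assumptions: a single transmission-daemon process, and RPC reachable on localhost:9091 without auth; transmission-remote -l ends with a Sum: line carrying the session totals):

#!/bin/sh
# append elapsed seconds, resident set size, and the RPC summary line every 10s
PID=$(pidof transmission-daemon)
START=$(date +%s)
while sleep 10; do
    ELAPSED=$(( $(date +%s) - START ))
    RSS=$(awk '/^VmRSS/ {print $2}' /proc/$PID/status)   # resident set size, in kB
    SUM=$(transmission-remote localhost:9091 -l | tail -n1)
    echo "$ELAPSED ${RSS}kB $SUM" >> memlog.txt
done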

Now, I'm no Transmission expert by any means, but I guess this cannot be related to the cache-size-mb setting at all: there is only one cache object for the whole Transmission session, which is resized only while reading the settings, to make its size equal to the one requested there. Then I thought that maybe I/O prefetching is responsible for this situation (since my Transmission was compiled with support for posix_fadvise), but then the memory usage wouldn't be counted towards the daemon process itself, but rather towards the kernel's I/O buffers, reported in the buff/cache column of free.

Running Debian testing and Transmission 38d3d53.

@nakhan98

nakhan98 commented Sep 17, 2017

I'm experiencing the same issue with memory leaks on 2.92 on Raspbian stretch (Raspberry Pi Model B+ w/ 512MB RAM). The version in the Raspbian repository is unusable: it immediately jumps to 50% memory usage after startup according to htop and starts climbing. The system soon after starts swapping and becomes unresponsive.

Building 2.92 from source works better, but memory usage is still high (around 30%) compared to 2.84. Memory leaks also occur if I seed from a network-mounted drive (sshfs).

I only upgraded to stretch a few weeks ago and prior to this I had no such problems with 2.84 on raspbian jessie. It was rock solid and transmission was running constantly for months at a time with zero issues.

I built and installed 2.84 from source yesterday and everything seems back to normal so it's unlikely to be an OS issue. Memory usage rarely exceeds 20% and at the moment is hovering under 15%.

Other relevant info: I have cache-size-mb set to 8. I have around 700 torrents loaded, but only about 250 are seeding at any given time.

@Astara

Astara commented Sep 18, 2017

I'm looking at 2 of my torrents -- one 400MB torrent has a piece size of 256KB, another 6GB torrent has a piece size of 8MB. Of the next few, 3 have 8MB piece sizes (with the lowest at 512KB) and 4 have 4MB piece sizes. That's 1 piece of the torrent taking 4MB of memory while it is sending 1 piece to 1 client. If all of the torrents you serve have a piece size of 1MB each, then having 256 of them sending or receiving 1 piece to 1 client at a time would take 256MB of buffer memory.
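
As a quick sanity check of that arithmetic (a sketch; one in-flight piece per torrent is the assumption):

# 256 torrents, 1MB piece size, one piece buffered per torrent
torrents=256; piece_kb=1024
echo "$(( torrents * piece_kb / 1024 ))MB of piece buffers"   # prints: 256MB of piece buffers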

If 2.84 was working "rock solid" for you, I'd stick with it.

@andreygursky

Since I was forced to change the announce URLs from http to https, transmission-daemon (2.92 trunk 14736) deadlocks my Debian testing (Buster) machine with 1.5G of available RAM (and no swap) a couple of minutes after start, for 30-60 minutes (no chance to kill it manually) until the oom-killer does the job. I've figured out a workaround: I now must start such torrents in small batches, delayed several minutes in between, since transmission-daemon doubles its memory consumption after announcing just about 50 torrents.
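
That batching workaround can be scripted via the RPC client, roughly like this (a sketch; it assumes RPC on localhost:9091 without auth, and the batch size and delay are arbitrary):

#!/bin/sh
HOST=localhost:9091
# stop everything first
transmission-remote $HOST -t all -S
# collect torrent IDs from the list output (strip the '*' marking errored torrents)
IDS=$(transmission-remote $HOST -l | awk '$1+0 > 0 {gsub(/\*/,"",$1); print $1}')
# start in batches of 25 with a 5-minute pause in between
echo $IDS | xargs -n 25 | while read BATCH; do
    transmission-remote $HOST -t "$(echo $BATCH | tr ' ' ',')" -s
    sleep 300
done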

@Astara

Astara commented Sep 26, 2017

Having an announce URL be https would be very resource intensive. It's the setting up of the TCP connection that is most intense (which is why UDP was added for announce URLs). Changing it to HTTPS slows it down by several factors. I wouldn't think it reasonable to try to support announcing over https -- I just wouldn't think it technically feasible with today's hardware.

Update - above, I was only thinking of the increased cost of the computations involved in setting up an encrypted session. I wasn't thinking about the network costs, which could exacerbate performance issues: HTTPS connections may require contacting multiple 3rd-party servers to check keys against revocation lists and following chains of authority between a trusted root and the server you are talking to. All of those incur round-trip network costs (and waits) as remote links in the authority chain are verified. Given that much of that verification involves waiting, I'd suspect it exacts a higher memory cost than CPU cost.

@andreygursky

andreygursky commented Sep 26, 2017

My workaround just delayed the issue a little. I've seen spikes in memory consumption until it hung again. It seems that if announcing fails for some reason, more and more torrents get announced at the same time, which is what I have to avoid. So digging into the sources is unavoidable.

@Astara, yes, I'm aware that TCP with SSL costs more than clear-text UDP, but I'm not speaking of millions of torrents. 1.5GB of RAM is not enough for 50 TCP connections with SSL? This must be a bug in transmission, or in the way transmission uses libcurl, I guess.

@Mechazawa
Author

I refuse to believe that it has anything to do with SSL. It feels a lot more like a memory leak.

@andreygursky

@Mechazawa, until I switched announce URLs to SSL, I never had problems. And a memory leak cannot fluctuate, it can only accumulate; instead I'm observing spikes in memory consumption, which, I guess, coincide with reannouncements.

@Mechazawa
Author

@andreygursky I'm currently graphing the memory usage of one of the daemons. I'll post the results once it crashes due to lack of memory.

@andreygursky

andreygursky commented Sep 27, 2017

@Mechazawa, what flavour of curl development package was installed when you recompiled transmission (GnuTLS, OpenSSL, NSS,..)?

@Astara

Astara commented Sep 27, 2017

@Mechazawa You've already said that recompiling with static linking fixed the problem. That proves it can't be a problem in the code and can't be a memory leak. Besides, what does a memory leak that goes away w/static linking feel like?

If you believe it to be a memory leak -- show that it is so. Run it under Valgrind. It can track every allocation. Compile transmission w/symbols and it can point exactly to what memory is "lost" (no references to it) or what hasn't been freed at exit (but may still be referenced). Most important is memory that has no references to it at program exit. On top of that, valgrind will show you exactly where the leaked memory was allocated.
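
For reference, a typical invocation would look roughly like this (a sketch; the config path is a placeholder, and the build should be compiled with -g for useful stacks):

# run in the foreground under valgrind, through one announce cycle, then Ctrl-C
valgrind --leak-check=full --show-leak-kinds=definite --log-file=vg.log \
    /usr/bin/transmission-daemon -f -g /path/to/config
# "definitely lost" records in vg.log are unreferenced memory, each with its allocation stack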

I've had transmission 2.92 running for months, and not had its memory usage change -- I don't use SSL, but I do allow encryption when requested. None of the sites I connect to use SSL trackers -- mostly http, some udp and some magnets. But no way can I believe transmission can run for months with a memory leak and NOT crash or run out of memory. I've been restarting it more recently to free up network bandwidth, but my main daemon (I usually run 2 instances to get better priority grading) has been running since Sep 12... this is the 27th... that's 15 days so far -- not that I'd expect it to be any different, as the binary hasn't changed.

IF there's a memory leak, it has to be in code I don't use -- like the SSL announce code -- which is a real bad idea anyway. Looking around the web, I see figures of 200-400ms/connection -- that would be repeated every announce and with every torrent. 50 torrents might take as much as 25 seconds. Running xmission in a container might take double or triple that.

That reminds me -- @andreygursky -- how many CPUs does your machine have, and are you running transmission in a container or VM? Networking would be one of the slower virtualization areas for connection creation/teardown. If you are doing SSL in a container, 50 torrents might tax things (I wouldn't think it likely), but the cost of repeated SSL session creation/teardown times virtualization overhead might add up...

BTW -- I'm not a transmission developer, but I do build my own and have used it for several years. I usually have it running 24x7 on my home machine, so I'd notice it if it was misbehaving...

p.s. I'm running the daemon(s) on linux and use the Windows Transmission GUI to interact with them (for the most part). They get started @ boot time via scripts and run as their own user. The one that's been running for 15 days shows 1266963K (~1.2G) usage, which is ballpark / normal for ~330 torrents in seeding state.

@andreygursky

I've removed libcurl4-gnutls-dev and libssl-dev and installed libcurl4-openssl-dev and libssl1.0-dev instead. Then I rebuilt transmission. This seems to solve the issue. Searching reveals the following: https://stackoverflow.com/questions/45498537/https-request-memory-leak-with-curlasynchttpclient and curl/curl#1086. I don't know yet how related they are.
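
For anyone wanting to reproduce that rebuild, the steps are roughly (a sketch for Debian 9; transmission 2.92 builds with autotools):

# swap the libcurl flavour (plus matching OpenSSL headers)
sudo apt-get remove libcurl4-gnutls-dev libssl-dev
sudo apt-get install libcurl4-openssl-dev libssl1.0-dev

# rebuild transmission against the OpenSSL-flavoured libcurl
cd transmission-2.92
./configure && make && sudo make install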

If my issue is unrelated to the original one, let me know and I'll open a new one, or @mikedld could just add a new entry to the known issues list.

P.S. It's a little strange: Debian cherry-picked the OpenSSL 1.1 patch, but uses the GnuTLS flavour of libcurl. And even if you'd like to use the OpenSSL flavour of libcurl, it is only available against version 1.0, while mixing is not possible (and not a good idea in general), thus using OpenSSL 1.1 is not possible at the moment.

@andreygursky

@Astara, it's a pretty old PC with one core 64bit CPU.

@Mechazawa
Author

Running 4 daemons with ~2000 torrents each (all of them have SSL announce URLs). I've graphed out the memory usage for now (until the daemon crashed).

[image: memory usage graph]

I don't have time right now to look at it more in-depth though.

@Astara

Astara commented Sep 28, 2017

2000 torrents each? I use between 1.5-3GB with 350 torrents and no SSL. Is that the memory usage for all 4 daemons? So about 8000 torrents? Have you ever had this working w/transmission or any other torrent program? It just seems like quite a few torrents. If it is a memory leak, you should be able to reproduce it with 1 instance and 500 torrents -- it might just take longer. Have you tried that, or are you just running 4 instances w/8000 torrents total?

Did you mention your HW? How much memory is on the machine? 8G? How many cores? What's the %CPU usage during this time...

Are any of the torrents active? I.e. actually servicing clients w/traffic? Are they all waiting for connections, or are some number of them "queued"? FWIW, I don't use the 'queues' feature of transmission -- so when I have 350 torrents, they are all awaiting client connections and are all announcing every ~half hour (I think). Some of the terminology seems a bit confusing: when someone says they have 2000 torrents, I'm not sure whether all are listening or only a small number are and the rest are queued.

From what I can tell, some people's machines were getting overloaded with more than 10-20 active torrents -- so queuing was added to allow only some to be active at one time. For myself, I split my torrents when I reached about 400, since I noticed my GUI response was getting too slow for my tastes -- but that's running the GUI over a local net via the RPC client. Just guessing, but it probably had to do with the RPC interface and too many requests having to be done to update each display. It might be helped if the RPC interface had bulk operations, but that presumes anyone can remake/recompile the GUI, as I'm told its author has stopped updating it (I tried to compile it, but couldn't get it to compile under the Pascal compiler they used).

Anyway, when you get time, you might try running 1 instance with 50 torrents under Valgrind (it's open source at valgrind.org). I found it invaluable in tracking down memory problems (mine involved use-after-free pointers). You wouldn't want to run it till it ran out of memory -- that would generate way too much output. You only need to run it through 1 announce (via SSL) cycle, then stop the program -- it will show you any lost memory. Even 1 torrent is likely to show the problem if it is really a leak -- even pointer reuse, since it keeps track of everything.

@andreygursky -- BTW, you say 50 torrents with SSL will dup the bug... You might try running valgrind as well, even with 1 torrent+SSL, and see what it comes up with. I believe it can find memory problems even with a normal binary -- but to have it mean anything and to track it down, you'd eventually want to recompile with symbols. BTW -- RE: the problem on stackoverflow -- good find! Even there you can see that someone else changed the mix a bit using different backends and got widely different memory usages (even though they couldn't reproduce the problem). I'm glad you found a workaround for now...

@Mechazawa
Author

I tried @andreygursky's solution and that seems to have fixed it.

[image: memory usage graph after rebuild]

@Astara
10 gigs of RAM and 4 cores @ 3.2GHz. Transmission-daemon barely uses any CPU at any time. I was able to reproduce it with a single instance and 1k torrents. My torrents are all active and seeding; I rarely have more than 3 items queued.

@nakhan98

nakhan98 commented Sep 29, 2017

I can also confirm that compiling transmission 2.92 with libcurl4-openssl-dev and libssl1.0-dev on debian/raspbian stable appears to fix the memory issues. Thanks @andreygursky !

@Bisaloo

Bisaloo commented Sep 30, 2017

Has someone reported this bug to the Debian maintainers?

@andreygursky

@Mechazawa, exactly the spikes I spoke about.

@Bisaloo, I found this issue here from Debian's bug report "transmission-daemon 2.92 high memory usage".

@Seeder101

So is issue #333 relevant?

@Seeder101

Does anyone know if Debian is going to fix this downstream?

@Mechazawa
Author

@Seeder101 The package maintainers should be notified: https://packages.debian.org/source/stretch/transmission

@Seeder101

Seeder101 commented Oct 29, 2017

@Mechazawa https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865624
This hasn't changed since June. If anyone knows how to interact with this bug/issue in the Debian system, that would be great (I mean, to get the maintainers to take a deeper look into this).

@Astara

Astara commented Oct 29, 2017

You mean to "make" the maintainers...??? Huh? How do you propose to "make" a volunteer to do what you want? Maybe try editing the product generation script (for rpm would be "spec" file.. not sure what it would be for debian)... and submitting the fixed version as a proposed fix?

ahem... and people wonder why developers get burnt out....

@andreygursky

Next, the libraries used by transmission-daemon, plus the Debian package, its version, and the filesystem location.

libcurl3-gnutls Version: 7.52.1-5+deb9u12 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4

This is what causes the trouble. Transmission should be built with libcurl4-openssl-dev installed (instead of libcurl4-gnutls-dev).

@afk11

afk11 commented Oct 31, 2020

@andreygursky I guess I've been lucky this hasn't affected me for very long, but for me this issue only started over the last 3-4 months. I have edited my post above, but on 2020-07-30 I updated from the version from 2020-02-26. I've discarded the 'half installed' part of the log above, but if I had to guess, 7.52.1-5+deb9u11 introduced the problem and 7.52.1-5+deb9u10 was fine.

2020-07-30 06:13:32 status installed libcurl3-gnutls:amd64 7.52.1-5+deb9u11
2020-02-26 06:44:42 status installed libcurl3-gnutls:amd64 7.52.1-5+deb9u10

Maybe that build merged new code, or maybe they upgraded the build OS or changed build options?

@andreygursky

I looked up when the package was updated on my system.

Downgrading is of course the simplest solution. But Debian oldstable packages are likely to be updated only for security issues, thus downgrades are justifiable only for a short time.

@afk11

afk11 commented Oct 31, 2020

@andreygursky it's not crashing fast enough to make it unusable; it just took 1 hour to get killed. I added 'Restart=always' to the systemd config so I don't have to intervene.

I'll wait 20 minutes, see how fast memory increases, then see if deleting some torrents helps.
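
For reference, that Restart=always change can be made without touching the packaged unit file, via a systemd drop-in (a sketch; systemctl edit opens /etc/systemd/system/transmission-daemon.service.d/override.conf):

sudo systemctl edit transmission-daemon
# then add:
#   [Service]
#   Restart=always
#   RestartSec=5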

@andreygursky

> if deleting some torrents helps

Instead, it is enough to ensure that not too many torrents with https trackers are announced almost simultaneously. You may try to stop them and then start them one after another with some delay.

@afk11

afk11 commented Oct 31, 2020

I leave my torrents paused when finished (for shame), but could transmission be connecting to the trackers even when a torrent is paused?

I removed all torrents and it seemed to stop growing, but it stuck at 460MB (total used RAM on the system). I didn't wait for long, but restarting it dropped it back down to 109MB used by the system, and it doesn't seem to be growing, so yay, problem solved.

I guess it's a deterrent to seeding if the smoothest option is deleting everything after 50 :/ but if it makes crashes less frequent, I'll take it.

@andreygursky

> but could transmission be connecting to the trackers even when a torrent is paused?

No, it shouldn't.

@Lipown

Lipown commented Oct 31, 2020

Try this: raspberrypi/linux#3210 (comment) - I did not try it on transmission, but in my case it has worked well so far.

@sirskills

sirskills commented Nov 9, 2020

Mine has been stable for 2+ days now. I think my issue is actually coming from transgui on my Windows machine. If I leave it open, it seems to chew into the memory of my Pi's daemon. I use SSL for transgui. Edit: What I mean by "coming from" is the SSL calls from Transmission Remote GUI to the daemon.

@afk11

afk11 commented Nov 11, 2020

@sirskills pretty sure I can confirm this sequence of events.. Mine had been running smoothly but a crash happened overnight. I wasn't downloading anything yesterday, but I did open up transmission-remote-gtk and the connections to the server are using TLS (terminated by nginx though, not transmission-daemon)

There was no traffic to the machine yesterday except from 9.11pm, and a little afterwards traffic started to rise. I don't think I did anything with the downloads around then..

@afk11

afk11 commented Nov 11, 2020

[image: transmission-remote crash graph]

@sirskills

> @sirskills pretty sure I can confirm this sequence of events.. Mine had been running smoothly but a crash happened overnight. I wasn't downloading anything yesterday, but I did open up transmission-remote-gtk and the connections to the server are using TLS (terminated by nginx though, not transmission-daemon)
>
> There was no traffic to the machine yesterday except from 9.11pm, and a little afterwards traffic started to rise. I don't think I did anything with the downloads around then..

I am also using an nginx reverse proxy, so that could be something.

@afk11

afk11 commented Nov 11, 2020

I'll try to take it a step further and get memory usage of nginx + transmission-daemon over time, since we're just seeing kernel vs userspace above..

@afk11

afk11 commented Nov 11, 2020

Oh, but I've only seen oom-killer reap the transmission-daemon process - not nginx. So we need to see what part of transmission-daemon's memory is growing..

Going to log the pid's smaps file for a few hours and see what's happening https://unix.stackexchange.com/questions/36450/how-can-i-find-a-memory-leak-of-a-running-process
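
A minimal sketch of that logging loop (assumption: a single transmission-daemon process; diffing the raw snapshots later shows which mapping grows):

#!/bin/sh
PID=$(pidof transmission-daemon)
while sleep 600; do
    cp /proc/$PID/smaps "smaps.$(date +%s)"   # raw snapshot, one per 10 minutes
    # quick total of resident memory summed across all mappings
    echo "$(date) $(awk '/^Rss:/ {s += $2} END {print s}' /proc/$PID/smaps) kB"
done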

@sirskills

> Oh, but I've only seen oom-killer reap the transmission-daemon process - not nginx. So we need to see what part of transmission-daemon's memory is growing..
>
> Going to log the pid's smaps file for a few hours and see what's happening https://unix.stackexchange.com/questions/36450/how-can-i-find-a-memory-leak-of-a-running-process

Yea I haven't seen memory increase on nginx, just the transmission-daemon. Not sure what pmap is.

@afk11

afk11 commented Nov 11, 2020

I might have to recompile transmission-daemon to enable all the debug symbols, no line numbers currently, but here's a report by the memleax tool after attaching to my running transmission-daemon (via nginx reverse proxy performing TLS termination) https://gist.github.com/afk11/f97b25952016195f9944e8cee325a857

@afk11

afk11 commented Nov 11, 2020

I ran strings on the memory region that grew (according to pmap); the start has torrent filenames, some tracker URLs, etc., but soon it moves on to really random strings.

At the very end of the file, it's complete gibberish.
Just before it turns completely gibberish, I see sections like this (running strings on the memory dump from GDB):

// output trimmed
?dAA*
AplI0.
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
8 AE
8&d 
// output trimmed

And scrolling around a bit more, I noticed some batches of UUUUU were prefixed by LAME3.9-something:

// cat dump-outputfile.dump | grep LAME
LAME3.98
$LAME3.98UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
LAME3.98
LAME3.98
*LAME3.98
LAME3.98
LAME3.93UUUUUU

Probably not super helpful; will try building from source later.
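
For anyone repeating this, the region-dump step looks roughly like the following (a sketch; the addresses are placeholders to be read off the pmap output):

# find the largest / fastest-growing mapping (RSS is the 3rd column; ignore the "total" line)
pmap -x $(pidof transmission-daemon) | sort -k3 -n | tail

# attach with gdb and dump that region, then detach so the daemon keeps running
gdb -p $(pidof transmission-daemon)
#   (gdb) dump memory out.bin 0x7f0000000000 0x7f0000400000
#   (gdb) detach
#   (gdb) quit

strings out.bin | less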

@sirskills

Mine has been stable for at least 10 days now. All I've had to do is make sure I don't leave my Transmission Remote GUI open on my Windows machine.

@Tharn

Tharn commented Dec 1, 2020

I experienced this issue even with the "fixed" Debian version 2.94-2 and openssl in use instead of gnutls. It wasn't as pronounced as some people here experienced it, but transmission-daemon still grew over 2-3 days from 45MB RSS (which is exactly the amount of memory transmission-daemon uses on Windows with the same # of torrents) to 180MB RSS.

With transmission-daemon 3.0 backported I am no longer seeing it. Initial memory was again around 45MB RSS, and then it grew pretty quickly to 55MB as torrents were added and downloaded. But so far it's been relatively stable at 55MB RSS.

This is with around 650 seeded torrents, btw. If memory use changes noticeably one way or the other, I will edit.

I can only recommend you guys pull version 3.0 from testing. It's relatively easy since there are so few dependencies. I haven't seen any problems.

@ckerr
Member

ckerr commented Oct 7, 2021

I'm closing this issue due to inactivity and due to the last report that this is resolved by using 3.0.

ckerr closed this as completed on Oct 7, 2021
@hasezoey

I think I have a similar problem on Ubuntu 22.04 with transmission-daemon. With just 6 torrents running, it feels like it keeps all the torrents directly in memory in the process itself instead of using streaming or Linux's built-in caching:

$ ps -aux | grep transmission
debian-+    1711  0.8 39.7 10632612 6461816 ?    Ssl  Apr30 133:43 /usr/bin/transmission-daemon -f --log-error

[image: htop screenshot]

$ apt list --installed "transmission*"
Listing... Done
transmission-cli/jammy,now 3.00-2ubuntu2 amd64 [installed]
transmission-common/jammy,now 3.00-2ubuntu2 all [installed,automatic]
transmission-daemon/jammy,now 3.00-2ubuntu2 amd64 [installed]

Also, when I add another torrent (about 2GB), memory usage increases by about another GB over the course of the download.

@stevenengler

> I think I have a similar problem on Ubuntu 22.04 with transmission-daemon. With just 6 torrents running, it feels like it keeps all the torrents directly in memory in the process itself instead of using streaming or Linux's built-in caching:

I've also run into memory issues since upgrading to Ubuntu 22.04. Over about 3 days, the memory usage of transmission-daemon climbs until all memory is used and the entire system (an RPi) freezes. This is with no active torrents. I did not have this issue on Ubuntu 20.04.

@andreygursky

It looks like a different issue. In a few words, this issue is about very high memory consumption spikes while reannouncing torrents to trackers over https (instead of plain http), especially if a high number of such reannounces is scheduled simultaneously. The workaround is to build transmission with libcurl4-openssl-dev instead of libcurl4-gnutls-dev, which was applied in Debian/Ubuntu. @hasezoey, could you please open a new issue with the description you provided? Then @stevenengler could copy his comment there.
Thanks.

@hasezoey

> It looks like a different issue. In a few words, this issue is about very high memory consumption spikes while reannouncing torrents to trackers over https (instead of plain http), especially if a high number of such reannounces is scheduled simultaneously. The workaround is to build transmission with libcurl4-openssl-dev instead of libcurl4-gnutls-dev, which was applied in Debian/Ubuntu. @hasezoey, could you please open a new issue with the description you provided? Then @stevenengler could copy his comment there. Thanks.

Created a new issue: #3077

@mrx23dot

I'm running transmission-daemon 3.00 (bb6b5a062e) and seeding around 20 torrents. After a day it uses 4GB of RAM and has maxed out swap, even though the current speed is 0 down / 20KB up.

Sure, it went up to 2MB/s during the day, but why would that affect the current state?
I cannot use it as a seed box this way, with 200+ torrents running 24/7 at maxed-out upload speed.

After restarting the daemon and seeding the same torrents again, RAM went down to 160MB, which is a lot nicer. Shouldn't we leave disk caching to the OS? Or what is the RAM used for?

I think there is a garbage collection missing after spikes.

@mrx23dot

Also, there is no open ticket for RAM issues; the solution mentioned was to periodically restart it.

@andreygursky

> I think there is a garbage collection missing after spikes.

This issue is not about GC (transmission doesn't use GC). Your issue seems to be a different one: see #3055 (and #3077).
