
High passive memory usage (leak?) since version 4.3 (Upgrade to Ubuntu 22.04 and Transmission 3.00) #2469

Closed · 7 of 8 tasks
ameinild opened this issue Dec 16, 2022 · 60 comments

@ameinild

ameinild commented Dec 16, 2022

Is there a pinned issue for this?

  • I have read the pinned issues and could not find my issue

Is there an existing or similar issue/discussion for this?

  • I have searched the existing issues
  • I have searched the existing discussions

Is there any comment in the documentation for this?

  • I have read the documentation, especially the FAQ and Troubleshooting parts

Is this related to a provider?

  • I have checked the provider repo for issues
  • My issue is NOT related to a provider

Are you using the latest release?

  • I am using the latest release

Have you tried using the dev branch latest?

  • I have tried using dev branch

Docker run config used

version: '3'

services:

  transmission:
    image: haugene/transmission-openvpn:latest
    container_name: transmission-openvpn
    healthcheck:
      disable: true 
    environment:
      - TRANSMISSION_DOWNLOAD_DIR=/data/download
      - CREATE_TUN_DEVICE=true
      - OPENVPN_PROVIDER=CUSTOM
      - OPENVPN_USERNAME=****
      - OPENVPN_PASSWORD=****
      - WEBPROXY_ENABLED=false
      - TRANSMISSION_PEER_PORT=49115
      - TRANSMISSION_PORT_FORWARDING_ENABLED=false
      - LOCAL_NETWORK=10.10.0.0/22
      - DROP_DEFAULT_ROUTE=true
      - TRANSMISSION_WATCH_DIR_ENABLED=false
      - TRANSMISSION_DHT_ENABLED=true
      - TRANSMISSION_PEX_ENABLED=true
      - PUID=1000
      - PGID=1000
      - NO_LOGS=true
    ports:
      - '8333:9091'
    volumes:
      - /mnt/zfs/postern/torrents:/data
      - /mnt/docker-data/transmission/str-ams102_a309011.ovpn:/etc/openvpn/custom/default.ovpn
      - /mnt/docker-data/transmission:/config
      - /etc/localtime:/etc/localtime:ro
    cap_add:
      - NET_ADMIN
    logging:
      driver: json-file
      options:
        max-size: 10m
    restart: unless-stopped

Current Behavior

The Transmission Container uses over 10 GB of memory after running for 10 days with around 25 torrents.

[Screenshot: Transmission container memory usage]

Expected Behavior

I expect the container to not use over 10 GB of memory when only seeding a couple of torrents at a time.

How have you tried to solve the problem?

Works without issue on version 4.2 (Ubuntu 20.04 and Transmission 2.X).

Log output

2022-12-11T01:00:39.554835090Z Starting container with revision: b33d0fe4c938259a0d4eb844e55468f387456121
2022-12-11T01:00:39.659198076Z Creating TUN device /dev/net/tun
2022-12-11T01:00:39.664931765Z Using OpenVPN provider: CUSTOM
2022-12-11T01:00:39.664976101Z Running with VPN_CONFIG_SOURCE auto
2022-12-11T01:00:39.664985258Z CUSTOM provider specified but not using default.ovpn, will try to find a valid config mounted to /etc/openvpn/custom
2022-12-11T01:00:39.671329834Z No VPN configuration provided. Using default.
2022-12-11T01:00:39.671385155Z Modifying /etc/openvpn/custom/default.ovpn for best behaviour in this container
2022-12-11T01:00:39.672167391Z Modification: Point auth-user-pass option to the username/password file
2022-12-11T01:00:39.695956777Z sed: cannot rename /etc/openvpn/custom/sedPEe4Fh: Device or resource busy
2022-12-11T01:00:39.696593127Z Modification: Change ca certificate path
2022-12-11T01:00:39.755163243Z Modification: Change ping options
2022-12-11T01:00:39.755246763Z sed: cannot rename /etc/openvpn/custom/sedL2NPCH: Device or resource busy
2022-12-11T01:00:39.775036205Z sed: cannot rename /etc/openvpn/custom/sedUOYd6Q: Device or resource busy
2022-12-11T01:00:39.798124953Z sed: cannot rename /etc/openvpn/custom/sed0MejDr: Device or resource busy
2022-12-11T01:00:39.841101873Z Modification: Update/set resolv-retry to 15 seconds
2022-12-11T01:00:39.931314660Z Modification: Change tls-crypt keyfile path
2022-12-11T01:00:39.931325205Z Modification: Set output verbosity to 3
2022-12-11T01:00:39.931341936Z sed: cannot rename /etc/openvpn/custom/sedYAEKv9: Device or resource busy
2022-12-11T01:00:39.931352007Z sed: cannot rename /etc/openvpn/custom/sedpmpJtK: Device or resource busy
2022-12-11T01:00:39.931360395Z sed: cannot rename /etc/openvpn/custom/sedunTnJi: Device or resource busy
2022-12-11T01:00:39.931368830Z sed: cannot rename /etc/openvpn/custom/sednSwLKg: Device or resource busy
2022-12-11T01:00:39.931377274Z sed: cannot rename /etc/openvpn/custom/sedMBlNI2: Device or resource busy
2022-12-11T01:00:39.931385634Z sed: cannot rename /etc/openvpn/custom/sed3qQXNV: Device or resource busy
2022-12-11T01:00:39.941248316Z sed: cannot rename /etc/openvpn/custom/sedP9qI2j: Device or resource busy
2022-12-11T01:00:39.950407893Z Modification: Remap SIGUSR1 signal to SIGTERM, avoid OpenVPN restart loop
2022-12-11T01:00:40.019717774Z sed: cannot rename /etc/openvpn/custom/sedTeXOgM: Device or resource busy
2022-12-11T01:00:40.033250085Z Modification: Updating status for config failure detection
2022-12-11T01:00:40.033315038Z sed: cannot rename /etc/openvpn/custom/sedzWkYUw: Device or resource busy
2022-12-11T01:00:40.067269476Z sed: cannot rename /etc/openvpn/custom/sedNeVS5M: Device or resource busy
2022-12-11T01:00:40.078484365Z sed: cannot rename /etc/openvpn/custom/seddiPlVF: Device or resource busy
2022-12-11T01:00:40.103030778Z Setting OpenVPN credentials...
2022-12-11T01:00:40.264497533Z adding route to local network 10.10.0.0/22 via 172.23.0.1 dev eth0
2022-12-11T01:00:40.264546724Z 2022-12-11 02:00:40 WARNING: Compression for receiving enabled. Compression has been used in the past to break encryption. Sent packets are not compressed unless "allow-compression yes" is also set.
2022-12-11T01:00:40.319442490Z 2022-12-11 02:00:40 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-256-CBC' to --data-ciphers or change --cipher 'AES-256-CBC' to --data-ciphers-fallback 'AES-256-CBC' to silence this warning.
2022-12-11T01:00:40.346038041Z 2022-12-11 02:00:40 OpenVPN 2.5.5 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Jul 14 2022
2022-12-11T01:00:40.346090577Z 2022-12-11 02:00:40 library versions: OpenSSL 3.0.2 15 Mar 2022, LZO 2.10
2022-12-11T01:00:40.346100072Z 2022-12-11 02:00:40 WARNING: --ns-cert-type is DEPRECATED.  Use --remote-cert-tls instead.
2022-12-11T01:00:40.346109059Z 2022-12-11 02:00:40 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
2022-12-11T01:00:40.346117744Z 2022-12-11 02:00:40 Outgoing Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
2022-12-11T01:00:40.346126644Z 2022-12-11 02:00:40 Incoming Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
2022-12-11T01:00:40.346135291Z 2022-12-11 02:00:40 WARNING: normally if you use --mssfix and/or --fragment, you should also set --tun-mtu 1500 (currently it is 1400)
2022-12-11T01:00:40.608963329Z 2022-12-11 02:00:40 TCP/UDP: Preserving recently used remote address: [AF_INET]176.67.80.9:1194
2022-12-11T01:00:40.609027167Z 2022-12-11 02:00:40 Socket Buffers: R=[131072->131072] S=[16384->16384]
2022-12-11T01:00:40.609038286Z 2022-12-11 02:00:40 Attempting to establish TCP connection with [AF_INET]176.67.80.9:1194 [nonblock]
2022-12-11T01:00:40.627760642Z 2022-12-11 02:00:40 TCP connection established with [AF_INET]176.67.80.9:1194
2022-12-11T01:00:40.627803799Z 2022-12-11 02:00:40 TCP_CLIENT link local: (not bound)
2022-12-11T01:00:40.627812954Z 2022-12-11 02:00:40 TCP_CLIENT link remote: [AF_INET]176.67.80.9:1194
2022-12-11T01:00:40.642535460Z 2022-12-11 02:00:40 TLS: Initial packet from [AF_INET]176.67.80.9:1194, sid=9f40cbc3 86e2f62d
2022-12-11T01:00:40.676952982Z 2022-12-11 02:00:40 VERIFY OK: depth=1, C=US, ST=TX, L=Dallas, O=strongtechnology.net, CN=strongtechnology.net CA, emailAddress=lecerts@strongtechnology.net
2022-12-11T01:00:40.676999175Z 2022-12-11 02:00:40 VERIFY OK: nsCertType=SERVER
2022-12-11T01:00:40.677008417Z 2022-12-11 02:00:40 NOTE: --mute triggered...
2022-12-11T01:00:40.716204565Z 2022-12-11 02:00:40 2 variation(s) on previous 3 message(s) suppressed by --mute
2022-12-11T01:00:40.716254334Z 2022-12-11 02:00:40 [openvpn] Peer Connection Initiated with [AF_INET]176.67.80.9:1194
2022-12-11T01:00:41.723682262Z 2022-12-11 02:00:41 SENT CONTROL [openvpn]: 'PUSH_REQUEST' (status=1)
2022-12-11T01:00:41.745246746Z 2022-12-11 02:00:41 PUSH: Received control message: 'PUSH_REPLY,dhcp-option DNS 198.18.0.1,dhcp-option DNS 198.18.0.2,ping 1,ping-restart 60,comp-lzo no,route-gateway 100.64.32.1,topology subnet,socket-flags TCP_NODELAY,ifconfig 100.64.32.8 255.255.254.0,peer-id 0,cipher AES-256-GCM'
2022-12-11T01:00:41.745296865Z 2022-12-11 02:00:41 OPTIONS IMPORT: timers and/or timeouts modified
2022-12-11T01:00:41.745306305Z 2022-12-11 02:00:41 NOTE: --mute triggered...
2022-12-11T01:00:41.745314835Z 2022-12-11 02:00:41 2 variation(s) on previous 3 message(s) suppressed by --mute
2022-12-11T01:00:41.745323254Z 2022-12-11 02:00:41 Socket flags: TCP_NODELAY=1 succeeded
2022-12-11T01:00:41.745331612Z 2022-12-11 02:00:41 OPTIONS IMPORT: --ifconfig/up options modified
2022-12-11T01:00:41.745340021Z 2022-12-11 02:00:41 OPTIONS IMPORT: route-related options modified
2022-12-11T01:00:41.745348382Z 2022-12-11 02:00:41 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
2022-12-11T01:00:41.745356890Z 2022-12-11 02:00:41 NOTE: --mute triggered...
2022-12-11T01:00:41.745365192Z 2022-12-11 02:00:41 3 variation(s) on previous 3 message(s) suppressed by --mute
2022-12-11T01:00:41.745373568Z 2022-12-11 02:00:41 Data Channel: using negotiated cipher 'AES-256-GCM'
2022-12-11T01:00:41.745382004Z 2022-12-11 02:00:41 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
2022-12-11T01:00:41.745390287Z 2022-12-11 02:00:41 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
2022-12-11T01:00:41.745398588Z 2022-12-11 02:00:41 net_route_v4_best_gw query: dst 0.0.0.0
2022-12-11T01:00:41.745407018Z 2022-12-11 02:00:41 net_route_v4_best_gw result: via 172.23.0.1 dev eth0
2022-12-11T01:00:41.758771221Z 2022-12-11 02:00:41 ROUTE_GATEWAY 172.23.0.1/255.255.0.0 IFACE=eth0 HWADDR=02:42:ac:17:00:02
2022-12-11T01:00:41.772554152Z 2022-12-11 02:00:41 TUN/TAP device tun0 opened
2022-12-11T01:00:41.772606012Z 2022-12-11 02:00:41 net_iface_mtu_set: mtu 1400 for tun0
2022-12-11T01:00:41.775118954Z 2022-12-11 02:00:41 net_iface_up: set tun0 up
2022-12-11T01:00:41.775161323Z 2022-12-11 02:00:41 net_addr_v4_add: 100.64.32.8/23 dev tun0
2022-12-11T01:00:43.826150917Z 2022-12-11 02:00:43 net_route_v4_add: 176.67.80.9/32 via 172.23.0.1 dev [NULL] table 0 metric -1
2022-12-11T01:00:43.826223721Z 2022-12-11 02:00:43 net_route_v4_add: 0.0.0.0/1 via 100.64.32.1 dev [NULL] table 0 metric -1
2022-12-11T01:00:43.826234178Z 2022-12-11 02:00:43 net_route_v4_add: 128.0.0.0/1 via 100.64.32.1 dev [NULL] table 0 metric -1
2022-12-11T01:00:43.899186309Z sed: cannot rename /etc/openvpn/custom/sedWasA9G: Device or resource busy
2022-12-11T01:00:43.913757854Z sed: cannot rename /etc/openvpn/custom/sedoLg3bb: Device or resource busy
2022-12-11T01:00:43.958914937Z Up script executed with device=tun0 ifconfig_local=100.64.32.8
2022-12-11T01:00:43.958962694Z Updating TRANSMISSION_BIND_ADDRESS_IPV4 to the ip of tun0 : 100.64.32.8
2022-12-11T01:00:43.965486737Z TRANSMISSION_HOME is currently set to: /config/transmission-home
2022-12-11T01:00:43.965570702Z WARNING: Deprecated. Found old default transmission-home folder at /data/transmission-home, setting this as TRANSMISSION_HOME. This might break in future versions.
2022-12-11T01:00:43.965613573Z We will fallback to this directory as long as the folder exists. Please consider moving it to /config/<transmission-home>
2022-12-11T01:00:43.974559387Z Enforcing ownership on transmission config directory
2022-12-11T01:00:43.983238151Z Applying permissions to transmission config directory
2022-12-11T01:00:43.983291867Z Setting owner for transmission paths to 1000:1000
2022-12-11T01:00:43.994004629Z Setting permissions for download and incomplete directories
2022-12-11T01:00:44.142834012Z Mask: 002
2022-12-11T01:00:44.142884610Z Directories: 775
2022-12-11T01:00:44.142893883Z Files: 664
2022-12-11T01:00:44.175889217Z Setting permission for watch directory (775) and its files (664)
2022-12-11T01:00:44.207608299Z 
2022-12-11T01:00:44.207668538Z -------------------------------------
2022-12-11T01:00:44.207678341Z Transmission will run as
2022-12-11T01:00:44.207686675Z -------------------------------------
2022-12-11T01:00:44.207694852Z User name:   abc
2022-12-11T01:00:44.207702811Z User uid:    1000
2022-12-11T01:00:44.207710850Z User gid:    1000
2022-12-11T01:00:44.207719004Z -------------------------------------
2022-12-11T01:00:44.207727200Z 
2022-12-11T01:00:44.207735196Z Updating Transmission settings.json with values from env variables
2022-12-11T01:00:44.370908924Z Attempting to use existing settings.json for Transmission
2022-12-11T01:00:44.370959457Z Successfully used existing settings.json /data/transmission-home/settings.json
2022-12-11T01:00:44.370969000Z Overriding bind-address-ipv4 because TRANSMISSION_BIND_ADDRESS_IPV4 is set to 100.64.32.8
2022-12-11T01:00:44.370977670Z Overriding dht-enabled because TRANSMISSION_DHT_ENABLED is set to true
2022-12-11T01:00:44.370996490Z Overriding download-dir because TRANSMISSION_DOWNLOAD_DIR is set to /data/download
2022-12-11T01:00:44.371027272Z Overriding incomplete-dir because TRANSMISSION_INCOMPLETE_DIR is set to /data/incomplete
2022-12-11T01:00:44.371036866Z Overriding peer-port because TRANSMISSION_PEER_PORT is set to 49115
2022-12-11T01:00:44.371045267Z Overriding pex-enabled because TRANSMISSION_PEX_ENABLED is set to true
2022-12-11T01:00:44.371256010Z Overriding port-forwarding-enabled because TRANSMISSION_PORT_FORWARDING_ENABLED is set to false
2022-12-11T01:00:44.371265355Z Overriding rpc-password because TRANSMISSION_RPC_PASSWORD is set to [REDACTED]
2022-12-11T01:00:44.371274214Z Overriding rpc-port because TRANSMISSION_RPC_PORT is set to 9091
2022-12-11T01:00:44.371282825Z Overriding rpc-username because TRANSMISSION_RPC_USERNAME is set to 
2022-12-11T01:00:44.371291386Z Overriding watch-dir because TRANSMISSION_WATCH_DIR is set to /data/watch
2022-12-11T01:00:44.371299668Z Overriding watch-dir-enabled because TRANSMISSION_WATCH_DIR_ENABLED is set to false
2022-12-11T01:00:44.399837435Z sed'ing True to true
2022-12-11T01:00:44.401112807Z DROPPING DEFAULT ROUTE
2022-12-11T01:00:44.414481264Z STARTING TRANSMISSION
2022-12-11T01:00:44.414530300Z Transmission startup script complete.
2022-12-11T01:00:44.468961129Z 2022-12-11 02:00:44 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
2022-12-11T01:00:44.471995265Z 2022-12-11 02:00:44 Initialization Sequence Completed

HW/SW Environment

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy


Client: Docker Engine - Community
 Version:           20.10.21
 API version:       1.41
 Go version:        go1.18.7
 Git commit:        baeda1f
 Built:             Tue Oct 25 18:01:58 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.21
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.7
  Git commit:       3056208
  Built:            Tue Oct 25 17:59:49 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.12
  GitCommit:        a05d175400b1145e5e6a735a6710579d181e7fb0
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Anything else?

There appears to be a fix in a later version of Transmission 3, mentioned on the Transmission GitHub:

transmission/transmission#3077

It appears this nightly build fixes the issue.

@kjwill555

+1

For me, with 28 torrents, the container has only been up for 2 days and is already using 2.97 GB of my system's 4.00 GB of RAM.

@onthecliff

I also have this problem.

@jucor

jucor commented Jan 1, 2023

Same problem. Is there a way to update the version of Transmission used, please?

@ameinild
Author

ameinild commented Jan 1, 2023

Use the beta branch with Transmission 4.0.0 beta-2 - this works for me.

This image: https://hub.docker.com/layers/haugene/transmission-openvpn/beta/images/sha256-e7193f482b62412b7ded7a244cf3f857c8ed08961dcc18b660ad1b55cb96efe5?context=explore

And this command to pull:
docker pull haugene/transmission-openvpn:beta

@jucor

jucor commented Jan 1, 2023

Thanks a lot @ameinild , that sorted it!
Edit: I might have spoken too fast. RAM usage is rising again, with only a single torrent. Will let it grow overnight and report. Running on a low-RAM NAS, so this could be enough of a problem to require running an alternate client -- but it's rare to find one nicely packaged for VPN like this (thanks @haugene !)

@onthecliff

Use the beta branch with Transmission 4.0.0 beta-2 - this works for me.

This image: https://hub.docker.com/layers/haugene/transmission-openvpn/beta/images/sha256-e7193f482b62412b7ded7a244cf3f857c8ed08961dcc18b660ad1b55cb96efe5?context=explore

And this command to pull: docker pull haugene/transmission-openvpn:beta

Unfortunately, Transmission 4.0 is banned on about a third of the private trackers I use, so this isn't an option, but thank you for the suggestion.

@ameinild
Author

ameinild commented Jan 2, 2023

Yeah, I know it's an issue that the Transmission beta is banned on private trackers. In this case, I would suggest instead reverting to an earlier Docker image based on Ubuntu 20.04 and Transmission 2.9X, like 4.2 or 4.1. Or possibly the dev branch - I don't know if this fixes the issue yet? But there should be other possibilities. 😎

There are actually several Docker tags to try out:

https://hub.docker.com/r/haugene/transmission-openvpn/tags
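
For reference, a minimal sketch of what pinning an older tag looks like - the only change to the compose file from the opening post is the image line (substitute whichever tag from that page you want to test), followed by a pull and recreate:

    image: haugene/transmission-openvpn:4.2

docker-compose pull && docker-compose up -d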

@jucor

jucor commented Jan 3, 2023

@ameinild I'm still getting the memory issue :( I'm now at 5.43 GB after running for 2 days with a single torrent :/
[Screenshot: container memory usage]
Weirdly enough, the host system (a Synology DSM 6) only reports 809 MB total memory used, so I'm really puzzled.
Any idea what's going on please?

@ameinild
Author

ameinild commented Jan 3, 2023

I have no idea - the beta version works perfectly for me on Ubuntu. You could try rolling back to an earlier release. Otherwise, wait for a stable version of Transmission where they fix the memory leaks that are clearly present. 😬

@jucor

jucor commented Jan 4, 2023 via email

@joe-eklund

I am stopping in to say I am having memory leaks even on the new beta. I have 128 GB of RAM, so I didn't really notice this until recently, when it also filled up all my swap.

Here is the memory after running for a few hours with < 40 torrents.

[Screenshot: memory usage]

@jucor

jucor commented Jan 4, 2023 via email

@ameinild
Author

ameinild commented Jan 4, 2023

It's strange. It seems the memory leak issue is hitting randomly for different versions of Transmission and on different OSes. On Ubuntu 22.04 I had no issue with Transmission 2.94, a huge memory leak on Transmission 3.00, and no problem again on Transmission 4.0-beta. This would make it very difficult to troubleshoot, I guess... 😬

@joe-eklund

joe-eklund commented Jan 4, 2023

I am also using Ubuntu 22.04.

After it quickly jumped back up to over 20 GB, I instituted a memory limit through Portainer, and that has helped. Now it doesn't go above whatever limit I set. I am not sure if it will affect the functionality of the container, though. Guess we'll see.
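
(For anyone not using Portainer: the same cap can be applied to an already-running container from the CLI - a sketch, assuming the container name from the compose file at the top of this issue and a 4 GB limit:)

docker update --memory 4g --memory-swap 4g transmission-openvpn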

I also switched back from beta to latest since that didn't fix it anyway and I would rather run a stable version.

@DaveLMSP

DaveLMSP commented Jan 7, 2023

I'm running Linux Mint 20.3 with v4.3.2 of the container. I haven't tried alternate versions of Transmission, but I became aware of this issue when I noticed high swap usage on the host. After running for about a week with 17 torrents, the container was using 31.5GB of memory and another 6GB of swap. I've been using a limit through Portainer for the last several days without any major issues. I have seen its status listed as 'unhealthy' a couple times, but it resumed running normally after restarting via Portainer.

@ivalkenburg

Same issue here. I'm not sure what changed; it started doing this recently. The image I'm using was pulled 2 months ago. Either I didn't notice it until now, or something changed...

@seanmuth

seanmuth commented Feb 4, 2023

Same here. Capped the container at 12 GB (64 GB system) and it ramps up to 12 GB super quickly. I restart the container nightly as well.
[Screenshot: container memory usage]

@haugene
Owner

haugene commented Feb 10, 2023

I was hoping that Transmission 4.0.0 would be our way out of this; troubled to hear that some of you are still experiencing issues with it 😞

The release is now out 🎉 https://github.com/transmission/transmission/releases/tag/4.0.0 🎉 and is already starting to get whitelisted on private trackers. But if there are still memory issues then we might have to consider doing something else.

If this could get fixed upstream, or we could narrow it down to a server OS and then report it, that would be the best long-term solution, I guess. If not, the only thing that comes to mind is changing the distro of the base image to see if that has an effect, before we start automatically restarting Transmission within the image or other hackery 😬

The beta tag of the image was updated with the 4.0.0 release version so 🤞

@DaveLMSP

A couple of weeks ago I noticed the web interface becoming unresponsive when the container was listed as unhealthy, and I set up a cron job to restart the container every 4 hours. Initially I tried longer intervals, but the container would go unhealthy as soon as the 1 GB memory limit was reached and essentially stop working. With a 4-hour restart window I'm able to catch the container before it goes unresponsive, and it's been working great. If it would be helpful, I can adjust the restart interval and post container logs.
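
A minimal sketch of such a cron entry, assuming the container is named transmission-openvpn as in the compose file at the top of this issue (added on the host via crontab -e):

0 */4 * * * /usr/bin/docker restart transmission-openvpn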

@ameinild
Author

The latest version with the Transmission 4.0.0 release still works well for me on Ubuntu 22.04 server. 👍

@theythem9973

Saw this thread on the Transmission repo itself about high memory usage, even with 4.0; may be pertinent: transmission/transmission#4786

@haugene
Owner

haugene commented Feb 14, 2023

Very curious to follow that thread @theythem9973. Hopefully they'll find something that can help us here as well 🤞

But this issue was reported here when upgrading to 4.3 of this image, which is using a much older build of Transmission, and we also previously ran v3.00 of Transmission under Alpine without issues (tag 3.7.1). So, I'm starting to doubt the choice of Ubuntu 22.04 as a stable option 😞 We moved from Alpine for a reason as well, so I'm not sure if we want to go back there or go Debian if Ubuntu doesn't pan out.

Before we change the base image again, I'm just curious if there's a possibility to solve this by rolling forward instead.
I've created a new branch for the 22.10 (kinetic) release. Anyone up for pulling the kinetic tag of the image and seeing if that works any better?
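
For anyone testing, switching tags is just a pull plus a recreate - a sketch assuming the compose setup from the opening post (use docker-compose instead of docker compose if you're on the v1 CLI):

docker pull haugene/transmission-openvpn:kinetic
# change the image: line of the service to the kinetic tag, then:
docker compose up -d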

@haugene
Owner

haugene commented Feb 15, 2023

In addition to the kinetic tag, I have now also tried rolling back to focal as the base image and installing Transmission via the PPA so that we still stay on Transmission 3.00. So you can also try using the focal tag and see if that's better.

@CurtC2

CurtC2 commented Mar 4, 2023

Found this was clobbering me as well. I pulled kinetic; I'll let you know how it goes. Pre-kinetic, this was all Transmission:
[Screenshot: memory usage graph]

Hmm, kinetic isn't working for me:
Checking port...
transmission-vpn | Error: portTested: http error 400: Bad Request
transmission-vpn | #######################
transmission-vpn | SUCCESS
transmission-vpn | #######################
transmission-vpn | Port:
transmission-vpn | Expiration Fri Mar 3 00:00:00 EST 2023
transmission-vpn | #######################
transmission-vpn | Entering infinite while loop
transmission-vpn | Every 15 minutes, check port status
transmission-vpn | 60 day port reservation reached
transmission-vpn | Getting a new one
transmission-vpn | curl: (3) URL using bad/illegal format or missing URL
transmission-vpn | Fri Mar 3 23:28:02 EST 2023: getSignature error
transmission-vpn |
transmission-vpn | the has been a fatal_error
transmission-vpn | curl: (3) URL using bad/illegal format or missing URL
transmission-vpn | Fri Mar 3 23:28:02 EST 2023: bindPort error
transmission-vpn |
transmission-vpn | the has been a fatal_error

Trying focal. Update: focal has been running downloads & seeding for 12+ hours with zero memory increase.

@pkishino
Collaborator

pkishino commented Mar 4, 2023

I see that 4.0.1 has been released with a possible fix in Transmission. I'll make a new build with this on the beta branch.

@ameinild
Author

ameinild commented Mar 4, 2023

The latest beta with Transmission 4.0.1 is (still) working fine for me on Ubuntu 22.04 and Docker 23.0.1. 👍

@pkishino
Collaborator

pkishino commented Mar 7, 2023

I'm running the latest Transmission 4.0.1 since last week on two containers with 5/10 GB limits. The larger container is fairly constant with 50-ish seeding torrents, always around 7-8 GB used per the last 4 months of stats I have (since older versions as well). The second one is my main download container, carrying between 0 and 10-ish torrents, and it seldom goes above 2 GB.
Running on an old Mac mini, Monterey, and the latest Docker for Mac.

@Salamafet

Same problem here with the 4.0.1 version.

After restarting the container, everything went back to normal. I have modified my docker-compose file to set a RAM limit just in case.
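
A minimal sketch of what that can look like, based on the compose file in the opening post (the 4g value is just an example; depending on your Compose version the limit may need to go under deploy.resources.limits.memory instead):

version: '2.4'
services:
  transmission:
    image: haugene/transmission-openvpn:latest
    mem_limit: 4g        # hard cap on the container's memory
    memswap_limit: 4g    # optional: cap memory+swap too, so a leak cannot spill into swap
    # ...remaining options unchanged from the config at the top of this issue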

@timothe

timothe commented Mar 14, 2023

Same for me on Synology DSM 7.1.1 and latest Docker image. Always maxing out the available memory.
I have to limit the usage in Docker.

@sgtnips

sgtnips commented Mar 21, 2023

Been running focal for almost 3 weeks and it's looking good. Below you can see my memory settle down halfway through week 9.

However, bizarrely, the container thinks it's chewing up 10 GB of memory when the total system is barely using 2 GB. Maybe it's all cache and I've just never looked too closely before.

Anyway, Focal looks good for me on Ubuntu 22.04 and Docker 23.0.1.

[Screenshot: container memory usage]

[Screenshot: system memory over the last month]

@pkishino
Collaborator

Is anyone actually seeing a leak on 5.0.2? That uses 22.04 as the base but runs Transmission 4.0.3.

@Sabb0

Sabb0 commented Apr 26, 2023

The extent of my knowledge is... I pulled the latest branch a week or so ago and the memory usage crashed the container within an hour. Only just got round to looking into it, so I pulled the focal branch (no change in seed numbers etc.) and it's been fine overnight.

I'll pull the latest branch again and see what happens.

@pkishino
Collaborator

pkishino commented Apr 26, 2023 via email

@Sabb0

Sabb0 commented Apr 26, 2023

Unfortunately, I don't have that info - not very useful I know! I will report back later once it's been running for a few hours.

I don’t recall what version :latest is but I’ve been running :5.0.2 and it works great. When you pulled latest, what version of transmission was it using?

@ilike2burnthing
Contributor

:latest and :5.0.2 are the same.

@ameinild
Author

I'm now running :latest (:5.0.2), and still no issues for me. Was previously running :beta (with Transmission 4.0.2), which also worked fine.

@Sabb0

Sabb0 commented Apr 27, 2023

After a day, the latest branch is running around 600 MB, so not crazy.

For some comparison, I have a version 3.3 container at 350 MB with the same seeds etc. I assume the difference is due to 3.3 running Alpine.

@theythem9973

I've been using focal for a couple of weeks now. Previously I was using "latest" (don't know the exact version). focal has been great - it's sitting pretty at ~700 MB, when previously it'd grow to upwards of 18 GB until it hit swap / crashed.

@haugene
Owner

haugene commented Apr 28, 2023

Glad to hear it @theythem9973 👌 We're on to something 😄 Are you also up for testing with the newest 5.x release? The 5.0.2 tag?

@timothe

timothe commented Apr 28, 2023

The latest version is drastically reducing memory usage. I'm running at 88 MB after 2 days of running... Case closed IMO.

@joe-eklund

FWIW, I am running latest and still have this issue. Hitting my 4 GB limit I have set through Portainer.

@haugene
Owner

haugene commented Apr 28, 2023 via email

@joe-eklund

joe-eklund commented Apr 28, 2023

I am using the tag haugene/transmission-openvpn:latest. I just tried to pull again and nothing changed. Portainer is reporting that the image is up to date. I poked around the logs but didn't see anything that jumped out at me and confirmed which version I was using. But I am fairly confident I am using https://hub.docker.com/layers/haugene/transmission-openvpn/latest/images/sha256-df0b4b4c640004ff48103d8405d0e26b42b0d3631e35399d9f9ebdde03b6837e, given that Portainer says the image the container is using is the most up to date.

I swapped to 5.0.2, and now Portainer shows the same image tagged as both 5.0.2 and latest, so it's the same image whether I change to 5.0.2 or use latest. I will leave it as 5.0.2 and monitor, but I suspect it will exhibit the same behavior since the actual image being used didn't change. Right now it's at ~600 MB and every few seconds it is going up by ~30 MB.

EDIT: I looked at the logs and see Starting container with revision: 1103172c3288b7de681e2fb7f1378314f17f66cf.
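
If it helps, a quick way to double-check which image a container is actually running (a sketch, assuming the container is named transmission-openvpn as in the compose file at the top of this issue):

# tag the container was created from and the image ID it resolves to
docker inspect --format '{{.Config.Image}} -> {{.Image}}' transmission-openvpn
# repo digest of the locally pulled :latest, to compare against Docker Hub
docker image inspect --format '{{index .RepoDigests 0}}' haugene/transmission-openvpn:latest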

@haugene
Owner

haugene commented Apr 28, 2023 via email

@joe-eklund

joe-eklund commented Apr 28, 2023

OS is Ubuntu 22.04 LTS, Docker 23.0.3. And after restarting the container a couple of hours ago, it's back up to my Portainer memory limit (4 GB).

@theythem9973

Glad to hear it @theythem9973 👌 We're on to something 😄 Are you also up for testing with the newest 5.x release? The 5.0.2 tag?

Hey @haugene! Yeah, I do think we're onto something! I don't think I'm quite ready to try '5.0.2' yet, since it makes the jump to Transmission 4.0 (although 4.0.3, which skipped some growing pains). Surprisingly, I'm pretty Luddite about these things. Let me mull it over over the weekend and read more about 4.0.

Thanks for everything you all do!

@ocangelo

ocangelo commented May 19, 2023

Having the same problem with :latest on a Synology with the latest DSM; it's just using all the available memory.
What are some good memory limits for Transmission?

@pkishino
Collaborator

Having the same problem with :latest on a Synology with the latest DSM; it's just using all the available memory.
What are some good memory limits for Transmission?

Please try the :focal branch

@enchained

I had random server crashes on a dedicated Hetzner server for a while; the timing of some could be associated with Transmission activity, but I didn't investigate the RAM usage then. I updated to :5.0.2 specifically, and some time after that I noticed ~20 GB memory usage by this container, and set mem_limit: 4g in compose. The crashes stopped, but the usage level is always at 4 GB now.

Looks like I'm unable to switch to :focal because it gives me Options error: Unrecognized option or missing or extra parameter(s) in /etc/openvpn/nordvpn/ch403.nordvpn.com.ovpn:22: data-ciphers (2.4.7) and a container crash loop. Could someone please update the focal branch with the latest fixes, if that might help?

Can I try any other branch in the meantime?

@Qhilm

Qhilm commented Jun 6, 2023

I have also had extremely high memory usage: with 6 torrents downloading, Transmission would gobble up 10+ GB of RAM within two hours and render my Synology NAS unresponsive (the drop is when I restarted the container, 100% = 12 GB):

[Screenshot: NAS memory usage]

I was using the latest branch and switched to the focal branch as recommended above; it immediately solved the problem. You can see memory is not going up anymore after the last drop (which is when I redeployed with the new image):

[Screenshot: memory usage after switching to focal]

My Docker Compose file (the branch is now focal; this is the original docker-compose):

version: '3.8'
services:
  transmission-openvpn:
    container_name: 'haugene'
    cap_add:
      - NET_ADMIN
    devices:
      - '/dev/net/tun'
    volumes:
      - /volume1/Laster/transmission-data/:/data
      - /volume1/docker/haugene/resolv.conf:/etc/resolv.conf
      - /volume1/docker/haugene/:/config
    environment:
      - OPENVPN_PROVIDER=PROTONVPN
      - OPENVPN_CONFIG=dk.protonvpn.net.udp
      - OPENVPN_USERNAME=**None**
      - OPENVPN_PASSWORD=**None**
      - LOCAL_NETWORK=192.168.178.0/24
      - OVERRIDE_DNS_1=9.9.9.9
      - OVERRIDE_DNS_2=149.112.112.112
      - OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 60
      - TRANSMISSION_RATIO_LIMIT=3
      - TRANSMISSION_RATIO_LIMIT_ENABLED=true
      - TRANSMISSION_SPEED_LIMIT_UP_ENABLED=true
      - TRANSMISSION_SPEED_LIMIT_UP=200
      - TRANSMISSION_BLOCKLIST_URL=http://list.iblocklist.com/?list=xxxx&fileformat=p2p&archiveformat=gz&username=xxx&pin=xxx
      - TRANSMISSION_RPC_USERNAME=xxx
      - TRANSMISSION_RPC_PASSWORD=xxx
      - TRANSMISSION_RPC_AUTHENTICATION_REQUIRED=true
    logging:
      driver: json-file
      options:
        max-size: 10m
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    ports:
      - '9091:9091'
    image: haugene/transmission-openvpn:latest

networks:
  default:
    external:
      name: mybridge

In case it's useful.

@Qhilm

Qhilm commented Jun 6, 2023

I just noticed I had one torrent throwing the error "file name too long" and thought maybe that was the trigger, but it's not. After removing the bad torrent and switching back to the latest branch, the memory usage immediately starts going up again.

This is on DSM 7.1.1-42962 Update 5.

@stale

stale bot commented Aug 12, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale bot added the inactivity label Aug 12, 2023
@stale bot removed the inactivity label Sep 5, 2023
@Qhilm

Qhilm commented Sep 5, 2023

I'm running 4.0.4 and it seems stable. Nothing crazy like I posted above. I see 10 GB cache and 65 MB memory (according to Portainer) after running for 5 hours, and it hasn't changed at all in the last 15 minutes. Synology also reports very low memory usage and, most of all, it's flat - it doesn't increase.

@pkishino
Collaborator

pkishino commented Nov 9, 2023

Yeah, I think for now this issue can be closed. If anyone encounters similar issues, please feel free to comment and we can discuss whether we re-open or create a new thread.

@pkishino closed this as completed Nov 9, 2023
@pkishino pinned this issue Nov 9, 2023
@rassie

rassie commented Nov 26, 2023

I don't have any data (yet), but I had to restart my container recently after it had eaten through my RAM and swap. Will try to analyze it if it happens again.

@istrait

istrait commented Dec 1, 2023

I am continuing to have this issue, running latest and DSM 7.2.1-69057. When I tried the focal branch as suggested above, I got the same error as enchained ("Looks like I'm unable to switch to :focal because it gives me Options error: Unrecognized option or missing or extra parameter(s) in /etc/openvpn/nordvpn/ch403.nordvpn.com.ovpn:22: data-ciphers (2.4.7) and a container crash loop.")
