ClamAV can be a resource hog, and doesn't need to be running if it's not configured. #49
Comments
Customers are repeatedly hitting this and running out of memory. Can we not fix this in the current release?
|
I concur. This really needs fixing in the current release (not just for stretch).
|
We should fix this urgently. The default cloud server (1GB, Symbiosis) is effectively broken.
|
The footprint for ClamAV is now up to 486 MB (from just under 400 MB a few months ago) on a machine doing nothing, and we're starting to see customers who have it enabled on 1 GB machines hit issues with their VMs OOMing. At present a clean, fresh Symbiosis image on a 1 GB cloud server reports 50.6% of its RAM used by clamd. This probably needs to be bumped up.
|
This is already supposed to be the case!
|
We've had a few Symbiosis users recently (with otherwise pretty quiet machines) who are seeing issues with clamav hogging resources - CPU, disk and RAM.
This seems to be down to a memory leak in clamd: after 150+ days of uptime it chews up a significant amount of RAM, which can then cause freshclam to have problems allocating memory, leading to it chewing up CPU time and disk space as it writes
WARNING: [LibClamAV] mpool_malloc(): Can't allocate memory ([0-9]* bytes).
to freshclam.log over and over until the disk is full, at which point it starts consuming all the CPU time on the box.

Really, clamd doesn't need to be running at all if it's not configured to be used, and seeing it running on the box when it isn't actually configured gives users a false sense of security. If it is being used, it may also be worth a forced daily/weekly restart/reload of the service to clear out any memory leak issues.
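As a minimal sketch of that weekly-restart idea, assuming Debian's standard clamav-daemon unit name and a systemd-managed host (both assumptions; Symbiosis may manage the service differently), something like this could be dropped into /etc/cron.weekly:

```sh
#!/bin/sh
# Hypothetical /etc/cron.weekly/restart-clamav (name and approach are
# illustrative, not part of Symbiosis): restart clamd weekly to release
# memory leaked over long uptimes.

# Only bother if the daemon is actually enabled and running; if ClamAV
# isn't configured/used, it shouldn't be running in the first place.
if systemctl is-active --quiet clamav-daemon; then
    systemctl restart clamav-daemon
fi
```

That at least caps how long the leak can accumulate, though the cleaner fix remains not starting clamd on unconfigured machines at all.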