This repository has been archived by the owner on Feb 16, 2020. It is now read-only.

Memory leak in Index-Mode #287

Open

fischerling opened this issue Apr 2, 2017 · 11 comments

Comments

@fischerling
Contributor

I experience a slow but steady growth in memory usage if lumail is kept open in
index-mode.

This happens on both devices I use (current master).

A while ago I ran lumail through valgrind, but the output was massive and my
knowledge of valgrind limited, so I ignored it.

I expect the cause to be incorrect usage of either libgmime or libmagic:
reallocating some objects without freeing the old ones.

@skx
Member

skx commented Apr 3, 2017

I suspect you're right, but I do trust the libmagic usage: that's a singleton, so we only open that particular library once.

Testing magic is simplest, since you can run the sample code like so:

    $ ./lumail2 --no-def --load-file ./sample.lua/mime_type.lua --no-curses --load-path ./lib/
                                                 /etc/passwd -                       text/plain
                                                   /etc/motd -                       text/plain
                                                     /bin/ls -         application/x-executable

I'll create a debug-build and run that particular test under valgrind shortly.
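
For reference, the usual shape of that "open libmagic once" wrapper is roughly the sketch below. This is illustrative only, with made-up names rather than the real lumail class:

    // Hypothetical sketch of a libmagic singleton; not lumail's actual code.
    // The point is that magic_open()/magic_load() run exactly once.
    #include <magic.h>
    #include <string>

    class MagicSingleton
    {
    public:
        static MagicSingleton *instance()
        {
            static MagicSingleton self;   // constructed once, on first use
            return &self;
        }

        // Return the MIME-type of the given file, or an empty string on error.
        std::string type(const std::string &path)
        {
            const char *result = magic_file(m_cookie, path.c_str());
            return (result ? result : "");
        }

    private:
        MagicSingleton()
        {
            m_cookie = magic_open(MAGIC_MIME_TYPE);
            magic_load(m_cookie, NULL);   // load the default magic database
        }

        ~MagicSingleton()
        {
            magic_close(m_cookie);        // released once, at exit
        }

        magic_t m_cookie;
    };

If the sample above reports the right MIME-types and valgrind shows only a single magic_open() allocation, the libmagic side is almost certainly clean.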

@skx
Member

skx commented Apr 6, 2017

One of the problems right now is that we "leak" memory via all the singletons. They're not true leaks, but they show up as such.

I suspect we should add a nuke method to our singleton template, such that we explicitly free every singleton at termination-time. That will make the real leaks more visible and stop the false-reporting. I'll aim to get that done shortly, before doing more work.
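
A minimal sketch of what that could look like -- the names here are assumptions, not lumail's actual template -- is a heap-allocated instance plus an explicit teardown call that main() invokes on exit, so valgrind sees the matching delete:

    // Hypothetical singleton template with an explicit teardown ("nuke")
    // method; illustrative only, not the real lumail implementation.
    template <class T>
    class Singleton
    {
    public:
        // Lazily create the single instance on first use.
        static T *instance()
        {
            if (m_instance == NULL)
                m_instance = new T();
            return m_instance;
        }

        // Explicitly destroy the instance at termination-time, so the
        // allocation no longer shows up in valgrind's summary.
        static void destroy_instance()
        {
            delete m_instance;
            m_instance = NULL;
        }

    protected:
        Singleton() {}
        virtual ~Singleton() {}

    private:
        static T *m_instance;
    };

    template <class T>
    T *Singleton<T>::m_instance = NULL;

main() would then call destroy_instance() on each global object just before returning.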

skx added a commit that referenced this issue Apr 9, 2017
This ensures that any global objects are *definitely* cleaned
and freed when the process terminates.

The intention is that this will allow real leaks to be detected
more easily, via valgrind.

This updates #287.
skx added a commit that referenced this issue Apr 9, 2017
@skx
Member

skx commented Apr 9, 2017

OK, with those changes in place I can run lumail2 under valgrind like so:

   $ valgrind --leak-check=full ./lumail2-debug  --load-file ./lumail2.lua   2>leak.log

The output shows one definite leak:

    ==29233== 28 bytes in 1 blocks are definitely lost in loss record 203 of 479
    ==29233==    at 0x4C28C20: malloc (vg_replace_malloc.c:296)
    ==29233==    by 0x716FF49: __res_vinit (res_init.c:317)
    ==29233==    by 0x71713A4: __res_maybe_init (res_libc.c:125)
    ==29233==    by 0x7172C26: __nss_hostname_digits_dots (digits_dots.c:45)
    ==29233==    by 0x716369F: gethostbyname (getXXbyYY.c:108)
    ==29233==    by 0x4594D7: l_CNet_hostname(lua_State*) [clone .part.14] (net_lua.cc:78)
    ==29233==    by 0x459917: l_CNet_hostname(lua_State*) (lua.h:215)
    ==29233==    by 0x4E41C3C: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.2.so.0.0.0)
    ==29233==    by 0x4E4D59C: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.2.so.0.0.0)
    ==29233==    by 0x4E41FA7: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.2.so.0.0.0)
    ==29233==    by 0x4E415BE: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.2.so.0.0.0)
    ==29233==    by 0x4E42200: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.2.so.0.0.0)
    ==29233== 

man gethostbyname suggests that this function returns static buffers, so there is no actual leak.
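
For context, a binding like l_CNet_hostname() typically looks something like this sketch (a guess at the shape, not the real net_lua.cc): gethostbyname() hands back a pointer into static storage owned by libc, so the binding only copies the string onto the Lua stack and must not free anything; the 28 bytes above are a one-off allocation inside the resolver itself.

    // Hypothetical sketch of a hostname() Lua binding; not the real
    // net_lua.cc.  Shows why nothing here needs an explicit free().
    #include <lua.hpp>
    #include <netdb.h>
    #include <unistd.h>

    static int l_CNet_hostname(lua_State *l)
    {
        char name[256];
        if (gethostname(name, sizeof(name)) != 0)
        {
            lua_pushnil(l);
            return 1;
        }
        name[sizeof(name) - 1] = '\0';

        // gethostbyname() returns a pointer to a statically allocated
        // structure owned by libc -- the caller must NOT free it.
        struct hostent *h = gethostbyname(name);

        // lua_pushstring() copies the bytes into the Lua state, so
        // nothing is left dangling on our side either.
        lua_pushstring(l, h ? h->h_name : name);
        return 1;
    }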

Will experiment with doing more before terminating to see what leaks I can find, if any. I'm only really concerned with anything that valgrind shows as definitely being lost.

skx added a commit that referenced this issue Apr 9, 2017
skx added a commit that referenced this issue Apr 9, 2017
This cleans up a little leak affecting all views/modes.

This is part of #287
skx added a commit that referenced this issue Apr 9, 2017
This is a huge win for #287.
@skx
Member

skx commented Apr 9, 2017

My testing now shows a leak of 28 bytes, from CNet::Hostname(), for the following process:

  • launch lumail
  • press a to see all maildirs
  • scroll down twice, via "down-arrow".
  • press Q to quit.

The closedir addition saved almost 50Mb. Weird.
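
For reference, the class of bug there is the classic opendir()/readdir() loop that forgets closedir(); a rough illustration (not the actual lumail maildir code):

    // Illustrative sketch of the opendir()/closedir() pattern; the
    // function name and shape are made up, not lumail's real code.
    #include <dirent.h>
    #include <string>
    #include <vector>

    // Collect the entry names beneath the given maildir path.
    std::vector<std::string> scan_maildir(const std::string &path)
    {
        std::vector<std::string> entries;

        DIR *dir = opendir(path.c_str());
        if (dir == NULL)
            return entries;

        struct dirent *de;
        while ((de = readdir(dir)) != NULL)
            entries.push_back(de->d_name);

        // The easy mistake: returning before this line leaks the DIR
        // handle and its buffers on every scan of the maildir list.
        closedir(dir);

        return entries;
    }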

TODO:

  • Repeat the same (basic) testing in a maildir - i.e. showing a list of messages, not a list of maildirs.

skx added a commit that referenced this issue Apr 9, 2017
This was used for #287.
skx added a commit that referenced this issue Apr 9, 2017
Specifically we were leaking message-headers, via the call to
g_mime_utils_header_decode_text, and we left a couple of
messages around.

This now allows me to open an index, read a few messages, and quit
with only a leak of ~200 bytes.  (Still need to work on that).

This updates #287
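
(Context for that commit: assuming the GMime 2.6 single-argument form, g_mime_utils_header_decode_text() returns a newly-allocated string that the caller owns, so every decoded header must be released with g_free().  A minimal sketch of the pattern, with a made-up helper name:)

    // Sketch of the ownership rule around g_mime_utils_header_decode_text();
    // illustrative only, not the actual lumail message code.
    #include <gmime/gmime.h>
    #include <string>

    // Decode an RFC 2047 encoded header value into a std::string.
    std::string decode_header(const char *raw_value)
    {
        // GMime 2.6: the returned buffer is newly allocated and owned by
        // the caller -- forgetting the g_free() below leaks one string
        // per header, on every message shown in index-mode.
        char *decoded = g_mime_utils_header_decode_text(raw_value);

        std::string result(decoded ? decoded : "");
        g_free(decoded);
        return result;
    }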
@skx
Member

skx commented Apr 9, 2017

Hopefully make valgrind will be less noisy and more useful for others now.

I need to stop for a few days, but if I've not fixed everything at least I've made real progress :)

@skx
Member

skx commented Apr 29, 2017

I think I've made enough changes here that this is a non-issue, but I'd welcome evidence to the contrary.

skx closed this as completed May 1, 2017
@fischerling
Contributor Author

Yesterday I compiled lumail with luajit to see if there are any speed improvements.

The speedup is marginal, but I found out that the memory leak is Lua-related,
because lumail built with LuaJIT doesn't eat up my RAM.

I will try lua5.2 for comparison.
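
To narrow down whether the growth is inside the Lua heap at all, one option is to ask the VM directly via lua_gc(); a rough probe like the one below (just a sketch, nothing that exists in lumail today) would show whether the Lua-side count grows along with the process:

    // Hypothetical diagnostic helper; not part of lumail.  Logs how much
    // memory the Lua VM currently holds, so growth can be attributed to
    // either the Lua heap or the C++ side.
    #include <lua.hpp>
    #include <cstdio>

    void log_lua_heap(lua_State *l)
    {
        // LUA_GCCOUNT returns the Lua heap size in kilobytes,
        // LUA_GCCOUNTB the remainder in bytes.
        int kb    = lua_gc(l, LUA_GCCOUNT, 0);
        int bytes = lua_gc(l, LUA_GCCOUNTB, 0);

        fprintf(stderr, "lua heap: %d KB + %d bytes\n", kb, bytes);
    }

If that number stays flat while the process keeps growing, the leak is on the C++ side rather than in Lua objects.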

skx reopened this May 19, 2017
@fischerling
Contributor Author

It also happens with lua5.2.

Are you experiencing this too, or is it caused by my configuration?

@skx
Member

skx commented May 20, 2017

I just don't see it, even under valgrind.

@fischerling
Contributor Author

OK, then it has to be something in my own configuration.
I am sorry for all the trouble, and I will let you know when I find the cause.

@skx
Member

skx commented May 20, 2017

No need to apologize - if you're seeing leaking memory then that is definitely a bug.

But unless/until I can see it myself I can't do too much about it. :(
