
Barrier request / reply produces memory leak #88

Open
KAndrey opened this issue Jul 13, 2016 · 10 comments

KAndrey commented Jul 13, 2016

The problem is as follows: a large number of barrier requests/replies produces a memory leak.

KAndrey changed the title from "Dead lock after stop controller channel" to "Barrier request / reply produces memory leak" on Jul 13, 2016

ynkjm commented Jul 14, 2016

Hi @KAndrey

Would you give us more details about your issue?
Your OpenFlow controller sample code or Lagopus logs would be very helpful for reproducing the issue in our environment.

Thanks,

ynkjm added the question label Jul 14, 2016

KAndrey commented Jul 14, 2016

Hello, @ynkjm.
You can reproduce this problem using oftest.
For example, the following code makes many barrier requests, which is enough to detect the leak:

import logging
from oftest import config
import oftest.base_tests as base_tests
import ofp
from oftest.testutils import *

class barrier(base_tests.SimpleDataPlane):
    """Send many barrier requests and wait for each reply."""

    def runTest(self):
        i = 10000

        while i:
            print str(i)
            # Send OFPT_BARRIER_REQUEST and wait up to 1 second for the reply.
            do_barrier(self.controller, timeout=1)
            print 'Ok'
            i -= 1

Put this code in the file "oft/demo/barrier.py" and run it with the following command:
sudo ./oft -V 1.3 --test-dir="demo" --default-timeout=1 --default-negative-timeout=1 -i 5@eth1 -p 6633 barrier

After many iterations you can see an increase in memory usage. This test was run without any traffic.

Regards.


ynkjm commented Jul 15, 2016

Hi @KAndrey

Thanks!
We will repro this issue and fix it.


ynkjm commented Jul 20, 2016

Hi @KAndrey

We have run tests in our lab using your OF application, but we could not see any memory-leak problem with vmstat.
vmstat showed that free memory decreased and cached memory increased, while the sum of free and cached memory did not change.

What tool did you use for your test? We need more detailed information about your Lagopus configuration and environment.

Thanks


KAndrey commented Jul 20, 2016

Hi, @ynkjm.
I mean "leak" is increasing memory usage. If this test will work several hours (near a day) memory usage (cached memory) will occupy about 1GB and more. In my opinion such resource usage is wasteful. So unused memory cache need to be restricted.
Regards.


ynkjm commented Jul 22, 2016

Hi @KAndrey

To simplify the problem, we ran the oftest app and the Lagopus vswitch on separate hosts. We could not see any change in vmstat on the Lagopus host, but we did see the vmstat changes you reported on the oftest host.

In our understanding, the memory leak might be on the oftest side rather than in the Lagopus vswitch. If you send us more detailed output from the pmap command showing the memory mapping of the process, that would be very helpful for finding out what happens.
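For example, something along these lines (the process name lagopus is an assumption about your setup) would capture the per-mapping resident sizes:
sudo pmap -x $(pidof lagopus)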

Thanks,


KAndrey commented Jul 22, 2016

Hi, @ynkjm.
I tried vmstat too, but it did not show anything either.
Please try htop on the Lagopus host. You'll see increasing RES values for the Lagopus threads (RES is the resident set size, i.e. how much physical memory the process is actually consuming).
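For reference, one simple way to log that growth over time (assuming a single process named lagopus) is:
while true; do grep VmRSS /proc/$(pidof lagopus)/status; sleep 60; done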
Regards

hhashoww commented

Hi, we found a bug in the pbuf pool re-use case.

Not every OFP message uses the pbuf pool; some of them allocate a new pbuf entry directly with malloc. pbuf_free() then adds that pbuf to the pbuf pool instead of really releasing the resource back to the system.

The valgrind tool can be used to see the memory usage information. Maybe this part should be checked for this issue.
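
To make the described mechanism concrete, here is a small standalone sketch of that behaviour. It is only a model, not the actual Lagopus code: the struct layout, the pbuf_alloc_direct() helper, and the buffer size are invented for illustration, and only the names pbuf and pbuf_free() mirror the ones mentioned in this thread.

/* Model of the reported behaviour: some messages malloc() a fresh pbuf,
 * but pbuf_free() parks every entry on the pool free list instead of
 * calling free(), so the pool (and the process RSS) only grows. */
#include <stdio.h>
#include <stdlib.h>

struct pbuf {
    struct pbuf *next;      /* free-list link inside the pool */
    char data[2048];        /* message buffer */
};

static struct pbuf *pool_head = NULL;   /* the "pbuf pool" free list */
static size_t pool_len = 0;

/* Allocation path for messages that bypass the pool: always a new malloc. */
static struct pbuf *pbuf_alloc_direct(void) {
    struct pbuf *p = calloc(1, sizeof(*p));
    if (p == NULL)
        abort();
    return p;
}

/* Free path: the entry is pushed onto the pool and never returned
 * to the system with free(). */
static void pbuf_free(struct pbuf *p) {
    p->next = pool_head;
    pool_head = p;
    pool_len++;
}

int main(void) {
    /* Simulate many barrier request/reply cycles. */
    for (int i = 0; i < 10000; i++) {
        struct pbuf *req = pbuf_alloc_direct();
        struct pbuf *rep = pbuf_alloc_direct();
        pbuf_free(req);     /* parked in the pool, not released */
        pbuf_free(rep);
    }
    printf("pool now holds %zu pbufs (~%zu MB retained)\n",
           pool_len, pool_len * sizeof(struct pbuf) / (1024 * 1024));
    return 0;
}

Because the pool keeps a reference to every entry, valgrind's default leak check would likely report this memory only as "still reachable" rather than lost; the growth shows up instead in the RES values mentioned above or in a heap profiler such as valgrind's massif tool.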


ynkjm commented Jul 25, 2016

Hi @KAndrey and @hhashoww

Thank you for the great information. We are going to investigate this issue.

Thanks!


ynkjm commented Jul 25, 2016

Hi @KAndrey and @hhashoww

Could you share your test output related to this issue?
