
tlsf_create_with_pool crashes with this memory size #9

Open
lfnoise opened this issue Aug 21, 2018 · 10 comments

lfnoise commented Aug 21, 2018

Calling tlsf_create_with_pool with this exact size crashes on my machine:

#include <cstdlib>
#include "tlsf.h"

size_t size = tlsf_block_size_max() + tlsf_size() + tlsf_pool_overhead();
char* mem = (char*)malloc(size);
auto t = tlsf_create_with_pool(mem, size);

Sizes that are 8 bytes larger or smaller do not crash.

It crashes in insert_free_block on this line:

current->prev_free = block;

Thread 1: EXC_BAD_ACCESS (code=2, address=0x100000019)

(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x100000019)
    frame #0: 0x0000000100002ee8 gc2 test`insert_free_block(control=0x0000000106800000, block=0x0000000106801980, fl=25, sl=0) at tlsf.c:612
  * frame #1: 0x0000000100001b5c gc2 test`block_insert(control=0x0000000106800000, block=0x0000000106801980) at tlsf.c:638
    frame #2: 0x00000001000019b3 gc2 test`tlsf_add_pool(tlsf=0x0000000106800000, mem=0x0000000106801988, bytes=4294967312) at tlsf.c:1018
    frame #3: 0x00000001000020b9 gc2 test`tlsf_create_with_pool(mem=0x0000000106800000, bytes=4294973848) at tlsf.c:1100
    frame #4: 0x0000000100003fcc gc2 test`main(argc=1, argv=0x00007ffeefbff7e8) at main.cpp:301
    frame #5: 0x00007fff78e10115 libdyld.dylib`start + 1

lfnoise commented Aug 21, 2018

Specifically, 8 bytes smaller succeeds:

size_t size = tlsf_block_size_max() + tlsf_size() + 8; // tlsf_create_with_pool succeeds

while 8 bytes larger fails, printing an error:

size_t size = tlsf_block_size_max() + tlsf_size() + 24; // tlsf_create_with_pool prints an error
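
A quick arithmetic check (assuming tlsf_pool_overhead() is 16 on this build, which the +8/+24 offsets and the bytes=4294967312 in the backtrace imply; treat that constant as my inference): the usable pool size that tlsf_add_pool computes lands exactly on the boundary in the crashing case, with max = tlsf_block_size_max():

    size = max + tlsf_size() + 8   ->  pool_bytes == max - 8   (succeeds)
    size = max + tlsf_size() + 16  ->  pool_bytes == max       (crashes: passes the > check)
    size = max + tlsf_size() + 24  ->  pool_bytes == max + 8   (rejected with an error)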


lfnoise commented Aug 21, 2018

Changing > to >= on this line in tlsf_add_pool:

if (pool_bytes < block_size_min || pool_bytes > block_size_max)

like this:

if (pool_bytes < block_size_min || pool_bytes >= block_size_max)

causes it to print an error instead of crashing.
But then you can't actually allocate a block of block_size_max. The greatest size I can successfully allocate is (block_size_max() - block_size_max()/64).


lfnoise commented Aug 21, 2018

The crash is because fl is 25 in insert_free_block, which is one past the last index.

    static void insert_free_block(control_t* control, block_header_t* block, int fl, int sl)
    {
        block_header_t* current = control->blocks[fl][sl];
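
To spell out the index arithmetic on a 64-bit build (FL_INDEX_MAX = 32, FL_INDEX_SHIFT = 8; my paraphrase of mapping_insert() in tlsf.c, so double-check against the source):

    /* Paraphrase of mapping_insert()'s first-level computation for a
       free block of ~2^32 bytes (the whole pool): */
    int fl = tlsf_fls_sizet(size);  /* highest set bit: fls(2^32) == 32 */
    fl -= (FL_INDEX_SHIFT - 1);     /* 32 - 7 == 25 */
    /* FL_INDEX_COUNT == FL_INDEX_MAX - FL_INDEX_SHIFT + 1 == 25, so
       control->blocks[25][0] is one row past the end of the table,
       matching fl=25, sl=0 in the backtrace above. */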

So another fix, whose implications I'm unsure of, is simply to increase FL_INDEX_COUNT by 1, changing this:

FL_INDEX_COUNT = (FL_INDEX_MAX - FL_INDEX_SHIFT + 1),

to this:

FL_INDEX_COUNT = (FL_INDEX_MAX - FL_INDEX_SHIFT + 2),

Then I am able to allocate a block up to size (block_size_max() - tlsf_alloc_overhead()).
That increases tlsf_size() to 6800 bytes.


zeromus commented Aug 21, 2018

For the record, that's an increase from 6536 to 6800 bytes, i.e. 264 bytes (plus some extra looping time in a few places), to support this one case, and no other, where the pool is the maximum size.
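
For context, the growth is one more first-level row in the control structure. A sketch of the relevant control_t fields (paraphrasing tlsf.c; field types from memory):

    /* Bumping FL_INDEX_COUNT from 25 to 26 adds one sl_bitmap entry
       (4 bytes) plus one row of SL_INDEX_COUNT == 32 block pointers
       (32 * 8 = 256 bytes), which with alignment accounts for the
       264-byte delta. */
    unsigned int sl_bitmap[FL_INDEX_COUNT];
    block_header_t* blocks[FL_INDEX_COUNT][SL_INDEX_COUNT];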

I'm evaluating this change:

static const size_t block_size_max = (tlsf_cast(size_t, 1) << FL_INDEX_MAX) - ALIGN_SIZE;

which is much less wasteful.

But there are two other recent, seemingly similar commits that are really bugging me, and I'm not yet able to actually allocate a block that big. Is it possible those other recent commits are insidiously related to this problem?

Like, I don't understand mapping_search() at all. It's adding some junk to the size? I can't see how that's anything like rounding. Maybe it's like rounding when the allocation is actually less than 'round', but in this case (trying to allocate the entire pool as one block) the result is enormously larger.

Maybe the "insidious relation" is just that this stuff hasn't been tested in scenarios with really large parameters. But check one of those commits: "bug when in range [max size-align, max size], goes off end of array" sounds an awful lot like your problem.
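
For reference, mapping_search() in tlsf.c is roughly the following (paraphrased from the source; verify against your copy):

    /* Rough paraphrase of mapping_search() from tlsf.c. When searching
       for a free block (as opposed to inserting one), the request is
       bumped up to the next second-level bin boundary so the bin we
       land in only holds blocks that are definitely large enough. For
       a request just below a power of two, the bump carries into the
       next power of two; that is the "inflating" discussed above. */
    static void mapping_search(size_t size, int* fli, int* sli)
    {
        if (size >= SMALL_BLOCK_SIZE)
        {
            const size_t round = (1 << (tlsf_fls_sizet(size) - SL_INDEX_COUNT_LOG2)) - 1;
            size += round;
        }
        mapping_insert(size, fli, sli);
    }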


lfnoise commented Aug 21, 2018

> I'm evaluating this change
>
> static const size_t block_size_max = (tlsf_cast(size_t, 1) << FL_INDEX_MAX) - ALIGN_SIZE;

This is the largest I can successfully allocate with that change:

tlsf_block_size_max() - tlsf_block_size_max()/64


zeromus commented Aug 21, 2018

Yes, that's due to mapping_search() and the "rounding". It would seem that the upper limit under any circumstances is max minus 64MB, but of course that doesn't make sense. Just debug that function and you'll see; it's weird.
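
Concretely (my arithmetic, assuming SL_INDEX_COUNT_LOG2 == 5): for any request just below 2^32, tlsf_fls_sizet(size) is 31, so round = (1 << (31 - 5)) - 1 = 2^26 - 1, about 64MB. Any request larger than block_size_max() - block_size_max()/64 therefore inflates past 2^32 and maps out of the bins, which is where the 64MB ceiling comes from.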


lfnoise commented Aug 21, 2018

It just seems that block_size_max is actually the block size of the bin one past the last bin in the table, so what you can successfully allocate is the bin size of the last bin in the table, which is one quantum lower.


zeromus commented Aug 21, 2018

I don't think I agree. If that were true, the best you could do would be block_size_max/2, but it's actually 64MB less than block_size_max due to the mapping_search() weirdness. It's true that we can't make blocks equal to the maximum block size, since they get sized just barely out of the bins, but that's why I proposed fixing it by dropping the maximum down by 4 bytes. From what I've seen, if it weren't for other bugs (or logic I don't understand at all), allocating the theoretical maximum minus 4 bytes should work fine.


lfnoise commented Aug 21, 2018

The last bin is not block_size_max()/2; it is block_size_max()/2 + (31 * block_size_max())/(32*2), which is equal to block_size_max() - block_size_max()/64, because each power of two is broken up into 32 bins and the last bin sits 31 of those 32 quanta above the next lower power of two.

I think the "weirdness" is as explained in the TLSF paper: http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
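
A tiny standalone check of that bin math (my own sketch, assuming 32 second-level bins per power of two):

    #include <stdio.h>
    #include <stdint.h>

    /* Print where each second-level bin in the top first-level range
       starts, and confirm the last one begins at max - max/64. */
    int main(void)
    {
        const uint64_t max = (uint64_t)1 << 32; /* block_size_max on this build */
        for (int sl = 0; sl < 32; ++sl)
        {
            uint64_t start = max / 2 + (uint64_t)sl * (max / 2) / 32;
            printf("sl=%2d starts at %llu\n", sl, (unsigned long long)start);
        }
        /* Last bin: max/2 + 31*(max/2)/32 == 63*max/64 == max - max/64. */
        printf("max - max/64 = %llu\n", (unsigned long long)(max - max / 64));
        return 0;
    }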


zeromus commented Aug 21, 2018

Mmm, mapping_search() is a bit misleading. It's not actually rounding; it seems like it's missing a size &= ~round;. Without that it isn't rounding, it's inflating, with the actual rounding happening later when things get truncated back down. If I've understood this correctly, it could use some commenting someday.

Anyway, based on this, an allocation of almost block_size_max() would get "rounded" up to the next power of two, which is too big (it's the original problem again: a power exactly too big for the bins).
So maybe a solution is this:

static const size_t block_size_max = BIGGEST - BIGGEST/64 (not sure how to properly express BIGGEST yet).

That way, when it's "rounded" in mapping_search(), it will land one step below the conceptual maximum-size power of two (which has too many bits). (And we also need to double-check the bounds checked in tlsf_add_pool, in the end.)
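
One literal spelling of that idea, reusing the (tlsf_cast(size_t, 1) << FL_INDEX_MAX) base from the candidate above (BIGGEST here is my assumption, and this is untested):

    /* Hypothetical spelling of the proposal; BIGGEST is assumed to be
       the untruncated power-of-two ceiling. An untested sketch, not a
       verified fix. */
    #define BIGGEST (tlsf_cast(size_t, 1) << FL_INDEX_MAX)
    static const size_t block_size_max = BIGGEST - BIGGEST / 64;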
