This repository has been archived by the owner on Apr 2, 2019. It is now read-only.

Fix "Maximum call stack size exceeded" warnings when processing lots of #8

Open
wants to merge 1 commit into master

Conversation

@mcarlson commented Jan 6, 2014

@dominictarr (Owner)

If you are getting a stack overflow, it must mean that you are using a sync transform?
If you are not using an async map, you are probably better off just using through; it's much simpler.
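
A minimal sketch of the distinction being drawn here, assuming the usual through and es.map signatures (illustrative code, not taken from the thread):

```js
var through = require('through')
var es = require('event-stream')

// Sync transform: no callback, just queue the result - through is enough.
var syncUpper = through(function (data) {
  this.queue(String(data).toUpperCase())
})

// Async map: the result comes back via a callback - this is where
// map-stream / es.map is useful.
var asyncUpper = es.map(function (data, callback) {
  setImmediate(function () {
    callback(null, String(data).toUpperCase())
  })
})
```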

@mcarlson (Author) commented Jan 7, 2014

One of my transforms was indeed synchronous. I moved it out of the async map and it's more reliable - see https://github.com/mcarlson/punitive/blob/async/punitive.js#L81. I still get "RangeError: Maximum call stack size exceeded" for certain large datasets, in this case 22844 items in the 'filtered' array. It's all 100% async now, so that's not the issue.

I haven't been able to find a workaround for this other than the pull request I sent. Any other ideas?

Still stuck here, even after wrapping the callback in process.nextTick() myself...
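
Roughly what "wrapping the callback in process.nextTick()" looks like; this is a hypothetical reconstruction, not the actual code in punitive.js:

```js
var fs = require('fs')
var es = require('event-stream')

var mapper = es.map(function (item, callback) {
  fs.readFile(item, 'utf8', function (err, contents) {
    if (err) return callback(err)
    // the attempted workaround: defer the map callback by one tick
    process.nextTick(function () {
      callback(null, contents)
    })
  })
})
```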

@dominictarr (Owner)

fs.readFile isn't sync. Hmm, you shouldn't need process.nextTick if it's in the callback of fs.readFile. Are you sure this only works if it's there?

Hmm, it looks like you are making a key-value database, except with files?

Oh... I just looked at your code and saw you also have a leveldb branch.
Do you have the same problem with leveldb? I was about to recommend using leveldb for this.

@mcarlson (Author)

I know, right? It should be working. Still, unless I patch map-stream directly, the issue occurs. I'm sure this will be an ongoing issue for other users... Check out this issue where the author of async throws up his hands and adds process.nextTick: caolan/async#75

I love event-stream, it's beautiful stuff. I'd love to keep using it and know that it will always reliably work. What are your reservations about adding process.nextTick()? Are you concerned about process.nextTick() degrading performance?

FWIW, LevelDB works but the performance isn't nearly as good. It's hard to beat the raw filesystem for speed. That said, leveldb does mostly work - but it's really not designed for appending to values as I'm doing with the lovely fs.appendFile(), so some values get dropped on the floor. I'd need to add key-specific locking or somesuch to levelup to make it work reliably.

Ahhh, life on the bleeding edge, huh?
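
For context, the read-modify-write race being described looks roughly like this (hypothetical sketch, not code from punitive):

```js
// Two concurrent "appends" to the same key can lose data: both read the
// old value, and whichever put lands second overwrites the other.
function appendToKey (db, key, extra, cb) {
  db.get(key, function (err, current) {
    // treat a missing key as an empty value for simplicity
    var value = (err ? '' : current) + extra
    db.put(key, value, cb) // racy without key-specific locking
  })
}
```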

@dominictarr (Owner)

Okay, so I don't want to merge a workaround,
I want to get to the root of the problem.
Not just to fix map-stream, but also to understand node.

Can you give me a testcase that reproduces this?

Hmm, so you aren't using leveldb the best way.
You can do something just like appends with leveldb,
just make the key phon+'!'+word. And then pull the values out with a read stream.
Also, use level-writestream.
You should post an issue on leveldb though, and we'll discuss that there.
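
A sketch of the keyspace pattern suggested above, assuming the levelup API of the time (start/end range options); the function names are illustrative:

```js
var level = require('level')
var db = level('./words')

// Instead of appending words to one value, write one key per word.
function addWord (phon, word, cb) {
  db.put(phon + '!' + word, word, cb)
}

// Pull all words for a phoneme back out with a range read stream.
function wordsFor (phon) {
  return db.createReadStream({ start: phon + '!', end: phon + '!\xff' })
}
```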

@mcarlson (Author)

Sure, if you follow the instructions to run https://github.com/mcarlson/punitive/tree/async you'll see it blows up every time without my patch.

@notslang commented May 6, 2015

This problem seems to happen when the number of consecutive writable items in writeQueue exceeds the maximum call stack size (for example, when there's one long operation followed by a bunch of really short ones). This causes queueData to become a very deeply recursive function, with a call stack depth equal to the number of consecutive writable items in writeQueue.

As an aside, it would be really nice if we could just turn off preservation of item order when we don't need it and skip the queue entirely.
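
A generic illustration of the failure mode described above (not map-stream's actual source): draining the queue recursively costs one stack frame per consecutive ready item.

```js
function drainRecursively (writeQueue, emit) {
  if (!writeQueue.length) return
  emit(writeQueue.shift())
  drainRecursively(writeQueue, emit) // ~20k ready items -> ~20k stack frames
}
```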

@dominictarr (Owner)

Looking at this code again,
the correct solution should be a loop inside of queueNext, not nextTick.
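
A sketch of the loop-based drain being suggested, again illustrative rather than the actual patch:

```js
// A loop flushes every ready item at the head of the queue in constant
// stack space, so the depth no longer depends on how many items are queued.
function drainWithLoop (writeQueue, emit) {
  while (writeQueue.length) {
    emit(writeQueue.shift())
  }
}
```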
