Saga stopped inside loop without throwing any error #1592
Comments
This issue is caused by a stack overflow. If you look at the call stack when the saga stops, you can see the deeply nested synchronous effect resolution.
Thanks for the report. I'm planning to look into the issue - to see if we can avoid a stack overflow when processing a large amount of synchronous consecutive effects.
Thanks @shinima @Andarist for confirming the exception. Yes, it would be great if the exception didn't get swallowed, or if we could avoid it altogether. In my real project, I was able to get around this by pretty much removing the yield inside the loop, which might freeze the UI if the loop is big, but that shouldn't be a problem in my case.
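For illustration, a minimal sketch of that kind of workaround (the function and variable names are made up, not taken from the project): instead of resolving one effect per item, the whole computation happens behind a single yield.

```js
import { call } from 'redux-saga/effects'

// Before: one synchronously resolved yield per item - a long enough loop
// keeps deepening the middleware's call stack until it overflows.
function* sumSagaWithYieldPerItem(numbers) {
  let sum = 0
  for (const n of numbers) {
    sum += yield n // plain values are resolved synchronously by redux-saga
  }
  return sum
}

// After: a single yield; the loop runs as ordinary JS. It may block the UI
// for very large inputs, but it cannot overflow the middleware's stack.
function* sumSagaSingleYield(numbers) {
  return yield call(() => numbers.reduce((a, b) => a + b, 0))
}
```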
I checked our source code and it seems that the saga catches an error here (redux-saga/packages/core/src/internal/proc.js, lines 262 to 266 in 63dfb94), but it handles the error in another function call (redux-saga/packages/core/src/internal/proc.js, line 266 in 63dfb94). How can we fix that?
I would like to explore this:
I know it would be a significant change, but it would fix some long-standing issues. I'm not entirely sure yet how feasible it is to do that now, though - I need to make a code assessment.
I'm on vacation, so I'm working on stuff from time to time. I've had a look at the case and have some notes: ... and there are some problems.
Do you have this on some pushed-out branch?
https://github.com/redux-saga/redux-saga/compare/flatten-stack-experiment
Could you describe the issue you faced in more detail? My idea for fixing that would actually be quite similar, tracking ... An alternative solution would be to use some kind of special completion value, but that would force us to return each sync value (instead of passing it through the completion callback) and would probably make handlers like runPutEffect more complicated.
I considered this option, but that approach ...
Regarding the problem with the fork model: we have a failing test (https://github.com/redux-saga/redux-saga/blob/master/packages/core/__tests__/interpreter/takeSync.js#L522) for how the fork model works with the changes, and it even looks like a more logical sequence, but I'm not sure we wouldn't break any users' code. As I understand it, the test fails due to internal scheduling, but I don't entirely understand why - all the steps are synchronous. It's easy to fix the test by running it as

```js
function* root() {
  yield fork(s1)
}

middleware.run(root)
```

instead of `middleware.run(s1)`.
Well, this should be unrelated - but it reminds me that the root saga should be considered a regular task (it's just being run from the outside instead of being forked inside), and thus normal scheduling should apply to it. So we should suspend the scheduler before running it. I can dig up a test case for this, as I had it somewhere - going to create a ticket for this. But going back to the topic here - gonna play around with your code when I find the time and try to pinpoint the problem. BTW, does your queue ever become bigger than 1?
No, it was just easier for the brain to think of it as a 'queue'; actually it has 0 or 1 elements inside. We should switch to an ordinary variable later - that's a minor thing.
@Andarist I updated the branch to get rid of an array:

```diff
+ suspend()
  const task = proc(env, iterator, context, effectId, getMetaInfo(saga), null)
+ flush()
```
I've done a few tests against the flatten-stack-experiment branch. Sadly, I found that due to the internal scheduling, all put/take effects are async - tested with:

```
`yarn bin`/jest --testPathPattern='interpreter/takeSync.js' --testNamePattern='deeply nested forks/puts'
```

As put/take are the most used effects in applications, it's a pity that only a few effects (select/fork/cancel/channel etc.) can benefit from stack reduction.
While this may fix the failing test, I don't think it should be considered a proper fix - it means that the internal order of things has changed, and this refactor should be completely transparent; otherwise I'm not confident it doesn't break some other real-world use cases that aren't currently covered by our test suite. @shinima takes are indeed async, as they act as watchers for actions, but their resolution is sync. puts are completely sync; the only hard part of this problem is that they aren't necessarily executed right away when they are scheduled, but we have certainty that they will get executed in the same frame in which they are scheduled. So in theory it should be possible at least for puts to benefit from stack reduction; it's just tricky to pull off for some reason, probably because they might be handled by different tasks.
The fix is the same, but the other PR targets a specific situation - it makes the behaviour of root sagas the same as forked ones. But when it comes to "stack flattening", I believe the change should be transparent to the scheduling - nothing should really change, as our desired change is solely avoiding a stack overflow, not changing how things behave.
I believe we should still try (if possible) to implement stack flattening on top of the older code, without #1628 merged in. That way we'd be more confident that the change is indeed transparent to current use cases.
Hello! Any updates on this issue?
Stack flattening seems hard to implement - we couldn't do it quickly, and it would require further exploration. Maybe what we could do is implement a DEV-mode counter and warn when encountering such a pattern in the code, well before it actually throws on its own? 🤔
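Roughly the shape of that idea (purely a sketch, not redux-saga's actual internals; the counter name, threshold, and hook points are all made up):

```js
// Hypothetical dev-only guard: count consecutive synchronous effect
// resolutions and warn before the call stack gets anywhere near overflowing.
const MAX_SYNC_RESOLUTIONS = 1000 // arbitrary threshold
let consecutiveSyncResolutions = 0

function onEffectResolvedSync() {
  if (process.env.NODE_ENV !== 'production') {
    consecutiveSyncResolutions++
    if (consecutiveSyncResolutions === MAX_SYNC_RESOLUTIONS) {
      console.warn(
        `redux-saga: ${MAX_SYNC_RESOLUTIONS} effects resolved synchronously in a row - ` +
          'this pattern can overflow the call stack and silently stop the saga',
      )
    }
  }
}

function onAsyncBoundary() {
  // An async resolution unwinds the stack, so the counter starts over.
  consecutiveSyncResolutions = 0
}
```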
@Andarist This one has been biting us pretty badly in production - we have a ...
I was running a test to see the performance difference between a regular function call and a ...
@Andarist What is the status on this? My case:

```js
function* a() {
  yield call(loop)
  return true // this never returns
}

function* loop() {
  // objects: array of thousands++ objects
  for (const obj of objects) {
    yield call(anotherAsyncSaga, obj) // stalls after ~170 iterations
  }
}
```

Is there a possible workaround for this? This seems like a major drawback of redux-saga if you cannot loop for more than ~100 iterations, and something that should be documented and communicated to all users.
The current workaround is to implement simple backpressure in the loop. You could even use microtasks to make sure no other code executes in between. I've tried to change the internal implementation, but couldn't figure out how to do that properly while keeping all existing tests passing. If you have the time and resources to work on improving the story around that, I would gladly accept a PR fixing this.
Thanks for the fast reply, @Andarist. Do you have any code examples for implementing simple backpressure in loops and/or using microtasks?
Promises (if you have access to native ones) can be used to schedule things on the microtask queue. One of the alternatives in a browser is to use a MutationObserver, but that's slightly more complex to implement. You can check out how core-js implements it here. Anyhow, with Promises you could do this:

```js
const deferred = Promise.resolve()
const microTick = deferred.then.bind(deferred)

function* loop() {
  // objects: array of thousands++ objects
  for (const obj of objects) {
    yield call(anotherAsyncSaga, obj)
    yield call(microTick)
  }
}
```
Thanks again, @Andarist. That workaround solved the problem for now :)
There is no ongoing work on this right now; I've tried to fix it once but failed, and I don't have much time right now to get back to it. I have some new ideas on how I could maybe try to solve it, but it really would be an experiment - I have no idea if my ideas are any good. I encourage anyone interested to try to fix this and prepare a PR for it.
Same issue. Any solution?
I'll be looking into fixing this in October, but don't get your hopes up - I can't promise that I will actually succeed.
Hey @Andarist, wanted to check and see if you had a chance to look into this? Even surfacing the error would be a big help - debugging this issue was pretty challenging, and it would only surface intermittently when placing try-catches at the right scope in the saga hierarchy.
I've tried to fix it but, unfortunately, I've failed. Every fix I've attempted has changed the order in which certain things happen; it has impacted the internal scheduler a little bit too much. It would certainly be possible to introduce a log for this. Would you be willing to implement it?
@Andarist Am I facing the same problem? Any solution? I'm making a call to an API that returns more than 1000 records, and it just stays in an infinite loop calling the API :/
It's a long-standing issue that would be awesome to fix. I don't have time to do this and I've already failed in the past - maybe somebody smarter than me could take a stab at it. I think it's reasonable to put a bounty on this issue - whoever fixes the problem can get 1000 USD from our OpenCollective (assuming it will have enough funds to pay this out).
@joes-code why do you use yield on this line? Just remove the yield and everything works fine.
If you want to use an async operation in a loop, just add `yield delay(0);` to make it work correctly.
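Applied to the loop from the earlier example, that suggestion would look roughly like this (a sketch reusing the names from the comment above; `delay` comes from redux-saga/effects and resolves asynchronously, so the call stack gets a chance to unwind):

```js
import { call, delay } from 'redux-saga/effects'

function* loop() {
  // objects: array of thousands++ objects
  for (const obj of objects) {
    yield call(anotherAsyncSaga, obj)
    // delay(0) resolves asynchronously, so each iteration starts from a
    // fresh stack frame instead of growing the existing one.
    yield delay(0)
  }
}
```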
I encountered this issue as well while doing a ... It was trying to loop through roughly 500-600 items and was silently causing saga to crash, which basically brought my app to a halt with no errors being thrown, because I have everything wired up to saga. The silent crash certainly made it very difficult to narrow down. I initially solved this by just reducing the number of items that were being iterated through, but I can't really control the amount of data my app needs to process, so that seemed like a stop-gap. I then also tried the workaround @EarlOld suggested:
That actually worked, even when iterating through many more cycles. @EarlOld - can you share why putting the ...
Previously, inside a `shift()` instruction we would always return the value of `k()`. This caused another tick in the stack frame, which caused the stack to grow until an overflow was reached. Previously it only took about 1495 `shift()` calls inside a loop to cause a stack overflow. Now we provide the ability for the end user to decide whether or not they care about the return value of `k()` with the addition of `k.tail()`, which will continue in the same stack frame instead of creating a new one, reducing the likelihood of a stack overflow. The reason we started investigating this inside `continuation` was a previous issue in `redux-saga`: redux-saga/redux-saga#1592. Co-authored-by: Charles Lowell <cowboyd@frontside.com>
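For readers unfamiliar with the technique the commit message describes, the general idea behind "continuing in the same stack frame" is trampolining: the continuation is handed back to a driver loop instead of being called recursively. A generic sketch (not the continuation library's or redux-saga's actual code):

```js
// Drives a computation whose steps return either a final value or a thunk
// for the next step; the while loop keeps the stack flat.
function trampoline(step) {
  let next = step
  while (typeof next === 'function') {
    next = next()
  }
  return next
}

// Direct recursion here would overflow the stack for large n; returning a
// thunk and letting the trampoline call it does not.
function countDown(n) {
  return n === 0 ? 'done' : () => countDown(n - 1)
}

console.log(trampoline(countDown(1000000))) // 'done'
```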
Steps to reproduce
I've created a small repo and updated the readme with the expected and actual outcome.
https://github.com/joes-code/redux-saga-beginner-tutorial
Description of the bug/issue
Inside the saga function, I'm simply looping and adding numbers. Depending on the loop size and the browser you use, the saga might finish the process and work as expected, or it might just stop halfway through and that's it.
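A minimal sketch of what such a saga might look like, consistent with the logged output under Actual/Expected results below (the inner-loop bound of 900 is a guess that matches the printed sum of 405450; the real code is in the linked repo):

```js
function* helloSaga() {
  console.log('Hello Sagas!')
  for (let i = 0; i < 5; i++) {
    console.log('loop #', i)
    let sum = 0
    for (let n = 1; n <= 900; n++) {
      // Every one of these yields is resolved synchronously by the middleware,
      // so a couple of outer iterations are enough to overflow the call stack
      // and silently stop the saga.
      sum += yield n
    }
    console.log(i, '-', sum) // 405450 = 1 + 2 + ... + 900
  }
}
```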
Steps to reproduce the bug/issue
Example
https://github.com/joes-code/redux-saga-beginner-tutorial
Actual results
```
Hello Sagas!
loop # 0
0 "-" 405450
loop # 1
1 "-" 405450
loop # 2
```

Expected results
```
Hello Sagas!
loop # 0
0 "-" 405450
loop # 1
1 "-" 405450
loop # 2
2 "-" 405450
loop # 3
3 "-" 405450
loop # 4
4 "-" 405450
```
Environment information