fix decorator pattern problem with big nestings #545
base: main
Conversation
It does not seem that evaluating the enumerators in the ctor solves the issue. Only by moving the evaluation of every value into the ctor can you prevent the stacking of func calls, but I wouldn't go that way.
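To illustrate the point above, here is a minimal, hypothetical sketch (in Java, not Atoms' actual code): a decorator over `Iterator` whose origin is fully evaluated in the constructor. Eager construction does not help, because each wrapper's `next()` still delegates to the inner iterator, so a chain of N wrappers costs N stack frames per element.

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical decorator: the origin is evaluated eagerly in the ctor,
// yet every call still travels through all layers at iteration time.
final class Wrapped<T> implements Iterator<T> {
    private final Iterator<T> origin;

    Wrapped(final Iterator<T> origin) {
        this.origin = origin; // eager: fixed when the wrapper is built
    }

    @Override
    public boolean hasNext() {
        return this.origin.hasNext(); // one extra frame per layer
    }

    @Override
    public T next() {
        return this.origin.next(); // one extra frame per layer
    }
}

public final class DeepNesting {
    public static void main(final String[] args) {
        Iterator<Integer> it = List.of(1, 2, 3).iterator();
        // Wrap deeply enough and next() overflows the stack, even though
        // every origin was fully known when its wrapper was constructed.
        for (int i = 0; i < 1_000_000; ++i) {
            it = new Wrapped<>(it);
        }
        try {
            it.next();
        } catch (StackOverflowError e) {
            System.out.println("overflow");
        }
    }
}
```

The overflow depends only on the nesting depth, not on when the inner values were computed.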
I think the problem of filling up the stack is not solvable as long as a lambda function is passed as a member in the constructor. I like the object-oriented concept of the IEnumerator: it reveals only what is requested and offers delay options for everything else. It refuses direct indexing access, as its duty is to enumerate, not to be a memory box. My guess is that as long as lambda functions are passed into ctors, this cannot be prevented for any object offering decorator-style combinations, because the final building call on the outer object will always travel down to the innermost object, and when that chain is too long for the stack, this happens.

On the other problem, regarding performance: the choice in this library was to offer "sticky by default", and that comes with a performance cost. We hit the stack's upper bounds with fewer operations because of this, which highlights that when combining enumerations, caches are built that are largely unnecessary, since they could be disposed once the final enumeration has been formed. In the secondary Joined ctors I also noticed some that copy the whole iteration, which incurs unnecessary cost when only a fraction of the result is accessed. This can be seen as a possible performance improvement. Joined offers some ctors that are built to result in a live object (bool live = true). These should at least be used in Atoms' internal objects like Map (where it wasn't used), and it might need proof that this works as well. Libraries built on top of Atoms might profit from raised awareness of these caching costs and from using live objects more often.

One thought about LINQ: these are extension (= static) methods, and I would therefore guess they cannot do anything fancier than somehow decorate their function input...?
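The sticky-versus-live trade-off described above can be sketched as follows (hypothetical Java names, not Atoms' API): a "sticky" mapped iterable copies its whole source into a cache up front, while a "live" one maps each element on demand. Chaining K sticky layers builds K intermediate caches that become garbage once the final enumeration is formed; chaining K live layers builds none.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// "Sticky": copies the whole source eagerly, even if the consumer later
// reads only a fraction of the result.
final class StickyMapped<X, Y> implements Iterable<Y> {
    static int caches = 0; // counts intermediate copies, for the demo
    private final List<Y> cache = new ArrayList<>();

    StickyMapped(final Iterable<X> src, final Function<X, Y> fn) {
        for (final X x : src) {
            this.cache.add(fn.apply(x)); // eager full copy
        }
        ++caches;
    }

    @Override
    public Iterator<Y> iterator() {
        return this.cache.iterator();
    }
}

// "Live": maps each element only when it is actually requested.
final class LiveMapped<X, Y> implements Iterable<Y> {
    private final Iterable<X> src;
    private final Function<X, Y> fn;

    LiveMapped(final Iterable<X> src, final Function<X, Y> fn) {
        this.src = src;
        this.fn = fn;
    }

    @Override
    public Iterator<Y> iterator() {
        final Iterator<X> origin = this.src.iterator();
        return new Iterator<Y>() {
            @Override
            public boolean hasNext() {
                return origin.hasNext();
            }

            @Override
            public Y next() {
                return LiveMapped.this.fn.apply(origin.next());
            }
        };
    }
}

public final class StickyVsLive {
    public static void main(final String[] args) {
        Iterable<Integer> sticky = List.of(1, 2, 3);
        Iterable<Integer> live = List.of(1, 2, 3);
        for (int i = 0; i < 10; ++i) {
            sticky = new StickyMapped<>(sticky, n -> n + 1);
            live = new LiveMapped<>(live, n -> n + 1);
        }
        // Ten sticky layers built ten full intermediate caches; the live
        // chain allocated nothing until this first element was pulled.
        System.out.println("sticky caches: " + StickyMapped.caches);
        System.out.println("first live element: " + live.iterator().next());
    }
}
```

In this sketch the sticky chain pays ten full copies even if the consumer only ever asks for the first element.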
I proved the theory that lambda functions in ctors are the cause of the stack overflow to be false. I was able to eliminate all lambda functions by refactoring, introducing objects like ArrayEnumerator, SingleEnumerator, etc. The problem persists, so I think it is only solvable by avoiding nesting objects too deeply in general, allowing functions to return within a shallow enough call stack. The guideline of keeping levels of abstraction as low as possible contributes to this. However, I would still like to address the unnecessary copying of lists and the use of unnecessary caches. At the moment, I cannot submit pull requests because the library in its current state (the .NET Framework version) does not compile on a Mac with M2.
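The finding above can be demonstrated with a hypothetical sketch (Java, illustrative names only): even with every lambda replaced by a named class, delegation depth still equals nesting depth. One way to keep the call stack shallow, consistent with the "avoid deep nesting" conclusion, is to collapse redundant wrapper chains with a loop (constant stack) before iterating, instead of letting each call recurse through every layer.

```java
import java.util.Iterator;
import java.util.List;

// A decorator with no lambdas anywhere: a plain named class that
// delegates to its origin. Nesting it deeply still stacks frames.
final class NoOpWrapper<T> implements Iterator<T> {
    final Iterator<T> origin; // package-visible so flatten() can walk it

    NoOpWrapper(final Iterator<T> origin) {
        this.origin = origin;
    }

    @Override
    public boolean hasNext() {
        return this.origin.hasNext();
    }

    @Override
    public T next() {
        return this.origin.next();
    }
}

public final class Flatten {
    // Collapses a chain of no-op wrappers iteratively, so iteration
    // afterwards needs only a constant number of stack frames.
    @SuppressWarnings("unchecked")
    static <T> Iterator<T> flatten(Iterator<T> it) {
        while (it instanceof NoOpWrapper) {
            it = ((NoOpWrapper<T>) it).origin;
        }
        return it;
    }

    public static void main(final String[] args) {
        Iterator<Integer> it = List.of(7, 8, 9).iterator();
        for (int i = 0; i < 1_000_000; ++i) {
            it = new NoOpWrapper<>(it); // no lambdas, still deeply nested
        }
        // Without flattening, it.next() here would overflow the stack.
        System.out.println(Flatten.flatten(it).next());
    }
}
```

Real decorators carry behavior and cannot simply be discarded like this, so in general the fix remains keeping the nesting shallow, as stated above.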
Please check if the PR fulfills these requirements
What kind of change does this PR introduce? (check one with "x")
What is the current behavior? (You can also link to an open issue here)
#540
What is the new behavior?
Does this PR introduce a breaking change? (check one with "x")
If this PR contains a breaking change, please describe the impact and migration path for existing applications
Other information