
Collision handling of instantaneous, dependent events #519

Open
freemin7 opened this issue May 25, 2020 · 11 comments

Comments

@freemin7

To help with some errors occurring in #515 and to allow proper handling of collisions: there is no way to handle this with idx alone. A change to VectorContinuousCallback or the creation of a new callback is required.

There are still open questions about how to do this without allocations and about what kind of array to pass.
@kanav99 offered to help me and to figure things out.

@devmotion
Member

Just copied from my discussion on Slack with @freemin7:

Due to https://github.com/SciML/OrdinaryDiffEq.jl/blob/a5fe013d4c377ef4a39fd7d12bdc54e4558e11a2/src/integrators/integrator_utils.jl#L251 and similar lines, you have to change integrator.vector_event_last_time as soon as you return an event_idx that is not an Int. Even if vector_event_last_time is moved to the callback cache (in which case it has to be an array as well, since otherwise you can't have pooled and non-pooled VectorContinuousCallbacks), you have to change all integrators. I mean, it will affect many packages, but I think that's fine if it works and improves callback handling. BTW, I always thought it's a bit weird to have this special field for VectorContinuousCallback (which is misused even if there are only regular callbacks), so maybe it would actually be a good idea to separate that from the integrator and remove it altogether from the integrators.

So my suggestion would be to

  • move integrator.vector_event_last_time::Int to callback_cache.event_last_time::Vector{Int}
    • in principle it could be of type Int if all VectorContinuousCallbacks do not pool events, but I'm not sure if this optimization is worth it since it might make the implementation unnecessarily complex if it's not clear if we are dealing with integers or arrays of integers
  • remove the handling of VectorContinuousCallback-specific implementation details from all integrators (if possible)
    • it seems apart from setting integrator.vector_event_last_time that's mainly the creation of the callback cache. I guess this logic is quite similar for different integrators, so maybe it could be moved to DiffEqBase.callback_cache (or something similar)
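The cache move in the first bullet could look roughly like this (a minimal sketch; the struct name, fields, and constructor are my assumptions for illustration, not the actual DiffEqBase types):

```julia
# Hypothetical sketch: the per-callback cache owns the last-event bookkeeping
# instead of the integrator carrying a special vector_event_last_time field.
mutable struct CallbackCache{T}
    tmp_condition::Vector{T}
    previous_condition::Vector{T}
    event_last_time::Vector{Int}  # moved here from integrator.vector_event_last_time
end

function CallbackCache(max_len::Int, ::Type{T}, ncallbacks::Int) where {T}
    CallbackCache(zeros(T, max_len), zeros(T, max_len), zeros(Int, ncallbacks))
end

cache = CallbackCache(4, Float64, 2)
cache.event_last_time[1] = 3  # record the last event index of the first callback
```

Keeping `event_last_time` as a `Vector{Int}` (rather than an `Int`) is what allows pooled and non-pooled VectorContinuousCallbacks to coexist, at the cost of one small array per callback set.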

At a first glance that seems doable, but since I'm not familiar with all details of VectorContinuousCallback it would be good if some other people could comment on this proposal.

@kanav99 @ChrisRackauckas

@kanav99
Contributor

kanav99 commented Jun 2, 2020

Yes, I think we should move it to callback_cache; after all, it's supposed to store all of that. I also think that we should set the value of event_last_time and vector_event_last_time inside the DiffEqBase code.

@freemin7
Author

freemin7 commented Jun 9, 2020

Since @ChrisRackauckas hasn't commented yet, I assume it is fine by him. I poked him about the proposal.

I will look into it but will probably still need help, as I am not as familiar with the inner workings.

@ChrisRackauckas
Member

Yes, I think we should move it to callback_cache; after all, it's supposed to store all of that. I also think that we should set the value of event_last_time and vector_event_last_time inside the DiffEqBase code.

Agreed. It would make things cleaner to do per-callback caching, and it would make it easier to maintain.

in principle it could be of type Int if all VectorContinuousCallbacks do not pool events, but I'm not sure if this optimization is worth it since it might make the implementation unnecessarily complex if it's not clear if we are dealing with integers or arrays of integers

it seems apart from setting integrator.vector_event_last_time that's mainly the creation of the callback cache. I guess this logic is quite similar for different integrators, so maybe it could be moved to DiffEqBase.callback_cache (or something similar)

Agreed: both are good points. For the first, if we are dealing with a whole array of callbacks, then a small array of integers is probably the least of our optimization worries.

So these are all good suggestions. I fully trust David and Kanav's opinions here.

@freemin7
Author

I will look into working on it this weekend, but I am still really unsure where to get started. I will poke @kanav99 on Slack.

@KalelR

KalelR commented Oct 22, 2022

Hello, has any further work been done to solve this? I am currently facing this issue for a different system with units that can have simultaneous events (coupled neurons that fire synchronously). Would really appreciate some insight into how to deal with this! Thanks!

@freemin7
Author

We can have a chat in the JuliaLang Slack.
I have thought about it a bit but not written code.

@ChrisRackauckas
Member

Yeah it's not too hard to solve it now. We just need to add a step after the rootfind to find if any other events are within epsilon of the event, and if so... and boom there's the part I don't like.

Easy implementation: if so, then allocate an array of indices for the events that coincide. Then extend the VectorContinuousCallback interface so that there are two different dispatches: affect!(integ, idx::Int) and affect!(integ, idxs::Vector{Int}). But okay, we shouldn't actually allocate, since that would kill performance. So instead, let's have a Vector{Bool} or a BitArray in the cache, where we flip to true all of the entries that had an event. Okay, do we do two dispatches, or just the one that always uses the vector of booleans? That makes it easier on the user, but the idx::Int case saves you from having to do a search, and is rather common.

So I'm thinking: two dispatches, BitArray. But now, if we do the BitArray, then we need to have it in the integrators. So we need to go add it to OrdinaryDiffEq.jl, Sundials.jl, and ODEInterfaceDiffEq.jl first to implement this. And then I go, well, I'm not too convinced about the two dispatches, so I guess I'll wait... 😅
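The two-dispatch idea can be sketched like this (hypothetical, not the current VectorContinuousCallback API; the names and the stand-in `nothing` integrator are assumptions):

```julia
# Sketch of the proposed two-dispatch affect! interface: a single event keeps
# the cheap Int path, while simultaneous events are communicated by flipping
# entries of a preallocated BitVector instead of allocating an index array.
affect!(integ, idx::Int) = println("single event at index $idx")

function affect!(integ, event_mask::BitVector)
    for idx in findall(event_mask)  # all indices that coincided at the root
        println("grouped event at index $idx")
    end
end

event_mask = falses(5)        # preallocated once, would live in the callback cache
event_mask[2] = true
event_mask[4] = true
affect!(nothing, event_mask)  # dispatches on the pooled (BitVector) method
affect!(nothing, 2)           # dispatches on the single-event (Int) method
```

The Int method avoids the `findall` scan in the common single-event case, which is the argument for keeping both dispatches.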

But honestly it's not hard to implement, so we should just decide and do the easy thing now. A quick way to do this is:

  1. Setup the BitArray one in a way that allocates if !isdefined(integrator,:callback_event_idxs), that way it has an easy fallback and requires no other PRs to get this going.
  2. Add a pass after the rootfinder that scans to find all of the events within 100eps() of zero. Make that a tunable parameter. See if that works well enough to start.
  3. Write for the two dispatch form.
  4. Call this non-breaking because instantaneous events already did fail, even though it does add a new required dispatch to the interface. I think that makes sense because we already had issues in this case.

That gets a working version of this out there. Then go downstream and add the cache array, and go do more tests to see if a better definition of "instantaneous" is needed.
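Step 2 of the plan above could be sketched as follows (illustrative only; the function name, the mask argument, and the tolerance keyword are assumptions, not an existing API):

```julia
# After the rootfinder locates the earliest event, scan the condition values
# and mark every event within a tunable tolerance of zero as coincident.
function coincident_events!(event_mask::BitVector, condition_values::AbstractVector;
                            abstol = 100 * eps())
    fill!(event_mask, false)
    for (i, c) in enumerate(condition_values)
        event_mask[i] = abs(c) <= abstol  # within epsilon of the located root
    end
    return event_mask
end

mask = falses(4)
coincident_events!(mask, [0.0, 1e-18, 0.3, -2e-17])
findall(mask)  # events 1, 2, and 4 coincide; event 3 does not
```

Making `abstol` a keyword is one way to expose the "tunable parameter" mentioned in step 2.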

Thoughts?

@freemin7
Author

I had a discussion which clarified something. @KalelR's issue is about simultaneous events not working.
The problem which motivated me to open the issue is more complicated. It is about simultaneous events which need to be processed together to work, where an affect! that fires an ulp earlier can change which batches are run after it. So a 100*eps() grouping for processing will lead to solutions with an error that grows like O(t) or worse.

This should not stop @KalelR from getting the simpler solution @ChrisRackauckas proposed before someone implements what is necessary for my case.

I see two ways forward:

  • have a collision strategy which I could later replace with something else that works for me; this would require a new interface and more plumbing
  • have a separate GroupedCallback in the future which might do some more expensive stuff, does the right thing for me, and doesn't impose any constraints on getting multiple simultaneous independent events working for @KalelR's use case.

I encouraged @KalelR to start a second issue about fixing VectorCallback. That way this issue can remain focused on the newly termed GroupedCallback.

Thoughts?

@ChrisRackauckas
Member

What's the difference between GroupedCallback and VectorContinuousCallback?

@freemin7
Author

freemin7 commented Oct 23, 2022

Sorry, I turned this into a blog entry, but I wanted to document all my ideas for my future self's sake.

Maybe it turns out that all the special cases I imagine GroupedCallback would need to handle are addressed by VectorContinuousCallback. In that case no GroupedCallback would be required. In my mental model, VectorCallback is meant as a version of ContinuousCallback which is vectorized for performance but otherwise has the semantics of ContinuousCallback.

In my mind, the semantics of VectorContinuousCallback and ContinuousCallback are allowed to assume that the ordering of events being affect!ed within the summarised interval doesn't matter.

GroupedCallback guarantees that all indices which are affect!ed together belong to one group event. The collision function (which gets root-searched) for VectorContinuousCallback is defined over the system state, $((x,t) \in \mathbb{R}^{n+1}) \mapsto \mathbb{R}^m$, mapping into $m$ events.
The GroupedCallback is instead defined as a function $\mathbb{R}^{n+1} \mapsto \bigcup_{x \in \{0,1\}^n,\, j \in \{1,\dots,m\}} \left(\prod_{i \in \{1,\dots,n\}} \mathbb{R}^{x_i}\right) \times \{j\}$ ($m$ different families of all up-to-$n$-way interactions).
Why does this distinction matter?

  • 2-way vs 3-way vs 4-way interactions might behave differently, and while this dispatch could be implemented in user code in the VCC, GroupedCallback would be more composable.
  • Certain things might happen simultaneously but do not belong to the same event group. $n$ pairs of particles colliding in different corners of the room might collide at the same time, but their collisions are independent. Handling collision events together might scale like O(elements_involved^3) (solving a linear system), and O((2n)^3) is worse than n·O(2^3).
  • The interface of VectorCallback promises that all events which happen at the same time are processed together. If the way of grouping also includes locality in the ODE state, constraints mean a lot of those interactions do not need to be computed, and this formulation can effectively hide that from the affect! callback and the root finding. Example:
  • In the case of a particle simulation distributed over multiple compute elements, VCC requires an all-gather, and all compute elements have to wait for the VCC to be done. GroupedCallback doesn't need to block this way unless all particles are in one collision.
  • VCC and GC have the same expressiveness for DifferentialEquations.jl. I think this language makes sense for modelling callbacks correctly and scalably. Symbolic tooling is not able to reason well about VCCs; it has to treat them as one big black box. Humans will also make subtle mistakes with VCCs, and humans are not mining this part of the model space because it is hard to reason about. ModelingToolkit could be able to reason about GC [1].
  • MTK could go one step further: as the root-finding parameter goes to zero, the dynamics of the group are governed by the DAE instead of the ODE. As long as the DAE and the ODE are consistent for parameters > 0, this is fine. In the VCC paradigm I wouldn't know how to implement this.

[1] and warn users if the model is ill-defined, and have a bunch of options to address those issues. Ignoring higher-order collisions can be acceptable; ignoring some classes of rapid collisions can be too. Those tradeoffs could be expressed in the modelling language without having to write a specialized solver for domain X where they are required.
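The scaling claim in the second bullet can be checked with back-of-the-envelope arithmetic (a sketch; the cubic cost assumes a dense linear solve over all particles in a group, and no solver API is used):

```julia
# Cost of solving one pooled linear system over all 2n colliding particles
# versus n independent pairwise solves, using the O(k^3) dense-solve model.
pooled_cost(n) = (2n)^3       # one system of size 2n
independent_cost(n) = n * 2^3 # n systems of size 2 each

n = 10
pooled_cost(n)       # 8000 "operations" for one 20-particle solve
independent_cost(n)  # 80 "operations" for ten 2-particle solves
```

For n independent pairwise collisions, the grouped formulation avoids a 100x blowup already at n = 10, which is the motivation for not pooling events that merely coincide in time.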

Does this demonstrate that they are distinct concepts?
Thoughts?
