Currently a simple request leads to a ~420km route and triggers a calculation that traverses 3.4 million nodes (for LM) or even 4.6 million nodes for A*.
If I use the "epsilon" trick (e.g. "astarbi.epsilon": 1.4) the request becomes lighter (and faster): only 2.4 million nodes are traversed (2.5 million for A*), or even under 1 million for epsilon=1.8 (with a worse weight, of course). We could investigate this to get a better understanding and advertise this parameter (or give it a better name) as a way to improve speed by sacrificing correctness. Or we could even use a value greater than one by default for the custom model and for distances above 400km.
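The idea behind the epsilon trick is weighted A*: the heuristic is multiplied by a factor epsilon >= 1, which makes the search greedier (fewer settled nodes) while guaranteeing the returned weight is at most epsilon times the optimum. A minimal sketch, not GraphHopper's actual implementation (the toy graph and heuristic values are made up for illustration):

```python
import heapq

def weighted_astar(graph, h, start, goal, epsilon=1.0):
    """A* with the heuristic inflated by epsilon (>= 1).

    With an admissible h, the returned weight is at most epsilon times
    the optimal weight; larger epsilon prunes more of the search space
    at the cost of that optimality bound. epsilon=1.0 is plain A*.
    """
    # priority = g + epsilon * h
    open_heap = [(epsilon * h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    settled = 0
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was found later
        settled += 1
        if node == goal:
            return g, path, settled
        for nbr, w in graph[node]:
            ng = g + w
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(
                    open_heap, (ng + epsilon * h[nbr], ng, nbr, path + [nbr])
                )
    return None

# toy graph with edge weights, plus an admissible heuristic to node D
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
h = {"A": 3, "B": 2, "C": 1, "D": 0}

w1, p1, n1 = weighted_astar(graph, h, "A", "D", epsilon=1.0)
w2, p2, n2 = weighted_astar(graph, h, "A", "D", epsilon=1.8)
```

On real road networks the pruning effect of a larger epsilon is what produces the node-count drops reported above; the bound only guarantees `w2 <= 1.8 * w1`, the found route is often still optimal in practice.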
On this website they mentioned this paper, for which they created this visualization. It seems that they use a similar approach. What they found with pxWD was (I think) that they start with a stricter heuristic (epsilon=1) at the beginning of the traversal and then increase it as the search progresses. This sounds interesting.
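One way to read that idea is epsilon as a schedule rather than a constant: exact near the source, increasingly greedy deeper into the search. A minimal sketch of such a schedule, assuming a linear ramp driven by the fraction of the estimated total cost already covered (the ramp shape, the progress measure, and eps_max=1.8 are assumptions, not pxWD's actual rule):

```python
def scheduled_epsilon(g, h_start, eps_max=1.8):
    """Illustrative epsilon schedule: 1.0 (exact) at the start of the
    traversal, ramping linearly toward eps_max as the search deepens.

    g        -- cost settled so far for the current node
    h_start  -- heuristic estimate of the full start->goal cost
    eps_max  -- upper bound on the inflation factor (assumed value)
    """
    if h_start <= 0:
        return 1.0
    progress = min(1.0, g / h_start)  # rough fraction of the route covered
    return 1.0 + (eps_max - 1.0) * progress

# a node's priority would then be g + scheduled_epsilon(g, h_start) * h(node)
```

With a schedule like this the search stays (near-)exact where mistakes are cheapest to correct and only turns greedy far from the source, which matches the described behaviour of starting at epsilon=1 and increasing it during the traversal.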