Recently, I started delving into much more complicated PDEs. Previously I had done all derivations by hand and double- or triple-checked them against a colleague, but I realized that a symbolic algebra system could reduce the monotony and mitigate the chance of error (especially when using gPINNs, where each PDE residual must additionally be differentiated w.r.t. the input variables).
With that in mind, I wrote a Mathematica-to-Python converter so that I could take my output expression from Mathematica and translate it into my particular DeepXDE syntax. After ironing out all the bugs, I was excited to try my implementation, although I did notice that the generated expressions were much longer than the ones I had derived by hand. Not a problem, I thought: they're mathematically equivalent! Well, something seems to be wrong with these "equivalent" expressions, because my errors are huge (on the order of 10^14 for one PDE, whereas the hand-derived form of the same PDE eventually reaches an error of ~10^-4).
My question: why would an equivalent form of a PDE not yield the same results? I could 100% understand a performance degradation from a much lengthier and more complicated expression, but these results are just wildly off. Any clue as to why this would happen? Having a Mathematica → DeepXDE pipeline seems like such a great way to expedite the work, but am I just dreaming?
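For context on why this can happen at all: "mathematically equivalent" does not imply "numerically equivalent" in floating point. Symbolic tools often emit fully expanded forms, and expanded forms can suffer catastrophic cancellation that a compact hand-derived form avoids. A minimal sketch (not your actual PDE, just an illustrative function chosen to show the effect):

```python
import math

# Two algebraically equivalent forms of f(x) = (1 - cos(x)) / x**2.
# The exact limit as x -> 0 is 0.5.

def f_naive(x):
    # Direct transcription, the kind of form a symbolic tool might emit.
    # For small x, cos(x) rounds to 1.0 in double precision, so the
    # numerator cancels to exactly 0 and all significant digits are lost.
    return (1.0 - math.cos(x)) / x**2

def f_stable(x):
    # Equivalent rewrite via the half-angle identity
    # 1 - cos(x) = 2 * sin(x/2)**2, which avoids the cancellation.
    s = math.sin(x / 2.0) / (x / 2.0)
    return 0.5 * s * s

x = 1e-8
print(f_naive(x))   # 0.0 — cancellation has destroyed the result
print(f_stable(x))  # ~0.5 — close to the true value
```

That said, cancellation alone usually degrades accuracy rather than producing errors of ~10^14, so it is also worth diffing the converted expression against the hand-derived one numerically at a few random points before training: if they disagree there, the bug is in the conversion (operator precedence, `^` vs `**`, argument order in `dde.grad.jacobian`/`hessian` calls, etc.), not in floating point.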