
Setting constraints by index instead of boundary mask #972

Open
ATG122 opened this issue Oct 27, 2023 · 3 comments

ATG122 commented Oct 27, 2023

Dear all,

I'm working on a coupled simulation and want to set non-uniform boundaries. In the current state, this can be done by setting a constraint for every face. That, however, requires a boundary mask for each constraint containing (N_f - 1) booleans that are False and one boolean that is True (with N_f being the number of faces).
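To make the memory pattern concrete, here is a minimal sketch of the per-face approach, assuming a small `Grid2D` stands in for the actual coupled geometry (the face values are a hypothetical non-uniform profile):

```python
import numpy as np
from fipy import Grid2D, CellVariable

# small stand-in mesh; in the real coupled problem N_f is much larger
mesh = Grid2D(nx=10, ny=10)
var = CellVariable(mesh=mesh, value=0.)

boundary = np.nonzero(mesh.exteriorFaces.value)[0]  # indices of the boundary faces
profile = np.linspace(0., 1., len(boundary))        # hypothetical non-uniform values

for value, face in zip(profile, boundary):
    mask = np.zeros(mesh.numberOfFaces, dtype=bool)  # (N_f - 1) False entries...
    mask[face] = True                                # ...and a single True
    var.constrain(value, where=mask)                 # one full-length mask per face
```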

When working with larger meshes, this requires a large amount of memory. That could be reduced by setting the constraint not with a boundary mask but with a list of indices of the faces the constraint should be applied to. According to the Python API, this feature was either implemented in the past or at least planned, since it reads:

Constrain the Variable to have a value at an index or mask location specified by where.
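Read literally, the docstring suggests an index form along these lines (hypothetical, continuing the sketch above; as discussed below, this does not actually work):

```python
# index-based form implied by the docstring; the face indices are made up
var.constrain(0.5, where=np.array([3, 17, 42]))
```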

Is it possible to (re-)implement this feature, and can you estimate the necessary effort? I would assume that there is a certain interest in this when coupling simulations or when setting non-uniform boundaries in general. Please let me know if I should provide more information.

Thanks in advance!


guyer commented Oct 27, 2023

While I thought this already worked, as I said in response to your StackOverflow question, it has not worked for at least 13 years. FiPy is not efficient with memory; it's a tradeoff we have to make to squeeze performance out of Python.

We are more likely to prioritize this if you can demonstrate that the memory associated with specifying constraints is a particular bottleneck for you. Otherwise, if your problem size is such that you cannot afford storing several multiples of the degrees of freedom, FiPy is probably not the tool for you.


ATG122 commented Oct 28, 2023

Working on an HPC cluster is indeed possible here, so I don't think that memory should be a critical point. Nonetheless, this will result in longer simulation times, which in turn cause higher costs for the HPC usage; that is why I try to optimize these things. Of course, Python, and therefore FiPy, is in general not the best language/tool for maximum efficiency, but in my experience it can be optimized considerably (e.g., by avoiding loops and append calls, as in the sketch below).
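For illustration, a generic numpy example of that kind of optimization (not FiPy-specific):

```python
import numpy as np

x = np.random.rand(1_000_000)

# slow: an explicit Python loop building a list with append
doubled = []
for xi in x:
    doubled.append(2.0 * xi)

# fast: a single vectorized numpy operation over the whole array
doubled = 2.0 * x
```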

I will start the coupling by setting constraints with full boundary masks and will get back to this issue if that turns out to slow the simulation down considerably. I hope to make some progress in the next weeks.
Thank you very much!


guyer commented Oct 30, 2023

If you have not measured, then you do not know where the bottlenecks are. That is a general truth, but it is absolutely the case with FiPy. In FiPy, if you are writing Python loops over anything but the time step, then you are absolutely suffering a speed penalty. On the other hand, when using vectorized numpy operations, efficiency is often not gained where you expect it. Keeping things around, like masked index arrays, often performs vastly better than what one would do when writing the algorithm in C or Fortran.
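As a generic illustration of that last point (plain numpy, not FiPy internals):

```python
import numpy as np

values = np.random.rand(1_000_000)

# a boolean mask and the equivalent index array, both kept around for reuse
mask = values > 0.5
idx = np.nonzero(mask)[0]

# either form selects all matching elements in one vectorized step,
# instead of testing each element in an explicit loop
subset = values[mask]
subset = values[idx]
```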

I'm not saying we'll never look at this, just that we're more likely to prioritize it given quantitative benchmarks and profiles demonstrating that it's a problem.
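Such a profile could be collected along these lines (a minimal sketch; `eq`, `var`, and `dt` stand in for whatever equation, variable, and time step the actual simulation uses):

```python
import cProfile
import pstats

# profile one solve call; eq, var, and dt are placeholders for the real setup
cProfile.run("eq.solve(var=var, dt=dt)", "solve.prof")

# print the ten most expensive calls by cumulative time
pstats.Stats("solve.prof").sort_stats("cumulative").print_stats(10)
```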
