
Optional binary sensor mask and sensor_weights argument for kWaveArray.combine_sensor_data and kWaveArray.get_distributed_source_signal #342

Open
bvale1 opened this issue Mar 11, 2024 · 8 comments
Assignees
Labels
enhancement New feature or request

Comments

@bvale1
Contributor

bvale1 commented Mar 11, 2024

This option is not included in MATLAB k-Wave; however, adding it to my k-wave-python branch is saving me a lot of time and avoids repeating computations unnecessarily.

The idea is that the user can provide a precomputed binary sensor mask as an optional argument to the kWaveArray.combine_sensor_data and kWaveArray.get_distributed_source_signal functions. If it passes the sanity checks, then we do not need to call mask = self.get_array_binary_mask(kgrid).

I also do the same for sensor_weights and sensor_local_ind, as they take a long time to compute because the functions are CPU only. When running large 3D simulations repeatedly with the exact same source geometry, I can save the precomputed mask, weights and indices to an HDF5 file and load them up each run.
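For reference, the save/load step might look like the sketch below. The helper names are illustrative only (not k-wave-python API), and I use NumPy's `.npz` container here for brevity; the same pattern applies to an HDF5 file via `h5py`:

```python
import numpy as np

def save_precomputed(path, mask, sensor_weights, sensor_local_ind):
    """Persist the binary mask plus per-element weights and local indices.

    The per-element lists are ragged, so each entry is stored as its own
    array under a numbered key.
    """
    arrays = {"mask": mask}
    for i, (w, ind) in enumerate(zip(sensor_weights, sensor_local_ind)):
        arrays[f"w{i}"] = w
        arrays[f"ind{i}"] = ind
    np.savez(path, n_elements=len(sensor_weights), **arrays)

def load_precomputed(path):
    """Reload the mask, weights and local indices saved above."""
    data = np.load(path)
    n = int(data["n_elements"])
    mask = data["mask"]
    sensor_weights = [data[f"w{i}"] for i in range(n)]
    sensor_local_ind = [data[f"ind{i}"] for i in range(n)]
    return mask, sensor_weights, sensor_local_ind
```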

I'd be happy to clean up my code and open a PR against kwave_array.py if you think this would be useful to have in one of the upcoming k-wave-python versions.

Kind regards,
Billy.

@bvale1 bvale1 added the enhancement New feature or request label Mar 11, 2024
@djps
Collaborator

djps commented Mar 11, 2024

When running large 3D simulations repeatedly with the exact same source geometry, I can save the precomputed mask, weights and indices to an HDF5 file and load them up each run.

Sounds like a good feature; I can see this being useful. There may need to be a check that there is no change in the source configuration, i.e. that the source geometry and the source characterisation by tol_star stay the same. For tol_star, I guess this would mean that the medium surrounding the transducer remains homogeneous. This is not a feature in MATLAB.

@bvale1
Contributor Author

bvale1 commented Mar 12, 2024

Certainly, the disadvantage of this feature is that it makes the simulation easier to break. It's mostly up to the user to make sure they haven't changed the source configuration at all since they precomputed the weights, otherwise the simulation will yield incorrect results.

I included a few assertion sanity checks, e.g. for the mask:

assert isinstance(mask, np.ndarray) and mask.dtype == bool, "'sensor_mask' must be a boolean numpy array"
if kgrid.dim == 3:
    assert mask.shape == (kgrid.Nx, kgrid.Ny, kgrid.Nz), "'sensor_mask' must be the same size as kgrid"
elif kgrid.dim == 2:
    assert mask.shape == (kgrid.Nx, kgrid.Ny), "'sensor_mask' must be the same size as kgrid"
elif kgrid.dim == 1:
    assert mask.shape == (kgrid.Nx,), "'sensor_mask' must be the same size as kgrid"

and for the sensor weights:
assert isinstance(sensor_weights, list) and len(sensor_weights) == self.number_elements, "'sensor_weights' must be a list of the same length as the number of elements"

But it's still possible for incorrect weights to pass these checks if the grid size and the number of elements are the same.

I actually handle this on my end (not within k-wave) with a folder of precomputed weights, each saved with a config file so my code checks automatically to see if a simulation with the same source has been run before. But I was less certain about including this kind of functionality into k-wave.
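To sketch the idea (this is my own bookkeeping, not anything in k-wave; the function name is illustrative): each set of saved weights is keyed by a deterministic hash of the source configuration, so if any parameter changes, the key changes and the weights are recomputed rather than reused.

```python
import hashlib
import json

def source_config_key(config: dict) -> str:
    """Deterministic short hash of a source/grid configuration dict.

    Serialising with sorted keys makes the hash independent of dict
    insertion order, so logically identical configs map to the same key.
    """
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

The key then names the cache file; a lookup miss means this source configuration has not been simulated before, so the weights are computed and stored fresh.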

@waltsims
Owner

waltsims commented Mar 13, 2024

I think instead of an argument in the signature, the best strategy would be to cache precalculated weights in a temp directory and reload them if available. This does not modify the function calls for the user and ensures correct execution. @bvale1 Do you think you can modify your branch accordingly?

@waltsims
Owner

waltsims commented Mar 13, 2024

If you want the caching to persist only for the lifetime of your simulation program, functools.cache() would be a suitable decorator. For persistent caching to disk, a decorator that saves to disk can be used, like shown here.
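As a rough illustration of the disk-persistent variant (a stdlib-only sketch, not the linked implementation; the argument handling is simplified):

```python
import functools
import hashlib
import os
import pickle

def disk_cache(cache_dir="cache"):
    """Decorator factory: cache a function's results as pickle files on disk."""
    os.makedirs(cache_dir, exist_ok=True)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Key the cache entry on the function name and its arguments.
            key = hashlib.sha256(
                pickle.dumps((func.__name__, args, sorted(kwargs.items())))
            ).hexdigest()
            path = os.path.join(cache_dir, key + ".pkl")
            if os.path.exists(path):  # cache hit: load result from disk
                with open(path, "rb") as f:
                    return pickle.load(f)
            result = func(*args, **kwargs)  # cache miss: compute and store
            with open(path, "wb") as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator
```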

@waltsims
Owner

waltsims commented Mar 13, 2024

A little further digging allowed me to find this example.

I tried this on a separate piece of code, and it worked well:

from functools import cache
from joblib import Memory

# On-disk cache backed by the 'cache/' directory; persists across runs.
memory = Memory('cache/')

@cache          # in-memory cache for the lifetime of the process
@memory.cache   # persistent disk cache across runs
def expensive_func():
    pass

if __name__ == "__main__":
    expensive_func()  # first call: runs long, result written to disk
    expensive_func()  # subsequent calls: returned from cache quickly

@bvale1
Contributor Author

bvale1 commented Mar 13, 2024

This looks a lot better than my current approach; yes, I should be able to modify my branch to do this. I think I'll have time this weekend.

@peterhollender
Contributor

peterhollender commented Apr 15, 2024

@bvale1 did you give the decorator version a try? I'd been running a locally patched version implementing keyword-based injection similar to your original suggestion, but @waltsims's suggestion seems like the way to go. If you haven't tried it yet, I can give it a whirl this week.

@bvale1
Contributor Author

bvale1 commented Apr 16, 2024

@peterhollender I haven't tried caching with the decorator yet; I'm just continuing to work with my branch in its current state. If you do try it, I'd be curious to know how it goes.

@waltsims waltsims self-assigned this May 21, 2024