Abnormal memory usage when working with large object collections #600
2 comments · 6 replies
-
Hi @foodaggression,

Nice to see that you make use of our latest. Could you provide a self-contained example which shows the RAM issue? Otherwise we would need to write one on our own to do some profiling. Also, we are currently working on a new. Hope it helps in the meantime. We'll get back to you ;)

Sincerely, Alex
-
Hi @foodaggression,

A way to get around the memory issue is, as you mentioned, to divide the computation into chunks.

```python
import magpylib as magpy
import numpy as np
import pyvista as pv


def get_field_in_chunks(sources, points, nchunks=1, field="B", **kwargs):
    """Compute the B- or H-field for given sources and points in chunks. This limits the memory
    usage of the fully vectorized getB or getH functions of the Magpylib library by dividing the
    observer points into subsets (`nchunks`). This should only be used if the number of sources AND
    the number of observers is very large, otherwise the computation time may increase
    significantly (e.g. TriangularMesh with 10000 facets + 1000 observers).

    Parameters
    ----------
    sources: source and collection objects or 1D list thereof
        Sources that generate the magnetic field. Can be a single source (or collection)
        or a 1D list of source and/or collection objects.
    points: array_like
        Array_like positions of shape (n1, n2, ..., 3) where the field
        should be evaluated. All positions are given in units of [mm].
    nchunks: int
        Positive integer corresponding to the number of position chunks which the computation
        should be divided into.

    Notes
    -----
    See `magpylib.getB` or `magpylib.getH` for other keyword arguments.
    """
    points = np.array(points, dtype=float)
    Np = np.prod(points.shape[:-1])  # number of points
    nchunks = min(max(1, nchunks), Np)  # make sure 1 <= nchunks <= Np
    slices = [slice(i * Np // nchunks, (i + 1) * Np // nchunks) for i in range(nchunks)]
    func = getattr(magpy, f"get{field}")
    B = np.concatenate(
        [func(sources, points.reshape(-1, 3)[sl], **kwargs) for sl in slices]
    )
    return B.reshape(points.shape)


Nt = 10000  # target number of triangles
r = int(np.sqrt(Nt / 2))
sphere = pv.Sphere(theta_resolution=r, phi_resolution=r)
magnet = magpy.magnet.TriangularMesh.from_pyvista(
    magnetization=[0, 100, 0],
    polydata=sphere,
)

N = 100  # number of points per side length
xs = np.linspace(-1e3, 1e3, N)
ys = np.linspace(0, 1e3, N)
grid = np.array([[(x, y, 0) for x in xs] for y in ys])
B = get_field_in_chunks(magnet, grid, nchunks=100)
```

This takes around 1 min on my laptop. We may integrate this into the main. Hope this helps ;)

Sincerely, Alex
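The chunking trick only relies on the field being evaluated independently per observer point, so concatenating per-chunk results reproduces the full computation exactly. A minimal self-contained check of that identity, using a hypothetical stand-in field function instead of Magpylib itself:

```python
import numpy as np


def fake_field(points):
    # stand-in for getB: any function applied independently per point
    return points * 2.0 + 1.0


points = np.random.default_rng(0).random((1000, 3))

nchunks = 7
Np = len(points)
slices = [slice(i * Np // nchunks, (i + 1) * Np // nchunks) for i in range(nchunks)]

# evaluate chunk by chunk, then stitch the pieces back together
chunked = np.concatenate([fake_field(points[sl]) for sl in slices])
full = fake_field(points)

assert np.allclose(chunked, full)  # chunking does not change the result
```

Note that the slice construction also works when `Np` is not divisible by `nchunks`: the integer-division endpoints cover every index exactly once.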
-
I'm calculating the magnetic field of a permanent magnet with a complex shape. To do this, I take the CAD drawing of the permanent magnet, export its surface as an STL mesh, and then use Python to assemble a collection of Magpylib triangle objects.
This process is fast: it only takes seconds to build a collection of 20k triangles, which is the lower bound for the meshing tolerance on the dimensions I care about. With fewer triangles, the field solution still changes significantly.
Calculating the field of this complex permanent magnet is also still relatively fast (around 10 seconds for 1000 points, a testament to the excellence of Magpylib!), but I run into problems when I want to calculate magnetic field maps at higher resolutions. Calling getB on this collection and a (100, 100) array of locations uses more than 30 GB of memory!
So what I currently do is build the magnet once, call getB from within a loop along one axis of the map, and store the result of each iteration by hand. This way, the field map completes in less than 2 minutes.
Is there a smarter way to do this? Does anybody know why the memory requirements of complex magnets explode with the number of observer points (I understand why the 20k triangles need quite a bit of memory)?
Any suggestions are appreciated!
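The memory blow-up is plausible from a back-of-envelope estimate: a fully vectorized computation builds intermediate arrays whose size scales with the product of the number of elementary sources and the number of observer points. A rough sketch, where `n_temp_arrays` is a guessed factor for the temporaries the vectorized core allocates (hypothetical, not taken from Magpylib internals):

```python
def intermediate_bytes(n_src, n_obs, n_temp_arrays=10):
    """Rough peak size of intermediate arrays in a fully vectorized field computation.

    Each of the n_src elementary sources is evaluated at each of the n_obs
    observer points, producing float64 arrays (8 bytes) with 3 components.
    n_temp_arrays is a guessed multiplier for temporaries; the true number
    depends on the implementation.
    """
    return n_src * n_obs * 3 * 8 * n_temp_arrays


# 20k triangles on a 100 x 100 observer grid, all at once vs. 100 chunks
full = intermediate_bytes(20_000, 100 * 100)
chunked = intermediate_bytes(20_000, 100 * 100 // 100)

print(f"full grid : {full / 1e9:.1f} GB")
print(f"per chunk : {chunked / 1e9:.2f} GB")
```

With the numbers from this thread (20k triangles, 100 x 100 grid), this toy estimate lands in the tens of gigabytes, the same order as the >30 GB observed, while chunking the observers by 100 divides the peak accordingly.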