Say you have some idiotically large map that takes 500 GB to hold in memory: too much for a single node, but feasible spread over a range of nodes. What we need is:
1. Some infrastructure so that MPI nodes can load and save the data from a file (or multiple files) efficiently and distribute the memory load.
2. Some infrastructure for a resolution degrade-reduce operation, i.e. the MPI nodes average the map down, each operating on the pixels it holds, and the head node then gathers a properly degraded low-resolution map that correctly accounts for the small star cuts.
3. Some infrastructure to load a map around a given sky position, i.e. I specify an ra, dec, and angular radius, and only the covering pixels get loaded into memory.
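The degrade-reduce in point (2) is, per coverage region, just an average over NEST-ordered child pixels that skips unobserved ones. A minimal local sketch (the sentinel value and map layout are assumptions, not the library's actual internals):

```python
import numpy as np

UNSEEN = -1.6375e30  # healpy's conventional sentinel for unobserved pixels


def degrade_mean(map_nest, factor=4):
    """Average each block of `factor` NEST-ordered child pixels into one
    parent pixel, ignoring UNSEEN children (e.g. small star cuts), so the
    low-resolution value is the mean of observed children only."""
    blocks = np.asarray(map_nest, dtype=float).reshape(-1, factor)
    good = blocks > UNSEEN / 2          # mask of observed children
    counts = good.sum(axis=1)
    sums = np.where(good, blocks, 0.0).sum(axis=1)
    out = np.full(blocks.shape[0], UNSEEN)
    np.divide(sums, counts, out=out, where=counts > 0)  # mean of observed
    return out
```

Each MPI rank would run this on the pixels it holds, and the head node would only need to gather the (much smaller) degraded arrays.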
How exactly this should be done should be discussed with the 3x2pt analysis and TXPipe in mind, so I'm CCing @joezuntz.
As for point (3), this is basically supported already: you can read in an arbitrary list of pixels at the nside of the coverage map. So you can use healpy to ask for all the pixels covering an ra/dec/radius and read those in. I'll add a convenience method so you can do just this.
The other MPI stuff is definitely more work and will require some thought. On the reading side I think this is easier than on the writing side.
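On the reading side, one plausible scheme is to split the coverage pixels into contiguous chunks, one per rank, so each rank reads only its share of the file. A sketch of just the partitioning logic (the surrounding mpi4py plumbing and the per-rank read call are left out, and the chunking strategy itself is an assumption):

```python
import numpy as np


def partition_pixels(cov_pixels, n_ranks):
    """Split a list of coverage pixels into near-equal contiguous chunks,
    one per MPI rank. Sorting first keeps each rank's reads contiguous
    on disk, which should help I/O efficiency."""
    return np.array_split(np.sort(np.asarray(cov_pixels)), n_ranks)
```

Each rank `r` would then read only `partition_pixels(cov, comm.Get_size())[r]`, keeping the per-node memory load to roughly 1/N of the full map.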