{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":45433516,"defaultBranch":"master","name":"VELOCIraptor-STF","ownerLogin":"pelahi","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2015-11-03T01:27:38.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/12030027?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1706685234.0","currentOid":""},"activityList":{"items":[{"before":"6892b04cab39f2700a242137a07586737837dd4a","after":"3cf1c50bc3207c084cd93f77f7b211bc12be330e","ref":"refs/heads/development","pushedAt":"2024-01-31T07:18:03.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Added features to baryon runs (#118)\n\n- now by default if running with baryons, code will not run unbinding on\r\n substructures till baryons have been associated with substructures.\r\n This ensures that very baryon dominated substructures are present in\r\n the catalogue. Otherwise, when a dark matter substructure candidate is\r\n found, it is possible that the unbinding process will remove it entirely\r\n before baryons can be associated with it, leaving unidentified what\r\n would be baryon dominated substructures.\r\n- updated the user interface to allow this option to be turn off and\r\n documented this option","shortMessageHtmlLink":"Added features to baryon runs (#118)"}},{"before":"c9e486ff0b2e7cb143926dc639ca35e697d22bbe","after":"6892b04cab39f2700a242137a07586737837dd4a","ref":"refs/heads/development","pushedAt":"2024-01-31T07:17:14.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Updates to improve MPI communication (#117)\n\n- fixed issue when mpi communication could generate too many requests\r\n and cause crashes\r\n- fixed bug where offset for chunked communication would cause crashes\r\n- improved how point-to-point communication is handled by generated a\r\n random set of pairs communicating\r\n- cleaned up code so that easier to maintain how pt2pt communication is\r\n run. 
**07:15 — Pushed 1 commit to `development`**

- Bug fix when reading HDF5, allowing for particles to be outside the periodic domain in the input (a sketch of the wrapping follows).

**07:13 — Created branch `feature/baryon-unbinding`** (commit "Added features to baryon runs", same message as PR #118 above).

**07:08 — Created branch `bugfix/mpi-communicaiton`** (commit "Updates to improve MPI communication", same message as PR #117 above).
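Handling out-of-domain input presumably amounts to a modular reduction of each coordinate into the box (PR #115 below adds exactly this kind of wrapping); a minimal sketch, with illustrative names rather than the code's actual routine:

```cpp
// Sketch: wrap a particle coordinate into the periodic box [0, L) per
// dimension. std::fmod keeps the sign of its first argument, hence the
// correction for negative remainders. Illustrative only.
#include <cmath>

inline void WrapIntoPeriodicBox(double pos[3], double boxlength) {
    for (int d = 0; d < 3; ++d) {
        pos[d] = std::fmod(pos[d], boxlength);
        if (pos[d] < 0) pos[d] += boxlength;   // e.g. -0.1 -> boxlength - 0.1
    }
}
```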
## 2024-01-04

**08:34–08:32 — Deleted merged or stale branches:**

- `bugfix/redompidomainsafterexport`
- `feature/propusingboundparticles`
- `feature/testreducedpushpop`
- `feature/generalcalculations`
- `swift-interface-dev`
- `feature/clang-formatted-code`
- `optimisation/improve-omp-linking`
- `feature/openmp-gpu`
- `feature/openacc`
- `feature/minpotupdate`
Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"185e0015e6f23d8924cb6e4faa90220fb75020ab","after":null,"ref":"refs/heads/feature/speedupsorts","pushedAt":"2024-01-04T08:32:46.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"88912f5e142419205bdb014e1128c8fd3484b320","after":null,"ref":"refs/heads/feature/cacheimprovement","pushedAt":"2024-01-04T08:32:42.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"14d5ec1f066dda9e291cde7caeaaa007448f2891","after":null,"ref":"refs/heads/feature/splitnodeoptimisation","pushedAt":"2024-01-04T08:32:30.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"2667be3a5fcb6bd849254620965d0cec34a613a8","after":null,"ref":"refs/heads/bugfix/load_extra_propeties_for_apertures","pushedAt":"2024-01-04T08:32:23.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"de11dae5b54a64c45a7daba0dd4b2b5aa1e69ed5","after":null,"ref":"refs/heads/bugfix/mpilocalparticledensity","pushedAt":"2024-01-04T08:32:19.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"43b0238809a27e572cb3ff39d6f11ac723991fc9","after":null,"ref":"refs/heads/feature/report-pinning","pushedAt":"2024-01-04T08:32:01.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"2652b39434e8af66df4f29ee5f11dbc63382d3b7","after":null,"ref":"refs/heads/bugfix/mpiextraprop","pushedAt":"2024-01-04T08:25:30.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"9902dfb63ab991c40154e99490a2c1921449c06f","after":"08daf07ea0766781907b6447b50bb2c0855c5a22","ref":"refs/heads/development","pushedAt":"2024-01-04T08:24:51.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Update VERSION","shortMessageHtmlLink":"Update VERSION"}},{"before":"387651f011d136d7a9df13f4353e9f47020bfea7","after":null,"ref":"refs/heads/feature/nonperiodicwrapped_input","pushedAt":"2024-01-04T08:24:33.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan 
Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"01ad9886cffb5125e5ab0638d281c7b2da39832f","after":"9902dfb63ab991c40154e99490a2c1921449c06f","ref":"refs/heads/development","pushedAt":"2024-01-04T08:24:29.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Added the ability to wrap input in period (#115)","shortMessageHtmlLink":"Added the ability to wrap input in period (#115)"}},{"before":null,"after":"387651f011d136d7a9df13f4353e9f47020bfea7","ref":"refs/heads/feature/nonperiodicwrapped_input","pushedAt":"2024-01-04T08:23:56.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Added the ability to wrap input in period","shortMessageHtmlLink":"Added the ability to wrap input in period"}},{"before":null,"after":"78bdeea32893577b4a715f75a22603ce878e2dcd","ref":"refs/heads/feature/codecleanup-icrar","pushedAt":"2024-01-04T08:07:23.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Update to NBodylib used (#114)","shortMessageHtmlLink":"Update to NBodylib used (#114)"}},{"before":"78bdeea32893577b4a715f75a22603ce878e2dcd","after":null,"ref":"refs/heads/feature/codecleanup-icrar","pushedAt":"2024-01-04T08:07:18.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"}},{"before":"15f89aaf46c6e0ce349e2d0ba3f3550e4642009d","after":"01ad9886cffb5125e5ab0638d281c7b2da39832f","ref":"refs/heads/development","pushedAt":"2024-01-04T08:07:12.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pelahi","name":"Pascal Jahan Elahi","path":"/pelahi","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/12030027?s=80&v=4"},"commit":{"message":"Feature/codecleanup icrar (#113)\n\n* Bug fix for MPI communication of extra properties\r\n\r\nFixes issue identified in https://github.com/ICRAR/VELOCIraptor-STF/issues/89\r\nhttps://github.com/ICRAR/VELOCIraptor-STF/issues/87\r\nStill needs testing to verify changes are correct.\r\n\r\n* Code clean-up \r\n\r\nAdded changes pulled from ICRAR fork. Key additions here are clean-up of writing HDF5 properties files, code clean-up of the Options structure, and some associated small changes in a few other files to account for changes from * to std::vector and removal of unused variables. 
**Fix mismatch in header and properties written.** The data sets written to the properties file were updated to match the data sets outlined by the properties header in allvars.cxx.

**Improve how compilation options are set and stored.** A combination of commits from ICRAR master (8fb8b64059cc479d15b238fb22f4c93a4168d900 and cc4f71756f487292da74c7e77b1285d45351bb99): VR now stores its compilation options as a string and reports them at runtime.

**Minor fix when parsing config.** As highlighted in c2202da2de84e3c4bb32f20601387180c08def78 of the icrar/master fork.

**Add Timer class for easy time tracking.** Taken almost verbatim from shark, where it has been used successfully for years.

**Add utility code for formatting time and memory.** Gives proper formatting of memory and time amounts; also taken from shark, with some minor simplifications and modifications.

**Fixed mismatch in return type between definition and implementation.** CalcGravitationalConstant was declared as returning Double_t but implemented as returning double. This is fine in default compilations, where Double_t = double, but not when running in single precision. Fixed.

**Update to ioutils.**

**Move non-positive particle density errors up.** Based on icrar/master commit a9c6285d1db02477becbb456bb16447340a1c3d5, with minor changes to name the exceptions files explicitly vr_exceptions. After some experiments the non-positive density errors could be reproduced earlier in the code, right after all particles have their densities calculated; the construction of the error message was therefore refactored out of its previous place (localbgcomp.cxx) and into an exception that can easily be thrown from various places in the code.

**Adding logger class.** The code requires further updates to use the logger and timer classes. The first commit is from icrar/master 93d6384dd6ee16f34d58204b2dc35727e9e447d9.

**Update to user interface.** Fixed a typo and updated the code to use the logger class.

**Added Logger, Timer, and MemReport classes.** Logging moved from cout to the LogStatement class, explicit use of chrono was removed in favour of the Timer class, and memory-usage reporting was updated to use LogStatement formatting. Also a minor bug fix in writing profiles in io.cxx, which had used the wrong set of MPI tasks to write meta information when doing parallel HDF5. This update is based on multiple commits in icrar/master that updated VR's logging.
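Neither the Timer class nor its interface is reproduced in the commit message; a minimal chrono-based sketch of a scope timer in that spirit (illustrative, not shark's or VR's actual class):

```cpp
// Sketch of a chrono-based scope timer like the Timer class described above.
#include <chrono>

class Timer {
    using clock = std::chrono::steady_clock;
    clock::time_point t0 = clock::now();
public:
    // milliseconds elapsed since construction
    double get() const {
        return std::chrono::duration<double, std::milli>(clock::now() - t0).count();
    }
};

// Usage:
//   Timer t;
//   do_work();
//   log_info("do_work took ", t.get(), " ms");   // hypothetical logger call
```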
**Avoid memory leak on MPISetFilesRead calls.** Fixes a minor memory leak (from icrar/master commit b3619fa13573550a664f957d2eb052b0bc9aa1c6). Almost all calls to MPISetFilesRead resulted in small memory leaks, because callers allocated the array passed as an argument (a pointer reference), which was then allocated again internally, creating a leak even when the caller was careful to deallocate what it had allocated. To make things simpler in the long run, all these arrays were replaced by std::vector objects, ensuring their memory is automatically managed through RAII, and the interface of MPISetFilesRead was cleaned up a bit. This addresses #65, plus similar memory leaks hidden in the code handling the other input formats (RAMSES, Gadget, NChilada).

**Minor bug fixes.** Cleaned up some minor memory leaks and an incorrect function definition, plus a minor cmake update related to logging; removed unused memory-reporting functions.

**Minor code clean-up of io routines.** Added logging, fixed some tabbing, and added inclusion of io.h in the io routines.

**Update to HDF5 interface, based on commits from the icrar fork.** These are fairly simple changes that do the work; in time more code like this may be wanted for other containers. The individual pieces:

**Add verbosity flag to H5OutputFile class.** Allows setting the flag per file whenever more information about the output of a particular file is wanted.

**Output written dataset names and dimensionality.**

**Simplify safe_hdf5 function usage.** The previous incarnation required callers to specify the wrapped function's return type, even though safe_hdf5 already has all the information required to deduce it automatically. The new version deduces the return type and also prints the HDF5 function name in the error message; to do so without much friction, safe_hdf5 is now a macro that feeds _safe_hdf5, the actual function.

**Use safe_hdf5 in write_dataset_nd.** Most of the HDF5 operations performed within write_dataset_nd had their error code or result unchecked, so the code could fail silently before the actual data writing happens; wrapping these operations in safe_hdf5 surfaces otherwise silent errors.
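The macro itself is not shown in the commit message; a self-contained sketch of the technique as described, deducing the return type and stringizing the call for the error message (illustrative, not VR's exact macro):

```cpp
// Sketch of a safe_hdf5-style wrapper: the macro captures the call's text for
// the error message, and the template deduces the return type automatically.
// The HDF5 C API signals errors with negative return values.
#include <hdf5.h>
#include <stdexcept>
#include <string>

template <typename Ret>
Ret _safe_hdf5(Ret retval, const char *call_text) {
    if (retval < 0)
        throw std::runtime_error(std::string("HDF5 call failed: ") + call_text);
    return retval;
}

#define safe_hdf5(call) _safe_hdf5((call), #call)

// Usage:
//   hid_t file = safe_hdf5(H5Fcreate("out.h5", H5F_ACC_TRUNC,
//                                    H5P_DEFAULT, H5P_DEFAULT));
```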
**Rework bits of H5OutputFile.** The class contained pieces of logic around the create/append/close methods that were very hard to follow, and those methods were defined in the header file itself, which only adds complexity to the header and to the compilation time. This commit re-implements some of the methods (the public create/append/close methods, plus an associated private member) and moves their definitions to a new h5_output_file.cxx module, leaving the header thinner and easier to follow. The implementation of H5OutputFile uses the FileExists function, which was declared only in proto.h, a single big pool of declarations; since the code has been moving away from that and towards per-module header files, the opportunity was taken to create a new io.h file and move the FileExists declaration into it, including the new header in the few modules that need it.

**Simplify FileExists implementation.** A few unused or unnecessary variables were removed; the implementation is now a two-liner.

**Move safe_hdf5 into new h5_utils module.** Having an h5_utils header+implementation module is in general probably a good idea, as it will help move some of the more general utilities out of hdfitems.h, which currently defines loads of code inline.

**Add new dataspace_information function.** Allows inspecting more closely the datasets that have been opened, whether for reading or for writing.

**Inspect datasets before they are written.** This should help identify what is going on.
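The commit does not show dataspace_information; an illustrative sketch of such a dataspace-inspection helper (the name comes from the commit, the body is an assumption):

```cpp
// Sketch: print the rank and dimensions of a dataset's dataspace.
#include <hdf5.h>
#include <cstdio>
#include <vector>

void dataspace_information(hid_t dset) {
    hid_t space = H5Dget_space(dset);
    int rank = H5Sget_simple_extent_ndims(space);
    std::vector<hsize_t> dims(rank);
    H5Sget_simple_extent_dims(space, dims.data(), nullptr);
    std::printf("dataset rank %d, dims:", rank);
    for (hsize_t d : dims) std::printf(" %llu", (unsigned long long)d);
    std::printf("\n");
    H5Sclose(space);
}
```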
**Remove redundant parameters from write_dataset.** The only "flag" parameter of the write_dataset family of functions that was being actively used was flag_parallel; the rest were always left at their default value (true) and served only to pollute the interface. These parameters are removed and defined as local, fixed variables in the final routine that performs the writing; later the code in that function can be simplified based on these fixed values.

**Cleanup and move main write_dataset_nd function.** The implementation was slightly confusing, containing some code duplication and too many #ifdef macros spread around, and it still used the various "flag" variables now known to always be true. The routine was re-implemented: the overall structure and logic remain untouched, but variables were renamed for clarity, common code was factored out into reusable functions, #ifdef usage was greatly reduced, and the code was simplified after removing all usage of the "flag" variables. Since this is a big change, the opportunity was taken to move the implementation down into the new h5_output_file.cxx file, avoiding recompiling it each time hdfitems.h is included.

**Cleanup and move rest of write_dataset functions.** With the main write_dataset_nd function cleaned up and moved to h5_output_file, the rest of the functions that depend on it follow. During this move some small duplicate checks were removed, indentation was adjusted, and asserts were added for otherwise unnecessary arguments. The same treatment was given to the templated write_dataset functions, which must remain in the header.

**Unify write_attribute method implementations.** There were two very similar implementations, one for plain scalar values and one for strings, which at heart were the same. The common bits now live in a private method, which the two existing public methods invoke with the correct arguments. In addition, the string version of write_attribute was failing to close the new datatype it created, leaking HDF5 object identifiers; this is now fixed. Finally, scalar attributes can only be written in serial (and are, in VR); a new assertion to that effect has been added.

**Write HDF5 datasets in 2 GB chunks.** HDF5 does not properly support writing >= 2 GB of data in a single I/O operation; even though 1.10.2 reportedly fixed this limitation, several situations have been seen where the limit breaks the code in strange ways. Instead of writing the given data in a single H5Dwrite call, writes are now performed up to 2 GB at a time, chunked only in the first dimension, which is also the dimension written in parallel from multiple MPI ranks. Some variable names were further adjusted and the code in write_dataset_nd simplified. This is the main change required to remove the problems reported in #88.
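The chunking strategy reads naturally as a hyperslab loop along the first dimension; a sketch under that assumption (illustrative names, error checking elided, not VR's actual write_dataset_nd):

```cpp
// Sketch: write a dataset in slabs along dimension 0, keeping each H5Dwrite
// under a byte budget (2 GB here).
#include <hdf5.h>
#include <vector>

void write_in_chunks(hid_t dset, hid_t memtype, const void *data,
                     int rank, const hsize_t *dims) {
    const hsize_t kMaxBytes = hsize_t(2) * 1024 * 1024 * 1024;  // 2 GB budget
    hsize_t rowbytes = H5Tget_size(memtype);    // bytes per slice of dim 0
    for (int d = 1; d < rank; ++d) rowbytes *= dims[d];
    hsize_t step = kMaxBytes / rowbytes;        // rows per write
    if (step == 0) step = 1;

    hid_t filespace = H5Dget_space(dset);
    std::vector<hsize_t> start(rank, 0), count(dims, dims + rank);
    for (hsize_t off = 0; off < dims[0]; off += step) {
        start[0] = off;
        count[0] = (off + step <= dims[0]) ? step : dims[0] - off;
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start.data(),
                            nullptr, count.data(), nullptr);
        hid_t memspace = H5Screate_simple(rank, count.data(), nullptr);
        const char *ptr = static_cast<const char *>(data) + off * rowbytes;
        H5Dwrite(dset, memtype, memspace, filespace, H5P_DEFAULT, ptr);
        H5Sclose(memspace);
    }
    H5Sclose(filespace);
}
```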
**Close objects in exact reverse order.** This might not be a problem in reality, but better safe than sorry.

**Add test program for H5OutputFile.** A useful tool to have in general (and even the beginning of a unit test suite), and in particular it helps reproduce #88 much faster.

**Fix incorrect assertion.** When writing HDF5 files in parallel, attribute writing must happen in parallel; sometimes it is not possible to bypass this limitation, so attribute writing must be done by a single rank even in MPI-able scenarios. The H5OutputFile class was designed with this usage in mind, and while refactoring the code as part of #88, assertions were added to make sure this held true from the caller's side (see e703f6f). That assertion is faulty, however, as it only applies when compiling against MPI-enabled HDF5 libraries, which is what this commit fixes. The faulty assertion was reported in #113.

**Bug fix.** Minor bug fix resulting from a rebase, plus minor updates to the set of commits in icrar master related to the HDF5 update.

**Fixed improper const.**

**Restructure SO information collection and writing.** Spherical overdensity information (particle IDs and types) is collected by two routines, GetInclusiveMasses and GetSOMasses. Both used the following technique: for "ngroup" groups, a C array-of-vectors (AOV) of ngroup + 1 elements was allocated via new (and eventually delete[]'d); the per-group vectors are filled independently as groups are processed, the loop over groups is 1-indexed, and indexing into the AOVs uses the same iteration variable, so element 0 is skipped; finally, the AOVs are passed to WriteSOCatalog, which is aware of the 1-based indexing and additionally flattens the AOVs into a single vector (thus duplicating their memory requirement) before writing the data to the output HDF5 file. This commit originally aimed to reduce the memory overhead of the final write (see #71); the main change required is to flatten the data at separate times, so that particle IDs and types are not flattened simultaneously but one after the other, with the memory from the first flattening released before the second happens, reducing the application's peak memory requirement (sketched below). That goal was achieved, but while making these changes two things became clear: a vector-of-vectors (VOV) is a better interface than an AOV (due to automatic memory management), and the 1-based indexing of the original AOVs introduced much complexity. The scope was therefore broadened to cover these two extra changes, so the commit grew considerably in size; in particular the 0-indexing of the VOVs made it easier to use std algorithms that clarify intent in certain places of the code. Other minor changes are included, mostly reducing variable scope and code duplication, with assertions sprinkled here and there for further assurance that the code works as expected. As an unintended side effect, this commit also fixed the wrongly calculated Offset dataset, which was off by one index in the original values; this problem was reported in #108 and seems to have always been there.
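The staged flattening, releasing the first flattened buffer before building the second, can be sketched generically (illustrative names; not WriteSOCatalog itself):

```cpp
// Sketch: flatten one field at a time so the two flattened copies never
// coexist, lowering the peak memory of the write.
#include <cstddef>
#include <vector>

template <typename T>
std::vector<T> Flatten(const std::vector<std::vector<T>> &vov) {
    std::size_t total = 0;
    for (const auto &v : vov) total += v.size();
    std::vector<T> flat;
    flat.reserve(total);
    for (const auto &v : vov) flat.insert(flat.end(), v.begin(), v.end());
    return flat;
}

void WriteCatalog(const std::vector<std::vector<long long>> &ids,
                  const std::vector<std::vector<int>> &types) {
    {
        auto flat_ids = Flatten(ids);
        // write_dataset("Particle_IDs", flat_ids);    // hypothetical writer
    }  // flat_ids destroyed here, releasing its memory...
    {
        auto flat_types = Flatten(types);              // ...before this allocation
        // write_dataset("Particle_Types", flat_types);
    }
}
```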
**Adding logging information in local field calculation.** Minor updates related to logging, along with some code clean-up.

**Split memory usage collection and reporting.** In some places of the code it is useful to measure memory usage without displaying it as a normal "report" in the logs; splitting metrics collection from reporting allows reusing the collection bits elsewhere. The collection itself was also simplified: not that many fields are needed, since usually only current/peak VM and RSS usage are of interest, and a single file can be used as the source of the information, which should also help speed up the data collection.

**Use simpler function names in memory reports.** Pretty names are unnecessarily long and do not really add useful information, especially since the location in the file where the report is generated is already available.

**Simplify memory usage reporting calls.** Memory reporting no longer needs the Options object (it was previously used to keep track of peak memory usage, which the kernel already does for us); additionally the file/line location already shows up on the log statement when a memory report is generated, so there is no need to have the information twice on the same line.

**Remove memuse_* fields from Options class.** These were not used in reality, or were previously used incorrectly, so there is no need to keep them around. If memory-tracking functionality is needed in the future, they can be reinstated in a different form.

**Finish VR gracefully on uncaught exceptions.** VR's main function did not have a high-level try/catch block to handle uncaught exceptions rising from the code, leaving the behaviour of the program to the fate of whatever default the C++ runtime library provides. Such a block has been added: when an uncaught exception is received, an error is logged, and then either an MPI_Abort is issued on COMM_WORLD or a simple exit is invoked on non-MPI builds.
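This handler is a small, standard pattern; a sketch assuming an MPI build (the entry-point name is hypothetical):

```cpp
// Sketch: top-level uncaught-exception handler that logs, then tears down
// all ranks with MPI_Abort (a non-MPI build would std::exit instead).
#include <mpi.h>
#include <exception>
#include <iostream>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    try {
        // run_velociraptor(argc, argv);   // hypothetical entry point
    } catch (const std::exception &e) {
        std::cerr << "Uncaught exception: " << e.what() << std::endl;
        MPI_Abort(MPI_COMM_WORLD, 1);      // never returns
    }
    MPI_Finalize();
    return 0;
}
```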
**Log memory requirements for data transfers.** This might help in understanding what exactly is causing #118 (if logs are correctly flushed and do not get too scrambled).

**Minor updates.** Cleaned up some HDF5 code and some other minor tidying. May need to check the unbind code for nested parallelism.

**Adding most-bound BH ID and position to output.** Added the position and ID of the BH that is closest to the most-bound dark matter particle. This was tested in COLIBRE and works well.

**Update to store entropy and temperature.** Storing temperature in gas particles requires an update to the Particle class in NBodylib as well as an update to the HDF5 interface.

**Update to SO calculations.** Based on commits merged in 21ffc7070d94b6d8c2b07b2a8c820c35b98c6ed3; these build on several commits by Claudia Lagos improving and extending the SO and half-mass radii calculations:

- The code now computes the radii of different spherical overdensities in log10(rho) versus natural-radius space, as done by Roi Kugel; once the radii are computed, the enclosed masses are calculated in log10(M) versus radius space (see the sketch after this list).
- A small modification to the SO calculations follows what Pascal has in his so-gasstar-masses branch, but keeps the interpolation in log10 space. With this, the M200crit halo mass function of EAGLE agrees perfectly between VELOCIraptor and SUBFIND down to ~32 particles, the limit imposed in SUBFIND for finding subhalos.
- Interpolation is not used to compute all half-mass radii; these are instead computed by looping over particles ordered in radius.
- The previous calculation of Rhalfmass_star was wrong while the aperture r50 values were correct (determined by comparing the VR catalogues with those of SUBFIND in EAGLE). This was because the particle rc was defined differently in Rhalfmass_star than in the aperture calculation, and because Rhalfmass_star was not using interpolation; the Rhalfmass_* calculations have now been updated to follow the aperture ones.
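The log-space search for an SO radius can be sketched as a linear interpolation for the radius at which the mean enclosed density crosses the target overdensity (illustrative only; not the code's actual routine):

```cpp
// Sketch: find the SO radius by interpolating radius linearly against
// log10(mean enclosed density). r must be sorted ascending, with rho[i]
// the mean enclosed density at r[i], decreasing outwards.
#include <cmath>
#include <vector>

double FindSORadius(const std::vector<double> &r,
                    const std::vector<double> &rho, double rho_target) {
    for (std::size_t i = 1; i < r.size(); ++i) {
        if (rho[i] <= rho_target && rho[i - 1] > rho_target) {
            const double l1 = std::log10(rho[i - 1]);
            const double l2 = std::log10(rho[i]);
            const double lt = std::log10(rho_target);
            const double f = (lt - l1) / (l2 - l1);   // fraction of the step
            return r[i - 1] + f * (r[i] - r[i - 1]);
        }
    }
    return r.empty() ? 0.0 : r.back();   // never crossed: return outermost radius
}
```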
**Port SWIFT interface to use better logging.** By no means complete, but a step in the correct direction.

**De-duplicate InitVelociraptor* functions.** The bodies of the two existing functions were almost identical, with that of InitVelociraptorExtra *not* initialising a couple of things, which looks like a bug. Their major difference was the Options variable they worked on, so instead of duplicating the code it is reused, with the corresponding Options object passed as an argument.

**Further logging improvements in swiftinterface.cxx.**

**Updates to swift interface.** Updates to logging and timing in the SWIFT interface.

**Bug fix for spherical overdensity calculation of subhalos.** SO masses and related quantities for subhalos were incorrectly calculated.

**Feature/core binding report (#110).** VR reports where MPI ranks and OpenMP threads are bound (a sketch of one way to query this appears at the end of this log). Initially tested on OSX; needs testing on Linux. Plus a series of follow-up fixes:

- Update to the formatting of the core binding report; a minor change removing an unnecessary for loop; cleaned up the MPI report of which nodes the code is running on.
- Bug fix to HDF5 reading, with improved error handling of HDF5 reads.
- Logging now also reports the function from which it is called.
- Bug fix for OpenMP FOF when running with MPI.
- Added the C++ standard to use.
- Bug fix for sending extra properties between MPI ranks, and a minor fix to ensure all MPI calls come before finalize.
- Added a clang formatting standard.
- Minor update to reading other formats related to MPI decomposition, and a minor bug fix introduced in the previous commit.
- Update to allow VR to period-wrap particles outside the periodic domain for gadget input, and a follow-up update to the period wrapping of gadget input.
- Fixes for z-mesh scaling with the number of MPI processes.
- Minor update to the binding report.
- Update to the NBodylib used (#114).

(Signed-off-by: Rodrigo Tobar. Co-authored-by: Rodrigo Tobar.)

**08:04 — Deleted branch `feature/codecleanup-icrar-newtree`**

Older activity continues on the next page of the feed.
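The core-binding report above (#110) does not show how bindings are queried; on Linux one way to discover where an OpenMP thread may run (an assumption — the commit notes initial testing on OSX, where this API does not exist) is sched_getaffinity:

```cpp
// Sketch: report MPI-rank / OpenMP-thread core affinity on Linux via
// sched_getaffinity. Illustrative only; OSX needs a different mechanism.
#include <omp.h>
#include <sched.h>
#include <cstdio>

void ReportBinding(int rank) {
    #pragma omp parallel
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
            // print every core this thread is allowed to run on
            for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu)
                if (CPU_ISSET(cpu, &mask))
                    std::printf("rank %d thread %d may run on core %d\n",
                                rank, omp_get_thread_num(), cpu);
        }
    }
}
```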