storeChunk: Workaround for bug with parallel flushing #5510
Merged
PrometheusPi merged 3 commits into ComputationalRadiationPhysics:dev on Mar 23, 2026
Conversation
Force-pushed 6a4f325 to 250408d
Contributor (Author)

Alright, it seems we do need to merge this after all: HDF5 became a lot more painful about collective metadata setup as of HDF5 2.0; see openPMD/openPMD-api#1862.
PrometheusPi approved these changes on Mar 23, 2026
Merged commit 8d018f7 into ComputationalRadiationPhysics:dev. 10 checks passed.
Workaround for this bug: openPMD/openPMD-api#1794
I think this has little importance for PIConGPU since we process Iterations collectively anyway, which makes the issue unlikely to occur. But I did stumble over it a while ago. Of course, I did not document it and have no reproducer at hand now…
TODO:

- `.writeIterations()` with `.snapshots()` and using `WRITE_RANDOM_ACCESS` might trigger the issue, but in that case it would be restricted to dev versions of openPMD, hence not so important.
- Reproducer: Either find a simulation that does not write particles on some rank, or fake it (see the sketch below). Then run, e.g. within a LaserWakefield simulation:
  `mpirun -n 2 picongpu -g 192 192 192 -d 1 2 1 --openPMD.period 100:100 -s 100 --openPMD.ext h5`
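The following is a minimal, hedged sketch (not the PR's actual workaround, nor the exact reproducer) of the write pattern this TODO refers to: one MPI rank stores no chunk while the others do, followed by a collective flush through the parallel HDF5 backend. The file name, dataset names, and extents are made up for illustration; the openPMD-api calls used (`Series`, `writeIterations()`, `resetDataset()`, `storeChunk()`) are standard public API.

```cpp
// Sketch: one rank writes no data, then all ranks flush collectively.
// Build against openPMD-api with MPI/HDF5 support, e.g.  mpic++ sketch.cpp -lopenPMD
#include <openPMD/openPMD.hpp>

#include <mpi.h>

#include <cstdint>
#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    {
        openPMD::Series series(
            "parallel_flush_sketch.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

        auto iteration = series.writeIterations()[0];
        // Illustrative dataset only; the usual mesh metadata (axisLabels,
        // gridSpacing, ...) is omitted for brevity.
        auto E_x = iteration.meshes["E"]["x"];
        E_x.resetDataset(openPMD::Dataset(
            openPMD::Datatype::DOUBLE,
            {static_cast<std::uint64_t>(size), 100}));

        std::vector<double> local(100, static_cast<double>(rank));
        if (rank != 0)
        {
            // Rank 0 deliberately skips storeChunk(), mimicking a rank that
            // has no particles to write in an output step.
            E_x.storeChunk(
                local,
                openPMD::Offset{static_cast<std::uint64_t>(rank), 0},
                openPMD::Extent{1, 100});
        }

        // Collective flush: even ranks that stored no chunk must take part in
        // the HDF5 backend's collective metadata operations here.
        iteration.close();
    } // Series goes out of scope and closes the file before MPI_Finalize()

    MPI_Finalize();
    return 0;
}
```

Whether this actually triggers the issue depends on the openPMD-api version and HDF5 backend configuration; it is only meant to illustrate the "rank without data" pattern described above.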