629 commits
711ba45
Add patch-based support for deterministic sampler (#971)
tge25 Jul 30, 2025
28d864c
Refactoring and updates to Domino (#1023)
RishikeshRanade Jul 30, 2025
c387e12
Physicsnemo neighbor tools (#1010)
coreyjadams Jul 31, 2025
194049a
Domino bug fix (#1043)
RishikeshRanade Aug 1, 2025
bcff394
Update header_check.py (#1039)
coreyjadams Aug 1, 2025
0dde722
MeshGraphNet Performance: Automaticaly Use transformer engine for Lay…
coreyjadams Aug 4, 2025
3a251f6
Migrate XAeroNet example to PyTorch Geometric (#1037)
Alexey-Kamenev Aug 4, 2025
9655507
Download and process DrivAerML data for DoMINO using PhysicsNeMo-Cura…
saikrishnanc-nv Aug 4, 2025
b55fdee
Add capability to introduce PDE losses to DoMINO (#1034)
ktangsali Aug 4, 2025
ada0748
fixing bug in unet and reflecting changes in domino (#1045)
RishikeshRanade Aug 5, 2025
6038942
Hybrid MeshGraphNet (#1046)
mnabian Aug 6, 2025
6cd9494
Add refactor of transolver model + darcy cfd example refresh. (#1042)
coreyjadams Aug 6, 2025
571f7e2
Update CHANGELOG.md (#1049)
coreyjadams Aug 6, 2025
7322cef
Doc fix (#1044)
coreyjadams Aug 6, 2025
4e9ba67
upload DPOT and an exmaple training file on NS2d (#1027)
HaoZhongkai Aug 6, 2025
dc0734a
Spelling + Grammar Fixes (#1050)
peterdsharpe Aug 7, 2025
2e1cf65
Improve lead time support for diffusion models (#980)
jleinonen Aug 9, 2025
382df9f
Migrate Lagrangian-MGN example to PyTorch Geometric (#1053)
Alexey-Kamenev Aug 11, 2025
42f3056
Allow 0/1 values for use_fp16 in cordiff model wrappers (#1059)
CharlelieLrt Aug 12, 2025
fb4140c
Refactor GroupNorm and log unmatched state_dict keys (#989)
juliusberner Aug 13, 2025
6ca189f
Restores previous `pre-commit` behavior with respect to formatting of…
peterdsharpe Aug 13, 2025
77b33c5
xmgn inference code (#1067)
mnabian Aug 15, 2025
a7d5769
Migrate Hybrid-MGN example to PyTorch Geometric (#1075)
Alexey-Kamenev Aug 19, 2025
0f3b8c7
DiT Implementation (#1080)
Dibyajyoti-Chakraborty Aug 20, 2025
a78132a
Physicsnemo kNN Utility (#1079)
coreyjadams Aug 20, 2025
c1e726d
Use no_grad for healpix tests (#1090)
pzharrington Aug 22, 2025
7647824
Add torch_geometric and torch_scatter to PhysicsNeMo base Docker imag…
Alexey-Kamenev Aug 6, 2025
208226b
Cfd transolver driverml example (#1052)
coreyjadams Aug 7, 2025
1e3d88a
Add capability to install torch scatter from custom wheel (#1054)
ktangsali Aug 7, 2025
2b9a747
Hot fixes for release (#1058)
ktangsali Aug 12, 2025
0ea1b33
Update README.md (#1064)
coreyjadams Aug 13, 2025
aabc6f0
Sonar setup (#1066)
ktangsali Aug 14, 2025
43a0da0
Domino fixes (#1069)
coreyjadams Aug 15, 2025
42d2e38
Docs unification (#1070)
ktangsali Aug 18, 2025
7b6272c
Sonar updates (#1074)
ktangsali Aug 18, 2025
961cd34
fixing bug and optimizing parameters (#1081)
RishikeshRanade Aug 20, 2025
3c34d4f
Fix radius_search for domain parallelism compatibility. (#1065)
coreyjadams Aug 20, 2025
cf970af
Enable neighbor tools in api docs. (#1086)
coreyjadams Aug 21, 2025
b04cbf5
Domino finetune (#1082)
RishikeshRanade Aug 21, 2025
e6bc119
Add relative comparisons (#1088)
ktangsali Aug 22, 2025
1e90f79
update new versions
ktangsali Aug 25, 2025
d136c25
fix changelog
ktangsali Aug 25, 2025
c5ad592
fix formatting
ktangsali Aug 25, 2025
be3f9af
fix linting in changelog
ktangsali Aug 25, 2025
ab23309
remove the stanford bunny download due to access limitations (avoids …
ktangsali Aug 25, 2025
1b75069
Fix import error from renaming index op items. (#1093)
coreyjadams Aug 26, 2025
59f7af0
Fix Regen readme image render (#1055)
pzharrington Aug 26, 2025
01c2f0c
Migrate Stokes MGN example to PyTorch Geometric (#1087)
Alexey-Kamenev Aug 29, 2025
4dc70a1
Enable multi-gpu ci integration on internal clusters. (#1098)
coreyjadams Aug 29, 2025
df454b2
Migrate Lennard Jones example to PyTorch Geometric (#1103)
Alexey-Kamenev Sep 3, 2025
78584a4
Codeowners (#1094)
coreyjadams Sep 3, 2025
b556aed
Update README.md for XAeroNet (#1101)
ktangsali Sep 3, 2025
9c185ce
Add XAeroNet prerequisites in README (#1104)
Alexey-Kamenev Sep 3, 2025
cd31a32
Add lead time support to StormCast example (#1100)
jleinonen Sep 4, 2025
9d9e4a3
Add significantly more coverage of the domino datapipe to catch more …
coreyjadams Sep 5, 2025
6ebbc59
Speeding up distributed tests (#1095)
coreyjadams Sep 8, 2025
77b3c68
Mixture of Weather Experts (#1084)
Dibyajyoti-Chakraborty Sep 8, 2025
7ee88fd
fix bug in positional embedding (#1096)
jialusui1102 Sep 9, 2025
d4cfd48
Log-uniform sigma sampling in EDMLoss (#1107)
jleinonen Sep 10, 2025
64ad752
Improved Test Coverage for ShardTensor (#1109)
coreyjadams Sep 17, 2025
bf4cda8
Migrate GraphCast to PyTorch Geometric (#1111)
Alexey-Kamenev Sep 20, 2025
ac55165
Add a distributed implementation of kNN + testing. (#1117)
coreyjadams Sep 22, 2025
c4a30b5
Port SDF function to static, torch-only interface (#1119)
coreyjadams Sep 22, 2025
0183ddd
Ensure proper interop stream dependencies in torch-facing functions. …
coreyjadams Sep 22, 2025
768b1ea
Change skip_scale to Python float (#1126)
jleinonen Sep 23, 2025
5d8916c
Refactor DiTBlock to be more modular (#1120)
pzharrington Sep 23, 2025
3d034bb
Diffusion FWI example (#1078)
CharlelieLrt Sep 24, 2025
aedc674
Migrate blood flow example to PyG (#1127)
Alexey-Kamenev Sep 24, 2025
86d9b66
The attention layer should not be causal. (#1129)
coreyjadams Sep 25, 2025
3e899b6
Update README.md (#1130)
ktangsali Sep 26, 2025
449c6db
Add `pre-commit` as a `dev` dependency (#1122)
laserkelvin Sep 30, 2025
096a66f
Fix the dataloader runtime error (#1134)
mnabian Sep 30, 2025
3745beb
Diffusion fwi readme fix (#1132)
CharlelieLrt Sep 30, 2025
b6a1621
Migrate HydroGraphNet example to PyG (#1128)
Alexey-Kamenev Sep 30, 2025
99781d0
Fix the bug in combine_stl_solids.py (#1135)
mnabian Oct 1, 2025
f53eb86
Add a timestamp to avoid mlflow gc (#1136)
ktangsali Oct 1, 2025
5a64525
Fix test_conv_nd and test_conv_ndfc tests flakiness on Blackwell (#1138)
abokov-nv Oct 2, 2025
516ddf4
Implement recursive saving/loading of checkpoint for nested Modules (…
CharlelieLrt Oct 2, 2025
a388518
Bugfix and improvements for diffusion models and corrdiff (#1139)
CharlelieLrt Oct 3, 2025
553be4b
Improve Sharded Convolution Implementation (#1141)
coreyjadams Oct 6, 2025
3e5e68d
Migrate AeroGraphNet example to PyG (#1137)
Alexey-Kamenev Oct 7, 2025
45e5e45
Update attention_patches.py (#1148)
coreyjadams Oct 9, 2025
048b997
Enable torch.nn.functional.pad with ShardTensor (#1147)
coreyjadams Oct 9, 2025
89ad3a6
Remove DGL references in non-DGL code (#1152)
Alexey-Kamenev Oct 10, 2025
a8af5cd
Add XAeroNet PyG inference script (#1146)
Alexey-Kamenev Oct 13, 2025
9fac469
Add DGL to PyG migration guide (#1158)
Alexey-Kamenev Oct 15, 2025
f7b7d8f
Improvements and bugfixes for the Diffusion-FWI example (#1159)
CharlelieLrt Oct 16, 2025
87174c4
DiT refactor to support custom Module (#1151)
pzharrington Oct 16, 2025
2aeb2d3
Fix `stochastic_sampler` - handle `EDMPrecond` model properly (#1154)
melo-gonzo Oct 17, 2025
33b4af2
update code owners (#1160)
coreyjadams Oct 17, 2025
50b3937
Update CODEOWNERS (#1161)
coreyjadams Oct 17, 2025
aac1ce3
DoMINO Performance Optimizations (#1133)
coreyjadams Oct 20, 2025
ad35426
Transolver domain parallel (#1142)
coreyjadams Oct 21, 2025
c5cfcf9
Enhance Transolver with configurable checkpoint and normalization dir…
dran-dev Oct 21, 2025
ccde7f5
Dockerfile: remove DGL, add pyglib install (#1171)
ktangsali Oct 22, 2025
142b666
Ensure a default value for local rank in unified memory. (#1164)
coreyjadams Oct 23, 2025
d766b4d
Domain Parallel Domino (#1165)
coreyjadams Oct 23, 2025
410cf9e
Fix bug in save_checkpoint when model.meta.name is missing + improve …
CharlelieLrt Oct 24, 2025
4287bc7
Crash sample (#1162)
mnabian Oct 24, 2025
bd46225
DistributedManager cleanup and kNN cuml/scipy hotfixes (#1182)
coreyjadams Oct 24, 2025
91cb7d7
Add explicit error message for no world edges (#1183)
mnabian Oct 24, 2025
470e6fa
Added input shape validation for SongUnet (#1181)
CharlelieLrt Oct 27, 2025
0d820ce
Update the end year in license headers (#1184)
mnabian Oct 27, 2025
a1ff9d2
Adds zip and unzip to dockerfile (#1189)
CharlelieLrt Oct 27, 2025
ed8b3ce
Add nvtx and dask dependencies to docker (#1188)
ktangsali Oct 27, 2025
68c1854
Small CorrDiff fixes (#1062)
swbg Oct 27, 2025
8d018f1
Active learning abstraction (#1174)
laserkelvin Oct 27, 2025
d1b6b7b
Extend crash datapipe, readers, and update README (#1194)
mnabian Oct 30, 2025
2975ce8
Add support for str version of memory format param in FIGConvNet (#1200)
Alexey-Kamenev Oct 30, 2025
afe3966
Update PULL_REQUEST_TEMPLATE.md (#1192)
coreyjadams Oct 31, 2025
219ed0d
Fixed inconsistency for shapes of non-square images (#1202)
CharlelieLrt Oct 31, 2025
04d5fe9
Refactored save, load, and from_checkpoint to support zip format by d…
CharlelieLrt Oct 31, 2025
c3ad24c
Add FIGConvNet to crash example (#1207)
Alexey-Kamenev Nov 4, 2025
69cf158
propose fix some typos (#1209)
jeis4wpi Nov 5, 2025
ea7d521
Removed zip compression (#1210)
CharlelieLrt Nov 5, 2025
a6a083a
Update crash readme (#1212)
mnabian Nov 6, 2025
cdd0f84
Bump multi-storage-client to v0.33.0 with rust client (#1156)
dreamtalen Nov 6, 2025
bf85887
Add jaxtyping to requirements.txt for crash sample (#1218)
mnabian Nov 8, 2025
a228f62
Replace 'License' link with 'Dev blog' link (#1215)
ram-cherukuri Nov 10, 2025
f8fd198
Validation fu added to examples/structural_mechanics/crash/train.py (…
dakhare-creator Nov 10, 2025
059fe5d
Add saikrishnanc-nv to github actors (#1225)
saikrishnanc-nv Nov 12, 2025
8252271
Integrate Curator instructions to the Crash example (#1213)
saikrishnanc-nv Nov 12, 2025
adc6602
Adding code of conduct (#1214)
ram-cherukuri Nov 12, 2025
1a52284
Fixed minor bug in shape validation in SongUNet (#1230)
CharlelieLrt Nov 14, 2025
7277097
Add Zarr reader for Crash (#1228)
saikrishnanc-nv Nov 14, 2025
341bf90
Add AR RT and OT schemes to Crash FIGConvNet (#1232)
Alexey-Kamenev Nov 19, 2025
245c111
Formatting active learning module docstrings (#1238)
laserkelvin Nov 20, 2025
76b0c5f
A new X-MeshGraphNet example for reservoir simulation. (#1186)
tonishi-nv Nov 20, 2025
3f2e184
Add knn to autodoc table. (#1244)
coreyjadams Nov 21, 2025
233b207
Update version (#1193)
ktangsali Oct 28, 2025
3314d39
Fix depenedncies to enable hello world (#1195)
ktangsali Oct 29, 2025
cab05b0
Remove zero-len arrays from test dataset (#1198)
coreyjadams Oct 30, 2025
aecf8c2
Merge updates to Gray Scott example (#1239)
ktangsali Nov 20, 2025
9e74b15
Interpolation model example (#1149)
jleinonen Nov 21, 2025
cd2a314
update versions
ktangsali Nov 25, 2025
384d5fd
Enhance checkpoint configuration for DLWP Healpix and GraphCast (#1253)
dran-dev Dec 1, 2025
f33cd2b
Transolver volume (#1242)
coreyjadams Dec 1, 2025
24155c6
Fix README links in transolver and domino examples (#1259)
dran-dev Dec 4, 2025
4edf09f
reduced precision ball query (#1266)
coreyjadams Dec 8, 2025
cdafc3f
chore: adding kelvin to blossom allow list (#1277)
laserkelvin Dec 16, 2025
6987040
chore: adding owners for active learning components (#1275)
laserkelvin Dec 16, 2025
5be1646
Update requirements.txt to include tensorboard (#1280)
paveltomin Dec 16, 2025
130f0e8
Propose fix some typos (#1279)
jeis4wpi Dec 17, 2025
298b751
fix: update DoMINO and GraphCast recipe dependencies (#1283)
dran-dev Dec 17, 2025
89442f8
Update the device (#1285)
ktangsali Dec 18, 2025
9c1105c
V2.0 refactor (#1235)
coreyjadams Dec 18, 2025
b108dad
Update README.md (#1291)
coreyjadams Dec 19, 2025
ff73f57
Update version specification in require_version_spec function (#1293)
peterdsharpe Dec 19, 2025
cd648af
PhysicsNeMo-Mesh: adds minimal mesh utilities + manipulation (#1267)
peterdsharpe Dec 20, 2025
2ec1050
The stale action needs permissions to update the cache (#1296)
coreyjadams Dec 22, 2025
e78ff1a
Update README with Docker run flags (#1298)
ktangsali Dec 22, 2025
0036c26
Adds `CombinedOptimizer` (#1241)
peterdsharpe Jan 6, 2026
0a3d467
Set up ci for merge queue (#1305)
coreyjadams Jan 6, 2026
db22ba6
Fixes Non-Deterministic Tests (#1295)
peterdsharpe Jan 7, 2026
9d12a1c
Exclude 'merge_group' event from CI jobs (#1308)
ktangsali Jan 7, 2026
a33a25b
Ensure blossom-ci workflow autopasses ONLY on the merge queue (#1309)
coreyjadams Jan 7, 2026
5a8ef29
Autopass mergequeue (#1310)
coreyjadams Jan 7, 2026
7fba873
Roll back blossom ci changes (#1312)
coreyjadams Jan 7, 2026
e1d29ba
Create merge-queue-blossom-passthrough.yml (#1313)
coreyjadams Jan 7, 2026
00dde58
Update README with new examples and models (#1254)
ktangsali Jan 7, 2026
78461f4
Update README.md (#1294)
coreyjadams Jan 8, 2026
c316434
Update stale.yml (#1315)
coreyjadams Jan 8, 2026
c585b29
Restructure diffusion subpackage (#1268)
CharlelieLrt Jan 8, 2026
3f31f13
Refactor rnn models for the new standards (#1306)
ktangsali Jan 12, 2026
43de886
Geotransolver Model (#1297)
coreyjadams Jan 13, 2026
8a39890
Mention `CODING_STANDARDS` in `CONTRIBUTING` (#1320)
laserkelvin Jan 13, 2026
5e4b4f4
`pyproject.toml` changes for robustness & housekeeping (#1322)
peterdsharpe Jan 14, 2026
9a3cf30
Fix distributed tests (#1307)
coreyjadams Jan 15, 2026
4283372
Enable github-first GPU CI for nightly runs + PRs. (#1325)
coreyjadams Jan 15, 2026
d69017f
Update container, test bare uv install (but not all deps in either ca…
coreyjadams Jan 15, 2026
3eaa76e
mmiranda fix broken links in ldc pinns readme Additional Reading sect…
megnvidia Jan 16, 2026
f393908
MGN Refactor (#1324)
mnabian Jan 16, 2026
6c7b511
Add ASV support. Add kNN benchmark. (#1323)
Alexey-Kamenev Jan 16, 2026
e63fb47
Adds Installation CI check on Windows; adds UV syntax modernization (…
peterdsharpe Jan 20, 2026
11eaa48
Fea tank filling (#1301)
loliverhennigh Jan 20, 2026
7806dbe
Refactor MGN variants (#1328)
mnabian Jan 21, 2026
08dc147
Fixes Torch 2.10 <-> CuML <-> earth2grid Dependency Issues; Removes d…
peterdsharpe Jan 22, 2026
15933e0
DiT/NATTEN and PositionalEmbedding fixes (#1343)
jleinonen Jan 24, 2026
8d8b1ed
Refactor dlwp_healpix for model standards compliance (#1321)
pzharrington Jan 24, 2026
a89f585
Update CODEOWNERS (#1340)
coreyjadams Jan 26, 2026
6436e63
PhysicsNemo Datapipes (#1304)
coreyjadams Jan 27, 2026
48495d5
Refactors `torch.jit.script` onto more-modern `torch.compile`; modern…
peterdsharpe Jan 27, 2026
7e8e4a8
Refactors for `FullyConnected` and `UNet` models (#1330)
peterdsharpe Jan 27, 2026
0a4238a
Fea update transient sim readme (#1353)
loliverhennigh Jan 27, 2026
f5eab2c
refactor dlwp for model standards (#1354)
pzharrington Jan 28, 2026
404ec59
Fea nn funcitonal (#1359)
loliverhennigh Jan 28, 2026
af06253
Domain Parallel docstring + typehint Formating (#1355)
coreyjadams Jan 28, 2026
aa54bb2
Figconvnet refactor (#1339)
coreyjadams Jan 29, 2026
6362c65
Diffusion preconditioners refactor (#1317)
CharlelieLrt Jan 31, 2026
451d428
initial update for coding standards
dallasfoster Feb 3, 2026
b51aa70
Update GraphCast for model standards (#1358)
pzharrington Feb 3, 2026
b5c4ec6
depcration handling and argument improvements
dallasfoster Feb 3, 2026
ff057b6
Add experimental SO2 equivariant convolution and activation (#1367)
laserkelvin Feb 4, 2026
9fa5bb5
Update end year in license headers (#1362)
mnabian Feb 5, 2026
7b26bbb
Add concatenation wrapper for legacy diffusion models (#1366)
pzharrington Feb 5, 2026
5455011
Update transolver to comply with model standards (#1316)
coreyjadams Feb 5, 2026
4c18a44
Much more aggressive testing against entrypoints and registry. (#1290)
coreyjadams Feb 5, 2026
e94bea8
StormCast: Training improvements and code refactoring (#1379)
albertocarpentieri Feb 6, 2026
40d1319
Moving HealPix Ops into module folder (#1377)
NickGeneva Feb 8, 2026
c9d7603
Domino Model Compliance (#1314)
coreyjadams Feb 9, 2026
65318af
add verbose flag (#1386)
ktangsali Feb 9, 2026
ee9a12d
Release Notes on PhysicsNeMo-Mesh (#1385)
peterdsharpe Feb 10, 2026
35e6a9b
Adds `physicsnemo.mesh`, Part 2/2 (#1333)
peterdsharpe Feb 10, 2026
f2a93b3
Enhance version checking with less boiler plate and better messages. …
coreyjadams Feb 10, 2026
e76880d
Model standards Super Resolution Net (#1336)
ktangsali Feb 10, 2026
e0a3806
Add importlib_metadata to pyproject.toml to handle dependency checkin…
coreyjadams Feb 10, 2026
df618ea
Update train.py (#1396)
coreyjadams Feb 10, 2026
37b3e5c
HealDA layers and DiT patches for PNM Core (#1371)
aayushg55 Feb 11, 2026
171e2f2
Bug fix lennard jones example (#1395)
mnabian Feb 11, 2026
8e81b62
Refactor external models: DPOT, VFGN, TopoDiff, mesh_reduced (#1338)
mnabian Feb 11, 2026
bca90a1
Update SWE example (#1402)
mnabian Feb 11, 2026
bbdebec
Fix dependencies for a few examples (#1383)
ktangsali Feb 11, 2026
ab93e1d
Fix error with RNN example (#1384)
ktangsali Feb 12, 2026
5415257
Ensure datapipes is checked for optional imports too. (#1390)
coreyjadams Feb 12, 2026
b5d4ba1
Bug fixes for ShardTensor+SongUNet (#1400)
jleinonen Feb 12, 2026
246327e
Equivariant layers in three dimensions (#1372)
laserkelvin Feb 12, 2026
bc8905d
DiT Improvements (#1406)
pzharrington Feb 12, 2026
90be90e
FIGConvNet: fixed 'split_by_node_equal', supports multi-GPU execution…
weilr Feb 12, 2026
362e5cc
HealDA Sensor Embedder (#1397)
aayushg55 Feb 12, 2026
e60fbbb
Equivariant normalization layers (#1398)
laserkelvin Feb 12, 2026
ca309af
Fix torch version so import register_prop_rules allows 2.10 prereleas…
coreyjadams Feb 12, 2026
df817dc
Shard tensor dtensor interop (#1408)
coreyjadams Feb 13, 2026
2b63241
Migrate DiT from experimental (#1409)
pzharrington Feb 13, 2026
047df0c
Model standards for FNOs (#1335)
ktangsali Feb 13, 2026
a9a90b4
Enable physicsnemo compatibility usage. (#1382)
coreyjadams Feb 13, 2026
d51fca3
Geometry Guardrail (#1370)
mnabian Feb 13, 2026
61f5ffd
Add HealDA model (#1410)
aayushg55 Feb 13, 2026
5fddeaf
Added diffusion and models in migration guide (#1416)
CharlelieLrt Feb 13, 2026
0ce9a98
Fix model signature from `from_torch` (#1349) (#1350)
giprayogo Feb 13, 2026
c5356c9
Take two make zarr xarray h5py optional (#1414)
coreyjadams Feb 14, 2026
cb086ef
Implemented coding standards to the models in diffusion_unets (#1418)
CharlelieLrt Feb 14, 2026
9b94df7
Update __init__.py (#1419)
mnabian Feb 15, 2026
b4c0b2b
CUDA 12/13 Cross-Compatibility (#1412)
peterdsharpe Feb 17, 2026
b2056f4
add deprecation notices, remove jit
dallasfoster Feb 17, 2026
4e6a0a0
Merge branch 'main' into dallasf/update_pix2pix
dallasfoster Feb 17, 2026
3e61a6f
Fea model refactor fengwu pengwu swinrnn (#1420)
loliverhennigh Feb 17, 2026
b41b242
Crash inference bug fix (#1423)
mnabian Feb 17, 2026
5671e58
Fix missing num_steps parameter for stochastic sampler (#1364)
younes-abid Feb 17, 2026
1708820
Shard tensor view, reshape, and unsqueeze fixes (#1413)
coreyjadams Feb 17, 2026
94d209a
Fix kernel size handling in partial_na2d (#1424)
pzharrington Feb 17, 2026
98ccf9b
Fix the domino config path bug. (#1417)
coreyjadams Feb 18, 2026
04236a2
Refactored diffusion sampler (#1363)
CharlelieLrt Feb 18, 2026
8aa52ad
DPS and SDA guidance for diffusion (#1381)
CharlelieLrt Feb 18, 2026
62adbe4
Fea nn functional asv (#1373)
loliverhennigh Feb 18, 2026
0c83135
Fix sharded Group Norm. (#1421)
coreyjadams Feb 18, 2026
8266475
Fix view and reshape ops when shape is passed as a kwarg. This essen…
coreyjadams Feb 18, 2026
56d573b
Handle dtensor spec in sharded view (#1427)
pzharrington Feb 18, 2026
3ea8146
Mesh Caching Improvements (#1429)
peterdsharpe Feb 19, 2026
70b06ed
Modernize docker builds for 2.0 release (#1428)
ktangsali Feb 19, 2026
10317bf
Merge branch 'main' into dallasf/update_pix2pix
dallasfoster Feb 20, 2026
58 changes: 58 additions & 0 deletions .cursor/rules/mod-000a-reusable-layers-belong-in-nn.mdc
Original file line number Diff line number Diff line change
@@ -0,0 +1,58 @@
---
description: Reusable layers and building blocks should be placed in physicsnemo/nn, not physicsnemo/models. Examples include FullyConnected, attention layers, and UNetBlock.
alwaysApply: false
---

When creating or refactoring reusable layer code, rule MOD-000a must be followed. Explicitly reference "Following rule MOD-000a, which states that reusable layers should go in physicsnemo/nn..." when explaining placement decisions.

## MOD-000a: Reusable layers/blocks belong in physicsnemo.nn

**Description:**

Reusable layers that are the building blocks of more complex architectures
should go into `physicsnemo/nn`. Examples include `FullyConnected`, variants of
attention layers, and `UNetBlock` (a building block of a U-Net).

All layers that are directly exposed to the user should be imported in
`physicsnemo/nn/__init__.py`, such that they can be used as follows:

```python
from physicsnemo.nn import MyLayer
```

The only exception to this rule is for layers that are highly specific to a
single example. In this case, it may be acceptable to place them in a module
specific to the example code, such as `examples/<example_name>/utils/nn.py`.

**Rationale:**

Ensures consistency in the organization of reusable layers in the repository.
Keeping all reusable components in a single location makes them easy to find
and promotes code reuse across different models.

**Example:**

```python
# Good: Reusable layer in physicsnemo/nn/attention.py
class MultiHeadAttention(Module):
    """A reusable attention layer that can be used in various architectures."""
    pass

# Good: Import in physicsnemo/nn/__init__.py
from physicsnemo.nn.attention import MultiHeadAttention

# Good: Example-specific layer in examples/weather/utils/nn.py
class WeatherSpecificLayer(Module):
    """Layer highly specific to the weather forecasting example."""
    pass
```

**Anti-pattern:**

```python
# WRONG: Reusable layer placed in physicsnemo/models/
# File: physicsnemo/models/attention.py
class MultiHeadAttention(Module):
    """Should be in physicsnemo/nn/ not physicsnemo/models/"""
    pass
```
54 changes: 54 additions & 0 deletions .cursor/rules/mod-000b-complete-models-belong-in-models.mdc
@@ -0,0 +1,54 @@
---
description: Complete models composed of multiple layers should be placed in physicsnemo/models, not physicsnemo/nn. These are domain-specific or modality-specific models.
alwaysApply: false
---

When creating or refactoring complete model code, rule MOD-000b must be followed. Explicitly reference "Following rule MOD-000b, which states that complete models should go in physicsnemo/models..." when explaining placement decisions.

## MOD-000b: Complete models belong in physicsnemo.models

**Description:**

More complete models, composed of multiple layers and/or other sub-models,
should go into `physicsnemo/models`. All models that are directly exposed to
the user should be imported in `physicsnemo/models/__init__.py`, such that they
can be used as follows:

```python
from physicsnemo.models import MyModel
```

The only exception to this rule is for models that are highly specific to a
single example. In this case, it may be acceptable to place them in a module
specific to the example code, such as `examples/<example_name>/utils/nn.py`.

**Rationale:**

Ensures consistency and clarity in the organization of models in the repository,
in particular a clear separation between reusable layers and more complete
models that are applicable to a specific domain or specific data modality.

**Example:**

```python
# Good: Complete model in physicsnemo/models/transformer.py
class TransformerModel(Module):
    """A complete transformer model composed of attention and feedforward layers."""
    def __init__(self):
        super().__init__()
        self.attention = MultiHeadAttention(...)
        self.ffn = FeedForward(...)

# Good: Import in physicsnemo/models/__init__.py
from physicsnemo.models.transformer import TransformerModel
```

**Anti-pattern:**

```python
# WRONG: Complete model placed in physicsnemo/nn/
# File: physicsnemo/nn/transformer.py
class TransformerModel(Module):
    """Should be in physicsnemo/models/ not physicsnemo/nn/"""
    pass
```
47 changes: 47 additions & 0 deletions .cursor/rules/mod-001-use-physicsnemo-module-as-base-class.mdc
@@ -0,0 +1,47 @@
---
description: All model and layer classes must inherit from physicsnemo.Module (not torch.nn.Module directly) to ensure proper serialization, versioning, and registry functionality.
alwaysApply: false
---

When creating or modifying model classes, rule MOD-001 must be strictly followed. Explicitly reference "Following rule MOD-001, which states that all model classes must inherit from physicsnemo.Module..." when explaining inheritance decisions.

## MOD-001: Use physicsnemo.Module as model base classes

**Description:**

All model classes must inherit from `physicsnemo.Module`. Direct subclasses of
`torch.nn.Module` are not allowed. Direct subclasses of `physicsnemo.Module`
are allowed (note that `physicsnemo.Module` is a subclass of `torch.nn.Module`).
Ensure proper initialization of parent classes using `super().__init__()`. Pass
the `meta` argument to the `super().__init__()` call if appropriate, otherwise
set it manually with `self.meta = meta`.

**Rationale:**

Ensures invariants and functionality of the `physicsnemo.Module` class for all
models. In particular, instances of `physicsnemo.Module` benefit from features
that are not available in `torch.nn.Module` instances. Those include serialization
for checkpointing and loading modules and submodules, versioning system to
handle backward compatibility, as well as ability to be registered in the
`physicsnemo.registry` for easy instantiation and use in any codebase.

**Example:**

```python
from torch import nn

from physicsnemo import Module

class MyModel(Module):
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__(meta=MyModelMetaData())
        self.linear = nn.Linear(input_dim, output_dim)
```

**Anti-pattern:**

```python
from torch import nn

class MyModel(nn.Module):
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        self.linear = nn.Linear(input_dim, output_dim)
```
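The registry and checkpoint features listed in the rationale all hinge on the base-class initializer actually running. A stdlib-only toy of the pattern (the `Module` stand-in below is hypothetical and greatly simplified; it is not the real `physicsnemo.Module` implementation):

```python
# Toy stand-in for physicsnemo.Module: registers subclasses and stores `meta`.
class Module:
    _registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Registry feature: every subclass becomes discoverable by name.
        Module._registry[cls.__name__] = cls

    def __init__(self, meta=None):
        # Metadata hook used for versioning/serialization in the real class.
        self.meta = meta


class MyModel(Module):
    def __init__(self):
        super().__init__(meta={"version": 1})  # rule MOD-001: init the parent


model = MyModel()
print(Module._registry["MyModel"] is MyModel)  # True
print(model.meta)  # {'version': 1}
```

Skipping `super().__init__()` would leave `model.meta` unset, which is exactly the kind of silent breakage the rule guards against.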
@@ -0,0 +1,65 @@
---
description: New model classes should start in physicsnemo/experimental/nn or physicsnemo/experimental/models during development, where backward compatibility is not guaranteed.
alwaysApply: false
---

When creating new model or layer classes, rule MOD-002a must be followed. Explicitly reference "Following rule MOD-002a, which states that new models should start in physicsnemo/experimental/..." when explaining where to place new code.

## MOD-002a: New models and layers belong in physicsnemo.experimental

**Description:**

For the vast majority of models, new classes are created either in
`physicsnemo/experimental/nn` for reusable layers, or in
`physicsnemo/experimental/models` for more complete models. The `experimental`
folder is used to store models that are still under development (beta or alpha
releases); during this stage, backward compatibility is not guaranteed.

One exception is when the developer is highly confident that the model is
sufficiently mature and applicable to many domains or use cases. In this case
the model class can be created in the `physicsnemo/nn` or `physicsnemo/models`
folders directly, and backward compatibility is guaranteed.

Another exception is when the model class is highly specific to a single
example. In this case, it may be acceptable to place it in a module specific to
the example code, such as `examples/<example_name>/utils/nn.py`.

After staying in experimental for a sufficient amount of time (typically at
least 1 release cycle), the model class can be promoted to production. It is
then moved to the `physicsnemo/nn` or `physicsnemo/models` folders, based on
whether it's a reusable layer or complete model (see MOD-000a and MOD-000b).

**Note:** Per MOD-008a, MOD-008b, and MOD-008c, it is forbidden to move a model
out of the experimental stage/directory without the required CI tests.

**Rationale:**

The experimental stage allows rapid iteration without backward compatibility
constraints, enabling developers to refine APIs based on user feedback. This
protects users from unstable APIs while allowing innovation.

**Example:**

```python
# Good: Stage 1 - New experimental model
# File: physicsnemo/experimental/models/new_diffusion.py
class DiffusionModel(Module):
    """New diffusion model under active development. API may change."""
    pass

# Good: After 1+ release cycles, promoted to production
# File: physicsnemo/models/diffusion.py (moved from experimental/)
class DiffusionModel(Module):
    """Stable diffusion model with backward compatibility guarantees."""
    pass
```

**Anti-pattern:**

```python
# WRONG: New model directly in production folder
# File: physicsnemo/models/brand_new_model.py (should be in experimental/ first)
class BrandNewModel(Module):
    """Skipped experimental stage - risky for stability"""
    pass
```
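When a class is promoted out of `experimental`, a common way to keep the old import path alive during the transition is a PEP 562 module-level `__getattr__` shim. A stdlib-only sketch (the module path and class name here are hypothetical; this is not how physicsnemo necessarily implements it):

```python
import types
import warnings

# Stand-in for the class after promotion (e.g. now in physicsnemo/models/).
class DiffusionModel:
    pass

# Throwaway module object standing in for the OLD experimental import path.
old_location = types.ModuleType("physicsnemo.experimental.models.new_diffusion")

def _deprecated_getattr(name):
    """PEP 562 module-level __getattr__: warn, then forward to the new home."""
    if name == "DiffusionModel":
        warnings.warn(
            "Import DiffusionModel from its promoted location instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return DiffusionModel
    raise AttributeError(name)

old_location.__getattr__ = _deprecated_getattr

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cls = old_location.DiffusionModel  # old import path still resolves...

print(cls is DiffusionModel)        # True
print(caught[0].category.__name__)  # ...but warns: DeprecationWarning
```

In a real package the `__getattr__` function would simply live at module scope in the old file; the `types.ModuleType` construction above only makes the sketch self-contained.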
69 changes: 69 additions & 0 deletions .cursor/rules/mod-002b-add-deprecation-warnings-to-model.mdc
@@ -0,0 +1,69 @@
---
description: Model classes being deprecated must include deprecation warnings in both docstring and runtime, explaining why and what users should use instead, for at least 1 release cycle.
alwaysApply: false
---

When deprecating a model class, rule MOD-002b must be followed. Explicitly reference "Following rule MOD-002b, which requires adding deprecation warnings to both docstring and runtime..." when implementing deprecation.

## MOD-002b: Add deprecation warnings to deprecating model class

**Description:**

To move a model class in `physicsnemo/nn` or `physicsnemo/models` into the
pre-deprecation stage, the developer adds a warning message to the class,
indicating that it is deprecated and will be removed in a future release.

The warning should be a clear, concise message that explains why the model
class is being deprecated and what the user should do instead. It should be
added to the docstring and also raised at runtime; the developer is free to
choose the mechanism used to raise the deprecation warning.

A model class must remain in the pre-deprecation stage for at least 1 release
cycle before it can be deleted from the codebase.

**Rationale:**

Ensures users have sufficient time to migrate to newer alternatives, preventing
breaking changes that could disrupt their workflows. This graduated approach
balances innovation with stability, a critical requirement for a scientific
computing framework.

**Example:**

```python
# Good: Pre-deprecation with warning
# File: physicsnemo/models/old_diffusion.py
class OldDiffusionModel(Module):
    """
    Legacy diffusion model.

    .. deprecated:: 0.5.0
        ``OldDiffusionModel`` is deprecated and will be removed in version 0.7.0.
        Use :class:`~physicsnemo.models.NewDiffusionModel` instead.
    """

    def __init__(self):
        import warnings
        warnings.warn(
            "OldDiffusionModel is deprecated. Use NewDiffusionModel instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__()
```

**Anti-pattern:**

```python
# WRONG: No deprecation warning in code
# File: physicsnemo/models/old_model.py
class OldModel(Module):
    """Will be removed next release."""  # Docstring mentions it but no runtime warning
    def __init__(self):
        # Missing: warnings.warn(..., DeprecationWarning)
        super().__init__()

# WRONG: Deprecation without sufficient warning period
# (Model deprecated and removed in same release)
```
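Since MOD-002b requires the warning at runtime, it is worth pinning with a unit test so the warning cannot silently disappear. A stdlib-only sketch using `warnings.catch_warnings` (the class is a toy stand-in, not the real physicsnemo model):

```python
import warnings

class OldDiffusionModel:
    """Deprecated toy model (a stand-in, not the real physicsnemo class)."""
    def __init__(self):
        warnings.warn(
            "OldDiffusionModel is deprecated. Use NewDiffusionModel instead.",
            DeprecationWarning,
            stacklevel=2,
        )

# Record warnings so the test can assert the deprecation actually fires.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # don't let filters swallow the warning
    OldDiffusionModel()

print(len(caught))                                          # 1
print(issubclass(caught[0].category, DeprecationWarning))   # True
print("NewDiffusionModel" in str(caught[0].message))        # True
```

The `simplefilter("always")` call matters: `DeprecationWarning` is ignored by default outside of `__main__`, so without it the recorded list may be empty.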
50 changes: 50 additions & 0 deletions .cursor/rules/mod-002c-remove-deprecated-model-from-codebase.mdc
@@ -0,0 +1,50 @@
---
description: After at least 1 release cycle in pre-deprecation stage with warnings, deprecated model classes can be deleted from the codebase.
alwaysApply: false
---

When removing deprecated models, rule MOD-002c must be followed. Explicitly reference "Following rule MOD-002c, which states that a model can only be deleted after at least 1 release cycle in pre-deprecation..." when removing code.

## MOD-002c: Remove deprecated model from codebase

**Description:**

After staying in the pre-deprecation stage (Stage 3) for at least 1 release
cycle, the model class is considered deprecated (Stage 4). It can then be
deleted from the codebase.

A model class cannot be deleted without first spending at least 1 release cycle
in the pre-deprecation stage with proper deprecation warnings (see MOD-002b).

**Rationale:**

This ensures users have sufficient warning and time to migrate their code to
newer alternatives. Premature deletion of models would break user code without
adequate notice, violating the framework's commitment to stability.

**Example:**

```python
# Good: Model spent 1 release cycle in pre-deprecation (v0.5.0 with warnings)
# Now in v0.6.0, can be deleted
# File: physicsnemo/models/old_diffusion.py - DELETED

# Release timeline:
# v0.5.0: Added deprecation warnings (Stage 3)
# v0.6.0: Model can be safely removed (Stage 4)
```

**Anti-pattern:**

```python
# WRONG: Deleting model without deprecation period
# v0.5.0: Model exists without warnings
# v0.6.0: Model deleted - BREAKS USER CODE!

# WRONG: Breaking changes in production without deprecation cycle
# File: physicsnemo/models/diffusion.py
class DiffusionModel(Module):
    def __init__(self, new_required_param):  # Breaking change!
        # Changed API without deprecation warning - breaks user code
        pass
```