Muon: Car-crash test MSE at probe location (Driver, Passenger):

</p>

## Community Dataset: OpenRadioss Bumper Beam

This section describes an open-source dataset generated with
[OpenRadioss](https://github.com/OpenRadioss/OpenRadioss) and contributed by the community
to provide a freely accessible alternative for experimenting with this example.

### Dataset Overview

<p align="center">
<img src="https://github.com/user-attachments/assets/d1f935a5-11ef-4519-baf4-6a1f94c42fcf" width="400">
</p>

The dataset is based on the
[OpenRadioss Bumper Beam example](http://openradioss.atlassian.net/wiki/spaces/OPENRADIOSS/pages/11075585/Bumper+Beam),
a standard crash benchmark consisting of a bumper beam impacting a rigid cylindrical pole.
Raw simulation data was generated by iteratively varying two design parameters across 131 runs:

| Parameter | Range | Description |
|---|---|---|
| Shell thickness | 1.2 mm – 2.0 mm | Applied uniformly to `PROP/SHELL` IDs 2 and 7 |
| Pole Y-axis offset | −500 mm – +500 mm | Shifts the rigid pole impact position along the beam |
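
The exact sampling scheme behind the 131 runs is not spelled out above, so as a purely illustrative sketch (grid sizes and run count are assumptions, not the actual design of experiments), a uniform sweep over the two parameters could look like:

```python
import numpy as np

# Hypothetical sweep: grid resolutions are chosen only for illustration
# and do not reproduce the actual 131-run design of experiments.
thicknesses = np.linspace(1.2, 2.0, 5)    # shell thickness (mm)
offsets = np.linspace(-500.0, 500.0, 11)  # pole Y-axis offset (mm)

# Cartesian product of the two design parameters, one tuple per run.
runs = [(t, y) for t in thicknesses for y in offsets]
print(len(runs))  # 55 illustrative combinations (not the actual 131)
```

Each tuple would then be written into a modified OpenRadioss input deck for one solver run.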


OpenRadioss ANIM output files are first converted to **d3plot** format using
[Vortex-Radioss](https://github.com/Vortex-CAE/Vortex-Radioss), then curated to **VTP** format
using [PhysicsNeMo-Curator](https://github.com/NVIDIA/physicsnemo-curator) for training.

The dataset (131 runs, including both raw d3plot and curated VTP files) is publicly available
on Hugging Face:

> 🤗 **[AIRBORNEPANDA/BumperBeamCrashExample](https://huggingface.co/datasets/AIRBORNEPANDA/BumperBeamCrashExample)**
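
To fetch the full dataset locally, the Hugging Face CLI can be used (assumes `huggingface_hub` is installed, e.g. via `pip install huggingface_hub`; the target directory name is arbitrary):

```shell
# Download all 131 runs (raw d3plot + curated VTP) from the Hugging Face Hub.
huggingface-cli download AIRBORNEPANDA/BumperBeamCrashExample \
    --repo-type dataset --local-dir ./BumperBeamCrashExample
```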

### Generating the Dataset Yourself

A full data generation pipeline is provided at
**[HoussemMouradi/OpenRadioss2PhysicsNeMo](https://github.com/HoussemMouradi/OpenRadioss2PhysicsNeMo)**.
It automates the complete workflow: modifying OpenRadioss input decks, running the solver,
converting ANIM → d3plot, and generating `GLOBAL_FEATURES.json`.
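
The real `GLOBAL_FEATURES.json` is produced by that pipeline; as a rough sketch of what such a file could contain (the schema, run names, and all values below are assumptions for illustration, keyed to the three global features used by the training config), one might write:

```python
import json

# Assumed schema: one entry per run, mapping run name to the three
# global features consumed during training. Values are placeholders.
global_features = {
    "run_001": {"thickness_scale": 1.2, "velocity_x": 15.0, "rwall_origin_y": -500.0},
    "run_002": {"thickness_scale": 1.4, "velocity_x": 15.0, "rwall_origin_y": -250.0},
}

with open("GLOBAL_FEATURES.json", "w") as f:
    json.dump(global_features, f, indent=2)

print(json.load(open("GLOBAL_FEATURES.json"))["run_001"]["thickness_scale"])  # 1.2
```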

Interactive Jupyter notebooks walk through every step and handle all dependency installation
automatically:

| Platform | Notebook |
|---|---|
| Windows (local Jupyter) | [OpenRadioss2PhysicsNeMo_windows.ipynb](https://github.com/HoussemMouradi/OpenRadioss2PhysicsNeMo/tree/main/OpenRadioss2PhysicsNeMo_Notebooks/OpenRadioss2PhysicsNeMo_windows.ipynb) |
| Linux (local Jupyter) | [OpenRadioss2PhysicsNeMo_linux.ipynb](https://github.com/HoussemMouradi/OpenRadioss2PhysicsNeMo/tree/main/OpenRadioss2PhysicsNeMo_Notebooks/OpenRadioss2PhysicsNeMo_linux.ipynb) |
| Google Colab ☁️ | <a href="https://colab.research.google.com/gist/HoussemMouradi/6e88c23e0b272621de0aa46c003c027e/openradioss2physicsnemo_linux.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> |

After running a notebook, follow the
[PhysicsNeMo-Curator crash ETL instructions](https://github.com/NVIDIA/physicsnemo-curator/blob/main/examples/structural_mechanics/crash/README.md)
to curate the d3plot outputs to VTP format for training.

### Training Configuration

The following configuration was used to train on the Bumper Beam dataset. It is based on the
[`bumper_geotransolver_oneshot.yaml`](https://github.com/NVIDIA/physicsnemo/blob/main/examples/structural_mechanics/crash/conf/bumper_geotransolver_oneshot.yaml)
experiment preset, using the **VTP reader**, **GeoTransolver (one-shot)** model, and
**point cloud** datapipe.

Run with:

```bash
python train.py --config-name=bumper_geotransolver_oneshot
```

The key configuration values are:

```yaml
# Bumper beam crash experiment using GeoTransolver (one-shot).
# Usage: python train.py --config-name=bumper_geotransolver_oneshot

experiment_name: "Bumper-GeoTransolver"

defaults:
  - reader: vtp
  - datapipe: point_cloud
  - model: geotransolver_one_shot
  - training: default
  - inference: default
  - _self_

# -- Data ----------------------------------------------------------------------
training:
  raw_data_dir: ./CURATED_DATA_VTP/TRAINING_DATA # path to curated VTP train split
  raw_data_dir_validation: ./CURATED_DATA_VTP/VALIDATION_DATA # path to curated VTP validation split
  global_features_filepath: ./CURATED_DATA_VTP/GLOBAL_FEATURES.json # path to GLOBAL_FEATURES.json
  optimizer: muon

  # -- Dataset -----------------------------------------------------------------
  num_time_steps: 11
  num_training_samples: 124
  num_validation_samples: 7

inference:
  raw_data_dir_test: ???

# -- Datapipe features ---------------------------------------------------------
datapipe:
  static_features: [thickness] # per-node static features for this dataset
  dynamic_targets:
    - effective_plastic_strain
    - stress_vm
  global_features:
    - thickness_scale # shell thickness (mm)
    - velocity_x # initial impact velocity in X (mm/ms)
    - rwall_origin_y # rigid pole Y-coordinate (mm)
  sample_type: all_time_steps

# -- Model ----------------------------------------------------------------------
model:
  functional_dim: 4 # input coords (x, y, z) + static_features (1)
  out_dim: 50 # (num_time_steps - 1) * 5 = 10 * 5 = 50
  global_dim: 3 # must match len(datapipe.global_features)
```
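
The dimension bookkeeping in the config above can be sanity-checked with a few lines of arithmetic. Note that the breakdown of the 5 output channels per time step is an inference (e.g. 3 displacement components plus the 2 dynamic targets), not something stated in the config itself:

```python
# Sanity-check the model dimensions against the datapipe settings.
num_time_steps = 11
num_coords = 3           # x, y, z
num_static_features = 1  # thickness
# Assumed breakdown: 5 channels per step, e.g. 3 displacement components
# plus the 2 dynamic targets (effective_plastic_strain, stress_vm).
channels_per_step = 5

functional_dim = num_coords + num_static_features
out_dim = (num_time_steps - 1) * channels_per_step

print(functional_dim, out_dim)  # 4 50
```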

### Results

<img width="480" height="206" alt="1769108712488" src="https://github.com/user-attachments/assets/53883c93-fe3f-4fb2-8d10-f474693137f3" />
<img width="480" height="206" alt="1769108706907" src="https://github.com/user-attachments/assets/85a5ed85-f835-45d7-a6b6-86ed875163c6" />


## TODO

- [ ] **Normalize global features**: Global features (e.g., velocity_x, thickness_scale, rwall_origin_y) are currently passed to the model without normalization. Add support for computing and applying per-feature mean/std (or similar) so global inputs are normalized consistently with node features and positions.