diff --git a/AGENTS.md b/AGENTS.md index 30ab4356..85ede860 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -8,6 +8,33 @@ referenced files before making changes. --- +## Contents + +- [Required Reading](#required-reading) +- [Core Rules](#core-rules) + - [Git Operations](#git-operations) + - [GitHub CLI (`gh`)](#github-cli-gh) + - [Code Editing](#code-editing) + - [Commit Message Generation](#commit-message-generation) +- [Validation Workflow](#validation-workflow) +- [Project Context](#project-context) +- [Design Principles](#design-principles) + - [Numerical correctness as an invariant](#numerical-correctness-as-an-invariant) + - [Topological correctness as an invariant](#topological-correctness-as-an-invariant) + - [Validation layering](#validation-layering) + - [Symbolic perturbation (SoS)](#symbolic-perturbation-sos) + - [Public‑API stability](#publicapi-stability) + - [Composability](#composability) + - [Idiomatic Rust as a proxy for mathematical clarity](#idiomatic-rust-as-a-proxy-for-mathematical-clarity) + - [Scientific notation in docs](#scientific-notation-in-docs) + - [Performance within scope](#performance-within-scope) + - [Testing mirrors the principles](#testing-mirrors-the-principles) +- [Testing](#testing) +- [Documentation Maintenance](#documentation-maintenance) +- [Agent Behavior Expectations](#agent-behavior-expectations) + +--- + ## Required Reading Before modifying code, agents MUST read: @@ -162,25 +189,11 @@ Refer to `docs/dev/commands.md` for full details. --- -## Testing Rules - -Testing guidance lives in: - -```text -docs/dev/testing.md -``` - -Key principle: - -- Rust changes must pass unit tests, integration tests, and documentation builds. 
- ---- - ## Project Context - **Language**: Rust - **Project**: d‑dimensional Delaunay triangulation library -- **MSRV**: 1.94 +- **MSRV**: 1.95 - **Edition**: 2024 - **Unsafe code**: forbidden (`#![forbid(unsafe_code)]`) @@ -192,7 +205,171 @@ docs/code_organization.md --- -## Testing Execution Reference +## Design Principles + +This is a scientific d‑dimensional Delaunay triangulation library. Design +decisions trade off in roughly this priority: **numerical correctness → +topological correctness → API stability → composability → idiomatic Rust → +performance within scope**. The sections below spell out what each means +in practice; when in doubt, favour the invariant over the convenient edit. + +### Numerical correctness as an invariant + +- Geometric predicates (`insphere`, `insphere_lifted`, `orientation`) use + the three‑stage pattern from `src/geometry/predicates.rs`: + **Stage 1** — provable f64 fast filter with a Shewchuk‑style errbound; + **Stage 2** — exact sign via Bareiss in `la-stack`; + **Stage 3** — deterministic `InSphere::BOUNDARY` / `Orientation::DEGENERATE` + fallback for non‑finite inputs. A new predicate must either plug into + this pattern or justify the deviation in its docs. +- No f64 operation may silently lose sign information. `unwrap_or(NaN)`, + `unwrap_or(f64::INFINITY)`, or "return `true` on error" are anti‑patterns. +- Algorithms cite their source (Shewchuk, Bowyer–Watson, Edelsbrunner, + Preparata–Shamos, …) in `REFERENCES.md` and document their + conditioning behaviour. +- When two predicate implementations cover the same question + (e.g. `insphere` vs `insphere_distance` vs `insphere_lifted`), a + proptest verifies they agree on the domain where all are defined. + +### Topological correctness as an invariant + +- Every mutating operation preserves the invariants checked by + `Tds::is_valid` (Level 1–3) and `DelaunayTriangulation::is_valid` + (Level 4). 
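The three-stage predicate pattern from the numerical-correctness section can be illustrated with a minimal, std-only 2D orientation sketch. This is a shape illustration only: Stage 2 here uses Kahan's FMA-based `diff_of_products` as a stand-in for the library's exact Bareiss elimination in `la-stack`, and the errbound constant is Shewchuk's `ccwerrboundA` for 2D.

```rust
#[derive(Debug, PartialEq, Eq)]
enum Orientation {
    Clockwise,
    Degenerate,
    CounterClockwise,
}

/// Kahan's accurate `a*b - c*d` using fused multiply-add.
/// Stand-in for the exact Stage-2 arithmetic; not the crate's implementation.
fn diff_of_products(a: f64, b: f64, c: f64, d: f64) -> f64 {
    let cd = c * d;
    let err = c.mul_add(d, -cd); // rounding error of c*d
    let dop = a.mul_add(b, -cd); // a*b - round(c*d), correctly rounded
    dop + err
}

fn orient_2d(pa: [f64; 2], pb: [f64; 2], pc: [f64; 2]) -> Orientation {
    // Stage 1 (hot): plain f64 determinant with a Shewchuk-style errbound.
    let detl = (pa[0] - pc[0]) * (pb[1] - pc[1]);
    let detr = (pa[1] - pc[1]) * (pb[0] - pc[0]);
    let det = detl - detr;
    let errbound = 3.330_669_073_875_47e-16 * (detl.abs() + detr.abs());
    if det > errbound {
        return Orientation::CounterClockwise;
    }
    if det < -errbound {
        return Orientation::Clockwise;
    }
    // Stage 2 (cold): recompute the sign more carefully.
    let det2 = diff_of_products(pa[0] - pc[0], pb[1] - pc[1], pa[1] - pc[1], pb[0] - pc[0]);
    // Stage 3 (very cold): deterministic fallback for non-finite inputs.
    if !det2.is_finite() {
        return Orientation::Degenerate;
    }
    if det2 > 0.0 {
        Orientation::CounterClockwise
    } else if det2 < 0.0 {
        Orientation::Clockwise
    } else {
        Orientation::Degenerate
    }
}

fn main() {
    assert_eq!(orient_2d([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]), Orientation::CounterClockwise);
    assert_eq!(orient_2d([0.0, 0.0], [0.0, 1.0], [1.0, 0.0]), Orientation::Clockwise);
    // Collinear input: Stage 1 is ambiguous, Stage 2 resolves it as degenerate.
    assert_eq!(orient_2d([0.0, 0.0], [1.0, 1.0], [2.0, 2.0]), Orientation::Degenerate);
}
```

Note how no branch returns an arbitrary default on error: the ambiguous region falls through to a slower but sign-preserving path, and non-finite input surfaces as an explicit `Degenerate` rather than a silent `true`/`false`.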
An operation that cannot preserve them must fail explicitly + rather than leave the triangulation in an inconsistent state. +- PL‑manifold invariants: facets have multiplicity 1 (boundary) or 2 + (interior); ridges are linked consistently; the Euler characteristic + matches the triangulation's `TopologyGuarantee`. The checks live in + `src/topology/manifold.rs` and `src/topology/characteristics/`. +- Repair paths (`repair_delaunay_with_flips`, `repair_facet_oversharing`, + `delaunayize_by_flips`) bound their work via explicit budgets + (`max_flips`, `max_iterations`, `max_cells_removed`) and surface + non‑convergence as a typed error — never by logging and proceeding. + +### Validation layering + +The library exposes four validation levels, each a superset of the last: + +1. **Level 1 — elements**: individual cells, vertices, facets are + internally consistent (dimensions, UUIDs, coordinate finiteness). +2. **Level 2 — structure**: adjacency pointers and neighbour links form a + valid incidence graph; no dangling keys. +3. **Level 3 — topology**: PL‑manifold‑with‑boundary, Euler characteristic, + ridge‑link consistency. +4. **Level 4 — Delaunay property**: every facet is locally Delaunay. + +Only Level 4 requires predicate evaluation; Levels 1–3 are pure graph +checks. Agents adding validation code should place it at the correct +layer and avoid reaching into lower layers unnecessarily. + +### Symbolic perturbation (SoS) + +Degenerate configurations (cospherical, collinear, coplanar inputs) are +resolved by Simulation‑of‑Simplicity in `src/geometry/sos.rs`. This is +a first‑class invariant, not an implementation detail — callers can +rely on predicates returning a consistent, total ordering even on +degenerate input, and tests under `tests/proptest_sos.rs` enforce that. + +### Public‑API stability + +- Error enums are `#[non_exhaustive]`; public wrapper types are + `#[must_use]`. New variants are additive. 
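The four validation levels from the validation-layering section compose as strict supersets; a schematic, std-only sketch (the `check_*` function names are illustrative placeholders, not the crate's API):

```rust
/// Schematic of the four validation levels; each level implies all lower ones.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Level {
    Elements,  // Level 1: per-element consistency
    Structure, // Level 2: incidence graph / adjacency
    Topology,  // Level 3: PL-manifold, Euler characteristic
    Delaunay,  // Level 4: locally Delaunay facets
}

fn validate(level: Level) -> Result<(), String> {
    check_elements()?; // Level 1 always runs
    if level >= Level::Structure {
        check_structure()?;
    }
    if level >= Level::Topology {
        check_topology()?;
    }
    if level >= Level::Delaunay {
        check_delaunay()?; // only this level evaluates geometric predicates
    }
    Ok(())
}

// Stubs for illustration; the real checks live in `Tds::is_valid` and
// `DelaunayTriangulation::is_valid`.
fn check_elements() -> Result<(), String> { Ok(()) }
fn check_structure() -> Result<(), String> { Ok(()) }
fn check_topology() -> Result<(), String> { Ok(()) }
fn check_delaunay() -> Result<(), String> { Ok(()) }

fn main() {
    assert!(validate(Level::Delaunay).is_ok());
    assert!(Level::Elements < Level::Delaunay); // derived Ord encodes the layering
}
```

The point of the sketch is the ordering: Levels 1–3 are pure graph checks, and predicate evaluation is confined to the single Level-4 branch.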
+- New functionality is additive: use `crate::prelude::*` (or the focused + `prelude::triangulation`, `prelude::query`, etc.) for ergonomic + re‑exports; never silently rename or remove a public item. +- Pre‑1.0 semver: `0.x.Y` is a patch‑level additive bump, `0.X.y` is a + minor bump that may include breaking changes. Conventional‑commit + types (`feat`, `fix`, `refactor`, …) mirror this convention. +- Publish documentation changes *before* bumping the crates.io version + (crates.io does not allow re‑publishing docs without a version bump). + +### Composability + +- Const‑generic `D` on every core type (`DelaunayTriangulation`, + `Tds`, `Cell`, `Vertex`, `Point`). + No runtime dimension. +- Per‑simplex data is stack‑allocated (`[T; D]` coordinates, + `SmallBuffer`). The + triangulation's topology is stored in `SlotMap` — heap‑backed by + necessity, not by accident. +- Feature flags isolate optional dependency weight. Default builds stay + dep‑minimal. Known flags: `dense-slotmap` (default), + `count-allocations`, `bench`, `bench-logging`, `test-debug`, + `slow-tests`. + +### Idiomatic Rust as a proxy for mathematical clarity + +- `#![forbid(unsafe_code)]` is a hard constraint, not a guideline. +- `const fn` for pure‑math helpers (`sign_to_orientation`, + `sign_to_insphere`, coordinate conversions) where the inputs allow. + Do not twist mutating APIs into `const fn` for its own sake. +- `Result<_, _Error>` for every fallible operation. Panics are reserved + for documented, debug‑only precondition violations; library code in + `src/` must not panic on user input. +- Borrow by default (`&T`, `&mut T`, `&[T]`); return borrowed views where + possible. `FacetView`, `AdjacencyIndex`, and the `cells()`/`vertices()` + iterators are examples. +- Type and function names match the textbook vocabulary: `Triangulation`, + `Vertex`, `Cell`, `Facet`, `Ridge`, `InSphere`, `Orientation`, + `insphere`, `circumcenter`, `circumradius`. 
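The `const fn` guidance above can be sketched with a hypothetical `sign_to_orientation` helper — pure, total, allocation-free, and evaluable at compile time (the crate's actual signature may differ):

```rust
#[derive(Debug, PartialEq, Eq)]
enum Orientation {
    Negative,
    Degenerate,
    Positive,
}

/// Hypothetical pure-math helper sketch; the crate's real signature may differ.
const fn sign_to_orientation(sign: i8) -> Orientation {
    if sign > 0 {
        Orientation::Positive
    } else if sign < 0 {
        Orientation::Negative
    } else {
        Orientation::Degenerate
    }
}

fn main() {
    // Usable in const context because the helper is `const fn`.
    const POS: Orientation = sign_to_orientation(1);
    assert_eq!(POS, Orientation::Positive);
    assert_eq!(sign_to_orientation(-3), Orientation::Negative);
    assert_eq!(sign_to_orientation(0), Orientation::Degenerate);
}
```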
Avoid Rust‑ecosystem + abstractions that obscure the math. +- Use `tracing::{debug,info,warn,error}!` for all runtime diagnostics. + Never `eprintln!` / `println!` outside examples and benches. + +### Scientific notation in docs + +- Unicode math (×, ≤, ≥, ∈, Σ, ², `2^-50`, …) is welcome in doc + comments — readability trumps ASCII‑only preference. +- Reference literature via `REFERENCES.md` numbered citations. +- State invariants mathematically where possible (e.g. + `χ(S^d) = 1 + (−1)^d`) rather than prose‑only. + +### Performance within scope + +- Performance is a design goal but strictly subordinate to the principles + above. Never trade correctness, stability, or clarity for speed; if + the two conflict, re‑scope the problem rather than compromise the + invariant. +- **In scope**: d‑dimensional Delaunay triangulations for small‑to‑medium + dimensions (typically 2 ≤ D ≤ 7), single‑threaded in‑memory + construction, `SlotMap`‑backed topology, Hilbert‑ordered insertion. +- **Out of scope**: massively parallel / GPU meshing, out‑of‑core + triangulations, sparse sampling, dynamic remeshing at scale. Those + belong to specialised tools (CGAL, TetGen, Gmsh). +- Within scope, prefer: + - allocation‑free hot paths (`SmallBuffer`, stack arrays, iterators) + - Shewchuk‑style f64 fast filters with `core::hint::cold_path()` on + exact‑arithmetic fallbacks (see `src/geometry/predicates.rs`) + - `const fn` where the inputs allow + - typed flip/insertion budgets rather than heuristic timeouts +- Validate any performance claim against one of the benchmark suites in + `benches/` (`ci_performance_suite`, `large_scale_performance`, + `profiling_suite`, `cold_path_predicates`, `microbenchmarks`, + `circumsphere_containment`, `topology_guarantee_construction`) before + relying on it. + +### Testing mirrors the principles + +- Unit tests cover known values, error paths, and dimension‑generic + correctness. 
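The `χ(S^d) = 1 + (−1)^d` invariant mentioned above can be checked against the boundary complex of a (d+1)-simplex, which triangulates S^d with C(d+2, k+1) faces of dimension k; a self-contained numeric check:

```rust
/// Binomial coefficient C(n, k) via the exact multiplicative loop.
fn binom(n: u64, k: u64) -> i64 {
    let mut r: i64 = 1;
    for i in 0..k {
        // Exact at every step: after iteration i, r == C(n, i + 1).
        r = r * (n - i) as i64 / (i + 1) as i64;
    }
    r
}

/// Euler characteristic of the boundary of a (d+1)-simplex, a triangulated
/// S^d with C(d + 2, k + 1) faces of dimension k for k = 0..=d.
fn euler_sphere(d: u64) -> i64 {
    (0..=d)
        .map(|k| {
            let faces = binom(d + 2, k + 1);
            if k % 2 == 0 { faces } else { -faces }
        })
        .sum()
}

fn main() {
    for d in 1..=7u64 {
        let expected = 1 + if d % 2 == 0 { 1 } else { -1 };
        assert_eq!(euler_sphere(d), expected); // χ(S^d) = 1 + (−1)^d
    }
}
```

For example, d = 2 recovers the boundary of a tetrahedron: 4 − 6 + 4 = 2.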
Dimension‑generic tests **must cover D=2 through D=5** + whenever feasible; use `pastey` macros to generate per‑dimension tests + (see `src/core/cell.rs`, `src/core/tds.rs` for patterns). +- Proptests under `tests/proptest_*.rs` cover algebraic and + topological invariants — round‑trips, Euler characteristic, orientation + sign agreement, SoS consistency — not just "does it not panic". +- Adversarial inputs (near‑boundary points, cospherical sets, degenerate + simplices, large coordinates) accompany well‑conditioned inputs in + both tests and benchmarks. +- When a public API has two paths for the same question (fast filter + + exact fallback, or two alternative predicates), a proptest verifies + they agree on the domain where all are defined. + +--- + +## Testing + +For *what* tests cover and why, see **Design Principles → Testing mirrors the principles** above. +For detailed rules (proptest conventions, adversarial inputs, etc.), see `docs/dev/testing.md`. Typical commands: @@ -203,8 +380,6 @@ just test-all just examples ``` -See `docs/dev/testing.md` for full testing guidance. - --- ## Documentation Maintenance @@ -217,16 +392,12 @@ See `docs/dev/testing.md` for full testing guidance. ## Agent Behavior Expectations -Agents should: - -- Prefer small, focused patches -- Follow Rust idioms and borrowing conventions -- Avoid introducing allocations unless necessary -- Avoid panics in library code -- Search documentation under `docs/` when unsure - -If multiple solutions exist, prefer the one that: +The invariants in **Design Principles** above are authoritative; this +section only lists expectations that are not already codified there. -1. Preserves API stability -2. Maintains generic const‑dimension architecture -3. Keeps code simple and maintainable +- Prefer small, focused patches. +- Search `docs/` and `docs/dev/` before inventing new conventions. 
+- When a design question is ambiguous, default to the trade‑off ordering + in Design Principles (numerical correctness → topological correctness + → API stability → composability → idiomatic Rust → performance). +- Keep code simple and maintainable when multiple correct solutions exist. diff --git a/CITATION.cff b/CITATION.cff index b65d6e2f..d03edb4f 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -2,7 +2,7 @@ cff-version: 1.2.0 message: "If you use this software, please cite it as below." type: software title: "delaunay: A d-dimensional Delaunay triangulation library" -version: 0.7.4 +version: 0.7.5 doi: 10.5281/zenodo.16931097 date-released: 2026-03-26 url: "https://github.com/acgetchell/delaunay" diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index ff074180..cd9a6abb 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -153,7 +153,7 @@ Before you begin, ensure you have: The project uses: - **Edition**: Rust 2024 -- **MSRV**: Rust 1.94.0 +- **MSRV**: Rust 1.95.0 - **Linting**: Strict clippy pedantic mode - **Testing**: Standard `#[test]` with comprehensive coverage - **Benchmarking**: Criterion with allocation tracking @@ -164,7 +164,7 @@ The project uses: When you enter the project directory, `rustup` will automatically: -- **Install the correct Rust version** (1.94.0) if you don't have it +- **Install the correct Rust version** (1.95.0) if you don't have it - **Switch to the pinned version** for this project - **Install required components** (clippy, rustfmt, rust-docs, rust-src) - **Add cross-compilation targets** for supported platforms @@ -179,7 +179,7 @@ When you enter the project directory, `rustup` will automatically: **First time in the project?** You'll see: ```text -info: syncing channel updates for '1.94.0-' +info: syncing channel updates for '1.95.0-' info: downloading component 'cargo' info: downloading component 'clippy' ... @@ -191,7 +191,7 @@ This is normal and only happens once. 
After that, the correct toolchain is used ```bash rustup show -# Should show: active toolchain: 1.94.0- (overridden by '/path/to/delaunay/rust-toolchain.toml') +# Should show: active toolchain: 1.95.0- (overridden by '/path/to/delaunay/rust-toolchain.toml') ``` ### Python Development Environment diff --git a/Cargo.lock b/Cargo.lock index b42f853e..7b53eb63 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -430,9 +430,9 @@ dependencies = [ [[package]] name = "la-stack" -version = "0.4.0" +version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "29642c18844c8b5eb0374363b0f7d0bf1977384db2b5d5496e2bcbad249c58bb" +checksum = "938f8e88a7405e62e3f32dbf8e5a438ddd25812c7e401123ae255b08598a6a3b" dependencies = [ "num-bigint", "num-rational", diff --git a/Cargo.toml b/Cargo.toml index 6b496b2d..e46246f1 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -2,7 +2,7 @@ name = "delaunay" version = "0.7.5" edition = "2024" -rust-version = "1.94" +rust-version = "1.95" authors = [ "Adam Getchell " ] homepage = "https://github.com/acgetchell/delaunay" documentation = "https://docs.rs/delaunay" @@ -18,7 +18,7 @@ readme = "README.md" [dependencies] allocation-counter = { version = "0.8.1", optional = true } # for memory profiling arc-swap = "1.9.1" -la-stack = { version = "0.4.0", features = [ "exact" ] } +la-stack = { version = "0.4.1", features = [ "exact" ] } tracing = "0.1.44" rustc-hash = "2.1.2" # Fast non-cryptographic hashing for performance smallvec = { version = "1.15.1", features = [ "serde" ] } # Stack allocation for small collections @@ -86,6 +86,11 @@ name = "large_scale_performance" path = "benches/large_scale_performance.rs" harness = false +[[bench]] +name = "cold_path_predicates" +path = "benches/cold_path_predicates.rs" +harness = false + [lints.rust] unsafe_code = "forbid" dead_code = "warn" diff --git a/benches/ci_performance_suite.rs b/benches/ci_performance_suite.rs index 488ab290..b650f5ab 100644 --- a/benches/ci_performance_suite.rs 
+++ b/benches/ci_performance_suite.rs @@ -141,15 +141,11 @@ fn find_seed_and_vertices( } fn bench_logging_enabled() -> bool { - std::env::var("DELAUNAY_BENCH_LOG") - .map(|value| value != "0") - .unwrap_or(false) + std::env::var("DELAUNAY_BENCH_LOG").is_ok_and(|value| value != "0") } fn bench_discover_seeds_enabled() -> bool { - std::env::var("DELAUNAY_BENCH_DISCOVER_SEEDS") - .map(|value| value != "0") - .unwrap_or(false) + std::env::var("DELAUNAY_BENCH_DISCOVER_SEEDS").is_ok_and(|value| value != "0") } fn bench_seed_search_limit() -> usize { diff --git a/benches/cold_path_predicates.rs b/benches/cold_path_predicates.rs new file mode 100644 index 00000000..1f4ae5e1 --- /dev/null +++ b/benches/cold_path_predicates.rs @@ -0,0 +1,254 @@ +//! Microbenchmark for `core::hint::cold_path` adoption in geometric predicates. +//! +//! This benchmark exercises the hot Stage-1 path of the [`insphere`] / [`insphere_lifted`] +//! predicates (and implicitly the orientation predicate they both invoke) across +//! 2D–5D. A small "near-boundary" group is included to guard against regressions +//! when `cold_path` is added to Stage-2 / Stage-3 branches. +//! +//! ## Stage anatomy +//! +//! Both `insphere_from_matrix` and `orientation_from_matrix` are structured as: +//! +//! 1. Stage 1 (hot): f64 fast filter with Shewchuk-style errbound. +//! 2. Stage 2 (cold): exact sign via Bareiss — only reached for ambiguous f64 +//! results or D ≥ 5. +//! 3. Stage 3 (very cold): non-finite fallback. +//! +//! Random, well-separated inputs hit Stage 1 almost exclusively. The +//! `near_boundary` group constructs test points very close to the circumsphere +//! boundary to exercise Stage 2. +//! +//! ## Usage +//! +//! ```bash +//! cargo bench --bench cold_path_predicates +//! ``` +//! +//! To save a baseline before applying `cold_path()` and compare afterwards: +//! +//! ```bash +//! cargo bench --bench cold_path_predicates -- --save-baseline pre +//! # (edit src/geometry/predicates.rs) +//! 
cargo bench --bench cold_path_predicates -- --baseline pre +//! ``` + +use criterion::{BenchmarkId, Criterion, Throughput, criterion_group, criterion_main}; +use delaunay::geometry::util::generate_random_points_seeded; +use delaunay::prelude::query::*; +use std::hint::black_box; + +/// Deterministic seed for query-point generation in the hot path. +const HOT_SEED: u64 = 0xC01D_BEEF_0000_CAFE_u64; +/// Deterministic seed for query-point generation in the near-boundary group. +const NEAR_BOUNDARY_SEED: u64 = 0x0000_0000_0000_BEEF_u64; +/// Number of queries per hot-path benchmark. Keep above the size where +/// per-iteration overhead dominates so Stage-1 improvements are visible. +const HOT_QUERIES: usize = 10_000; +/// Number of queries per near-boundary benchmark. Kept smaller because +/// Stage 2 is slower per call. +const NEAR_BOUNDARY_QUERIES: usize = 1_000; + +/// Standard D-dimensional simplex: origin + unit basis vectors. +fn standard_simplex() -> Vec> { + let mut pts = Vec::with_capacity(D + 1); + pts.push(Point::new([0.0; D])); + for i in 0..D { + let mut coords = [0.0; D]; + coords[i] = 1.0; + pts.push(Point::new(coords)); + } + pts +} + +/// Generate well-separated hot-path query points for dimension `D`. +/// +/// Uses the range `[-10, 10]` against a unit simplex so that the Shewchuk +/// errbound comfortably resolves the sign in Stage 1. +fn hot_queries() -> Vec> { + generate_random_points_seeded(HOT_QUERIES, (-10.0, 10.0), HOT_SEED) + .expect("failed to generate hot-path query points") +} + +/// Generate near-boundary query points for dimension `D`. +/// +/// Uses a narrow range centered on the standard simplex so many queries land +/// within the Stage-1 errbound window and spill into Stage 2. +fn near_boundary_queries() -> Vec> { + // Centered near the circumsphere radius of the standard simplex (~0.5 for + // the D = 3 unit case); the exact value is unimportant — we just want a + // high rate of errbound-ambiguous inputs. 
+ generate_random_points_seeded(NEAR_BOUNDARY_QUERIES, (0.40, 0.60), NEAR_BOUNDARY_SEED) + .expect("failed to generate near-boundary query points") +} + +/// Run `insphere` across `queries` against `simplex`, black-boxing each result. +fn run_insphere(simplex: &[Point], queries: &[Point]) { + for q in queries { + black_box(insphere(black_box(simplex), black_box(*q)).unwrap()); + } +} + +/// Run `insphere_lifted` across `queries` against `simplex`, black-boxing each result. +fn run_insphere_lifted(simplex: &[Point], queries: &[Point]) { + for q in queries { + black_box(insphere_lifted(black_box(simplex), black_box(*q)).unwrap()); + } +} + +/// Benchmark the Stage-1 hot path for `insphere` and `insphere_lifted`. +fn bench_hot_path(c: &mut Criterion) { + let mut group = c.benchmark_group("predicates/hot"); + group.throughput(Throughput::Elements(HOT_QUERIES as u64)); + + // 2D + { + let simplex = standard_simplex::<2>(); + let queries = hot_queries::<2>(); + group.bench_with_input( + BenchmarkId::new("insphere_2d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_2d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + // 3D + { + let simplex = standard_simplex::<3>(); + let queries = hot_queries::<3>(); + group.bench_with_input( + BenchmarkId::new("insphere_3d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_3d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + // 4D + { + let simplex = standard_simplex::<4>(); + let queries = hot_queries::<4>(); + group.bench_with_input( + BenchmarkId::new("insphere_4d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| 
run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_4d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + // 5D — Stage 1 in `insphere_from_matrix` is skipped because `det_errbound()` + // returns None for D ≥ 5; all queries go straight to Stage 2. This is kept + // here as a Stage-2-dominant reference group rather than a hot path. + { + let simplex = standard_simplex::<5>(); + let queries = hot_queries::<5>(); + group.bench_with_input( + BenchmarkId::new("insphere_5d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_5d", HOT_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + group.finish(); +} + +/// Benchmark the Stage-2 cold path via near-boundary queries (2D–4D). 
+fn bench_near_boundary(c: &mut Criterion) { + let mut group = c.benchmark_group("predicates/near_boundary"); + group.throughput(Throughput::Elements(NEAR_BOUNDARY_QUERIES as u64)); + + { + let simplex = standard_simplex::<2>(); + let queries = near_boundary_queries::<2>(); + group.bench_with_input( + BenchmarkId::new("insphere_2d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_2d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + { + let simplex = standard_simplex::<3>(); + let queries = near_boundary_queries::<3>(); + group.bench_with_input( + BenchmarkId::new("insphere_3d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_3d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + { + let simplex = standard_simplex::<4>(); + let queries = near_boundary_queries::<4>(); + group.bench_with_input( + BenchmarkId::new("insphere_4d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere(black_box(&simplex), black_box(&queries))); + }, + ); + group.bench_with_input( + BenchmarkId::new("insphere_lifted_4d", NEAR_BOUNDARY_QUERIES), + &(), + |b, ()| { + b.iter(|| run_insphere_lifted(black_box(&simplex), black_box(&queries))); + }, + ); + } + + group.finish(); +} + +criterion_group!(benches, bench_hot_path, bench_near_boundary); +criterion_main!(benches); diff --git a/benches/large_scale_performance.rs b/benches/large_scale_performance.rs index 51d36f1d..5624e1ce 100644 --- a/benches/large_scale_performance.rs +++ b/benches/large_scale_performance.rs @@ -243,7 +243,7 @@ fn bench_construction(c: &mut Criterion, dimension_name: 
&str, n // Adjust sample size for heavy cases to bound execution time if (D == 4 && n_points >= 5000) || D == 5 { group.sample_size(10); - group.measurement_time(Duration::from_secs(120)); + group.measurement_time(Duration::from_mins(2)); } group.bench_function("construct", |b| { @@ -312,7 +312,7 @@ fn bench_validation(c: &mut Criterion, dimension_name: &str, n_p // Adjust sample size for large cases and 5D if n_points >= 5000 || D == 5 { group.sample_size(10); - group.measurement_time(Duration::from_secs(120)); + group.measurement_time(Duration::from_mins(2)); } // Pre-generate triangulation for validation benchmarks @@ -352,7 +352,7 @@ fn bench_neighbor_queries( // Adjust sample size for very heavy cases (5D or large 4D) if D == 5 || (D == 4 && n_points >= 5000) { group.sample_size(10); - group.measurement_time(Duration::from_secs(120)); + group.measurement_time(Duration::from_mins(2)); } let seed = seed_for_case::(n_points); @@ -396,7 +396,7 @@ fn bench_vertex_iteration( // Adjust sample size for very heavy cases (5D or large 4D) if D == 5 || (D == 4 && n_points >= 5000) { group.sample_size(10); - group.measurement_time(Duration::from_secs(120)); + group.measurement_time(Duration::from_mins(2)); } let seed = seed_for_case::(n_points); @@ -431,7 +431,7 @@ fn bench_cell_iteration(c: &mut Criterion, dimension_name: &str, // Adjust sample size for very heavy cases (5D or large 4D) if D == 5 || (D == 4 && n_points >= 5000) { group.sample_size(10); - group.measurement_time(Duration::from_secs(120)); + group.measurement_time(Duration::from_mins(2)); } let seed = seed_for_case::(n_points); diff --git a/clippy.toml b/clippy.toml index 14401759..ba6c55f9 100644 --- a/clippy.toml +++ b/clippy.toml @@ -2,7 +2,7 @@ # This ensures consistent configuration across all developers and IDEs # Minimum Supported Rust Version - matches Cargo.toml and rust-toolchain.toml -msrv = "1.94" +msrv = "1.95" # Note: Lint levels are controlled via command-line flags in AGENTS.md: # cargo 
clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic -W clippy::nursery -W clippy::cargo diff --git a/docs/dev/commands.md b/docs/dev/commands.md index 0859b281..bbf98bf2 100644 --- a/docs/dev/commands.md +++ b/docs/dev/commands.md @@ -6,6 +6,26 @@ Agents must run appropriate checks after modifying code. --- +## Contents + +- [Core Workflow](#core-workflow) +- [Justfile Usage](#justfile-usage) +- [Formatting](#formatting) +- [Linting](#linting) +- [Documentation Validation](#documentation-validation) +- [Full CI Validation](#full-ci-validation) +- [Examples](#examples) +- [Spell Checking](#spell-checking) +- [TOML Formatting](#toml-formatting) +- [Shell Script Validation](#shell-script-validation) +- [JSON Validation](#json-validation) +- [GitHub Actions Validation](#github-actions-validation) +- [Recommended Command Matrix](#recommended-command-matrix) +- [CI Expectations](#ci-expectations) +- [Changelog](#changelog) + +--- + ## Core Workflow Typical development loop: diff --git a/docs/dev/debug_env_vars.md b/docs/dev/debug_env_vars.md index 33fb307a..6bc845f4 100644 --- a/docs/dev/debug_env_vars.md +++ b/docs/dev/debug_env_vars.md @@ -13,6 +13,24 @@ in release builds. 
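Activation checks for such environment variables typically follow the `is_ok_and` pattern shown in the `ci_performance_suite` hunk above; a self-contained sketch (the `flag_enabled` helper is illustrative, not a crate API):

```rust
use std::env::{self, VarError};

/// True when the variable is set to anything other than "0".
/// Illustrative helper mirroring the `is_ok_and` pattern used in
/// `benches/ci_performance_suite.rs`; not a crate API.
fn flag_enabled(var: Result<String, VarError>) -> bool {
    var.is_ok_and(|value| value != "0")
}

fn bench_logging_enabled() -> bool {
    flag_enabled(env::var("DELAUNAY_BENCH_LOG"))
}

fn main() {
    // Any non-"0" value activates the flag; unset or "0" disables it.
    assert!(flag_enabled(Ok("1".to_string())));
    assert!(flag_enabled(Ok("trace".to_string())));
    assert!(!flag_enabled(Ok("0".to_string())));
    assert!(!flag_enabled(Err(VarError::NotPresent)));
    // In a fresh environment the flag defaults to disabled.
    let _ = bench_logging_enabled();
}
```

Separating the parsing (`flag_enabled`) from the environment read keeps the activation rule deterministic and unit-testable without mutating process state.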
--- +## Contents + +- [Construction & Insertion](#construction--insertion) +- [Point Location](#point-location) +- [Conflict Region](#conflict-region) +- [Cavity & Hull](#cavity--hull) +- [Orientation](#orientation) +- [Neighbor Wiring](#neighbor-wiring) +- [Flip Repair](#flip-repair) +- [Predicates & Validation](#predicates--validation) +- [Point Generation](#point-generation) +- [Large-Scale Debug Test Harness](#large-scale-debug-test-harness) +- [Proptest Configuration](#proptest-configuration) +- [Benchmarks](#benchmarks) +- [Miscellaneous](#miscellaneous) + +--- + ## Construction & Insertion | Variable | Activation | Module | Description | diff --git a/docs/dev/rust.md b/docs/dev/rust.md index 42c4451c..2e3acddb 100644 --- a/docs/dev/rust.md +++ b/docs/dev/rust.md @@ -6,6 +6,35 @@ Agents must follow these rules when modifying or adding Rust code. --- +## Contents + +- [Core Principles](#core-principles) +- [Safety](#safety) +- [Dimension Generic Architecture](#dimension-generic-architecture) +- [Borrowing and Ownership](#borrowing-and-ownership) +- [Error Handling](#error-handling) +- [Panic Policy](#panic-policy) +- [Error Types](#error-types) + - [Orthogonal variants](#orthogonal-variants) + - [Struct‑with‑named‑fields throughout](#structwithnamedfields-throughout) + - [Preserve typed sources — no boxing, no `dyn Error`](#preserve-typed-sources--no-boxing-no-dyn-error) + - [Do not stringify; carry typed context instead](#do-not-stringify-carry-typed-context-instead) + - [Derive `Clone, Debug, Error, PartialEq, Eq`](#derive-clone-debug-error-partialeq-eq) +- [Imports](#imports) +- [Module Layout](#module-layout) +- [Prelude Design](#prelude-design) +- [Documentation](#documentation) +- [Integration Tests](#integration-tests) +- [Testing Expectations](#testing-expectations) +- [Performance](#performance) +- [External Dependencies](#external-dependencies) +- [Formatting and Lints](#formatting-and-lints) +- [API Stability](#api-stability) +- [Logging and 
Diagnostics](#logging-and-diagnostics) +- [Preferred Patch Style](#preferred-patch-style) + +--- + ## Core Principles This project is a **scientific computational geometry library**. @@ -226,13 +255,178 @@ Avoid large centralized error enums. Example: ```rust -#[derive(Debug, thiserror::Error)] +#[derive(Clone, Debug, thiserror::Error, PartialEq, Eq)] +#[non_exhaustive] pub enum InsertError { #[error("duplicate vertex")] DuplicateVertex, } ``` +The sub‑sections below spell out the conventions that keep error values +**debuggable, composable, and stable**. They apply to every new error enum +and to edits of existing ones. + +### Orthogonal variants + +Every variant represents a **distinct failure mode**. Two variants must not +overlap in meaning: if a caller can't decide which one to match on, the +taxonomy is wrong. + +When the same underlying condition occurs in two different contexts +(e.g. primary failure vs. failure during fallback), model it with +**separate variants that each carry the full typed context**, not with a +single variant and a free‑form `context: String` field. + +Good: + +```rust +pub enum DelaunayizeError { + TopologyRepairFailed { + source: PlManifoldRepairError, + }, + TopologyRepairFailedWithRebuild { + source: PlManifoldRepairError, + rebuild_error: DelaunayTriangulationConstructionError, + }, + DelaunayRepairFailed { + source: DelaunayRepairError, + }, + DelaunayRepairFailedWithRebuild { + source: DelaunayRepairError, + rebuild_error: DelaunayTriangulationConstructionError, + }, +} +``` + +Each pair `Failed` / `FailedWithRebuild` is **orthogonal**: the caller +always knows whether a fallback was attempted, and if so which specific +rebuild error was produced. + +### Struct‑with‑named‑fields throughout + +Prefer **struct variants with named fields** over positional (tuple) variants, +even for single‑field carriers. 
Named fields: + +- document the semantics of each payload at the declaration site, +- keep `Display` format strings readable (`{source}`, `{rebuild_error}`), +- let downstream code pattern‑match by field name without caring about + positional order, +- remain additive: adding a new field is a compile‑error surface that + forces callers to consider it. + +Prefer: + +```rust +#[error("Invalid facet index {index} for cell with {facet_count} facets")] +InvalidFacetIndex { + index: u8, + facet_count: usize, +}, +``` + +Avoid: + +```rust +#[error("Invalid facet index {0} for cell with {1} facets")] +InvalidFacetIndex(u8, usize), +``` + +### Preserve typed sources — no boxing, no `dyn Error` + +Source and "secondary" errors must be stored **by value as typed enums**, +not as `Box`, not as `anyhow::Error`, and not stringified into a +`message: String` field. The whole point of the taxonomy is that consumers +can pattern-match the full structured error, while [`Error::source`] +exposes whichever field is annotated as the primary source. + +- Use `#[source]` (and `#[from]` where the conversion is unambiguous) on + the typed field so `thiserror` wires up the source chain. +- Use `Box` only when the **typed** payload would make the enum + unbalanced in size (e.g. `NonConvergent` carries a fat diagnostics + struct); the inner type is still fully typed. +- Never replace a typed error with a `String` just because the enum lived + in a different crate — that erases variant and source information. + +```rust +// Good: typed rebuild error preserved by value; primary source chain intact. +TopologyRepairFailedWithRebuild { + #[source] + source: PlManifoldRepairError, + rebuild_error: DelaunayTriangulationConstructionError, +}, +``` + +```rust +// Bad: stringification erases the typed variant. 
+TopologyRepairFailedWithRebuild {
+    source: PlManifoldRepairError,
+    rebuild_message: String,
+},
+```
+
+### Do not stringify; carry typed context instead
+
+Free‑form `message: String` fields are only acceptable when the context is
+genuinely unstructured prose (rare). In practice, **most** "context" is
+structured — indices, counts, keys, UUIDs, other enums — and belongs in
+named fields of a struct variant.
+
+Prefer:
+
+```rust
+#[error("Ridge indices ({omit_a}, {omit_b}) out of bounds for cell {cell_key:?} with {vertex_count} vertices")]
+InvalidRidgeIndex {
+    cell_key: CellKey,
+    omit_a: u8,
+    omit_b: u8,
+    vertex_count: usize,
+},
+```
+
+Avoid:
+
+```rust
+#[error("Ridge indices out of bounds: {message}")]
+InvalidRidgeIndex {
+    message: String,
+},
+```
+
+Structured payloads support:
+
+- test assertions via `assert_eq!` / `matches!` without string parsing,
+- diagnostic tools that filter or aggregate by field,
+- localization and richer `Display` implementations without rewriting
+  call‑sites.
+
+### Derive `Clone, Debug, Error, PartialEq, Eq`
+
+All error enums should derive the standard set:
+
+```rust
+#[derive(Clone, Debug, thiserror::Error, PartialEq, Eq)]
+#[non_exhaustive]
+pub enum FooError { ... }
+```
+
+- `Clone` — lets callers attach the error to multiple diagnostics paths
+  and lets tests construct expected values once and compare them.
+- `Debug` — required for `Error`.
+- `thiserror::Error` — wires up `Display` and `source()`.
+- `PartialEq, Eq` — derivable whenever all payload types are `Eq`
+  (integers, strings, UUIDs, keys, other `Eq` enums, `Arc<T>` /
+  `Box<T>` where `T: Eq`). All error enums in this crate satisfy
+  this today. Skip these only when a payload genuinely cannot be `Eq`
+  (e.g. `f64`, `io::Error`, `Box<dyn Error>`) — none of which belong in
+  error values anyway.
+- `#[non_exhaustive]` — new variants must remain additive; downstream
+  matches need a `_` arm.
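As a self-contained sketch of why this derive set pays off in tests: `ParseError` and `parse` are hypothetical, and the hand-written `Display`/`Error` impls stand in for what `thiserror::Error` would generate.

```rust
use std::error::Error;
use std::fmt;

// Hypothetical enum following the convention; in this crate it would
// carry `#[derive(Clone, Debug, thiserror::Error, PartialEq, Eq)]`.
#[derive(Clone, Debug, PartialEq, Eq)]
#[non_exhaustive]
pub enum ParseError {
    UnexpectedToken { offset: usize },
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::UnexpectedToken { offset } => {
                write!(f, "unexpected token at byte {offset}")
            }
        }
    }
}

impl Error for ParseError {}

// Hypothetical fallible operation: reject the first '!' so the error
// carries a concrete, structured payload.
fn parse(input: &str) -> Result<(), ParseError> {
    match input.find('!') {
        Some(offset) => Err(ParseError::UnexpectedToken { offset }),
        None => Ok(()),
    }
}

fn main() {
    // `Clone` + `Eq`: build the expected value once, compare structurally,
    // and never parse `Display` output in a test.
    let expected = ParseError::UnexpectedToken { offset: 5 };
    assert_eq!(parse("hello!").unwrap_err(), expected);
    // `matches!` when only the variant matters.
    assert!(matches!(
        parse("oops!").unwrap_err(),
        ParseError::UnexpectedToken { .. }
    ));
}
```

Because every payload here is `Eq`, the assertions stay structural; nothing in the test depends on the exact `Display` wording.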
+ +Use `assert_eq!` for fixed‑shape variants in tests; prefer `matches!` for +"just check the variant" when the payload contains long free‑form strings +or nondeterministic samples. + --- ## Imports diff --git a/docs/dev/testing.md b/docs/dev/testing.md index dd0ba05f..85f8efa4 100644 --- a/docs/dev/testing.md +++ b/docs/dev/testing.md @@ -6,6 +6,30 @@ Agents must follow these expectations when adding or modifying Rust code. --- +## Contents + +- [Testing Philosophy](#testing-philosophy) +- [Test Types](#test-types) + - [Unit Tests](#unit-tests) + - [Integration Tests](#integration-tests) + - [Property Tests](#property-tests) +- [Floating-Point Comparisons](#floating-point-comparisons) +- [Degenerate Geometry](#degenerate-geometry) +- [Dimension Coverage (2D–5D)](#dimension-coverage-2d5d) +- [Deterministic Randomness](#deterministic-randomness) +- [Error Handling in Tests](#error-handling-in-tests) +- [Triangulation Validation](#triangulation-validation) +- [Core Geometry Invariants](#core-geometry-invariants) +- [Triangulation Validity Checklist](#triangulation-validity-checklist) +- [Test Commands](#test-commands) +- [Documentation Tests](#documentation-tests) +- [Performance-Sensitive Tests](#performance-sensitive-tests) +- [CI Expectations](#ci-expectations) +- [Test Module Organization](#test-module-organization) +- [Preferred Test Style](#preferred-test-style) + +--- + ## Testing Philosophy This project is a **scientific computational geometry library**. diff --git a/examples/convex_hull_3d_100_points.rs b/examples/convex_hull_3d_100_points.rs index 77022580..b924a805 100644 --- a/examples/convex_hull_3d_100_points.rs +++ b/examples/convex_hull_3d_100_points.rs @@ -209,7 +209,7 @@ fn extract_and_analyze_convex_hull(dt: &DelaunayTriangulation BistellarMove for ConstK { /// let err = FlipError::UnsupportedDimension { dimension: 1 }; /// assert!(matches!(err, FlipError::UnsupportedDimension { .. 
})); /// ``` -#[derive(Clone, Debug, Error)] +#[derive(Clone, Debug, Error, PartialEq, Eq)] #[non_exhaustive] pub enum FlipError { /// Flips are not supported for this dimension. @@ -1357,7 +1357,7 @@ impl fmt::Display for DelaunayRepairDiagnostics { /// }; /// assert!(matches!(err, DelaunayRepairError::InvalidTopology { .. })); /// ``` -#[derive(Clone, Debug, Error)] +#[derive(Clone, Debug, Error, PartialEq, Eq)] #[non_exhaustive] pub enum DelaunayRepairError { /// Repair did not converge within the flip budget. @@ -3252,25 +3252,25 @@ impl RepairDiagnostics { } } - fn record_inserted_simplex_skip(&mut self, sample: String) { + fn record_inserted_simplex_skip(&mut self, sample: impl FnOnce() -> String) { self.inserted_simplex_skips = self.inserted_simplex_skips.saturating_add(1); if self.inserted_simplex_sample.is_none() { - self.inserted_simplex_sample = Some(sample); + self.inserted_simplex_sample = Some(sample()); } } - fn record_invalid_ridge_multiplicity_skip(&mut self, sample: String) { + fn record_invalid_ridge_multiplicity_skip(&mut self, sample: impl FnOnce() -> String) { self.invalid_ridge_multiplicity_skips = self.invalid_ridge_multiplicity_skips.saturating_add(1); if self.invalid_ridge_multiplicity_sample.is_none() { - self.invalid_ridge_multiplicity_sample = Some(sample); + self.invalid_ridge_multiplicity_sample = Some(sample()); } } - fn record_missing_cell_skip(&mut self, sample: String) { + fn record_missing_cell_skip(&mut self, sample: impl FnOnce() -> String) { self.missing_cell_skips = self.missing_cell_skips.saturating_add(1); if self.missing_cell_sample.is_none() { - self.missing_cell_sample = Some(sample); + self.missing_cell_sample = Some(sample()); } } } @@ -3756,22 +3756,20 @@ where ) => { match &err { FlipError::InvalidRidgeMultiplicity { found } => { - diagnostics.record_invalid_ridge_multiplicity_skip(format!( - "ridge={ridge:?} multiplicity={found}" - )); + diagnostics.record_invalid_ridge_multiplicity_skip(|| { + 
format!("ridge={ridge:?} multiplicity={found}") + }); if repair_ridge_debug_enabled() { debug_ridge_context(tds, ridge, Some(*found)); } } - FlipError::InvalidRidgeAdjacency { .. } => { - if repair_ridge_debug_enabled() { - debug_ridge_context(tds, ridge, None); - } + FlipError::InvalidRidgeAdjacency { .. } if repair_ridge_debug_enabled() => { + debug_ridge_context(tds, ridge, None); } FlipError::MissingCell { cell_key } => { - diagnostics.record_missing_cell_skip(format!( - "ridge={ridge:?} missing_cell={cell_key:?}" - )); + diagnostics.record_missing_cell_skip(|| { + format!("ridge={ridge:?} missing_cell={cell_key:?}") + }); } _ => {} } @@ -3835,30 +3833,37 @@ where return Err(non_convergent_error(max_flips, stats, diagnostics, config)); } + // Shared trace tail for apply-k=3 skip arms below. + let log_apply_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!("[repair] skip k=3 flip (ridge={ridge:?}) reason={err}"); + tracing::debug!( + "[repair] skip k=3 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", + context.removed_face_vertices, + context.inserted_face_vertices, + context.removed_cells, + ); + } + }; let info = match apply_bistellar_flip_k3(tds, &context) { Ok(info) => info, + Err(err) if let FlipError::InsertedSimplexAlreadyExists { .. } = &err => { + diagnostics.record_inserted_simplex_skip(|| { + format!( + "ridge={ridge:?} removed_face={:?} inserted_face={:?}", + context.removed_face_vertices, context.inserted_face_vertices + ) + }); + log_apply_skip(&err); + return Ok(true); + } Err( err @ (FlipError::DegenerateCell | FlipError::DuplicateCell | FlipError::NonManifoldFacet - | FlipError::InsertedSimplexAlreadyExists { .. } | FlipError::CellCreation(_)), ) => { - if let FlipError::InsertedSimplexAlreadyExists { .. 
} = &err { - diagnostics.record_inserted_simplex_skip(format!( - "ridge={ridge:?} removed_face={:?} inserted_face={:?}", - context.removed_face_vertices, context.inserted_face_vertices - )); - } - if repair_trace_enabled() { - tracing::debug!("[repair] skip k=3 flip (ridge={ridge:?}) reason={err}"); - tracing::debug!( - "[repair] skip k=3 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", - context.removed_face_vertices, - context.inserted_face_vertices, - context.removed_cells, - ); - } + log_apply_skip(&err); return Ok(true); } Err(e) => return Err(e.into()), @@ -3916,21 +3921,26 @@ where queues.edge_queued.remove(&key); stats.facets_checked += 1; + // Shared trace tail for build-k=2-edge skip arms below. + let log_build_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!("[repair] skip inverse k=2 edge (edge={edge:?}) reason={err}"); + } + }; let context = match build_k2_flip_context_from_edge(tds, edge) { Ok(ctx) => ctx, + Err(ref err) if let FlipError::MissingCell { cell_key } = err => { + diagnostics + .record_missing_cell_skip(|| format!("edge={edge:?} missing_cell={cell_key:?}")); + log_build_skip(err); + return Ok(true); + } Err( - err @ (FlipError::InvalidEdgeMultiplicity { .. } + ref err @ (FlipError::InvalidEdgeMultiplicity { .. } | FlipError::InvalidEdgeAdjacency { .. } - | FlipError::MissingCell { .. } | FlipError::MissingVertex { .. }), ) => { - if let FlipError::MissingCell { cell_key } = &err { - diagnostics - .record_missing_cell_skip(format!("edge={edge:?} missing_cell={cell_key:?}")); - } - if repair_trace_enabled() { - tracing::debug!("[repair] skip inverse k=2 edge (edge={edge:?}) reason={err}"); - } + log_build_skip(err); return Ok(true); } Err(e) => return Err(e.into()), @@ -4006,30 +4016,37 @@ where return Err(non_convergent_error(max_flips, stats, diagnostics, config)); } + // Shared trace tail for apply-inverse-k=2 skip arms below. 
+ let log_apply_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!("[repair] skip inverse k=2 flip (edge={edge:?}) reason={err}"); + tracing::debug!( + "[repair] skip inverse k=2 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", + context.removed_face_vertices, + context.inserted_face_vertices, + context.removed_cells, + ); + } + }; let info = match apply_bistellar_flip_dynamic(tds, D, &context) { Ok(info) => info, + Err(err) if let FlipError::InsertedSimplexAlreadyExists { .. } = &err => { + diagnostics.record_inserted_simplex_skip(|| { + format!( + "edge={edge:?} removed_face={:?} inserted_face={:?}", + context.removed_face_vertices, context.inserted_face_vertices + ) + }); + log_apply_skip(&err); + return Ok(true); + } Err( err @ (FlipError::DegenerateCell | FlipError::DuplicateCell | FlipError::NonManifoldFacet - | FlipError::InsertedSimplexAlreadyExists { .. } | FlipError::CellCreation(_)), ) => { - if let FlipError::InsertedSimplexAlreadyExists { .. } = &err { - diagnostics.record_inserted_simplex_skip(format!( - "edge={edge:?} removed_face={:?} inserted_face={:?}", - context.removed_face_vertices, context.inserted_face_vertices - )); - } - if repair_trace_enabled() { - tracing::debug!("[repair] skip inverse k=2 flip (edge={edge:?}) reason={err}"); - tracing::debug!( - "[repair] skip inverse k=2 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", - context.removed_face_vertices, - context.inserted_face_vertices, - context.removed_cells, - ); - } + log_apply_skip(&err); return Ok(true); } Err(e) => return Err(e.into()), @@ -4087,24 +4104,29 @@ where queues.triangle_queued.remove(&key); stats.facets_checked += 1; + // Shared trace tail for build-k=3-triangle skip arms below. 
+ let log_build_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!( + "[repair] skip inverse k=3 triangle (triangle={triangle:?}) reason={err}" + ); + } + }; let context = match build_k3_flip_context_from_triangle(tds, triangle) { Ok(ctx) => ctx, + Err(ref err) if let FlipError::MissingCell { cell_key } = err => { + diagnostics.record_missing_cell_skip(|| { + format!("triangle={triangle:?} missing_cell={cell_key:?}") + }); + log_build_skip(err); + return Ok(true); + } Err( - err @ (FlipError::InvalidTriangleMultiplicity { .. } + ref err @ (FlipError::InvalidTriangleMultiplicity { .. } | FlipError::InvalidTriangleAdjacency { .. } - | FlipError::MissingCell { .. } | FlipError::MissingVertex { .. }), ) => { - if let FlipError::MissingCell { cell_key } = &err { - diagnostics.record_missing_cell_skip(format!( - "triangle={triangle:?} missing_cell={cell_key:?}" - )); - } - if repair_trace_enabled() { - tracing::debug!( - "[repair] skip inverse k=3 triangle (triangle={triangle:?}) reason={err}" - ); - } + log_build_skip(err); return Ok(true); } Err(e) => return Err(e.into()), @@ -4169,32 +4191,37 @@ where return Err(non_convergent_error(max_flips, stats, diagnostics, config)); } + // Shared trace tail for apply-inverse-k=3 skip arms below. + let log_apply_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!("[repair] skip inverse k=3 flip (triangle={triangle:?}) reason={err}"); + tracing::debug!( + "[repair] skip inverse k=3 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", + context.removed_face_vertices, + context.inserted_face_vertices, + context.removed_cells, + ); + } + }; let info = match apply_bistellar_flip_dynamic(tds, D - 1, &context) { Ok(info) => info, + Err(err) if let FlipError::InsertedSimplexAlreadyExists { .. 
} = &err => { + diagnostics.record_inserted_simplex_skip(|| { + format!( + "triangle={triangle:?} removed_face={:?} inserted_face={:?}", + context.removed_face_vertices, context.inserted_face_vertices + ) + }); + log_apply_skip(&err); + return Ok(true); + } Err( err @ (FlipError::DegenerateCell | FlipError::DuplicateCell | FlipError::NonManifoldFacet - | FlipError::InsertedSimplexAlreadyExists { .. } | FlipError::CellCreation(_)), ) => { - if let FlipError::InsertedSimplexAlreadyExists { .. } = &err { - diagnostics.record_inserted_simplex_skip(format!( - "triangle={triangle:?} removed_face={:?} inserted_face={:?}", - context.removed_face_vertices, context.inserted_face_vertices - )); - } - if repair_trace_enabled() { - tracing::debug!( - "[repair] skip inverse k=3 flip (triangle={triangle:?}) reason={err}" - ); - tracing::debug!( - "[repair] skip inverse k=3 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", - context.removed_face_vertices, - context.inserted_face_vertices, - context.removed_cells, - ); - } + log_apply_skip(&err); return Ok(true); } Err(e) => return Err(e.into()), @@ -4256,22 +4283,27 @@ where }; stats.facets_checked += 1; + // Shared trace tail for build-k=2-facet skip arms below. + let log_build_skip = |err: &FlipError| { + if repair_trace_enabled() { + tracing::debug!("[repair] skip k=2 facet (facet={facet:?}) reason={err}"); + } + }; let context = match build_k2_flip_context(tds, facet) { Ok(ctx) => ctx, + Err(ref err) if let FlipError::MissingCell { cell_key } = err => { + diagnostics + .record_missing_cell_skip(|| format!("facet={facet:?} missing_cell={cell_key:?}")); + log_build_skip(err); + return Ok(true); + } Err( - err @ (FlipError::BoundaryFacet { .. } - | FlipError::MissingCell { .. } + ref err @ (FlipError::BoundaryFacet { .. } | FlipError::MissingNeighbor { .. } | FlipError::InvalidFacetAdjacency { .. } | FlipError::InvalidFacetIndex { .. 
}), ) => { - if let FlipError::MissingCell { cell_key } = &err { - diagnostics - .record_missing_cell_skip(format!("facet={facet:?} missing_cell={cell_key:?}")); - } - if repair_trace_enabled() { - tracing::debug!("[repair] skip k=2 facet (facet={facet:?}) reason={err}"); - } + log_build_skip(err); return Ok(true); } Err(e) => return Err(e.into()), @@ -4329,40 +4361,47 @@ where return Err(non_convergent_error(max_flips, stats, diagnostics, config)); } + // Shared trace tail for apply-k=2-facet skip arms below. + let log_apply_skip = |err: &FlipError| { + if std::env::var_os("DELAUNAY_REPAIR_DEBUG_FACETS").is_some() { + tracing::debug!( + facet = ?facet, + reason = %err, + removed_face = ?context.removed_face_vertices, + inserted_face = ?context.inserted_face_vertices, + removed_cells = ?context.removed_cells, + "[repair] skip k=2 flip" + ); + } + if repair_trace_enabled() { + tracing::debug!("[repair] skip k=2 flip (facet={facet:?}) reason={err}"); + tracing::debug!( + "[repair] skip k=2 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", + context.removed_face_vertices, + context.inserted_face_vertices, + context.removed_cells, + ); + } + }; let info = match apply_bistellar_flip_k2(tds, &context) { Ok(info) => info, + Err(err) if let FlipError::InsertedSimplexAlreadyExists { .. } = &err => { + diagnostics.record_inserted_simplex_skip(|| { + format!( + "facet={facet:?} removed_face={:?} inserted_face={:?}", + context.removed_face_vertices, context.inserted_face_vertices + ) + }); + log_apply_skip(&err); + return Ok(true); + } Err( err @ (FlipError::DegenerateCell | FlipError::DuplicateCell | FlipError::NonManifoldFacet - | FlipError::InsertedSimplexAlreadyExists { .. 
} | FlipError::CellCreation(_)), ) => { - if std::env::var_os("DELAUNAY_REPAIR_DEBUG_FACETS").is_some() { - tracing::debug!( - facet = ?facet, - reason = %err, - removed_face = ?context.removed_face_vertices, - inserted_face = ?context.inserted_face_vertices, - removed_cells = ?context.removed_cells, - "[repair] skip k=2 flip" - ); - } - if let FlipError::InsertedSimplexAlreadyExists { .. } = &err { - diagnostics.record_inserted_simplex_skip(format!( - "facet={facet:?} removed_face={:?} inserted_face={:?}", - context.removed_face_vertices, context.inserted_face_vertices - )); - } - if repair_trace_enabled() { - tracing::debug!("[repair] skip k=2 flip (facet={facet:?}) reason={err}"); - tracing::debug!( - "[repair] skip k=2 flip context removed_face={:?} inserted_face={:?} removed_cells={:?}", - context.removed_face_vertices, - context.inserted_face_vertices, - context.removed_cells, - ); - } + log_apply_skip(&err); return Ok(true); } Err(e) => return Err(e.into()), @@ -5073,6 +5112,7 @@ mod tests { use crate::triangulation::delaunay::DelaunayTriangulation; use crate::vertex; use rand::{RngExt, SeedableRng, rngs::StdRng}; + use std::sync::atomic::{AtomicUsize, Ordering}; fn init_tracing() { static INIT: std::sync::Once = std::sync::Once::new(); @@ -5512,6 +5552,62 @@ mod tests { assert_eq!(diagnostics.cycle_detections, 2); assert_eq!(diagnostics.cycle_samples, vec![10]); } + + #[test] + fn test_skip_recording_is_lazy() { + let mut diagnostics = RepairDiagnostics::default(); + let call_count = AtomicUsize::new(0); + + // First call: sample slot is None, closure must be invoked. + diagnostics.record_inserted_simplex_skip(|| { + call_count.fetch_add(1, Ordering::Relaxed); + "first".to_owned() + }); + assert_eq!(diagnostics.inserted_simplex_skips, 1); + assert_eq!( + diagnostics.inserted_simplex_sample.as_deref(), + Some("first") + ); + assert_eq!(call_count.load(Ordering::Relaxed), 1); + + // Second call: sample already set, closure must NOT be invoked. 
+ diagnostics.record_inserted_simplex_skip(|| { + call_count.fetch_add(1, Ordering::Relaxed); + "second".to_owned() + }); + assert_eq!(diagnostics.inserted_simplex_skips, 2); + assert_eq!( + diagnostics.inserted_simplex_sample.as_deref(), + Some("first") + ); + assert_eq!(call_count.load(Ordering::Relaxed), 1); + + // Same contract for ridge-multiplicity and missing-cell helpers. + let ridge_calls = AtomicUsize::new(0); + diagnostics.record_invalid_ridge_multiplicity_skip(|| { + ridge_calls.fetch_add(1, Ordering::Relaxed); + "ridge".to_owned() + }); + diagnostics.record_invalid_ridge_multiplicity_skip(|| { + ridge_calls.fetch_add(1, Ordering::Relaxed); + "ridge2".to_owned() + }); + assert_eq!(diagnostics.invalid_ridge_multiplicity_skips, 2); + assert_eq!(ridge_calls.load(Ordering::Relaxed), 1); + + let cell_calls = AtomicUsize::new(0); + diagnostics.record_missing_cell_skip(|| { + cell_calls.fetch_add(1, Ordering::Relaxed); + "cell".to_owned() + }); + diagnostics.record_missing_cell_skip(|| { + cell_calls.fetch_add(1, Ordering::Relaxed); + "cell2".to_owned() + }); + assert_eq!(diagnostics.missing_cell_skips, 2); + assert_eq!(cell_calls.load(Ordering::Relaxed), 1); + } + #[derive(Debug, Clone, PartialEq, Eq)] struct TopologySnapshot { vertex_uuids: Vec, @@ -7086,6 +7182,56 @@ mod tests { assert!(tds.is_valid().is_ok()); } + #[test] + fn test_flip_error_partial_eq() { + let unsupported_1 = FlipError::UnsupportedDimension { dimension: 1 }; + let unsupported_1_copy = FlipError::UnsupportedDimension { dimension: 1 }; + let unsupported_2 = FlipError::UnsupportedDimension { dimension: 2 }; + assert_eq!(unsupported_1, unsupported_1_copy); + assert_ne!(unsupported_1, unsupported_2); + + assert_ne!(FlipError::DegenerateCell, FlipError::DuplicateCell); + assert_eq!(FlipError::NonManifoldFacet, FlipError::NonManifoldFacet); + + let ridge_4 = FlipError::InvalidRidgeMultiplicity { found: 4 }; + let ridge_4_copy = FlipError::InvalidRidgeMultiplicity { found: 4 }; + let ridge_5 = 
FlipError::InvalidRidgeMultiplicity { found: 5 }; + assert_eq!(ridge_4, ridge_4_copy); + assert_ne!(ridge_4, ridge_5); + } + + #[test] + fn test_delaunay_repair_error_partial_eq() { + use crate::core::triangulation::TopologyGuarantee; + + let post_test = DelaunayRepairError::PostconditionFailed { + message: "test".to_string(), + }; + let post_test_copy = DelaunayRepairError::PostconditionFailed { + message: "test".to_string(), + }; + let post_other = DelaunayRepairError::PostconditionFailed { + message: "other".to_string(), + }; + assert_eq!(post_test, post_test_copy); + assert_ne!(post_test, post_other); + + let topo_err = DelaunayRepairError::InvalidTopology { + required: TopologyGuarantee::PLManifold, + found: TopologyGuarantee::Pseudomanifold, + message: "test", + }; + let topo_err_copy = DelaunayRepairError::InvalidTopology { + required: TopologyGuarantee::PLManifold, + found: TopologyGuarantee::Pseudomanifold, + message: "test", + }; + assert_eq!(topo_err, topo_err_copy); + + // Different variants are never equal. + assert_ne!(post_test, topo_err); + } + #[test] fn test_repair_queue_k2_local_seed() { init_tracing(); diff --git a/src/geometry/predicates.rs b/src/geometry/predicates.rs index 56cdbd62..076c824e 100644 --- a/src/geometry/predicates.rs +++ b/src/geometry/predicates.rs @@ -15,6 +15,7 @@ use crate::geometry::util::{ squared_norm, }; use crate::prelude::CircumcenterError; +use core::hint::cold_path; use num_traits::Float; /// Convert an exact determinant sign (from `det_sign_exact`) to an [`Orientation`]. @@ -87,7 +88,10 @@ pub(crate) fn insphere_from_matrix( } // Stage 2: exact sign via Bareiss — reached for ambiguous f64 results - // (D ≤ 4) or always for D ≥ 5. + // (D ≤ 4) or always for D ≥ 5. `cold_path()` nudges the optimizer to + // keep Stage 1 lean; for D ≤ 4 with well-separated inputs, the vast + // majority of calls return before reaching this point. 
+ cold_path(); let exact_is_safe = det_direct.is_some_and(f64::is_finite) || (0..k).all(|i| (0..k).all(|j| matrix.get(i, j).is_some_and(f64::is_finite))); if exact_is_safe && let Ok(sign) = matrix.det_sign_exact() { @@ -96,6 +100,7 @@ pub(crate) fn insphere_from_matrix( // Stage 3: sign is unresolvable (non-finite entries prevent exact // arithmetic from running). + cold_path(); InSphere::BOUNDARY } @@ -129,7 +134,9 @@ pub(crate) fn orientation_from_matrix( } // Stage 2: exact sign via Bareiss — reached for ambiguous f64 results - // (D ≤ 4) or always for D ≥ 5. + // (D ≤ 4) or always for D ≥ 5. See `insphere_from_matrix` for why this + // is annotated cold. + cold_path(); let exact_is_safe = det_direct.is_some_and(f64::is_finite) || (0..k).all(|i| (0..k).all(|j| matrix.get(i, j).is_some_and(f64::is_finite))); if exact_is_safe && let Ok(sign) = matrix.det_sign_exact() { @@ -137,6 +144,7 @@ pub(crate) fn orientation_from_matrix( } // Stage 3: sign is unresolvable (same reasoning as insphere_from_matrix). + cold_path(); Orientation::DEGENERATE } diff --git a/src/geometry/robust_predicates.rs b/src/geometry/robust_predicates.rs index 935eb577..dffc06fa 100644 --- a/src/geometry/robust_predicates.rs +++ b/src/geometry/robust_predicates.rs @@ -15,6 +15,7 @@ use crate::geometry::point::Point; use crate::geometry::traits::coordinate::{ Coordinate, CoordinateConversionError, CoordinateScalar, }; +use core::hint::cold_path; use std::sync::LazyLock; static STRICT_INSPHERE_CONSISTENCY: LazyLock = @@ -192,11 +193,14 @@ where // Strategy 3: Geometric + SoS fallback — only reached when exact-sign // computation itself failed (e.g. unsupported matrix size for D ≥ 6). + // `cold_path()` nudges the optimizer to keep Strategies 1–2 lean; the + // vast majority of calls return before reaching this point. // // First try insphere_distance (circumcenter/radius based — no matrix // determinant needed, works at any dimension). This handles the // non-degenerate cases correctly. 
Only if the result is BOUNDARY // (truly degenerate) do we apply SoS tie-breaking. + cold_path(); if let Ok(geometric_result) = super::predicates::insphere_distance(simplex_points, *test_point) && geometric_result != InSphere::BOUNDARY { @@ -305,6 +309,9 @@ where // Get simplex orientation for correct interpretation. let orientation = robust_orientation(simplex_points)?; if matches!(orientation, Orientation::DEGENERATE) { + // DEGENERATE simplices are rare in well-conditioned inputs; the + // BOUNDARY early-return is a cold path. + cold_path(); return Ok(InSphere::BOUNDARY); } let orient_sign: i8 = if matches!(orientation, Orientation::POSITIVE) { diff --git a/src/geometry/util/circumsphere.rs b/src/geometry/util/circumsphere.rs index 3987d3ce..4cb6fdeb 100644 --- a/src/geometry/util/circumsphere.rs +++ b/src/geometry/util/circumsphere.rs @@ -10,6 +10,7 @@ use super::norms::{hypot, squared_norm}; use crate::geometry::matrix::matrix_set; use crate::geometry::point::Point; use crate::geometry::traits::coordinate::{Coordinate, CoordinateScalar}; +use core::hint::cold_path; use la_stack::{DEFAULT_PIVOT_TOL, LaError, Vector as LaVector}; // Re-export error type @@ -157,6 +158,10 @@ where })? .into_array(), Err(LaError::Singular { .. }) => { + // Exact-arithmetic fallback: LU rejected the system as + // near-singular, so we pay for BigRational Gaussian elimination. + // This path is cold — well-conditioned simplices return above. 
+            cold_path();
             #[cfg(debug_assertions)]
             if std::env::var_os("DELAUNAY_DEBUG_LU_FALLBACK").is_some() {
                 tracing::debug!("circumcenter<{D}>: LU near-singular, using solve_exact_f64");
@@ -169,6 +174,7 @@
             .into_array()
         }
         Err(e) => {
+            cold_path();
             return Err(CircumcenterError::MatrixInversionFailed {
                 details: format!("LU factorization failed: {e}"),
             });
@@ -344,6 +350,19 @@ mod tests {
         assert_relative_eq!(center.coords()[1], 0.75, epsilon = 1e-10);
     }
 
+    #[test]
+    fn test_circumradius_with_center_empty_point_set() {
+        // Hits the `points.is_empty()` early-return branch in
+        // `circumradius_with_center` (previously only exercised by
+        // `circumcenter`).
+        let points: Vec<Point<f64, 3>> = Vec::new();
+        let center = Point::new([0.0, 0.0, 0.0]);
+        match circumradius_with_center(&points, &center) {
+            Err(CircumcenterError::EmptyPointSet) => {}
+            other => panic!("expected EmptyPointSet, got {other:?}"),
+        }
+    }
+
     #[test]
     fn predicates_circumradius_2d() {
         let points = vec![
diff --git a/src/triangulation/delaunayize.rs b/src/triangulation/delaunayize.rs
index 64984705..8601fe00 100644
--- a/src/triangulation/delaunayize.rs
+++ b/src/triangulation/delaunayize.rs
@@ -58,7 +58,9 @@ use crate::core::traits::data_type::DataType;
 use crate::core::vertex::Vertex;
 use crate::geometry::kernel::{ExactPredicates, Kernel};
 use crate::geometry::traits::coordinate::CoordinateScalar;
-use crate::triangulation::delaunay::{DelaunayRepairHeuristicConfig, DelaunayTriangulation};
+use crate::triangulation::delaunay::{
+    DelaunayRepairHeuristicConfig, DelaunayTriangulation, DelaunayTriangulationConstructionError,
+};
 use thiserror::Error;
 
 // =============================================================================
@@ -159,6 +161,19 @@ pub struct DelaunayizeOutcome {
 /// - **Delaunay repair** failed (step 2), with optional context about a
 ///   fallback rebuild attempt.
/// +/// # Orthogonality +/// +/// The four variants are mutually exclusive by failure mode: +/// - Topology repair, fallback not attempted -> [`TopologyRepairFailed`](Self::TopologyRepairFailed). +/// - Topology repair, fallback also failed -> [`TopologyRepairFailedWithRebuild`](Self::TopologyRepairFailedWithRebuild). +/// - Delaunay repair, fallback not attempted -> [`DelaunayRepairFailed`](Self::DelaunayRepairFailed). +/// - Delaunay repair, fallback also failed -> [`DelaunayRepairFailedWithRebuild`](Self::DelaunayRepairFailedWithRebuild). +/// +/// The `*WithRebuild` variants preserve **both** the primary repair error and +/// the secondary construction error as typed values (no stringification), +/// so consumers can inspect both errors via pattern matching; the primary +/// repair error is exposed via [`Error::source`](std::error::Error::source). +/// /// # Examples /// /// ```rust @@ -172,36 +187,53 @@ pub struct DelaunayizeOutcome { /// found: TopologyGuarantee::Pseudomanifold, /// message: "requires manifold", /// }, -/// context: "fallback not enabled".to_string(), /// }; /// assert!(err.to_string().contains("Delaunay repair failed")); /// ``` -#[derive(Clone, Debug, Error)] +#[derive(Clone, Debug, Error, PartialEq, Eq)] #[non_exhaustive] pub enum DelaunayizeError { - /// PL-manifold topology repair failed. - #[error("Topology repair failed: {0}")] - TopologyRepairFailed(#[from] PlManifoldRepairError), + /// PL-manifold topology repair failed; no fallback rebuild was attempted + /// (fallback disabled, or the caller's config did not request one). + #[error("Topology repair failed: {source}")] + TopologyRepairFailed { + /// The underlying topology-repair error. + #[from] + #[source] + source: PlManifoldRepairError, + }, - /// Delaunay flip repair failed (and fallback was not enabled or also failed). - #[error("Delaunay repair failed ({context}): {source}")] + /// PL-manifold topology repair failed **and** the fallback vertex-set + /// rebuild also failed. 
Both errors are preserved as typed values. + #[error("Topology repair failed ({source}); fallback rebuild also failed: {rebuild_error}")] + TopologyRepairFailedWithRebuild { + /// The underlying topology-repair error that triggered the fallback. + #[source] + source: PlManifoldRepairError, + /// The construction error from the subsequent vertex-set rebuild attempt. + rebuild_error: DelaunayTriangulationConstructionError, + }, + + /// Delaunay flip repair failed; no fallback rebuild was attempted + /// (fallback disabled, or the caller's config did not request one). + #[error("Delaunay repair failed: {source}")] DelaunayRepairFailed { /// The underlying flip-repair error. + #[from] #[source] source: DelaunayRepairError, - /// Operational context (e.g. "fallback not enabled" or - /// "fallback rebuild also failed: ..."). - context: String, }, -} -impl From for DelaunayizeError { - fn from(err: DelaunayRepairError) -> Self { - Self::DelaunayRepairFailed { - source: err, - context: "fallback not enabled".to_string(), - } - } + /// Delaunay flip repair failed **and** the fallback vertex-set rebuild + /// also failed. Both errors are preserved as typed values. + #[error("Delaunay repair failed ({source}); fallback rebuild also failed: {rebuild_error}")] + DelaunayRepairFailedWithRebuild { + /// The underlying flip-repair error that triggered the fallback. + #[source] + source: DelaunayRepairError, + /// The construction error from the subsequent vertex-set rebuild attempt. + rebuild_error: DelaunayTriangulationConstructionError, + }, } // ============================================================================= @@ -222,11 +254,20 @@ impl From for DelaunayizeError { /// # Errors /// /// Returns [`DelaunayizeError`] if: -/// - Topology repair fails ([`TopologyRepairFailed`](DelaunayizeError::TopologyRepairFailed)). 
-/// - Topology repair fails ([`TopologyRepairFailed`](DelaunayizeError::TopologyRepairFailed)).
-/// - Delaunay flip repair fails and fallback is disabled
+/// - Topology repair fails and no fallback rebuild was attempted
+///   ([`TopologyRepairFailed`](DelaunayizeError::TopologyRepairFailed)).
+/// - Topology repair fails **and** the fallback vertex-set rebuild also
+///   fails
+///   ([`TopologyRepairFailedWithRebuild`](DelaunayizeError::TopologyRepairFailedWithRebuild)).
+/// - Delaunay flip repair fails and no fallback rebuild was attempted
 ///   ([`DelaunayRepairFailed`](DelaunayizeError::DelaunayRepairFailed)).
-/// - Fallback rebuild also fails (reported via
-///   [`DelaunayRepairFailed`](DelaunayizeError::DelaunayRepairFailed) with context).
+/// - Delaunay flip repair fails **and** the fallback vertex-set rebuild also
+///   fails
+///   ([`DelaunayRepairFailedWithRebuild`](DelaunayizeError::DelaunayRepairFailedWithRebuild)).
+///
+/// The `*WithRebuild` variants preserve both errors as typed fields, so
+/// consumers can pattern-match each of them;
+/// [`Error::source`](std::error::Error::source) exposes the primary repair error.
+///
 /// # Examples
 ///
 /// ```rust
@@ -245,6 +286,10 @@ impl From<DelaunayRepairError> for DelaunayizeError {
 /// let outcome = delaunayize_by_flips(&mut dt, DelaunayizeConfig::default()).unwrap();
 /// assert!(outcome.topology_repair.succeeded);
 /// ```
+#[expect(
+    clippy::result_large_err,
+    reason = "DelaunayizeError preserves typed source and rebuild_error values on the *WithRebuild variants (no boxing) so callers can pattern-match both errors while Error::source exposes the primary repair error; this is a cold error path."
+)]
 pub fn delaunayize_by_flips(
     dt: &mut DelaunayTriangulation,
     config: DelaunayizeConfig,
@@ -273,14 +318,11 @@ where
         max_iterations: config.topology_max_iterations,
         max_cells_removed: config.topology_max_cells_removed,
     };
-    let topology_stats = match repair_facet_oversharing(
-        &mut dt.as_triangulation_mut().tds,
-        &pl_config,
-    ) {
-        Ok(stats) => stats,
-        Err(topo_err) => {
-            if let Some(ref verts) = fallback_vertices {
-                // Topology repair failed but fallback is enabled — try rebuilding.
+    let topology_stats =
+        match repair_facet_oversharing(&mut dt.as_triangulation_mut().tds, &pl_config) {
+            Ok(stats) => stats,
+            // Topology repair failed but fallback is enabled — try rebuilding.
+            Err(topo_err) if let Some(ref verts) = fallback_vertices => {
                 match DelaunayTriangulation::with_kernel(&dt.as_triangulation().kernel, verts) {
                     Ok(rebuilt) => {
                         *dt = rebuilt;
@@ -291,21 +333,15 @@ where
                         });
                     }
                     Err(rebuild_err) => {
-                        return Err(DelaunayizeError::TopologyRepairFailed(
-                            PlManifoldRepairError::Tds(
-                                crate::core::tds::TdsError::InconsistentDataStructure {
-                                    message: format!(
-                                        "topology repair failed ({topo_err}); fallback rebuild also failed: {rebuild_err}"
-                                    ),
-                                },
-                            ),
-                        ));
+                        return Err(DelaunayizeError::TopologyRepairFailedWithRebuild {
+                            source: topo_err,
+                            rebuild_error: rebuild_err,
+                        });
                     }
                 }
             }
-            return Err(topo_err.into());
-        }
-    };
+            Err(topo_err) => return Err(topo_err.into()),
+        };

     // Step 2: Flip-based Delaunay repair.
     let delaunay_result = if let Some(max_flips) = config.delaunay_max_flips {
@@ -344,9 +380,9 @@ where
                     used_fallback_rebuild: true,
                 })
             }
-            Err(rebuild_err) => Err(DelaunayizeError::DelaunayRepairFailed {
+            Err(rebuild_err) => Err(DelaunayizeError::DelaunayRepairFailedWithRebuild {
                 source: repair_err,
-                context: format!("fallback rebuild also failed: {rebuild_err}"),
+                rebuild_error: rebuild_err,
             }),
         }
     } else {
@@ -384,6 +420,7 @@ mod tests {
         assert_eq!(config.topology_max_iterations, 64);
         assert_eq!(config.topology_max_cells_removed, 10_000);
         assert!(!config.fallback_rebuild);
+        assert!(config.delaunay_max_flips.is_none());
     }

     // =============================================================================
diff --git a/tests/delaunayize_workflow.rs b/tests/delaunayize_workflow.rs
index 274a8725..cb8a77fb 100644
--- a/tests/delaunayize_workflow.rs
+++ b/tests/delaunayize_workflow.rs
@@ -8,9 +8,13 @@
 //! - Repeat-run determinism for outcome stats
 //! - Multi-dimensional coverage (2D–3D)

+use delaunay::core::algorithms::flips::DelaunayRepairError;
+use delaunay::core::triangulation::TriangulationConstructionError;
 use delaunay::prelude::triangulation::delaunayize::*;
 use delaunay::prelude::triangulation::flips::BistellarFlips;
+use delaunay::triangulation::delaunay::DelaunayTriangulationConstructionError;
 use delaunay::triangulation::flips::FacetHandle;
+use std::error::Error;

 // =============================================================================
 // HELPER FUNCTIONS
 // =============================================================================
@@ -311,39 +315,244 @@ fn test_flip_breaks_delaunay_then_delaunayize_restores() {
 // =============================================================================
 // ERROR VARIANT TESTS
 // =============================================================================

-/// Verify that `DelaunayizeError::TopologyRepairFailed` is constructible and
-/// displays correctly (via the `From` impl).
+/// Verify that `DelaunayizeError::TopologyRepairFailed` is constructible via
+/// the `From` impl and displays correctly.
 #[test]
 fn test_error_display_topology_repair_failed() {
-    use delaunay::triangulation::delaunayize::{DelaunayizeError, PlManifoldRepairError};
-
     let inner = PlManifoldRepairError::NoProgress {
         over_shared_facets: 3,
         iterations: 5,
         cells_removed: 10,
     };
-    let err: DelaunayizeError = inner.into();
+    let err: DelaunayizeError = inner.clone().into();
     let msg = err.to_string();
     assert!(msg.contains("Topology repair failed"), "{msg}");
     assert!(msg.contains("3 over-shared facets"), "{msg}");
+
+    // Typed source is preserved end-to-end — no stringification.
+    assert_eq!(
+        err,
+        DelaunayizeError::TopologyRepairFailed { source: inner }
+    );
 }

-/// Verify that `DelaunayizeError::DelaunayRepairFailed` preserves the source error.
+/// Verify that `DelaunayizeError::DelaunayRepairFailed` preserves the typed
+/// source error via the `From` impl.
 #[test]
 fn test_error_display_delaunay_repair_failed() {
-    use delaunay::core::algorithms::flips::DelaunayRepairError;
-    use delaunay::triangulation::delaunayize::DelaunayizeError;
-    #[allow(unused_imports)]
-    use std::error::Error;
-
     let inner = DelaunayRepairError::PostconditionFailed {
         message: "test postcondition".to_string(),
     };
-    let err: DelaunayizeError = inner.into();
+    let err: DelaunayizeError = inner.clone().into();
     let msg = err.to_string();
     assert!(msg.contains("Delaunay repair failed"), "{msg}");
     assert!(msg.contains("test postcondition"), "{msg}");
-    assert!(msg.contains("fallback not enabled"), "{msg}");
+
+    // Typed source is preserved end-to-end — no stringification.
+    assert_eq!(
+        err,
+        DelaunayizeError::DelaunayRepairFailed { source: inner }
+    );
+}
+
+/// Verify that `DelaunayizeError::TopologyRepairFailedWithRebuild` preserves
+/// **both** the typed [`PlManifoldRepairError`] source and the typed
+/// [`DelaunayTriangulationConstructionError`] rebuild error, and exposes
+/// the primary source via [`Error::source`].
+///
+/// Regression guard: an earlier version of the fallback-rebuild-failure arm
+/// stringified the topology error into a `TdsError::InconsistentDataStructure`,
+/// which erased the typed variant and the source chain.
+#[test]
+fn test_error_display_topology_repair_with_rebuild() {
+    let topo_err = PlManifoldRepairError::NoProgress {
+        over_shared_facets: 3,
+        iterations: 5,
+        cells_removed: 10,
+    };
+    let rebuild_err: DelaunayTriangulationConstructionError =
+        TriangulationConstructionError::GeometricDegeneracy {
+            message: "synthetic rebuild degeneracy".to_string(),
+        }
+        .into();
+    let err = DelaunayizeError::TopologyRepairFailedWithRebuild {
+        source: topo_err.clone(),
+        rebuild_error: rebuild_err.clone(),
+    };
+
+    // Display carries both the primary topology failure and the rebuild error.
+    let msg = err.to_string();
+    assert!(msg.contains("Topology repair failed"), "{msg}");
+    assert!(msg.contains("3 over-shared facets"), "{msg}");
+    assert!(msg.contains("fallback rebuild also failed"), "{msg}");
+    assert!(msg.contains("synthetic rebuild degeneracy"), "{msg}");
+
+    // Both the typed source and rebuild error are preserved — no stringification.
+    assert_eq!(
+        err,
+        DelaunayizeError::TopologyRepairFailedWithRebuild {
+            source: topo_err,
+            rebuild_error: rebuild_err,
+        }
+    );
+
+    // Error::source() exposes the primary topology error so consumers can walk
+    // the source chain instead of pattern-matching.
+    let source = err
+        .source()
+        .expect("source() must be Some for the with-rebuild variant");
+    assert!(
+        source.to_string().contains("3 over-shared facets"),
+        "source display should match the underlying PlManifoldRepairError: {source}"
+    );
+}
+
+/// Verify that `DelaunayizeError::DelaunayRepairFailedWithRebuild` preserves
+/// **both** the typed [`DelaunayRepairError`] source and the typed
+/// [`DelaunayTriangulationConstructionError`] rebuild error.
+#[test]
+fn test_error_display_delaunay_repair_with_rebuild() {
+    let rebuild_err: DelaunayTriangulationConstructionError =
+        TriangulationConstructionError::GeometricDegeneracy {
+            message: "synthetic rebuild degeneracy".to_string(),
+        }
+        .into();
+    let source = DelaunayRepairError::PostconditionFailed {
+        message: "synthetic postcondition".to_string(),
+    };
+    let err = DelaunayizeError::DelaunayRepairFailedWithRebuild {
+        source: source.clone(),
+        rebuild_error: rebuild_err.clone(),
+    };
+
+    let msg = err.to_string();
+    assert!(msg.contains("Delaunay repair failed"), "{msg}");
+    assert!(msg.contains("synthetic postcondition"), "{msg}");
+    assert!(msg.contains("fallback rebuild also failed"), "{msg}");
+    assert!(msg.contains("synthetic rebuild degeneracy"), "{msg}");
+
+    // Both the typed source and rebuild error are preserved — no stringification.
+    assert_eq!(
+        err,
+        DelaunayizeError::DelaunayRepairFailedWithRebuild {
+            source,
+            rebuild_error: rebuild_err,
+        }
+    );
+
+    let source = err
+        .source()
+        .expect("source() must be Some for the with-rebuild variant");
+    assert!(
+        source.to_string().contains("synthetic postcondition"),
+        "source display should match the underlying DelaunayRepairError: {source}"
+    );
+}
+
+// =============================================================================
+// EXPLICIT FLIP BUDGET TESTS
+// =============================================================================
+
+/// Verify that `delaunayize_by_flips` works with an explicit `delaunay_max_flips`
+/// budget, which routes through `repair_delaunay_with_flips_advanced` instead
+/// of `repair_delaunay_with_flips`.
+#[test]
+fn test_delaunayize_with_explicit_flip_budget_3d() {
+    init_tracing();
+    let vertices = vec![
+        vertex!([0.0, 0.0, 0.0]),
+        vertex!([1.0, 0.0, 0.0]),
+        vertex!([0.0, 1.0, 0.0]),
+        vertex!([0.0, 0.0, 1.0]),
+        vertex!([0.5, 0.5, 0.5]),
+    ];
+    let mut dt: DelaunayTriangulation<_, (), (), 3> =
+        DelaunayTriangulation::new(&vertices).unwrap();
+
+    let config = DelaunayizeConfig {
+        delaunay_max_flips: Some(1000),
+        ..DelaunayizeConfig::default()
+    };
+    let outcome = delaunayize_by_flips(&mut dt, config).unwrap();
+    assert!(outcome.topology_repair.succeeded);
+    assert!(!outcome.used_fallback_rebuild);
+    assert!(dt.validate().is_ok());
+}
+
+/// Verify that `delaunayize_by_flips` handles both `delaunay_max_flips` and
+/// `fallback_rebuild` together on valid input.
+#[test]
+fn test_delaunayize_with_flip_budget_and_fallback_2d() {
+    init_tracing();
+    let vertices = vec![
+        vertex!([0.0, 0.0]),
+        vertex!([1.0, 0.0]),
+        vertex!([0.0, 1.0]),
+        vertex!([1.0, 1.0]),
+        vertex!([0.5, 0.5]),
+    ];
+    let mut dt: DelaunayTriangulation<_, (), (), 2> =
+        DelaunayTriangulation::new(&vertices).unwrap();
+
+    let config = DelaunayizeConfig {
+        delaunay_max_flips: Some(500),
+        fallback_rebuild: true,
+        ..DelaunayizeConfig::default()
+    };
+    let outcome = delaunayize_by_flips(&mut dt, config).unwrap();
+    assert!(outcome.topology_repair.succeeded);
+    // Already valid — fallback should not be triggered.
+    assert!(!outcome.used_fallback_rebuild);
+    assert!(dt.validate().is_ok());
+}
+
+/// Apply a k=2 flip to break the Delaunay property, then verify
+/// `delaunayize_by_flips` with an explicit flip budget restores it.
+#[test]
+fn test_flip_breaks_then_delaunayize_with_budget_restores_3d() {
+    init_tracing();
+    let vertices = vec![
+        vertex!([0.0, 0.0, 0.0]),
+        vertex!([1.0, 0.0, 0.0]),
+        vertex!([0.0, 1.0, 0.0]),
+        vertex!([0.0, 0.0, 1.0]),
+        vertex!([0.5, 0.5, 0.5]),
+    ];
+    let mut dt: DelaunayTriangulation<_, (), (), 3> =
+        DelaunayTriangulation::new(&vertices).unwrap();
+    assert!(dt.validate().is_ok());
+
+    // Collect candidate interior facets.
+    let mut candidate_facets = Vec::new();
+    for (ck, cell) in dt.cells() {
+        if let Some(neighbors) = cell.neighbors() {
+            for (i, n) in neighbors.iter().enumerate() {
+                if let (Some(_), Ok(idx)) = (n, u8::try_from(i)) {
+                    candidate_facets.push(FacetHandle::new(ck, idx));
+                }
+            }
+        }
+    }
+
+    let mut flipped = false;
+    for facet in candidate_facets {
+        if dt.flip_k2(facet).is_ok() {
+            flipped = true;
+            break;
+        }
+    }
+
+    if !flipped {
+        return;
+    }
+
+    let config = DelaunayizeConfig {
+        delaunay_max_flips: Some(1000),
+        ..DelaunayizeConfig::default()
+    };
+    let outcome = delaunayize_by_flips(&mut dt, config).unwrap();
+    assert!(outcome.topology_repair.succeeded);
+    assert!(dt.validate().is_ok());
 }

 // =============================================================================