feat: instrument large-scale 4D debugging and widen local repair seeds #339
acgetchell merged 11 commits into main
Conversation
- Thread cavity-touched cells through insertion as `repair_seed_cells`
so post-insertion local Delaunay repair widens its frontier beyond
the inserted vertex star; cells shrunk out of the conflict region
during cavity reduction now participate in the next repair pass.
- Accumulate ridge-fan extras across every fan in a conflict region
before returning `RidgeFan`, letting one cavity-reduction step
shrink all detected fans at once instead of peeling them iteration
by iteration.
- Add release-visible diagnostic hooks routed through `tracing::debug!`:
`DELAUNAY_BULK_PROGRESS_EVERY` for periodic batch-construction
progress, `DELAUNAY_DEBUG_RETRYABLE_SKIP` for retryable
conflict-region skip traces, `DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE`
for the first cavity-reduction chain, `DELAUNAY_DEBUG_RIDGE_FAN_ONCE`
for the first detected ridge fan, and
`DELAUNAY_REPAIR_DEBUG_POSTCONDITION_FACET` /
`DELAUNAY_REPAIR_DEBUG_RIDGE_MIN_MULTIPLICITY` for repair
postcondition debugging.
- Thread `last_applied_flip` through repair postcondition verification
so unresolved k=2 facet and ridge snapshots can relate the violating
local star to the immediately preceding flip.
- Replace `ConflictError::InternalInconsistency { context: String }`
with a typed `InternalInconsistencySite` enum carrying structured
indices and counts, so callers can `matches!` on specific sites
instead of parsing prose.
- Generalize the large-scale incremental prefix bisect over `const D`,
add a 4D counterpart targeting the seeded 500-point repro
(`0xD225_B8A0_7E27_4AE6`), and expose it via
`just debug-large-scale-4d-incremental-bisect`.
- Switch the large-scale debug just recipes to `--release` and
document the 2026-04-23 re-verification: historical 35-point 3D and
100-point 4D correctness repros from #306/#307 now pass, while a
500-point 4D seed still fails all shuffled retries with
`Ridge fan detected: 4 facets share ridge with 3 vertices`.
- Default the large-scale debug harness tracing filter to `debug` when
any of the new release-visible env vars are present so library-side
`tracing::debug!` events surface without extra `RUST_LOG` wiring.
- Broaden `test_perturbation_retry_and_exhaustion_4d` and
`test_perturbation_retry_seeded_branch_4d` to iterate over 50 seeds
so the retry-path assertions stay robust to insertion-path
improvements that make individual well-conditioned seeds less likely
to trigger retries.
Walkthrough

Adds release-mode observability and parameterized large-scale debug recipes; records re-verification of historical 3D/4D seeded correctness repros as fixed; threads cavity-derived repair seeds through insertion; implements D≥4 local-repair flip budgets with bounded escalation; replaces many prints with structured tracing and adds a 4D 500-point regression test.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Runner as Runner/just
    participant Harness as Large-scale Harness
    participant Tri as Triangulation.insert
    participant Locate as ConflictRegion/Locate
    participant Repair as Repair Engine (flips)
    participant Tracing as Tracing
    Runner->>Harness: start release-mode bulk run (env vars)
    Harness->>Tri: insert vertex (bulk_index)
    Tri->>Locate: extract conflict region / cavity
    alt cavity shaping found
        Locate-->>Tri: cavity + repair_seed_cells
        Tri->>Repair: local repair attempt (uses repair_seed_cells, budget::<D>())
        Repair-->>Tri: RepairAttemptOutcome (stats, last_applied_flip)
        Tri->>Locate: replay postcondition (with last_applied_flip context)
        alt D ≥ 4 and repair stalls
            Tri->>Repair: escalated rerun (larger budget, full-TDS seed)
            Repair-->>Tri: escalated outcome
        end
    else simple insertion
        Locate-->>Tri: proceed commit
    end
    Tri->>Tracing: emit per-insert progress/retry/ridge-fan dump
    Tri-->>Harness: DetailedInsertionResult (outcome, stats, repair_seed_cells)
    Harness-->>Runner: report progress/results
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
Up to standards ✅

| Metric | Results |
|---|---|
| Issues | 🟢 |
| Complexity | -10 |
🟢 Coverage: 50.92% diff coverage · -0.52% coverage variation

| Metric | Results |
|---|---|
| Coverage variation | ✅ -0.52% (-1.00%) |
| Diff coverage | ✅ 50.92% |

Coverage variation details

| | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor commit (68deb62) | 14396 | 8994 | 62.48% |
| Head commit (a73e117) | 15301 (+905) | 9480 (+486) | 61.96% (-0.52%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`.

Diff coverage details

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#339) | 1251 | 637 | 50.92% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
tests/large_scale_debug.rs (1)
1154-1158: ⚠️ Potential issue | 🟡 Minor — Replace hardcoded boundary check with a dimension-aware lower bound.

Line 1154 uses `> 4`, which can run an invalid `minimal_prefix - 1` probe for higher dimensions. The guard should track the same lower bound used by the search (`D + 1`).

Suggested fix

```diff
-    if minimal_prefix > 4 {
+    if minimal_prefix > D + 1 {
         assert!(
             run_probe(minimal_prefix - 1).is_some_and(|result| result.is_ok()),
             "internal bisect inconsistency: prefix {} should pass",
             minimal_prefix - 1
         );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 1154-1158: the boundary check uses a hardcoded 4 and can underflow for higher dimensions; replace the condition `if minimal_prefix > 4` with a dimension-aware lower bound check using the same lower bound as the search (e.g., `if minimal_prefix > D + 1`) so that the subsequent probe `run_probe(minimal_prefix - 1)` only runs when `minimal_prefix - 1` is valid; update the guard around `run_probe` (referencing `minimal_prefix`, `run_probe`, and `D`) accordingly.

src/triangulation/delaunay.rs (1)
5501-5523: ⚠️ Potential issue | 🟠 Major — Validate ridge links against the full widened repair frontier.

`extra_seed_cells` lets the repair pass mutate cells outside the inserted vertex star, but the post-repair safety net still validates only `adjacent_cells(vertex_key)`. That means a ridge-link break introduced in one of the extra seed cells can now slip through this path and surface only later.

Suggested fix

```diff
-            if topology.requires_ridge_links() {
-                let local_cells: Vec<CellKey> = self.tri.adjacent_cells(vertex_key).collect();
-                if !local_cells.is_empty()
-                    && let Err(err) = validate_ridge_links_for_cells(&self.tri.tds, &local_cells)
-                {
+            if topology.requires_ridge_links() {
+                let mut validation_cells = if seed_cells.is_empty() {
+                    self.tri.adjacent_cells(vertex_key).collect::<Vec<_>>()
+                } else {
+                    seed_cells.clone()
+                };
+                validation_cells.retain(|&ck| self.tri.tds.contains_cell(ck));
+                if !validation_cells.is_empty()
+                    && let Err(err) =
+                        validate_ridge_links_for_cells(&self.tri.tds, &validation_cells)
+                {
                     return Err(InsertionError::TopologyValidationFailed {
                         message: "Topology invalid after Delaunay repair".to_string(),
                         source: Box::new(TriangulationValidationError::from(err)),
                     });
                 }
             }
```

As per coding guidelines, "Every mutating operation must preserve invariants checked by Tds::is_valid (Level 1-3) and DelaunayTriangulation::is_valid (Level 4), or fail explicitly".
Also applies to: 5598-5607
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 5501-5523: the post-repair validation in maybe_repair_after_insertion_capped currently checks ridge-links only against adjacent_cells(vertex_key); instead, gather the widened repair frontier you computed (seed_cells from collect_local_repair_seed_cells and any hint_seed) and run the ridge-link/invariant validation against that full set so mutations in extra_seed_cells are covered; update the validation call site(s) in maybe_repair_after_insertion_capped (and the analogous block at 5598-5607) to use the merged seed frontier rather than just adjacent_cells(vertex_key), and ensure any helper used (e.g., validate_ridge_links, Tds::is_valid checks, or DelaunayTriangulation::is_valid Level 4 checks) is invoked over that frontier and fails explicitly if invariants do not hold.

src/core/algorithms/flips.rs (2)
3687-3695: ⚠️ Potential issue | 🟠 Major — Pass `last_applied_flip` into ridge postcondition replay too.

`verify_repair_postcondition_locally` now threads predecessor context into the k=2 verifier only. An unresolved k=3 postcondition failure still has no way to attach the preceding flip, so the new ridge diagnostics stay incomplete for the exact failure mode this PR is targeting.

Suggested wiring

```diff
                 verify_postcondition_k3_ridges(
                     tds,
                     kernel,
                     topology_model,
                     &mut queues.ridge_queue,
                     &config,
                     &mut diagnostics,
                     mode,
+                    last_applied_flip,
                 )?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 3687 - 3695, The ridge k=3 postcondition verifier call (verify_postcondition_k3_ridges) is missing the predecessor flip context, so ridge diagnostics cannot include the preceding flip; update the call site where verify_postcondition_k3_ridges(...) is invoked to accept and forward the last_applied_flip (the same value threaded into verify_repair_postcondition_locally and the k=2 verifier), adjust the verify_postcondition_k3_ridges signature and any downstream uses to accept an Option/Reference to last_applied_flip, and ensure the diagnostics collected inside verify_postcondition_k3_ridges include that last_applied_flip when reporting failures so ridge diagnostics become complete.
4364-4404: ⚠️ Potential issue | 🟡 Minor — Snapshot removed simplices, not just removed cell keys.

`LastAppliedFlip::from_info` keeps `removed_cells`, but those keys are dead immediately after the flip. The downstream predecessor summaries therefore cannot recover the removed cavity geometry and will only report those cells as missing, which makes the new debug context much less useful.

Suggested direction

```diff
 struct LastAppliedFlip {
     k_move: usize,
     removed_face_vertices: SmallBuffer<VertexKey, MAX_PRACTICAL_DIMENSION_SIZE>,
     inserted_face_vertices: SmallBuffer<VertexKey, MAX_PRACTICAL_DIMENSION_SIZE>,
     removed_cells: CellKeyBuffer,
     new_cells: CellKeyBuffer,
+    removed_cell_vertices:
+        Vec<SmallBuffer<VertexKey, MAX_PRACTICAL_DIMENSION_SIZE>>,
 }
```

Capture `removed_cell_vertices` before `tds.remove_cells_by_keys(removed_cells)` and use that snapshot in the predecessor summaries.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 4364 - 4404, The LastAppliedFlip currently stores only removed cell keys which become invalid after tds.remove_cells_by_keys; update LastAppliedFlip to also capture the actual vertex lists of removed simplices so predecessor summaries can reconstruct the removed cavity: add a new field (e.g. removed_cell_vertices) to the LastAppliedFlip struct to hold per-removed-cell vertex buffers, initialize it in LastAppliedFlip::new (empty) and populate it in LastAppliedFlip::from_info by snapshotting each removed cell's vertex list (from the FlipInfo or by reading the TDS before removal) before the keys are deleted; keep removed_cells/new_cells as-is for compatibility but ensure predecessor summary code uses removed_cell_vertices for geometry reconstruction.
🧹 Nitpick comments (2)
src/core/algorithms/locate.rs (1)
1686-1782: LGTM on the multi-fan accumulation — minor observability nit.

The cross-fan accumulation (lines 1690–1755) correctly deduplicates `extra_cells` via `ridge_fan_seen_cells` and then returns a single aggregated `RidgeFan` error at lines 1776–1782, matching the new doc-comment contract on the variant. One small observation: because `ridge_map` is a `FastHashMap`, the "first" fan recorded in `first_ridge_fan` (and therefore `facet_count`/`ridge_vertex_count` in the returned error) is whichever ridge the hashmap iteration visits first, which is not stable across runs for the same input. The doc comment at lines 215–222 acknowledges "the first fan that the boundary walk observed", so this is only worth noting if you want error-message reproducibility for the debug harness — if you do, picking the fan with the largest `facet_count` (or the smallest ridge-hash) would give a deterministic witness.

src/core/triangulation.rs (1)
10478-10515: Use a fixed adversarial 4D fixture here instead of a random seed search.

These tests still pass only if some random seed happens to trigger a retry/skip. As the happy path improves, that can turn into intermittent CI failures even when the retry logic is still correct. Wiring them to the dedicated 4D repro/minimized input would make this coverage deterministic.
Also applies to: 10527-10567
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/triangulation.rs` around lines 10478 - 10515, Replace the probabilistic random-seed-based test_perturbation_retry_and_exhaustion_4d with a deterministic adversarial 4D fixture: stop generating points via crate::geometry::util::generate_random_points_seeded and instead load or construct the known minimal 4D point set that reliably triggers the perturbation retry/skip path, then insert those points into the Triangulation<AdaptiveKernel<f64>, (), (), 4> using Triangulation::insert_with_statistics and assert that stats.used_perturbation()/stats.success() or stats.skipped() with stats.attempts > 1 occurs; make the same change for the analogous test at the second occurrence (around the region referenced in the review) so both tests use fixed adversarial fixtures rather than a seed search.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/dev/debug_env_vars.md`:
- Around line 39-41: The listed debug environment variable rows
(DELAUNAY_BULK_PROGRESS_EVERY, DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE,
DELAUNAY_DEBUG_RETRYABLE_SKIP) are actually release-visible diagnostics; update
the docs so they are not under the “debug-only unless noted” claim by either
adding a per-row note (e.g., “release-visible”) in the three table rows or
adding a section-level note clarifying these are available in release builds;
reference the symbols DELAUNAY_BULK_PROGRESS_EVERY,
DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE, and DELAUNAY_DEBUG_RETRYABLE_SKIP and
apply the same change to the other occurrences mentioned (lines 60-60 and
89-93).
In `@docs/KNOWN_ISSUES_4D.md`:
- Around line 12-13: The top-level summary for seed 0x9B7786C999C56A16 is
inconsistent with the detailed attempt entries; update the summary to explicitly
state which attempt it describes (e.g., "final retry 1 inserts 100/100 vertices
with zero skips, validation ~15.4s") or change it to reflect the initial attempt
results (e.g., "attempt 0: inserted=86, skipped=14; retry 1: inserted=100,
skipped=0") so the entry for seed 0x9B7786C999C56A16 no longer contradicts the
per-attempt lines.
In `@docs/TODO.md`:
- Around line 22-24: Update the document header "As of" date to match the new
recheck entries by replacing the existing timestamp in the top-level "As of"
(the document header string "As of ...") with 2026-04-23, and then scan the file
for any other stale "As of" or header timestamps (including the sections that
currently reference 2026-04-23 in the body such as the blocks around the
previous notes) and make them consistent (replace any older dates in those
header/summary lines with 2026-04-23 so the header and the status entries
match).
In `@src/core/algorithms/locate.rs`:
- Around line 281-349: Add the new public types ConflictError and
InternalInconsistencySite to the crate prelude re-exports so callers can
pattern-match on ConflictError::InternalInconsistency; specifically, update the
pub mod prelude block in src/lib.rs to include ConflictError and
InternalInconsistencySite alongside LocateError/LocateResult/LocateFallback, and
also add them to the pub mod algorithms submodule re-exports in the prelude
(where other locate/algorithms types are re-exported) so both symbols are
available via crate::prelude::*.
In `@src/core/triangulation.rs`:
- Around line 4358-4360: repair_seed_cells is initialized from the original
conflict_cells but later the code deletes many of those cells, so
repair_seed_cells ends up containing dead CellKey entries; change the
initialization to filter out any cells that are scheduled/actually deleted so
only live/shrunk-out cells remain. Concretely, after creating repair_seed_cells
from conflict_cells, remove entries present in the deletion set (or recompute
repair_seed_cells as conflict_cells.iter().filter(|k|
!deleted_set.contains(k)).collect()), using the same deletion marker/collection
used when deleting the conflict region so references like repair_seed_cells and
conflict_cells refer only to live cells.
In `@src/triangulation/delaunay.rs`:
- Around line 3166-3176: The progress counters are currently offset by the
initial simplex: when creating BatchProgressState (the batch_progress
initialization in delaunay.rs) you must exclude the D+1 simplex vertices from
the measured batch and reset the processed counter to start from zero; change
total_vertices from vertices.len() to vertices.len().saturating_sub(D + 1) (or
vertices.len() - (D + 1) if safe) and set last_processed to 0 instead of D + 1,
and apply the same two changes to the other BatchProgressState initializations
referenced (around the other occurrences at the noted ranges) so
throughput/progress reflect only post-simplex insertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 58368313-6bce-49ad-aaeb-165587d52d04
📒 Files selected for processing (11)
- docs/KNOWN_ISSUES_4D.md
- docs/TODO.md
- docs/dev/debug_env_vars.md
- justfile
- src/core/algorithms/flips.rs
- src/core/algorithms/incremental_insertion.rs
- src/core/algorithms/locate.rs
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/README.md
- tests/large_scale_debug.rs
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #339      +/-   ##
==========================================
- Coverage   62.49%   61.97%   -0.52%
==========================================
  Files          53       53
  Lines       14389    15294     +905
==========================================
+ Hits         8992     9479     +487
- Misses       5397     5815     +418
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
- Raise the D≥4 per-insertion repair budget, add a rate-limited escalation pass, and widen local post-repair validation so the 500-point #204 repro converges without skipped vertices.
- Preserve removed-cell snapshots and predecessor context in flip diagnostics, drop stale repair seeds after cavity reduction, and re-export locate conflict diagnostics from the prelude.
- Replace committed `eprintln!` diagnostics in production, tests, and benches with `tracing`, using `test-debug` and `bench-logging` gates and keeping logs out of Criterion hot loops.
- Document the #204 investigation, refresh the 4D known-issues and TODO notes, and record the repository logging policy plus release-visible debug environment variables.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/util/deduplication.rs (1)
126-132: ⚠️ Potential issue | 🟠 Major — Avoid panicking on negative `epsilon` in this public `src/` API.

`assert!` on user input can abort callers. Please return an explicit error (preferred) or use a non-panicking fallback path.

Possible non-panicking stopgap (non-breaking signature)

```diff
 pub fn dedup_vertices_epsilon<T, U, const D: usize>(
     vertices: &[Vertex<T, U, D>],
     epsilon: T,
 ) -> Vec<Vertex<T, U, D>>
 where
     T: CoordinateScalar,
     U: DataType,
 {
-    if epsilon < T::zero() {
-        tracing::error!("dedup_vertices_epsilon received negative epsilon; enforcing contract");
-    }
-    assert!(
-        epsilon >= T::zero(),
-        "dedup_vertices_epsilon expects non-negative epsilon",
-    );
+    if epsilon < T::zero() {
+        tracing::warn!(
+            epsilon = ?epsilon,
+            "dedup_vertices_epsilon received negative epsilon; returning input unchanged"
+        );
+        return vertices.to_vec();
+    }
```

As per coding guidelines, `src/**/*.rs`: "Use `Result<_, _Error>` for every fallible operation; ... library code in `src/` must not panic on user input".

benches/large_scale_performance.rs (1)
131-138: ⚠️ Potential issue | 🟡 Minor — First `bench_info!` may fire inside a Criterion-measured closure.

`get_memory_usage()` is called from the `construction_memory_delta` benchmark body (line 329), and its `UNIT_LOGGED.call_once` emits a `bench_info!` the first time it runs. When `bench-logging` is enabled and `BENCH_PRINT_MEM` is unset, the very first measured iteration pays the `tracing::info!` cost — violating the "never log inside Criterion-measured closures" guideline.

Consider priming the banner outside the measured path, e.g. call `get_memory_usage()` (or a dedicated `log_memory_unit_once()`) once from the `criterion_group!` `config` block alongside `init_tracing()`, or from each `bench_*` setup before `b.iter*(...)`.

As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures; emit setup/summary diagnostics outside the measured path".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benches/large_scale_performance.rs` around lines 131 - 138, First bench_info! call can occur inside Criterion-measured closures via get_memory_usage() (because of UNIT_LOGGED.call_once), causing logging overhead in the first iteration; prime that call outside the measured path by invoking get_memory_usage() (or add a small helper log_memory_unit_once()) from the benchmark setup phase—for example in the criterion_group! config block alongside init_tracing() or at the start of the construction_memory_delta bench before b.iter—so UNIT_LOGGED is executed during setup and no logging happens inside the measured closure.
♻️ Duplicate comments (1)
docs/KNOWN_ISSUES_4D.md (1)
190-196: ⚠️ Potential issue | 🟡 Minor — Stale command comment contradicts the 2026-04-23 re-verification.

The header `# 4D 500-point seeded repro (all shuffled retries still fail)` contradicts the updated status earlier in the same document: lines 16–20 and the "Current recheck" at lines 73–79 both state that seed `0xD225B8A07E274AE6` now inserts 500/500 vertices with zero skips on the first attempt (no perturbation retries triggered) and passes Level 1–4 validation. Update the comment so readers don't conclude the repro is still broken.

📝 Proposed wording fix

```diff
-# 4D 500-point seeded repro (all shuffled retries still fail)
+# 4D 500-point seeded repro (historical retry-collapse; now passes on first attempt, see re-verification above)
 DELAUNAY_BULK_PROGRESS_EVERY=50 \
 DELAUNAY_LARGE_DEBUG_N_4D=500 \
 DELAUNAY_LARGE_DEBUG_CASE_SEED_4D=0xD225B8A07E274AE6 \
 DELAUNAY_LARGE_DEBUG_ALLOW_SKIPS=1 \
 cargo test --release --test large_scale_debug debug_large_scale_4d \
   -- --ignored --nocapture
```
Verify each finding against the current code and only fix it if needed. In `@docs/KNOWN_ISSUES_4D.md` around lines 190 - 196, The header "4D 500-point seeded repro (all shuffled retries still fail)" is stale and contradicts the re-verification; update that comment to reflect that seed 0xD225B8A07E274AE6 now inserts 500/500 vertices with zero skips on first attempt and passes Level 1–4 validation (include the recheck date 2026-04-23 or similar), e.g. change the header to "4D 500-point seeded case — now passes (verified 2026-04-23)" and adjust the brief body to note the successful insertion and no perturbation retries for the seed instead of claiming it still fails.
🧹 Nitpick comments (2)
tests/delaunay_edge_cases.rs (2)
189-238: Call-site double gating is redundant.

Each invocation pairs an outer `#[cfg(feature = "test-debug")]` with a macro that is itself only defined under that cfg. The outer `#[cfg]` can be removed — when the feature is off, the macro is undefined and the call would fail to compile, which is why you already pair with `#[cfg(not(feature = "test-debug"))] let _ = &stats;`. The cleanest shape is to keep only the `not(...)` arm's suppression and drop the positive `#[cfg]` around each `test_debug_*!` call, since the macro body is already a no-op pattern when undefined… but that requires the macro to be defined unconditionally. Simpler: make the macros unconditional (expand to `{}` when the feature is off) so you can drop every call-site gate.

Proposed simplification

```diff
-#[cfg(feature = "test-debug")]
 macro_rules! test_debug_info {
     ($($arg:tt)*) => {{
         #[cfg(feature = "test-debug")]
-        {
-            tracing::info!($($arg)*);
-        }
+        { tracing::info!($($arg)*); }
     }};
 }
```

Then at call sites you can drop both the `#[cfg(feature = "test-debug")]` wrapper and the matching `#[cfg(not(...))] let _ = &stats;` lines.
Verify each finding against the current code and only fix it if needed. In `@tests/delaunay_edge_cases.rs` around lines 189 - 238, The call-sites repeatedly wrap test_debug_info! and test_debug_warn! in #[cfg(feature = "test-debug")] even though the macros themselves should be made unconditional; change the macro definitions for test_debug_info! and test_debug_warn! (and any sibling test_debug_* macros) to always exist — when the feature is off they should expand to an empty no-op block — then remove the outer #[cfg(feature = "test-debug")] guards and the corresponding #[cfg(not(feature = "test-debug"))] let _ = &... suppression lines around the macro invocations (e.g., the usages after repair_delaunay_with_flips() and is_valid()), leaving only plain calls to the macros so the code compiles regardless of the feature flag.
33-51: Double-gating is redundant but harmless.

`test_debug_info!`/`test_debug_warn!` are defined only under `#[cfg(feature = "test-debug")]`, and their bodies additionally re-gate with the same cfg. The inner gate is dead code when the feature is off (the macro itself won't exist). Consider dropping the inner `#[cfg(...)]` to simplify, since every call site already adds its own outer `#[cfg(feature = "test-debug")]` guard.

Proposed simplification

```diff
 #[cfg(feature = "test-debug")]
 macro_rules! test_debug_info {
     ($($arg:tt)*) => {{
-        #[cfg(feature = "test-debug")]
-        {
-            tracing::info!($($arg)*);
-        }
+        tracing::info!($($arg)*);
     }};
 }

 #[cfg(feature = "test-debug")]
 macro_rules! test_debug_warn {
     ($($arg:tt)*) => {{
-        #[cfg(feature = "test-debug")]
-        {
-            tracing::warn!($($arg)*);
-        }
+        tracing::warn!($($arg)*);
     }};
 }
```
Verify each finding against the current code and only fix it if needed. In `@tests/delaunay_edge_cases.rs` around lines 33 - 51, The macros test_debug_info! and test_debug_warn! are double-gated: they are declared only under #[cfg(feature = "test-debug")] and their bodies also contain the same #[cfg(...)], which is redundant; remove the inner #[cfg(feature = "test-debug")] blocks from both macro definitions so the macros themselves handle gating while keeping all call sites' existing outer cfg guards unchanged (update test_debug_info! and test_debug_warn! macro bodies to directly call tracing::info!/tracing::warn! without the inner cfg).
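The "always-defined, no-op when the feature is off" pattern recommended above can be sketched as follows. This is an illustrative stand-in, not the crate's code: the `demo` feature replaces `test-debug`, the macro collects lines into a buffer instead of calling `tracing`, and `run_repair` is a hypothetical caller.

```rust
// Sketch: define the macro under BOTH cfg branches so call sites need no
// outer #[cfg(...)] guard. `demo` stands in for the `test-debug` feature.
#[cfg(feature = "demo")]
macro_rules! test_debug_info {
    ($buf:expr, $($arg:tt)*) => {{ $buf.push(format!($($arg)*)); }};
}

#[cfg(not(feature = "demo"))]
macro_rules! test_debug_info {
    // Feature off: expand to a no-op block so the code compiles unchanged.
    ($buf:expr, $($arg:tt)*) => {{ let _ = &$buf; }};
}

// Hypothetical caller standing in for the test bodies mentioned above.
fn run_repair() -> (usize, Vec<String>) {
    let mut log = Vec::new();
    let flips = 3;
    // Plain call: no #[cfg] wrapper and no `let _ = &stats;` shim needed.
    test_debug_info!(log, "repair finished after {flips} flips");
    (flips, log)
}

fn main() {
    let (flips, log) = run_repair();
    println!("flips = {flips}, debug lines captured = {}", log.len());
}
```

With the feature off, the macro still exists but expands to nothing, which is exactly what lets the call-site guards be deleted.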
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/KNOWN_ISSUES_4D.md`:
- Around line 210-214: Update the 4D bullet in KNOWN_ISSUES_4D.md to remove or
correct the claim that "seeded 500-point and larger batch runs can still fail
after all shuffled retries" and the instruction to debug the "Ridge fan
detected" path; instead, state that the 500-point repro (seed
0xD225B8A07E274AE6) now inserts 500/500 with no
RidgeFan/DisconnectedBoundary/k=2 retryable-skip traces, and re-focus the
recommendation on the remaining 3000-point scale/observability concern and using
incremental insertion for predictable large-N progress. Ensure you update the
exact bullet text that mentions "Ridge fan detected", "RidgeFan", and
"DisconnectedBoundary" so it no longer implies the 500-point failure is still
present.
In `@src/core/algorithms/flips.rs`:
- Around line 1794-1804: FlipInfo was extended with a new public field
removed_cell_vertices which is a breaking, exhaustive-match change; make this
non-public by moving the snapshot into a private/internal type or change the
field visibility to pub(crate) (or remove pub) and/or attach it behind an
internal-only container (e.g., FlipRepairDiagnostics or Option<...> with
pub(crate) visibility) so external users of FlipInfo are not affected; update
references that currently use FlipInfo::removed_cell_vertices to use the new
internal accessor/struct or crate-local field and ensure the public API surface
(FlipInfo) remains stable and exhaustively matchable.
- Around line 417-428: The code currently maps removed_cells to
removed_cell_vertices using unwrap_or_default, which silently masks a missing
cell; change this to explicitly fail: when tds.get_cell(cell_key) returns None,
return Err(FlipError::MissingCell { cell_key }) from the surrounding function
instead of producing a default/empty snapshot. Locate the block building
removed_cell_vertices (uses removed_cells, tds.get_cell, and collects into
removed_cell_vertices) and replace the map+collect with an iterator that checks
get_cell, returns the collected vertex lists on success, and bails out with
FlipError::MissingCell { cell_key } on any missing entry so the flip preserves
invariants.
In `@src/core/triangulation.rs`:
- Around line 316-335: Change log_cavity_reduction_event to accept the event as
a lazy closure (e.g. impl FnOnce() -> String or &dyn Fn() -> String) instead of
an already-formatted String so formatting/allocation only happens when enabled
is true; inside log_cavity_reduction_event call the closure to obtain the event
string after the enabled check and before the tracing::debug call (keep building
conflict_preview and other fields as-is). Update all call sites (those that
currently do format!(...) before calling log_cavity_reduction_event) to pass a
zero-arg closure like || format!(...) so the format! runs only when the helper
invokes the closure.
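The lazy-formatting change above can be sketched as below. The function name mirrors the review comment, but the signature, the `enabled` flag, and the use of `println!` instead of `tracing::debug!` are illustrative assumptions:

```rust
// Sketch: accept `impl FnOnce() -> String` so `format!` runs only when the
// diagnostic is actually enabled; the disabled path allocates nothing.
fn log_cavity_reduction_event(enabled: bool, event: impl FnOnce() -> String) -> Option<String> {
    if !enabled {
        return None; // closure never invoked: no formatting, no allocation
    }
    let event = event(); // formatting happens only on the enabled path
    println!("cavity-reduction: {event}");
    Some(event)
}

fn main() {
    let conflict_cells = 17;
    // Call sites pass `|| format!(...)` instead of a pre-built String.
    log_cavity_reduction_event(false, || format!("shrunk region to {conflict_cells} cells"));
    log_cavity_reduction_event(true, || format!("shrunk region to {conflict_cells} cells"));
}
```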
- Around line 4358-4360: The code currently clones conflict_cells into
repair_seed_cells up front (repair_seed_cells = conflict_cells.clone()), causing
an unnecessary allocation; instead initialize repair_seed_cells empty and only
push/append cells when a shrink step actually removes them from conflict_cells
(i.e., in the shrink branches that currently remove cells later around the
cavity-reduction logic). Locate uses of repair_seed_cells and the shrink
branches (the logic that strips the final conflict region around the cavity
reduction and the branches handling expanded vs removed cells) and modify each
branch so it records into repair_seed_cells only the specific cell identifiers
that are removed from conflict_cells during that branch, rather than copying the
whole conflict set at the start.
In `@src/lib.rs`:
- Around line 862-865: Add the #[non_exhaustive] attribute to the ConflictError
enum declaration in src/core/algorithms/locate.rs so it can't be exhaustively
matched by downstream code; locate the enum named ConflictError
(crate::core::algorithms::locate::ConflictError) and prepend #[non_exhaustive]
immediately above its enum keyword, keeping the existing variants and visibility
unchanged so the public API exported in crate::prelude::* remains the same but
future variants won't be a breaking change.
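What `#[non_exhaustive]` buys can be shown with a toy enum. This is a stand-in with made-up variants, not the crate's real `ConflictError`:

```rust
// Toy stand-in demonstrating #[non_exhaustive]: downstream crates must keep
// a wildcard arm, so adding a variant later is not a breaking change.
#[non_exhaustive]
#[derive(Debug)]
pub enum ConflictError {
    EmptyConflictRegion,
    InternalInconsistency { site: u32 },
}

// Within the defining crate the wildcard is technically unreachable (the
// attribute only constrains downstream crates), hence the allow.
#[allow(unreachable_patterns)]
fn describe(err: &ConflictError) -> &'static str {
    match err {
        ConflictError::EmptyConflictRegion => "empty conflict region",
        ConflictError::InternalInconsistency { .. } => "internal inconsistency",
        // Downstream code is forced to include an arm like this one.
        _ => "unknown conflict error",
    }
}

fn main() {
    println!("{}", describe(&ConflictError::InternalInconsistency { site: 0 }));
}
```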
---
Outside diff comments:
In `@benches/large_scale_performance.rs`:
- Around line 131-138: First bench_info! call can occur inside
Criterion-measured closures via get_memory_usage() (because of
UNIT_LOGGED.call_once), causing logging overhead in the first iteration; prime
that call outside the measured path by invoking get_memory_usage() (or add a
small helper log_memory_unit_once()) from the benchmark setup phase—for example
in the criterion_group! config block alongside init_tracing() or at the start of
the construction_memory_delta bench before b.iter—so UNIT_LOGGED is executed
during setup and no logging happens inside the measured closure.
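The priming trick above — run the `Once` during setup so its one-time log never fires inside the timed loop — can be sketched with std types. `get_memory_usage` here is a hypothetical stand-in for the benchmark's helper:

```rust
use std::sync::Once;
use std::time::Instant;

static UNIT_LOGGED: Once = Once::new();

// Stand-in for the bench's get_memory_usage(): the first call logs the unit.
fn get_memory_usage() -> u64 {
    UNIT_LOGGED.call_once(|| println!("memory unit: bytes"));
    42 // placeholder value
}

fn main() {
    // Setup phase: prime the Once so the log cost never lands inside the
    // measured closure's first iteration.
    let _ = get_memory_usage();

    let start = Instant::now();
    for _ in 0..1_000 {
        // Measured-loop stand-in: no logging overhead on any iteration.
        let _ = get_memory_usage();
    }
    println!("measured loop took {:?}", start.elapsed());
}
```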
---
Duplicate comments:
In `@docs/KNOWN_ISSUES_4D.md`:
- Around line 190-196: The header "4D 500-point seeded repro (all shuffled
retries still fail)" is stale and contradicts the re-verification; update that
comment to reflect that seed 0xD225B8A07E274AE6 now inserts 500/500 vertices
with zero skips on first attempt and passes Level 1–4 validation (include the
recheck date 2026-04-23 or similar), e.g. change the header to "4D 500-point
seeded case — now passes (verified 2026-04-23)" and adjust the brief body to
note the successful insertion and no perturbation retries for the seed instead
of claiming it still fails.
---
Nitpick comments:
In `@tests/delaunay_edge_cases.rs`:
- Around line 189-238: The call-sites repeatedly wrap test_debug_info! and
test_debug_warn! in #[cfg(feature = "test-debug")] even though the macros
themselves should be made unconditional; change the macro definitions for
test_debug_info! and test_debug_warn! (and any sibling test_debug_* macros) to
always exist — when the feature is off they should expand to an empty no-op
block — then remove the outer #[cfg(feature = "test-debug")] guards and the
corresponding #[cfg(not(feature = "test-debug"))] let _ = &... suppression lines
around the macro invocations (e.g., the usages after
repair_delaunay_with_flips() and is_valid()), leaving only plain calls to the
macros so the code compiles regardless of the feature flag.
- Around line 33-51: The macros test_debug_info! and test_debug_warn! are
double-gated: they are declared only under #[cfg(feature = "test-debug")] and
their bodies also contain the same #[cfg(...)], which is redundant; remove the
inner #[cfg(feature = "test-debug")] blocks from both macro definitions so the
macros themselves handle gating while keeping all call sites' existing outer cfg
guards unchanged (update test_debug_info! and test_debug_warn! macro bodies to
directly call tracing::info!/tracing::warn! without the inner cfg).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 8fe4e040-bda7-44b7-ae72-1a08353bcc8b
📒 Files selected for processing (23)
- AGENTS.md
- benches/large_scale_performance.rs
- benches/microbenchmarks.rs
- benches/profiling_suite.rs
- docs/KNOWN_ISSUES_4D.md
- docs/TODO.md
- docs/archive/issue_204_investigation.md
- docs/dev/debug_env_vars.md
- docs/dev/rust.md
- src/core/algorithms/flips.rs
- src/core/triangulation.rs
- src/core/util/deduplication.rs
- src/geometry/util/measures.rs
- src/geometry/util/point_generation.rs
- src/geometry/util/triangulation_generation.rs
- src/lib.rs
- src/triangulation/delaunay.rs
- tests/delaunay_edge_cases.rs
- tests/delaunay_repair_fallback.rs
- tests/large_scale_debug.rs
- tests/proptest_delaunay_triangulation.rs
- tests/regressions.rs
- tests/storage_backend_compatibility.rs
✅ Files skipped from review due to trivial changes (1)
- AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/large_scale_debug.rs
- src/triangulation/delaunay.rs
Refactor flip snapshotting and cavity-reduction bookkeeping to ensure diagnostic reliability and accurate repair-seed collection. Update documentation and justfile recipes to reflect fixed historical repros and transition to monitoring active scalability investigations for 3D, 4D, and 5D datasets.

- Move removed-cell vertex capturing into fallible internal helpers
- Implement lazy evaluation for cavity-reduction diagnostic logs
- Harden vertex deduplication with fallible epsilon validation
- Update 4D known issues to reflect 100-point and 500-point fixes
- Simplify the large-scale debug harness CLI and documentation

Refs: #204, #340, #341, #342
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/core/util/deduplication.rs (1)
128-168: ⚠️ Potential issue | 🟡 Minor

NaN `epsilon` silently bypasses both the warn path and the typed-error path.

`epsilon < T::zero()` is `false` when `epsilon` is NaN (floats are part of `CoordinateScalar`), so a NaN epsilon reaches `dedup_vertices_epsilon_nonnegative`, where `epsilon_sq` is NaN and every `dist_sq < epsilon_sq` check is `false`. Net effect: the function returns the input unchanged without emitting the warning, and `try_dedup_vertices_epsilon` returns `Ok` instead of a typed error — inconsistent with the negative-epsilon contract both docs advertise.

Consider treating non-finite epsilon the same way as negative epsilon (warn + identity for `dedup_vertices_epsilon`, `Err(DeduplicationError::NegativeEpsilon)` — or a dedicated `NonFiniteEpsilon` variant — for the fallible path). At minimum, document the NaN/infinity behavior in both doc blocks (105-107 and 152-154) so callers aren't surprised.

💡 Sketch

```diff
 pub enum DeduplicationError {
     /// Epsilon must be non-negative for distance-based deduplication.
     #[error("epsilon must be non-negative")]
     NegativeEpsilon,
+    /// Epsilon must be a finite value (not NaN or infinite).
+    #[error("epsilon must be finite")]
+    NonFiniteEpsilon,
 }
```

```diff
-    if epsilon < T::zero() {
+    if !epsilon.is_finite() || epsilon < T::zero() {
         tracing::warn!(
             epsilon = ?epsilon,
-            "dedup_vertices_epsilon received negative epsilon; returning input unchanged"
+            "dedup_vertices_epsilon received invalid epsilon (negative or non-finite); returning input unchanged"
         );
         return vertices.to_vec();
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/util/deduplication.rs` around lines 128 - 168, The current checks in dedup_vertices_epsilon and try_dedup_vertices_epsilon only catch negative epsilons but let NaN/inf through; update both functions to treat non-finite epsilon the same as negative epsilon (i.e., test for non-finite via the scalar's is_finite/is_nan helpers for T or an equivalent check) so dedup_vertices_epsilon emits the same warning and returns the unchanged input, and try_dedup_vertices_epsilon returns Err(DeduplicationError::NegativeEpsilon) (or add a new DeduplicationError::NonFiniteEpsilon variant if you prefer a distinct error), and ensure dedup_vertices_epsilon_nonnegative is only called when epsilon is finite and non-negative; also update the doc comments on both functions to document NaN/Infinity behavior.

src/triangulation/delaunay.rs (2)
3505-3515: ⚠️ Potential issue | 🟡 Minor

Successful rescue paths bypass the bulk progress logger.

These `continue` branches skip the shared `log_bulk_progress_if_due(...)` call. Any insertion recovered by the D<4 global fallback or the D≥4 escalation disappears from the progress stream, and if the last vertex takes one of these paths you lose the final `processed == total_vertices` sample entirely.

Also applies to: 3521-3538, 3749-3759, 3765-3782
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 3505 - 3515, The rescue branches (e.g., after calling Self::try_d_lt4_global_repair_fallback and Self::canonicalize_after_bulk_repair, and the D≥4 escalation paths) are returning via continue before the shared log_bulk_progress_if_due(...) call, causing recovered insertions to be omitted from progress reports; ensure each rescue/early-continue path invokes the same log_bulk_progress_if_due(...) call (or a small helper that records the final processed==total_vertices sample) immediately before continuing so every successful fallback/escalation still emits the progress update.
2499-2532: ⚠️ Potential issue | 🟡 Minor

Attempt 0 still has no retry-boundary telemetry.
The initial build path only logs a result on failure, and logs nothing at all on success. That means the “original order” attempt is invisible to the new release-mode retry tracing, even though the shuffled attempts emit explicit start/result pairs.
Also applies to: 2659-2710
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 2499 - 2532, The initial (attempt 0) path doesn't emit a start/result telemetry pair; update the logic around Self::build_with_kernel_inner_seeded so you log the attempt start (call log_construction_retry_result(0, None, 0_u64, "started", None, None) or equivalent) before invoking build_with_kernel_inner_seeded, and ensure you emit a final result for attempt 0 on both success (log_construction_retry_result with "succeeded" and no last_error) and failure (already logged) so the original-order attempt is visible to the retry tracing; keep existing debug tracing (tracing::debug! with attempt = 0) behavior intact.

src/core/algorithms/flips.rs (1)
4525-4587: ⚠️ Potential issue | 🟠 Major

Scope the debug rate limits to a repair attempt, not the entire process.

`RIDGE_DEBUG_EMITTED` and `POSTCONDITION_FACET_DEBUG_EMITTED` are global atomics, so attempt 1 can consume the only snapshot and then attempt 2 runs after rollback with no diagnostic output. The same cross-talk happens between concurrent repairs/tests in the same process. Move this budget into per-attempt state (or reset it per repair invocation) so the final failing attempt always gets a chance to emit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 4525 - 4587, The global atomics RIDGE_DEBUG_EMITTED and POSTCONDITION_FACET_DEBUG_EMITTED must be moved out of process-global scope and made per-repair-attempt (or reset at the start of each repair) so separate attempts and concurrent repairs do not consume each other's diagnostic budget; replace those statics with counters stored on the repair attempt context (e.g., a field on the RepairAttempt/RepairSession struct) and update should_emit_ridge_debug and should_emit_postcondition_facet_debug to read/update the per-attempt counters (and keep ridge_debug_limit(), ridge_debug_min_multiplicity(), and postcondition_facet_debug_enabled() behavior unchanged) so each invocation gets its own rate limit budget.
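The per-attempt budget suggested above can be sketched with plain ownership instead of statics. The struct and method names loosely mirror the review comment, but the whole type is a hypothetical simplification of the crate's repair machinery:

```rust
// Sketch: replace a process-global AtomicBool one-shot with a counter owned
// by the repair attempt, so each attempt gets its own diagnostic budget.
struct RepairAttempt {
    ridge_debug_budget: u32, // reset for every attempt by construction
}

impl RepairAttempt {
    fn new(ridge_debug_limit: u32) -> Self {
        Self { ridge_debug_budget: ridge_debug_limit }
    }

    // &mut self: the budget lives on the attempt, not in a global static,
    // so concurrent repairs cannot consume each other's snapshots.
    fn should_emit_ridge_debug(&mut self) -> bool {
        if self.ridge_debug_budget == 0 {
            return false;
        }
        self.ridge_debug_budget -= 1;
        true
    }
}

fn main() {
    let mut attempt1 = RepairAttempt::new(1);
    let mut attempt2 = RepairAttempt::new(1);
    // Attempt 1 consuming its snapshot no longer starves attempt 2.
    println!("{} {}", attempt1.should_emit_ridge_debug(), attempt2.should_emit_ridge_debug());
}
```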
🧹 Nitpick comments (1)
src/triangulation/delaunay.rs (1)
6732-6754: Expand this const-generic budget test to D=2 through D=5.
`local_repair_flip_budget::<D>` is exercised across the supported dimension range, but this test only checks the 3D/4D split. A small macro-driven test over 2D–5D would lock down the generic behavior much better.

As per coding guidelines, "Unit tests must cover known values, error paths, and dimension-generic correctness; dimension-generic tests MUST cover D=2 through D=5 whenever feasible; use pastey macros for per-dimension tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 6732 - 6754, The test test_local_repair_flip_budget_uses_dimension_specific_floor_and_factor only asserts behavior for D=3 and D=4; expand it to cover D=2..5 by replacing or augmenting the assertions with a small macro-driven loop (using the existing pastey-style macro conventions) that invokes local_repair_flip_budget::<D> for D in 2,3,4,5 and checks: for seed_count==0 the returned value equals the appropriate floor constant (LOCAL_REPAIR_FLIP_BUDGET_FLOOR_D_LT_4 for D<4, LOCAL_REPAIR_FLIP_BUDGET_FLOOR_D_GE_4 for D>=4), and for seed_count=10 the returned value equals seed_count*(D+1)*the corresponding factor constant (LOCAL_REPAIR_FLIP_BUDGET_FACTOR_D_LT_4 or LOCAL_REPAIR_FLIP_BUDGET_FACTOR_D_GE_4) floored against the floor constant (i.e. .max(floor)). Update or rename test_local_repair_flip_budget_uses_dimension_specific_floor_and_factor to reflect the expanded coverage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/algorithms/flips.rs`:
- Around line 6672-6754: The tests only exercise D=2 and a hand-built D=3
scenario; update them to be dimension-generic for D=2..5 using the existing
pastey macros to generate cases so 4D and 5D are covered. For the snapshot test,
wrap the logic that constructs a Tds, calls insert_standard_simplex_vertices,
builds removed_cells/CellKeyBuffer, and asserts snapshot_removed_cell_vertices
results into a pastey-generated test macro for dimensions 2..5; for the
last-applied-flip test, generate per-dimension AppliedFlip instances (using
RemovedCellVertexSnapshot, AppliedFlip::<D>, FlipInfo,
LastAppliedFlip::from_applied_flip) via pastey for D=2..5 and assert k_move,
removed/inserted face vertex ordering, removed/new cells, and
removed_cell_vertex_lines behavior (including the missing-snapshot placeholder
case). Ensure you reference the same symbols used in the diff:
snapshot_removed_cell_vertices, insert_standard_simplex_vertices, Tds,
CellKeyBuffer, AppliedFlip::<D>, FlipInfo, LastAppliedFlip, and
RemovedCellVertexSnapshot so the new pastey tests mirror the existing assertions
across D=2..5.
In `@src/core/triangulation.rs`:
- Around line 4420-4441: The code currently flips CAVITY_REDUCTION_TRACE_EMITTED
before we know if the insertion hit a cavity-boundary error; move the atomic
swap so the one-shot is consumed only when we actually plan to emit a reduction
trace. Replace the pre-match let trace_cavity_reduction =
cavity_reduction_trace_enabled() && !CAVITY_REDUCTION_TRACE_EMITTED.swap(true,
Ordering::Relaxed); with a non-mutating enabled check (e.g. let trace_enabled =
cavity_reduction_trace_enabled();) and then inside the Err branch perform the
swap to set trace_cavity_reduction (e.g. let trace_cavity_reduction =
trace_enabled && !CAVITY_REDUCTION_TRACE_EMITTED.swap(true, Ordering::Relaxed);)
before calling log_cavity_reduction_event so initial_ok paths do not consume the
CAVITY_REDUCTION_TRACE_EMITTED budget.
In `@src/core/util/deduplication.rs`:
- Around line 421-427: The test
test_coords_within_epsilon_exact_boundary_logs_and_keeps_point currently only
asserts coords_within_epsilon(...) and does not verify any tracing output, so
either rename the test to remove "_logs_" or update it to capture and assert the
debug log; to fix, either rename the test to
test_coords_within_epsilon_exact_boundary_keeps_point (updating any references)
or annotate/wrap the test with a tracing test subscriber (for example use
tracing_test::traced_test or install a tracing_subscriber::fmt::Subscriber) and
add an assertion that the expected tracing::debug! message was emitted when
calling coords_within_epsilon at the exact boundary. Ensure any added tracing
test import/attribute is included in the test module.
In `@tests/large_scale_debug.rs`:
- Around line 252-270: The init_tracing() logic currently builds debug_env_vars
and promotes the default filter to "debug" only when any of those vars are set;
add the two missing env var names DELAUNAY_REPAIR_DEBUG_POSTCONDITION_FACET and
DELAUNAY_REPAIR_DEBUG_RIDGE_MIN_MULTIPLICITY to the debug_env_vars array so that
init_tracing() will also flip default_filter to "debug" when either of those
specific release-debug knobs is present.
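The env-var-driven filter promotion described above can be sketched as follows. The knob names come from this PR's description; taking a lookup closure (rather than reading `std::env` directly) is an assumption made here to keep the sketch side-effect free and testable:

```rust
// Knobs that should promote the default tracing filter to "debug".
const DEBUG_ENV_VARS: [&str; 6] = [
    "DELAUNAY_BULK_PROGRESS_EVERY",
    "DELAUNAY_DEBUG_RETRYABLE_SKIP",
    "DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE",
    "DELAUNAY_DEBUG_RIDGE_FAN_ONCE",
    "DELAUNAY_REPAIR_DEBUG_POSTCONDITION_FACET",
    "DELAUNAY_REPAIR_DEBUG_RIDGE_MIN_MULTIPLICITY",
];

fn default_filter(is_set: impl Fn(&str) -> bool) -> &'static str {
    // Any present knob flips the default from "info" to "debug".
    if DEBUG_ENV_VARS.iter().any(|v| is_set(v)) { "debug" } else { "info" }
}

fn main() {
    // In init_tracing() this would be |v| std::env::var_os(v).is_some().
    let filter = default_filter(|v| std::env::var_os(v).is_some());
    println!("default filter = {filter}");
}
```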
---
Outside diff comments:
In `@src/core/algorithms/flips.rs`:
- Around line 4525-4587: The global atomics RIDGE_DEBUG_EMITTED and
POSTCONDITION_FACET_DEBUG_EMITTED must be moved out of process-global scope and
made per-repair-attempt (or reset at the start of each repair) so separate
attempts and concurrent repairs do not consume each other's diagnostic budget;
replace those statics with counters stored on the repair attempt context (e.g.,
a field on the RepairAttempt/RepairSession struct) and update
should_emit_ridge_debug and should_emit_postcondition_facet_debug to read/update
the per-attempt counters (and keep ridge_debug_limit(),
ridge_debug_min_multiplicity(), and postcondition_facet_debug_enabled() behavior
unchanged) so each invocation gets its own rate limit budget.
In `@src/core/util/deduplication.rs`:
- Around line 128-168: The current checks in dedup_vertices_epsilon and
try_dedup_vertices_epsilon only catch negative epsilons but let NaN/inf through;
update both functions to treat non-finite epsilon the same as negative epsilon
(i.e., test for non-finite via the scalar's is_finite/is_nan helpers for T or an
equivalent check) so dedup_vertices_epsilon emits the same warning and returns
the unchanged input, and try_dedup_vertices_epsilon returns
Err(DeduplicationError::NegativeEpsilon) (or add a new
DeduplicationError::NonFiniteEpsilon variant if you prefer a distinct error),
and ensure dedup_vertices_epsilon_nonnegative is only called when epsilon is
finite and non-negative; also update the doc comments on both functions to
document NaN/Infinity behavior.
In `@src/triangulation/delaunay.rs`:
- Around line 3505-3515: The rescue branches (e.g., after calling
Self::try_d_lt4_global_repair_fallback and Self::canonicalize_after_bulk_repair,
and the D≥4 escalation paths) are returning via continue before the shared
log_bulk_progress_if_due(...) call, causing recovered insertions to be omitted
from progress reports; ensure each rescue/early-continue path invokes the same
log_bulk_progress_if_due(...) call (or a small helper that records the final
processed==total_vertices sample) immediately before continuing so every
successful fallback/escalation still emits the progress update.
- Around line 2499-2532: The initial (attempt 0) path doesn't emit a
start/result telemetry pair; update the logic around
Self::build_with_kernel_inner_seeded so you log the attempt start (call
log_construction_retry_result(0, None, 0_u64, "started", None, None) or
equivalent) before invoking build_with_kernel_inner_seeded, and ensure you emit
a final result for attempt 0 on both success (log_construction_retry_result with
"succeeded" and no last_error) and failure (already logged) so the
original-order attempt is visible to the retry tracing; keep existing debug
tracing (tracing::debug! with attempt = 0) behavior intact.
---
Nitpick comments:
In `@src/triangulation/delaunay.rs`:
- Around line 6732-6754: The test
test_local_repair_flip_budget_uses_dimension_specific_floor_and_factor only
asserts behavior for D=3 and D=4; expand it to cover D=2..5 by replacing or
augmenting the assertions with a small macro-driven loop (using the existing
pastey-style macro conventions) that invokes local_repair_flip_budget::<D> for D
in 2,3,4,5 and checks: for seed_count==0 the returned value equals the
appropriate floor constant (LOCAL_REPAIR_FLIP_BUDGET_FLOOR_D_LT_4 for D<4,
LOCAL_REPAIR_FLIP_BUDGET_FLOOR_D_GE_4 for D>=4), and for seed_count=10 the
returned value equals seed_count*(D+1)*the corresponding factor constant
(LOCAL_REPAIR_FLIP_BUDGET_FACTOR_D_LT_4 or
LOCAL_REPAIR_FLIP_BUDGET_FACTOR_D_GE_4) floored against the floor constant (i.e.
.max(floor)). Update or rename
test_local_repair_flip_budget_uses_dimension_specific_floor_and_factor to
reflect the expanded coverage.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 7caf98a8-4824-4b8e-9b21-e7ba376f3dce
📒 Files selected for processing (13)
- benches/large_scale_performance.rs
- docs/KNOWN_ISSUES_4D.md
- justfile
- src/core/algorithms/flips.rs
- src/core/algorithms/locate.rs
- src/core/triangulation.rs
- src/core/util/deduplication.rs
- src/triangulation/delaunay.rs
- tests/README.md
- tests/delaunay_edge_cases.rs
- tests/delaunay_repair_fallback.rs
- tests/large_scale_debug.rs
- tests/storage_backend_compatibility.rs
✅ Files skipped from review due to trivial changes (3)
- tests/README.md
- benches/large_scale_performance.rs
- tests/delaunay_edge_cases.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/storage_backend_compatibility.rs
Update the changelog post-processing script to convert indented ATX headings from commit bodies into bold prose. This ensures the generated CHANGELOG.md complies with Markdownlint rule MD023 (headings must start at column 0) while preserving the visual hierarchy and readability of historical commit summaries.

Additionally, internal diagnostic state for Delaunay repair was moved from global atomics to a per-attempt structure to ensure reliable rate-limiting across concurrent threads.

Refs: #204
Actionable comments posted: 5
🧹 Nitpick comments (1)
src/core/triangulation.rs (1)
10721-10758: Anchor retry-path coverage to a deterministic adversarial repro.

These tests still pass or fail based on whether one of 50 random 4D clouds happens to exercise the retry path. Since this PR already adds deterministic large-scale 4D repro tooling, at least one retry-path test should use a minimized repro from that corpus instead of relying on random well-conditioned inputs.
As per coding guidelines, adversarial inputs (near-boundary points, cospherical sets, degenerate simplices, large coordinates) must accompany well-conditioned inputs in tests and benchmarks.
Also applies to: 10769-10810
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/triangulation.rs` around lines 10721 - 10758, Replace the flaky randomized test_perturbation_retry_and_exhaustion_4d by anchoring it to the deterministic adversarial repro from the new 4D repro corpus: locate the minimized 4D failing input (or a helper that loads it) added in this PR and use that specific point set instead of generate_random_points_seeded; construct the same Triangulation<AdaptiveKernel<f64>,(),(),4>, insert the deterministic vertices via VertexBuilder::default().point(...).build().unwrap(), and assert that either stats.used_perturbation() && stats.success() or stats.skipped() && stats.attempts > 1 is observed (failing the test otherwise). Do the analogous replacement for the other test block referenced around lines 10769-10810 so both tests use deterministic adversarial repros rather than random seeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Around line 167-168: Fix the typo "deniest" -> "denies" in the changelog
generation source/postprocessor (not in CHANGELOG.md directly): locate the
changelog source or postprocessor rule that emits the sentence about the
"manifest lint policy" (search for the phrase "manifest lint policy" or the
erroneous "deniest"), correct the wording to "denies warnings", then regenerate
the changelog by running the documented pipeline (run the "just changelog"
command) so the corrected text is produced into CHANGELOG.md; do not hand-edit
CHANGELOG.md or the docs/archive/changelog/*.md files.
In `@src/core/triangulation.rs`:
- Around line 4770-4775: The repair_seed_cells buffer is append-only and can
contain duplicates; after the existing filtering (repair_seed_cells.retain(|ck|
!dead_conflict_cells.contains(ck))) deduplicate the collection in place before
returning it (use a FastHashSet/HashSet to track seen CellKey and retain first
occurrence or convert to a set then back to the desired container), ensuring the
function returns unique CellKey entries so downstream repair budgets aren't
wasted; keep references to the identifiers repair_seed_cells,
dead_conflict_cells, conflict_cells and the downstream consumer
remove_cells_by_keys in mind while making the change.
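The in-place dedup suggested above is a one-pass seen-set `retain`. A minimal sketch, with `CellKey` as a toy stand-in for the crate's slotmap key type (which is `Copy + Eq + Hash`, as this pattern requires):

```rust
use std::collections::HashSet;

// Stand-in for the crate's cell key type.
type CellKey = u64;

fn dedup_repair_seeds(repair_seed_cells: &mut Vec<CellKey>) {
    let mut seen: HashSet<CellKey> = HashSet::with_capacity(repair_seed_cells.len());
    // retain keeps insertion order and drops later duplicates in one pass:
    // HashSet::insert returns false for a key already seen.
    repair_seed_cells.retain(|ck| seen.insert(*ck));
}

fn main() {
    let mut seeds = vec![3, 1, 3, 2, 1];
    dedup_repair_seeds(&mut seeds);
    println!("{seeds:?}"); // first occurrences survive, order preserved
}
```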
In `@src/triangulation/delaunay.rs`:
- Around line 2715-2722:
build_with_shuffled_retries_with_construction_statistics currently omits
attempt-0 telemetry in several paths; mirror the no-stats constructor by
emitting attempt-0 "started" before the first build attempt and emitting
"succeeded" or "failed" (including non-retryable early returns) via
log_construction_retry_result so stats-enabled callers always see attempt-0
events. Locate the places in
build_with_shuffled_retries_with_construction_statistics where the first attempt
can return early or succeed (including the early-return/non-retryable branches)
and add the appropriate log_construction_retry_result calls that pass the same
parameters as the no-stats path (use 0 for attempt index,
"started"/"succeeded"/"failed" strings, include Some(&last_error) or None and
last_stats.as_ref() as applicable). Also apply the same change to the other
similar spots noted (the other build-with-stats retry branches) so all attempt-0
outcomes are consistently logged.
- Around line 1344-1359: The progress metrics currently create overall_rate and
chunk_rate by calling safe_usize_to_scalar::<f64>(...) .unwrap_or(f64::NAN)
which produces NaN sentinels; change this to compute Option<f64> (e.g. map the
Option from safe_usize_to_scalar to an Option by dividing only when both
numerator and elapsed are valid) and pass those Option<f64> values into the
tracing::debug call (or omit the fields when None) instead of synthesizing NaN;
update the identifiers overall_rate and chunk_rate and the tracing::debug fields
total_rate_pts_per_s / recent_rate_pts_per_s to accept Option<f64> (or
conditionally include them) and ensure you base the elapsed guards on
elapsed.as_secs_f64().max(...) only when converting to Some(value).
In `@tests/large_scale_debug.rs`:
- Around line 50-58: The top-of-file runbook is missing the two
repair-postcondition debug environment knobs that are referenced in
init_tracing(); add entries for DELAUNAY_REPAIR_DEBUG_POSTCONDITION_FACET and
DELAUNAY_REPAIR_DEBUG_RIDGE_MIN_MULTIPLICITY to the comment block so operators
can discover them: update the usage block at the file header in
tests/large_scale_debug.rs to list these two variables (with a short hint of
their purpose), mirroring how the other DELAUNAY_* knobs are documented so they
appear alongside the existing DELAUNAY_DEBUG_* entries referenced by
init_tracing().
---
Nitpick comments:
In `@src/core/triangulation.rs`:
- Around line 10721-10758: Replace the flaky randomized
test_perturbation_retry_and_exhaustion_4d by anchoring it to the deterministic
adversarial repro from the new 4D repro corpus: locate the minimized 4D failing
input (or a helper that loads it) added in this PR and use that specific point
set instead of generate_random_points_seeded; construct the same
Triangulation<AdaptiveKernel<f64>,(),(),4>, insert the deterministic vertices
via VertexBuilder::default().point(...).build().unwrap(), and assert that either
stats.used_perturbation() && stats.success() or stats.skipped() &&
stats.attempts > 1 is observed (failing the test otherwise). Do the analogous
replacement for the other test block referenced around lines 10769-10810 so both
tests use deterministic adversarial repros rather than random seeds.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: ffd44052-3100-4810-94f9-4cd80a492467
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- CHANGELOG.md
- scripts/postprocess_changelog.py
- scripts/tests/test_postprocess_changelog.py
- src/core/algorithms/flips.rs
- src/core/triangulation.rs
- src/core/util/deduplication.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/util/deduplication.rs
Replace randomized seed sweeps with a deterministic 4D adversarial repro set to ensure retry paths remain covered. Deduplicate repair seeds during vertex insertion, instrument construction attempts with structured tracing, and document new debug environment variables for large-scale repair analysis. Refs: #204
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/triangulation/delaunay.rs (1)
2602-2605: ⚠️ Potential issue | 🟡 Minor
Log non-retryable shuffled failures before returning.
This early return skips the new retry-result event for `attempt > 0`, so the no-stats constructor still has a blind spot exactly when a shuffled retry aborts on a deterministic error. That makes the new release-mode retry telemetry incomplete.
Proposed fix:
```diff
 Err(err) => {
     if Self::is_non_retryable_construction_error(&err) {
+        let err_string = err.to_string();
+        log_construction_retry_result(
+            attempt,
+            Some(attempt_seed),
+            perturbation_seed,
+            "failed",
+            Some(&err_string),
+            None,
+        );
         return Err(err);
     }
     last_error = err.to_string();
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 2602 - 2605, When encountering Err(err) in the constructor match arm (the branch using Self::is_non_retryable_construction_error(&err)), emit the retry-result / telemetry/log for shuffled retries before returning so attempt > 0 cases are recorded; specifically, for the Err(err) branch in the function handling retries (reference the Err(err) arm and Self::is_non_retryable_construction_error), call the same retry-result emission/logging path used elsewhere (including attempt and error details) and then return Err(err). Ensure the emitted event is produced only when attempt > 0 (shuffled retry) and include the error info so release-mode retry telemetry is complete.
♻️ Duplicate comments (1)
src/triangulation/delaunay.rs (1)
3440-3447: ⚠️ Potential issue | 🟡 Minor
Don't reintroduce `NaN` sentinels in traced/sample coordinates.
These conversions still collapse `to_f64()` failure into `NaN`, so debug output and `ConstructionSkipSample.coords` can silently report bogus coordinates instead of "unavailable". That ambiguity is especially awkward in the public statistics path.
Proposed fix:
```diff
-let coords = trace_insertion.then(|| {
-    vertex
-        .point()
-        .coords()
-        .iter()
-        .map(|c| c.to_f64().unwrap_or(f64::NAN))
-        .collect::<Vec<f64>>()
-});
+let coords = trace_insertion
+    .then(|| {
+        vertex
+            .point()
+            .coords()
+            .iter()
+            .map(|c| c.to_f64())
+            .collect::<Option<Vec<_>>>()
+    })
+    .flatten();
```
```diff
-let coords: Vec<f64> = vertex
-    .point()
-    .coords()
-    .iter()
-    .map(|c| c.to_f64().unwrap_or(f64::NAN))
-    .collect();
-construction_stats.record_skip_sample(ConstructionSkipSample {
-    index,
-    uuid: vertex.uuid(),
-    coords,
-    attempts: stats.attempts,
-    error: error.to_string(),
-});
+if let Some(coords) = vertex
+    .point()
+    .coords()
+    .iter()
+    .map(|c| c.to_f64())
+    .collect::<Option<Vec<_>>>()
+{
+    construction_stats.record_skip_sample(ConstructionSkipSample {
+        index,
+        uuid: vertex.uuid(),
+        coords,
+        attempts: stats.attempts,
+        error: error.to_string(),
+    });
+}
```
As per coding guidelines, `**/*.rs`: "No f64 operation may silently lose sign information; avoid unwrap_or(NaN), unwrap_or(f64::INFINITY), or returning true on error".
Also applies to: 3712-3719, 3937-3948
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 3440 - 3447, The current conversion for trace_insertion uses .map(|c| c.to_f64().unwrap_or(f64::NAN)) which hides conversion failures as NaN; instead change the inner mapping to preserve failure by collecting Option<f64> values: replace the .map(...).collect::<Vec<f64>>() with .map(|c| c.to_f64()).collect::<Option<Vec<f64>>>() (so the closure returns Option<Vec<f64>>) and propagate None when any coordinate conversion fails; apply the same pattern to the other occurrences you noted (the blocks around the vertex.point().coords() uses at 3712-3719 and 3937-3948) and ensure ConstructionSkipSample.coords and any debug output receive an explicit None/absent value rather than NaN.
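The collect-into-Option pattern this fix leans on is standard library behavior: collecting an iterator of `Option<T>` into `Option<Vec<T>>` short-circuits to `None` on the first failed conversion. A self-contained sketch (the string-parsing `to_f64` below is a stand-in for the crate's coordinate conversion, not its real API):

```rust
/// Stand-in for a fallible per-coordinate conversion like `c.to_f64()`.
fn to_f64(c: &str) -> Option<f64> {
    c.parse().ok()
}

/// Collecting Option<f64> into Option<Vec<f64>> short-circuits to None
/// on the first coordinate that cannot be represented, so callers see
/// "coords unavailable" rather than a NaN-poisoned vector.
fn coords_to_f64(coords: &[&str]) -> Option<Vec<f64>> {
    coords.iter().map(|c| to_f64(c)).collect()
}
```

Consumers then branch on `Some`/`None` (as in the proposed `if let Some(coords) = ...` fix) instead of having to sniff for sentinel values.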
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/triangulation.rs`:
- Around line 10850-10867: The test test_perturbation_retry_and_exhaustion_4d
currently returns early when a retried insertion succeeds, so it may never
verify the exhaustion (SkippedDegeneracy) path; change the loop over
perturbation_retry_repro_points_4d() to record two flags (e.g., saw_retry when
stats.used_perturbation() && stats.success(), and saw_exhausted_skip when
stats.skipped() && stats.attempts > 1) instead of returning immediately,
continue scanning all points, and at the end assert both flags are true to
ensure both a retry and an exhausted skip were observed; update references to
Triangulation::insert_with_statistics, VertexBuilder, and the stats methods
(used_perturbation, success, skipped, attempts) to implement this.
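The record-flags-then-assert shape this prompt describes avoids the early-return trap where one path silently goes untested. A minimal stand-in sketch (the `Stats` struct here only loosely mirrors the real insertion-statistics API):

```rust
/// Minimal stand-in for the per-insertion statistics the test inspects.
struct Stats {
    used_perturbation: bool,
    success: bool,
    skipped: bool,
    attempts: u32,
}

/// Scan every outcome instead of returning on the first success, so the
/// caller can require that BOTH the retry path and the exhausted-skip
/// path were actually exercised by the repro point set.
fn saw_both_paths(all: &[Stats]) -> bool {
    let mut saw_retry = false;
    let mut saw_exhausted_skip = false;
    for stats in all {
        saw_retry |= stats.used_perturbation && stats.success;
        saw_exhausted_skip |= stats.skipped && stats.attempts > 1;
    }
    saw_retry && saw_exhausted_skip
}
```

The final assertion then fails unless both flags flipped, which is exactly what an early `return` on the first retried success cannot guarantee.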
In `@tests/large_scale_debug.rs`:
- Around line 94-100: The timeout branch logs via tracing::error! then
immediately calls std::process::abort(), which can drop buffered
tracing_subscriber output; modify the Err(RecvTimeoutError::Timeout) handler to
ensure the diagnostic reaches the terminal by either flushing or synchronously
writing the message before abort: e.g., after the tracing::error! call, flush
the tracing_subscriber writer (or call tracing_subscriber::fmt::writer() flush
if available) or mirror the same string to stderr using eprintln! or
std::io::stderr().write_all(...).unwrap() before invoking std::process::abort();
update the branch where std::process::abort() is called so the timeout message
is synchronously emitted.
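The write-and-flush-before-abort pattern can be sketched as below; `write_timeout_message` is a hypothetical helper (the PR's actual helper is named `write_timeout_abort_message`), and the abort itself is left to the caller so the write path stays unit-testable:

```rust
use std::io::Write;

/// Write the timeout diagnostic synchronously and flush, returning any
/// I/O error to the caller instead of panicking. The caller aborts only
/// *after* this returns, so the message cannot be lost inside a buffered
/// tracing writer. (Hypothetical sketch of the pattern, not the PR's code.)
fn write_timeout_message<W: Write>(mut w: W, secs: u64) -> std::io::Result<()> {
    writeln!(w, "watchdog timeout after {secs}s; aborting")?;
    w.flush()
}
```

In the harness this would be invoked with `std::io::stderr().lock()` immediately before `std::process::abort()`; in tests, a `Vec<u8>` writer captures the output.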
---
Outside diff comments:
In `@src/triangulation/delaunay.rs`:
- Around line 2602-2605: When encountering Err(err) in the constructor match arm
(the branch using Self::is_non_retryable_construction_error(&err)), emit the
retry-result / telemetry/log for shuffled retries before returning so attempt >
0 cases are recorded; specifically, for the Err(err) branch in the function
handling retries (reference the Err(err) arm and
Self::is_non_retryable_construction_error), call the same retry-result
emission/logging path used elsewhere (including attempt and error details) and
then return Err(err). Ensure the emitted event is produced only when attempt > 0
(shuffled retry) and include the error info so release-mode retry telemetry is
complete.
---
Duplicate comments:
In `@src/triangulation/delaunay.rs`:
- Around line 3440-3447: The current conversion for trace_insertion uses
.map(|c| c.to_f64().unwrap_or(f64::NAN)) which hides conversion failures as NaN;
instead change the inner mapping to preserve failure by collecting Option<f64>
values: replace the .map(...).collect::<Vec<f64>>() with .map(|c|
c.to_f64()).collect::<Option<Vec<f64>>>() (so the closure returns
Option<Vec<f64>>) and propagate None when any coordinate conversion fails; apply
the same pattern to the other occurrences you noted (the blocks around the
vertex.point().coords() uses at 3712-3719 and 3937-3948) and ensure
ConstructionSkipSample.coords and any debug output receive an explicit
None/absent value rather than NaN.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 664f8dab-43b8-4700-87d7-3cd8e83b03a0
📒 Files selected for processing (7)
- CHANGELOG.md
- scripts/postprocess_changelog.py
- scripts/tests/test_postprocess_changelog.py
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
- typos.toml
✅ Files skipped from review due to trivial changes (1)
- typos.toml
🚧 Files skipped from review as they are similar to previous changes (1)
- scripts/tests/test_postprocess_changelog.py
Introduce deterministic test hooks to force insertion retries, replacing unstable randomized seed sweeps in 4D fixtures. Refine construction tracing to avoid synthesizing NaN coordinates and ensure failure diagnostics include attempt metadata. Ensure wall-clock timeout messages are flushed to stderr before watchdog aborts during large-scale debugging. Refs: #204
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/triangulation/delaunay.rs (2)
4007-4013: ⚠️ Potential issue | 🟡 Minor
Don't turn coordinate-conversion failure into `[]`.
`unwrap_or_default()` makes "couldn't represent this vertex as `f64`" indistinguishable from an empty coordinate list, which weakens these skip samples as repro anchors. Prefer `Option<Vec<f64>>` on `ConstructionSkipSample`, or skip populating `coords` when conversion fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 4007 - 4013, The code currently calls vertex_coords_f64(vertex).unwrap_or_default() and stores that into ConstructionSkipSample.coords, which loses the distinction between "conversion failed" and "empty coords"; change ConstructionSkipSample.coords to Option<Vec<f64>> (or make it optional) and populate it with Some(coords) when vertex_coords_f64(...) returns Ok/Some, otherwise set coords: None (or omit it) when conversion fails; update the construction_stats.record_skip_sample call to pass Some(coords) on success and None on failure, and adjust any consumers of ConstructionSkipSample to handle the Option<Vec<f64>>.
5986-6017: ⚠️ Potential issue | 🟠 Major
The ridge-link safety net still validates a smaller set than the repair can mutate.
`validation_cells` only covers the post-repair inserted-vertex star plus the original `seed_cells`. `repair_delaunay_with_flips_k2_k3` can cascade flips into neighboring cells outside both sets, so a ridge-link violation introduced there can still commit unnoticed. Please reuse the postcondition verifier in `src/core/algorithms/flips.rs` or have the repair API return its touched frontier instead of reconstructing a narrower local subset.
As per coding guidelines, "Every mutating operation MUST preserve invariants checked by Tds::is_valid (Level 1-3) and DelaunayTriangulation::is_valid (Level 4), or fail explicitly rather than leave the triangulation in an inconsistent state."
Also applies to: 6070-6086
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 5986 - 6017, The current ridge-link safety net only validates validation_cells (post-repair inserted-vertex star plus original seed_cells), which is narrower than the set that repair_delaunay_with_flips_k2_k3 can mutate and so can miss violations; fix by expanding the validated frontier to exactly the cells the repair actually touched: either (preferred) call/reuse the postcondition verifier from src/core/algorithms/flips.rs after repair_delaunay_with_flips_k2_k3 returns, or change repair_delaunay_with_flips_k2_k3 to return its touched frontier (or both) and use that returned slice instead of validation_cells when checking ridge-link invariants (reference validation_cells, repair_delaunay_with_flips_k2_k3 and the flips postcondition verifier).
♻️ Duplicate comments (1)
src/core/triangulation.rs (1)
10938-10945: ⚠️ Potential issue | 🟠 Major
Tighten the exhaustion assertion.
`stats.skipped() && stats.attempts > 1` still goes green for any multi-attempt skip, so this can pass without exercising the final retry-exhaustion branch. Capture `outcome` here and at least require the skip to happen on `DEFAULT_PERTURBATION_RETRIES + 1`; otherwise this regression test keeps a false-positive escape hatch.
💡 Minimal tightening:
```diff
-let (_outcome, stats) = exhaustion_tri
+let (outcome, stats) = exhaustion_tri
     .insert_with_statistics(v, None, None)
     .unwrap();
-saw_exhausted_skip |= stats.skipped() && stats.attempts > 1;
+saw_exhausted_skip |= matches!(outcome, InsertionOutcome::Skipped { .. })
+    && stats.attempts == DEFAULT_PERTURBATION_RETRIES + 1;
```
As per coding guidelines, `src/**/*.rs`: Unit tests MUST cover known values, error paths, and dimension-generic correctness; dimension-generic tests MUST cover D=2 through D=5 whenever feasible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/triangulation.rs` around lines 10938 - 10945, Capture the returned outcome from exhaustion_tri.insert_with_statistics (don't ignore the first tuple element) and tighten the assertion: assert that stats.skipped() is true AND stats.attempts == DEFAULT_PERTURBATION_RETRIES + 1, and also assert that the captured outcome value matches the expected final retry-exhaustion variant (match the outcome enum/variant returned by insert_with_statistics) so the test only passes when the final retry-exhaustion branch is exercised; locate symbols perturbation_retry_repro_points_4d, VertexBuilder, exhaustion_tri.insert_with_statistics, stats.skipped(), stats.attempts, DEFAULT_PERTURBATION_RETRIES, and outcome to implement this change.
🧹 Nitpick comments (1)
scripts/tests/test_postprocess_changelog.py (1)
378-411: Consider adding an idempotency test.
Coverage for the new MD023 step is solid (opening ATX, closing ATX, column-0 preservation, and full-pipeline integration). One nice-to-have: a test that `_normalize_indented_heading(" **Title**")` returns the input unchanged, and/or that `postprocess()` is a fixed point on already-normalized output. It's a cheap guard against future regex tweaks silently breaking idempotency.
🧪 Optional test sketch:
```python
def test_normalize_is_idempotent(self) -> None:
    once = _normalize_indented_heading(" ## Correctness Fixes")
    assert _normalize_indented_heading(once) == once
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/tests/test_postprocess_changelog.py` around lines 378 - 411, Add an idempotency test for the MD023 normalization: add a test method in TestIndentedHeadingNormalization that asserts _normalize_indented_heading(" **Title**") returns the same string and/or that calling _normalize_indented_heading on the result of a previous call is a no-op (e.g., once = _normalize_indented_heading(" ## Correctness Fixes"); assert _normalize_indented_heading(once) == once), and optionally add a small full-pipeline test that writes an already-normalized CHANGELOG (with lines like " **Correctness Fixes**") to disk, runs postprocess(f), and asserts the file content is unchanged to ensure postprocess is a fixed point on normalized input.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/triangulation.rs`:
- Around line 334-336: The comment in triangulation.rs claims the large-scale
debug harness will auto-enable tracing::debug! when
DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE or DELAUNAY_DEBUG_RETRYABLE_SKIP are set,
but the harness (tests/large_scale_debug.rs around the env-var promotion logic)
only promotes a few other flags; update the harness so the env-var check that
promotes the default filter to debug also looks for
DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE and DELAUNAY_DEBUG_RETRYABLE_SKIP (or
alternatively soften the comment in triangulation.rs to remove the
automatic-wiring claim). Locate the env-var promotion block in
tests/large_scale_debug.rs and either add those two symbols to the list of flags
that trigger debug-level logging or edit the comment in triangulation.rs (the
tracing::debug! comment) to avoid promising automatic wiring.
---
Outside diff comments:
In `@src/triangulation/delaunay.rs`:
- Around line 4007-4013: The code currently calls
vertex_coords_f64(vertex).unwrap_or_default() and stores that into
ConstructionSkipSample.coords, which loses the distinction between "conversion
failed" and "empty coords"; change ConstructionSkipSample.coords to
Option<Vec<f64>> (or make it optional) and populate it with Some(coords) when
vertex_coords_f64(...) returns Ok/Some, otherwise set coords: None (or omit it)
when conversion fails; update the construction_stats.record_skip_sample call to
pass Some(coords) on success and None on failure, and adjust any consumers of
ConstructionSkipSample to handle the Option<Vec<f64>>.
- Around line 5986-6017: The current ridge-link safety net only validates
validation_cells (post-repair inserted-vertex star plus original seed_cells),
which is narrower than the set that repair_delaunay_with_flips_k2_k3 can mutate
and so can miss violations; fix by expanding the validated frontier to exactly
the cells the repair actually touched: either (preferred) call/reuse the
postcondition verifier from src/core/algorithms/flips.rs after
repair_delaunay_with_flips_k2_k3 returns, or change
repair_delaunay_with_flips_k2_k3 to return its touched frontier (or both) and
use that returned slice instead of validation_cells when checking ridge-link
invariants (reference validation_cells, repair_delaunay_with_flips_k2_k3 and the
flips postcondition verifier).
---
Duplicate comments:
In `@src/core/triangulation.rs`:
- Around line 10938-10945: Capture the returned outcome from
exhaustion_tri.insert_with_statistics (don't ignore the first tuple element) and
tighten the assertion: assert that stats.skipped() is true AND stats.attempts ==
DEFAULT_PERTURBATION_RETRIES + 1, and also assert that the captured outcome
value matches the expected final retry-exhaustion variant (match the outcome
enum/variant returned by insert_with_statistics) so the test only passes when
the final retry-exhaustion branch is exercised; locate symbols
perturbation_retry_repro_points_4d, VertexBuilder,
exhaustion_tri.insert_with_statistics, stats.skipped(), stats.attempts,
DEFAULT_PERTURBATION_RETRIES, and outcome to implement this change.
---
Nitpick comments:
In `@scripts/tests/test_postprocess_changelog.py`:
- Around line 378-411: Add an idempotency test for the MD023 normalization: add
a test method in TestIndentedHeadingNormalization that asserts
_normalize_indented_heading(" **Title**") returns the same string and/or that
calling _normalize_indented_heading on the result of a previous call is a no-op
(e.g., once = _normalize_indented_heading(" ## Correctness Fixes"); assert
_normalize_indented_heading(once) == once), and optionally add a small
full-pipeline test that writes an already-normalized CHANGELOG (with lines like
" **Correctness Fixes**") to disk, runs postprocess(f), and asserts the file
content is unchanged to ensure postprocess is a fixed point on normalized input.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: eea4e1aa-963a-4e04-b93a-84f35756dd7f
📒 Files selected for processing (7)
- CHANGELOG.md
- scripts/postprocess_changelog.py
- scripts/tests/test_postprocess_changelog.py
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
- typos.toml
✅ Files skipped from review due to trivial changes (2)
- typos.toml
- CHANGELOG.md
> /// Routed through `tracing::debug!`; enable with `RUST_LOG=debug` (the
> /// large-scale debug harness wires this up automatically when
> /// `DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE` is set).
The new debug hooks still won't surface in the large-scale harness by themselves.
These comments now promise automatic harness wiring, but tests/large_scale_debug.rs:262-292 still only promotes the default filter to debug for DELAUNAY_BULK_PROGRESS_EVERY, DELAUNAY_LARGE_DEBUG_PROGRESS_EVERY, DELAUNAY_DEBUG_SHUFFLE, and DELAUNAY_INSERT_TRACE. If a user sets only DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE or DELAUNAY_DEBUG_RETRYABLE_SKIP, the harness stays at warn and suppresses these tracing::debug! events. Please either update that env-var check or soften these docs.
Also applies to: 397-398
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/triangulation.rs` around lines 334 - 336, The comment in
triangulation.rs claims the large-scale debug harness will auto-enable
tracing::debug! when DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE or
DELAUNAY_DEBUG_RETRYABLE_SKIP are set, but the harness
(tests/large_scale_debug.rs around the env-var promotion logic) only promotes a
few other flags; update the harness so the env-var check that promotes the
default filter to debug also looks for DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE and
DELAUNAY_DEBUG_RETRYABLE_SKIP (or alternatively soften the comment in
triangulation.rs to remove the automatic-wiring claim). Locate the env-var
promotion block in tests/large_scale_debug.rs and either add those two symbols
to the list of flags that trigger debug-level logging or edit the comment in
triangulation.rs (the tracing::debug! comment) to avoid promising automatic
wiring.
Expand post-repair topology validation to check the entire triangulation whenever flips are performed. This ensures that manifold violations introduced by non-local repair sequences are caught even if they occur outside the original seed frontier. Additionally, update skip diagnostics to treat unrepresentable f64 coordinates as None rather than empty vectors. Refs: #204
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/triangulation/delaunay.rs (1)
3633-3724: ⚠️ Potential issue | 🔴 Critical
Don't downgrade hard repair failures into the 4D soft-fail path.
The D≥4 branch currently treats every `DelaunayRepairError` from `repair_delaunay_local_single_pass` as "escalate or continue". That means hard failures like `Flip(...)` / `InvalidTopology` can get logged, canonicalized, and then ignored while bulk insertion keeps mutating the same triangulation. Please restrict the soft-fail/escalation path to `NonConvergent` / `PostconditionFailed` and propagate all other variants immediately. Based on learnings: Repair paths must bound their work via explicit budgets (max_flips, max_iterations, max_cells_removed) and surface non-convergence as a typed error, never via logging.
Also applies to: 3893-3985
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triangulation/delaunay.rs` around lines 3633 - 3724, In the D≥4 Err(repair_err) branch, don't treat every DelaunayRepairError as eligible for escalation/soft-fail: inspect the repair_err enum (e.g., match on DelaunayRepairError variants) and only run try_local_repair_escalation_d_ge_4 / soft-fail handling for NonConvergent and PostconditionFailed; for all other variants (e.g., Flip(...), InvalidTopology, etc.) immediately propagate the error (return Err(repair_err)) instead of canonicalizing or extending soft_fail_seeds. Apply the same selective-variant check to the analogous block around try_local_repair_escalation_d_ge_4 in the other occurrence noted (the 3893-3985 region).
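The selective-variant propagation reads naturally as a `matches!` guard. A simplified stand-in (this `RepairError` enum is illustrative; it is not the crate's actual `DelaunayRepairError` definition):

```rust
/// Simplified stand-in for the crate's DelaunayRepairError.
#[derive(Debug, PartialEq)]
enum RepairError {
    NonConvergent,
    PostconditionFailed,
    Flip,
    InvalidTopology,
}

/// Only budget/postcondition failures are eligible for the soft-fail /
/// escalation path; everything else is a hard failure that must propagate.
fn is_soft_fail(err: &RepairError) -> bool {
    matches!(
        err,
        RepairError::NonConvergent | RepairError::PostconditionFailed
    )
}

/// Sketch of the branch shape: escalate recoverable errors, propagate
/// hard ones immediately instead of canonicalizing and continuing.
fn handle(err: RepairError) -> Result<&'static str, RepairError> {
    if is_soft_fail(&err) {
        Ok("escalate-or-continue")
    } else {
        Err(err) // Flip / InvalidTopology: stop mutating, surface the error
    }
}
```

The point is that the match on variants is exhaustive and explicit, so adding a new hard-failure variant later forces a conscious decision rather than silently joining the soft-fail bucket.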
♻️ Duplicate comments (1)
src/core/triangulation.rs (1)
4558-4564: ⚠️ Potential issue | 🟠 Major
Keep the disconnected-boundary trace payload lazy.
Line 4558 still collects `cells_to_add` into a `Vec` before `log_cavity_reduction_event` checks `trace_cavity_reduction`, so every `DisconnectedBoundary` expansion pays an allocation even when `DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE` is unset. That leaves one hot-path branch outside the intended release-mode gating.
⚡ Suggested fix:
```diff
-let added: Vec<CellKey> = cells_to_add.iter().copied().collect();
 log_cavity_reduction_event(
     trace_cavity_reduction,
     iterations,
     &conflict_cells,
-    || format!("disconnected_boundary_expand add_cells={added:?}"),
+    || {
+        let added: Vec<CellKey> =
+            cells_to_add.iter().copied().collect();
+        format!("disconnected_boundary_expand add_cells={added:?}")
+    },
 );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/triangulation.rs` around lines 4558 - 4564, The current code eagerly collects cells_to_add into added before calling log_cavity_reduction_event, causing an allocation on the hot path; instead, remove the precomputed added variable and make the payload lazy by moving the collection into the closure passed to log_cavity_reduction_event (the existing closure argument to log_cavity_reduction_event should perform the cells_to_add.iter().copied().collect() and format only when invoked), so the Vec allocation only happens if trace_cavity_reduction is enabled; update the call sites around log_cavity_reduction_event and the local cells_to_add usage accordingly.
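The lazy-payload idea is simply deferring the allocation behind a closure that only runs when tracing is enabled. A minimal stand-in (`log_event` is a hypothetical simplification of `log_cavity_reduction_event`):

```rust
/// Hypothetical stand-in for log_cavity_reduction_event: the message
/// closure is only invoked -- and therefore only allocates -- when the
/// trace flag is set. Returns the rendered message for demonstration.
fn log_event(enabled: bool, make_message: impl FnOnce() -> String) -> Option<String> {
    enabled.then(make_message)
}
```

Because the `Vec` collection and `format!` both live inside the closure, a disabled trace flag costs one branch and nothing else on the hot path.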
🧹 Nitpick comments (2)
tests/large_scale_debug.rs (2)
918-929: LGTM — nice targeted unit test for the flush helper.
Capturing into a `Vec<u8>` cleanly exercises the synchronous-write contract without depending on a live stderr, and the em-dash in the expected string matches the helper's format string exactly, so accidental ASCII-dash drift will fail the assertion. Two small, optional follow-ups if you ever feel like extending coverage:
- Add a negative case using a writer that returns `io::Error` on `write_all`/`flush` to lock in that `write_timeout_abort_message` propagates the error rather than panicking (important because the caller in `install_runtime_cap` relies on that `Result` to decide whether to emit the `tracing::debug!` fallback).
- Assert on the trailing newline explicitly (e.g. `assert!(message.ends_with('\n'))`) to make intent obvious independent of the full string match.
Neither is required.
♻️ Optional: add an error-propagation case
```diff
+#[test]
+fn test_write_timeout_abort_message_propagates_writer_errors() {
+    struct FailingWriter;
+    impl std::io::Write for FailingWriter {
+        fn write(&mut self, _: &[u8]) -> std::io::Result<usize> {
+            Err(std::io::Error::new(std::io::ErrorKind::Other, "boom"))
+        }
+        fn flush(&mut self) -> std::io::Result<()> {
+            Ok(())
+        }
+    }
+    let err = write_timeout_abort_message(FailingWriter, 1)
+        .expect_err("writer error must propagate");
+    assert_eq!(err.kind(), std::io::ErrorKind::Other);
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 918 - 929, Add a negative test that verifies write_timeout_abort_message propagates write/flush errors: implement a test (e.g., test_write_timeout_abort_message_propagates_error) that constructs a custom writer which returns an io::Error from write_all and/or flush, call write_timeout_abort_message(&mut failing_writer, 17) and assert it returns Err; reference the helper function name write_timeout_abort_message and the runtime caller install_runtime_cap in the test comment to clarify why error propagation is required. Ensure the test uses io::ErrorKind (or a concrete io::Error) to match the propagated error rather than panicking.
82-88: Optional: message is emitted twice when a tracing subscriber is active.
`tracing::error!` and the synchronous `writeln!` both target stderr in typical harness runs (the fmt subscriber uses `with_test_writer`, which writes to the captured test output / stderr), so operators will see the same timeout banner twice on abort. That's harmless — and duplication is arguably safer than the old "log-only, lose-on-abort" behavior — but if you'd prefer a single line, you can drop the `tracing::error!` (the synchronous writer already guarantees visibility) or route the `tracing` event at a lower level strictly for structured-log consumers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 82 - 88, The timeout banner is being emitted twice because write_timeout_abort_message calls tracing::error! and also writes synchronously via writeln!/writer.flush; remove the tracing::error! invocation (leave the synchronous writeln!(writer, "{message}")? and writer.flush() in write_timeout_abort_message) so the banner is only written once to stderr, or alternatively change the tracing event to a lower/structured-only target if you prefer keeping a trace event instead of a console log.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/algorithms/flips.rs`:
- Around line 439-445: The snapshot of removed cells
(snapshot_removed_cell_vertices) is being taken after new cells are inserted and
neighbors rewired, which can leave the TDS half-mutated if snapshot fails; move
the call to snapshot_removed_cell_vertices(tds, removed_cells) so it runs before
any mutating operations (before calls that insert new cells or call
tds.remove_cells_by_keys / neighbor rewiring), and if
snapshot_removed_cell_vertices returns FlipError::MissingCell or any error,
propagate that error immediately without performing mutations so invariants
checked by Tds::is_valid and DelaunayTriangulation::is_valid remain preserved.
---
Outside diff comments:
In `@src/triangulation/delaunay.rs`:
- Around line 3633-3724: In the D≥4 Err(repair_err) branch, don't treat every
DelaunayRepairError as eligible for escalation/soft-fail: inspect the repair_err
enum (e.g., match on DelaunayRepairError variants) and only run
try_local_repair_escalation_d_ge_4 / soft-fail handling for NonConvergent and
PostconditionFailed; for all other variants (e.g., Flip(...), InvalidTopology,
etc.) immediately propagate the error (return Err(repair_err)) instead of
canonicalizing or extending soft_fail_seeds. Apply the same selective-variant
check to the analogous block around try_local_repair_escalation_d_ge_4 in the
other occurrence noted (the 3893-3985 region).
---
Duplicate comments:
In `@src/core/triangulation.rs`:
- Around line 4558-4564: The current code eagerly collects cells_to_add into
added before calling log_cavity_reduction_event, causing an allocation on the
hot path; instead, remove the precomputed added variable and make the payload
lazy by moving the collection into the closure passed to
log_cavity_reduction_event (the existing closure argument to
log_cavity_reduction_event should perform the
cells_to_add.iter().copied().collect() and format only when invoked), so the Vec
allocation only happens if trace_cavity_reduction is enabled; update the call
sites around log_cavity_reduction_event and the local cells_to_add usage
accordingly.
---
Nitpick comments:
In `@tests/large_scale_debug.rs`:
- Around line 918-929: Add a negative test that verifies
write_timeout_abort_message propagates write/flush errors: implement a test
(e.g., test_write_timeout_abort_message_propagates_error) that constructs a
custom writer which returns an io::Error from write_all and/or flush, call
write_timeout_abort_message(&mut failing_writer, 17) and assert it returns Err;
reference the helper function name write_timeout_abort_message and the runtime
caller install_runtime_cap in the test comment to clarify why error propagation
is required. Ensure the test uses io::ErrorKind (or a concrete io::Error) to
match the propagated error rather than panicking.
- Around line 82-88: The timeout banner is being emitted twice because
write_timeout_abort_message calls tracing::error! and also writes synchronously
via writeln!/writer.flush; remove the tracing::error! invocation (leave the
synchronous writeln!(writer, "{message}")? and writer.flush() in
write_timeout_abort_message) so the banner is only written once to stderr, or
alternatively change the tracing event to a lower/structured-only target if you
prefer keeping a trace event instead of a console log.
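The negative test asked for in the first nitpick can be sketched as follows; `write_timeout_abort_message` below is a stand-in with the same shape as the helper in `tests/large_scale_debug.rs`, and `FailingWriter` is a hypothetical test double:

```rust
use std::io::{self, Write};

/// Test double whose write and flush always fail, for exercising
/// error propagation out of the timeout-banner helper.
struct FailingWriter;

impl Write for FailingWriter {
    fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
        Err(io::Error::new(io::ErrorKind::BrokenPipe, "write refused"))
    }
    fn flush(&mut self) -> io::Result<()> {
        Err(io::Error::new(io::ErrorKind::BrokenPipe, "flush refused"))
    }
}

/// Stand-in with the same shape as the real helper; error propagation
/// matters because the runtime caller (install_runtime_cap) aborts right
/// after this and must know whether the banner actually reached stderr.
fn write_timeout_abort_message<W: Write>(mut writer: W, max_secs: u64) -> io::Result<()> {
    writeln!(writer, "aborting after {max_secs}s timeout")?;
    writer.flush()
}

fn main() {
    let err = write_timeout_abort_message(FailingWriter, 17).unwrap_err();
    assert_eq!(err.kind(), io::ErrorKind::BrokenPipe);
}
```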
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 16f74853-823c-4159-95ad-c53c44548e21
📒 Files selected for processing (5)
- scripts/tests/test_postprocess_changelog.py
- src/core/algorithms/flips.rs
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
Distinguish between recoverable soft-fail repair errors and fatal topology/flip failures during bulk construction to prevent entering invalid states. Snapshot removed cell vertices before mutation in the flip algorithm to ensure unexpected failures abort without leaving dangling replacement cells. Additionally, improve diagnostic reliability by propagating writer failures in timeout handlers and lazily evaluating cavity-reduction logs. Refs: #204
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/algorithms/locate.rs (1)
1696-1784: ⚠️ Potential issue | 🟡 Minor — Error precedence in `extract_cavity_boundary` is now hash-map-iteration-order dependent.
The accumulation refactor continues on `facet_count >= 3` so all fans can be shrunk together, but the loop still early-returns for two other conditions encountered on a different ridge during the same scan:
- Line 1726 — `ConflictError::OpenBoundary` (when some ridge has `facet_count == 1`)
- Line 1767 — `ConflictError::InternalInconsistency` (when `facet_count == 2` but `second_facet` is `None`)
Because `ridge_map` is a `FastHashMap`, the order in which these ridges are visited is non-deterministic. If a conflict region simultaneously contains a ridge fan and an open ridge (or the inconsistency case), which error is surfaced to the caller depends on hash iteration order, not on the geometric/structural priority you would normally want (e.g. open boundary → "this cell's facet is dangling, remove it" probably should beat "extra fan cells"). Consider one of:
- Two-pass scan: in pass 1 detect any `OpenBoundary`/`InternalInconsistency` and return immediately; in pass 2 accumulate fan extras. Makes precedence deterministic and the contract crisp.
- Document explicitly in the `RidgeFan` docs (and the new `OpenBoundary` docs) that when multiple structural errors coexist in a single conflict region, only one is surfaced and its identity is non-deterministic — the caller's cavity-reduction loop is expected to call `extract_cavity_boundary` again after each fix.
Option 2 is the lighter change if the cavity-reduction loop already iterates to a fixed point, but the current behavior is worth being honest about either way.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/locate.rs` around lines 1696 - 1784, The loop in extract_cavity_boundary currently mixes early-returns for OpenBoundary and InternalInconsistency with accumulation for ridge fans (facet_count >= 3), making which error is returned depend on ridge_map iteration order; fix by making error precedence deterministic: perform a first pass over ridge_map.values() that only checks for facet_count == 1 (return OpenBoundary with open_cell) and for facet_count == 2 with missing second_facet (return InternalInconsistency), and only if that pass finds no such errors perform a second pass that performs the existing fan-accumulation logic (handling facet_count >= 3 via collect_ridge_fan_extra_cells, populating ridge_fan_extra_cells and first_ridge_fan, and building adjacency entries for facet_count == 2 using info.second_facet); keep the same error construction (ConflictError variants) and calls to log_first_ridge_fan_dump and ridge_fan_seen_cells so behavior is preserved except for deterministic error precedence.
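A minimal sketch of the two-pass precedence scan, using simplified stand-in types (the real `ridge_map` values and `ConflictError` variants in the crate carry more detail):

```rust
use std::collections::HashMap;

// Simplified stand-ins; field and variant names follow the review comment
// (facet_count, second_facet, OpenBoundary, InternalInconsistency, RidgeFan)
// but are not the crate's exact definitions.
#[derive(Debug, PartialEq)]
enum ConflictError {
    OpenBoundary { ridge: u32 },
    InternalInconsistency { ridge: u32 },
    RidgeFan { extra_cells: Vec<u32> },
}

struct RidgeInfo {
    facet_count: usize,
    second_facet: Option<u32>,
    fan_cells: Vec<u32>,
}

/// Two-pass scan: structural errors win deterministically over fans;
/// fan accumulation runs only if no such error exists anywhere in the map.
fn classify_ridges(ridges: &HashMap<u32, RidgeInfo>) -> Result<(), ConflictError> {
    // Fixed visit order inside each pass as well, so ties are deterministic too.
    let mut keys: Vec<_> = ridges.keys().copied().collect();
    keys.sort_unstable();
    // Pass 1: open boundaries and inconsistencies beat fans.
    for k in &keys {
        let info = &ridges[k];
        if info.facet_count == 1 {
            return Err(ConflictError::OpenBoundary { ridge: *k });
        }
        if info.facet_count == 2 && info.second_facet.is_none() {
            return Err(ConflictError::InternalInconsistency { ridge: *k });
        }
    }
    // Pass 2: accumulate every fan's extra cells into a single error.
    let mut extra_cells = Vec::new();
    for k in &keys {
        let info = &ridges[k];
        if info.facet_count >= 3 {
            extra_cells.extend(info.fan_cells.iter().copied());
        }
    }
    if extra_cells.is_empty() {
        Ok(())
    } else {
        Err(ConflictError::RidgeFan { extra_cells })
    }
}

fn main() {
    let mut m = HashMap::new();
    m.insert(2, RidgeInfo { facet_count: 3, second_facet: None, fan_cells: vec![10, 11] });
    m.insert(1, RidgeInfo { facet_count: 1, second_facet: None, fan_cells: vec![] });
    // The open boundary wins regardless of hash iteration order.
    assert_eq!(classify_ridges(&m), Err(ConflictError::OpenBoundary { ridge: 1 }));
}
```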
🧹 Nitpick comments (4)
src/core/algorithms/locate.rs (2)
1858-1957: Tests cover Display formatting and dedup behavior; consider one fan-aggregation case.
The three new tests exercise `InternalInconsistencySite` `Display`, the "missing" markers in the formatting helpers, and `collect_ridge_fan_extra_cells` deduplication — all good. The one gap is end-to-end coverage of the multi-fan accumulation path in `extract_cavity_boundary` (lines 1746–1784): the existing `test_extract_cavity_boundary_rejects_ridge_fan_2d` builds a single fan around a shared vertex, so it doesn't exercise the case where two distinct ridges in the same conflict region each have `facet_count >= 3`. A targeted test that constructs two disjoint ridge fans in one conflict region and asserts that `extra_cells` contains the union (and that `facet_count`/`ridge_vertex_count` come from the first-encountered fan) would lock in the behavior the new doc on `RidgeFan` (lines 211–224) promises.
Not a blocker — the unit tests on `collect_ridge_fan_extra_cells` cover the per-fan dedup logic and the accumulation loop is mechanically simple — but worth adding before relying on this in the 4D 500-point repro.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/locate.rs` around lines 1858 - 1957, Add a unit test that exercises the multi-fan accumulation path in extract_cavity_boundary by constructing a conflict region containing two distinct ridge fans (each with facet_count >= 3) so the code path that merges extra_facets runs; call extract_cavity_boundary (or drive the public behavior that triggers it), then assert that the returned extra_cells equals the union of both fans' cells (no duplicates) and that the recorded RidgeInfo/fan metadata (facet_count and ridge_vertex_count) corresponds to the first-encountered fan; reference RidgeFan, RidgeInfo, collect_ridge_fan_extra_cells and extract_cavity_boundary to locate the loop that accumulates fans and ensure the test verifies deduplication and first-fan metadata selection.
59-65: One-shot ridge-fan dump latches for the entire process lifetime.
`RIDGE_FAN_DUMP_EMITTED` is a process-global `AtomicBool` with no reset path, and `RIDGE_FAN_DUMP_ENABLED` caches the env var value on first read. The doc on `log_first_ridge_fan_dump` says "first detected ridge fan in a run", but in test binaries, benches, or long-lived harnesses that exercise insertion across many seeds, only the very first fan in the entire process will ever be logged — subsequent seeds with `DELAUNAY_DEBUG_RIDGE_FAN_ONCE=1` will silently emit nothing, even though a fresh investigator would naturally expect "one dump per repro run". For the large-scale 4D bisect harness this is fine (fresh process per invocation), but consider either:
- Tightening the doc to say "first ridge fan in the current process" (or "across the entire test binary"), or
- Switching to a `OnceLock<()>`/`AtomicUsize` keyed somewhere that callers can reset between bisect iterations if multi-iteration usage emerges.
No functional change needed if single-process usage is the only intended workflow; just make the latch semantics explicit in the doc.
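A resettable latch along the lines of the second suggestion might look like this. The names are hypothetical; the crate currently uses a plain `AtomicBool`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Illustrative resettable one-shot latch: fires at most once per
/// "generation", and `reset()` re-arms it between bisect iterations.
struct DumpLatch {
    armed_gen: AtomicUsize,
    fired_gen: AtomicUsize,
}

impl DumpLatch {
    const fn new() -> Self {
        Self {
            armed_gen: AtomicUsize::new(1),
            fired_gen: AtomicUsize::new(0),
        }
    }

    /// Returns true exactly once per generation, even across threads:
    /// fetch_max atomically claims the current generation.
    fn try_fire(&self) -> bool {
        let armed = self.armed_gen.load(Ordering::Acquire);
        self.fired_gen.fetch_max(armed, Ordering::AcqRel) < armed
    }

    /// Re-arm the latch (e.g. between bisect iterations).
    fn reset(&self) {
        self.armed_gen.fetch_add(1, Ordering::AcqRel);
    }
}

fn main() {
    let latch = DumpLatch::new();
    assert!(latch.try_fire());   // first fan in this generation: dump it
    assert!(!latch.try_fire());  // subsequent fans: suppressed
    latch.reset();
    assert!(latch.try_fire());   // next iteration gets its own dump
}
```

Using a monotonically increasing generation (rather than flipping the bool back) means multiple `reset()` calls between fires stay harmless.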
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/locate.rs` around lines 59 - 65, The current one-shot behavior is process-global: RIDGE_FAN_DUMP_EMITTED (AtomicBool) and RIDGE_FAN_DUMP_ENABLED (OnceLock<bool>) latch forever, so only the first ridge fan in the whole process is logged; update the doc on log_first_ridge_fan_dump to explicitly state "first ridge fan in the current process / test binary" OR replace the latch scheme (e.g., use OnceLock<()> or an AtomicUsize token and provide a reset hook) so callers can reset between iterations; refer to RIDGE_FAN_DUMP_EMITTED, RIDGE_FAN_DUMP_ENABLED, and log_first_ridge_fan_dump when making the change.
tests/large_scale_debug.rs (2)
111-115: Promote the flush-failure log level so it surfaces under the default filter.
`init_tracing()` defaults the filter to `warn` unless a `DELAUNAY_*` debug env var is set, so a `tracing::debug!` here is silently dropped in the common case — exactly when an operator is most likely to be staring at the terminal wondering why the timeout message went missing. Since this fallback fires only when the synchronous stderr write actually fails, it's worth surfacing at `warn!` (or `error!`) so it's visible without extra `RUST_LOG` wiring.
♻️ Proposed change

    Err(RecvTimeoutError::Timeout) => {
        if let Err(err) = write_timeout_abort_message(io::stderr().lock(), max_secs) {
    -       tracing::debug!(?err, "failed to flush timeout message before abort");
    +       tracing::warn!(?err, "failed to flush timeout message before abort");
        }
        process::abort();
    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 111 - 115, The log level for the stderr-flush failure inside the RecvTimeoutError::Timeout branch is currently tracing::debug!, which is dropped by default; change that call to tracing::warn! (or tracing::error!) so failures in write_timeout_abort_message(...) are visible without requiring DELAUNAY_* debug env vars—locate the RecvTimeoutError::Timeout arm where write_timeout_abort_message(io::stderr().lock(), max_secs) is called and replace tracing::debug!(?err, "failed to flush timeout message before abort") with tracing::warn!(?err, "failed to flush timeout message before abort") (or tracing::error! if you prefer higher severity).
540-545: Minor: `Option<[f64; D]>` Debug output is now `Some([...])`/`None`.
After switching `SkipSample::coords` to `Option<[f64; D]>`, the existing `coords={:?}` line will render as `coords=Some([…])` for the common path and `coords=None` when the underlying `coords_available` flag was false. That's serviceable, but if you want the skip-sample replay anchors to stay copy-pasteable into the existing `coords=[…]` format, consider unwrapping the `Option` for display (e.g., `coords.as_ref().map(|c| format!("{c:?}")).unwrap_or_else(|| "<unavailable>".into())`). Optional cleanup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 540 - 545, The current println of skip samples prints Option<[f64; D]> as Some(...) / None; update the formatting for summary.skip_samples so SkipSample::coords is displayed as just the array when present and a clear placeholder when absent: in the print for s (where you reference s.index, s.uuid, s.attempts, s.coords, s.error) replace direct {:?} of s.coords with logic that maps the Option to a formatted string (e.g., s.coords.as_ref().map(|c| format!("{c:?}")).unwrap_or_else(|| "<unavailable>".into())) so the output becomes coords=[...] for available coords and coords=<unavailable> otherwise.
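The suggested display fallback, extracted into a tiny helper for illustration (the name `format_coords` is hypothetical):

```rust
/// Hypothetical helper showing the suggested Option display fallback:
/// the bare array for present coords, a clear placeholder otherwise.
fn format_coords<const D: usize>(coords: Option<&[f64; D]>) -> String {
    coords
        .map(|c| format!("{c:?}"))
        .unwrap_or_else(|| "<unavailable>".into())
}

fn main() {
    // Present coords render as the plain array, copy-pasteable as before.
    assert_eq!(format_coords(Some(&[1.0, 2.0])), "[1.0, 2.0]");
    // Absent coords get an explicit placeholder instead of "None".
    assert_eq!(format_coords::<2>(None), "<unavailable>");
}
```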
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/algorithms/flips.rs`:
- Around line 3988-3992: The call to debug_ridge_context(...) runs heavy helpers
(collect_cells_around_ridge, cells_containing_vertices and building
global_cell_details) unconditionally before tracing::debug!, causing costly work
on every k=3 postcondition failure; change the call site to early-gate it with
the same runtime opt-in used by the repair-path callers (or move the opt-in
check into debug_ridge_context itself) so the expensive computations only run
when the ridge-debug env flag is enabled; update references to
debug_ridge_context, collect_cells_around_ridge, cells_containing_vertices and
the global_cell_details construction accordingly so the global scan is skipped
unless the debug flag is set.
In `@src/triangulation/delaunay.rs`:
- Around line 3448-3469: The Err branch of the match on escalation_result
currently converts every escalation error into
LocalRepairEscalationOutcome::FailedAlso, which lets hard failures be treated as
soft-fails; change that logic to only map recoverable escalation errors (e.g.,
NonConvergent and PostconditionFailed variants) to FailedAlso and to propagate
any other escalation errors outwards (return Err(escalation_err) / propagate the
error) so hard escalation failures abort the operation and do not continue
mutating the triangulation; update the match handling near "match
escalation_result" and the construction of
LocalRepairEscalationOutcome::FailedAlso accordingly (keep
canonicalize_after_bulk_repair/use of stats intact for the Ok branch).
- Around line 1501-1511: The helper vertex_coords_f64 currently converts coords
with ToPrimitive::to_f64 but can return non-finite values; change it to return
None if any coordinate fails conversion or is not finite. In function
vertex_coords_f64 (and referencing Vertex::point()/coords()), map each
coordinate to ToPrimitive::to_f64(), then short-circuit to None if the Option is
None or if the f64 is not finite (use is_finite), and only if all coordinates
succeed return Some(Vec<f64>); do not substitute NaN/INF or unwrap_or any
sentinel values.
- Around line 6115-6141: The current validate_ridge_links_after_repair
unconditionally collects all cell keys and validates the whole TDS whenever any
flips occurred, causing O(total_cells) work per repaired insertion; change it to
only perform a full-TDS scan when the repair function indicates a
reseed/fallback occurred (e.g., a boolean or enum returned from
repair_delaunay_with_flips_k2_k3), otherwise validate only the mutated frontier
returned by the repair call (thread the final mutated frontier out of
repair_delaunay_with_flips_k2_k3). Also ensure repair paths
(repair_delaunay_with_flips, repair_facet_oversharing, delaunayize_by_flips)
enforce explicit budgets (max_flips, max_iterations, max_cells_removed) and
return a typed non-convergence error so callers can decide to trigger a global
validation/rescue path instead of always scanning all cells.
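The non-finite filtering asked of `vertex_coords_f64` above can be sketched as below. The real helper uses `ToPrimitive::to_f64` from num-traits; here a stdlib `f32`→`f64` conversion stands in so the sketch is self-contained:

```rust
/// Sketch of the suggested contract: return None if any coordinate
/// fails conversion or is non-finite; never substitute NaN/INF sentinels.
fn coords_to_f64(coords: &[f32]) -> Option<Vec<f64>> {
    coords
        .iter()
        .map(|&c| {
            let v = f64::from(c);
            // Short-circuit on NaN / infinity.
            v.is_finite().then_some(v)
        })
        // Option implements FromIterator: a single None aborts the collect.
        .collect()
}

fn main() {
    assert_eq!(coords_to_f64(&[1.0, 2.5]), Some(vec![1.0, 2.5]));
    assert_eq!(coords_to_f64(&[1.0, f32::NAN]), None);
}
```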
---
Outside diff comments:
In `@src/core/algorithms/locate.rs`:
- Around line 1696-1784: The loop in extract_cavity_boundary currently mixes
early-returns for OpenBoundary and InternalInconsistency with accumulation for
ridge fans (facet_count >= 3), making which error is returned depend on
ridge_map iteration order; fix by making error precedence deterministic: perform
a first pass over ridge_map.values() that only checks for facet_count == 1
(return OpenBoundary with open_cell) and for facet_count == 2 with missing
second_facet (return InternalInconsistency), and only if that pass finds no such
errors perform a second pass that performs the existing fan-accumulation logic
(handling facet_count >= 3 via collect_ridge_fan_extra_cells, populating
ridge_fan_extra_cells and first_ridge_fan, and building adjacency entries for
facet_count == 2 using info.second_facet); keep the same error construction
(ConflictError variants) and calls to log_first_ridge_fan_dump and
ridge_fan_seen_cells so behavior is preserved except for deterministic error
precedence.
---
Nitpick comments:
In `@src/core/algorithms/locate.rs`:
- Around line 1858-1957: Add a unit test that exercises the multi-fan
accumulation path in extract_cavity_boundary by constructing a conflict region
containing two distinct ridge fans (each with facet_count >= 3) so the code path
that merges extra_facets runs; call extract_cavity_boundary (or drive the public
behavior that triggers it), then assert that the returned extra_cells equals the
union of both fans' cells (no duplicates) and that the recorded RidgeInfo/fan
metadata (facet_count and ridge_vertex_count) corresponds to the
first-encountered fan; reference RidgeFan, RidgeInfo,
collect_ridge_fan_extra_cells and extract_cavity_boundary to locate the loop
that accumulates fans and ensure the test verifies deduplication and first-fan
metadata selection.
- Around line 59-65: The current one-shot behavior is process-global:
RIDGE_FAN_DUMP_EMITTED (AtomicBool) and RIDGE_FAN_DUMP_ENABLED (OnceLock<bool>)
latch forever, so only the first ridge fan in the whole process is logged;
update the doc on log_first_ridge_fan_dump to explicitly state "first ridge fan
in the current process / test binary" OR replace the latch scheme (e.g., use
OnceLock<()> or an AtomicUsize token and provide a reset hook) so callers can
reset between iterations; refer to RIDGE_FAN_DUMP_EMITTED,
RIDGE_FAN_DUMP_ENABLED, and log_first_ridge_fan_dump when making the change.
In `@tests/large_scale_debug.rs`:
- Around line 111-115: The log level for the stderr-flush failure inside the
RecvTimeoutError::Timeout branch is currently tracing::debug!, which is dropped
by default; change that call to tracing::warn! (or tracing::error!) so failures
in write_timeout_abort_message(...) are visible without requiring DELAUNAY_*
debug env vars—locate the RecvTimeoutError::Timeout arm where
write_timeout_abort_message(io::stderr().lock(), max_secs) is called and replace
tracing::debug!(?err, "failed to flush timeout message before abort") with
tracing::warn!(?err, "failed to flush timeout message before abort") (or
tracing::error! if you prefer higher severity).
- Around line 540-545: The current println of skip samples prints Option<[f64;
D]> as Some(...) / None; update the formatting for summary.skip_samples so
SkipSample::coords is displayed as just the array when present and a clear
placeholder when absent: in the print for s (where you reference s.index,
s.uuid, s.attempts, s.coords, s.error) replace direct {:?} of s.coords with
logic that maps the Option to a formatted string (e.g.,
s.coords.as_ref().map(|c| format!("{c:?}")).unwrap_or_else(||
"<unavailable>".into())) so the output becomes coords=[...] for available coords
and coords=<unavailable> otherwise.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 5608f51a-03b9-42e5-a998-cf14878dda5f
📒 Files selected for processing (5)
- src/core/algorithms/flips.rs
- src/core/algorithms/locate.rs
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
✅ Files skipped from review due to trivial changes (1)
- src/core/triangulation.rs
Refine Delaunay repair to track the specific set of cells created by flips
("touched cells") and distinguish between local repairs and full-TDS
reseeds. This allows topology validation to target only modified regions
whenever possible, rather than scanning the entire triangulation.
Additionally, harden internal consistency checks during cavity
extraction, filter non-finite coordinates in vertex diagnostics, and
ensure hard repair errors propagate during high-dimensional escalation
rather than being suppressed as soft failures.
Refs: #204
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/algorithms/flips.rs (2)
4417-4438: ⚠️ Potential issue | 🟡 Minor — `DELAUNAY_REPAIR_DEBUG_SUMMARY` is still silent under the debug-level workflow.
This hook is opt-in via `DELAUNAY_REPAIR_DEBUG_SUMMARY`, but it emits with `tracing::trace!`. The new harness behavior only raises filtering to `debug`, so setting the env var alone will not surface the summary unless callers also opt into trace-level logs. As per coding guidelines: "Use `tracing::{debug,info,warn,error}!` for committed diagnostics across production code, tests, and benchmarks, especially for library/runtime code, non-trivial test diagnostics, and debugging of numerical or topological invariants".
Suggested fix

    - tracing::trace!(
    + tracing::debug!(

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 4417 - 4438, The summary log is gated by DELAUNAY_REPAIR_DEBUG_SUMMARY but currently uses tracing::trace!, so it remains hidden under the harness's debug-level filter; change the tracing::trace! invocation that emits the repair summary (the block referencing label, config.attempt, config.queue_order, stats.*, diagnostics.* and "repair summary") to tracing::debug! so the summary is visible when the env var is set under the debug-level workflow.
3143-3171: ⚠️ Potential issue | 🟠 Major — Make the 2D seeded path fall back when every seed cell is stale.
Line 3145 still treats `Some(seed_cells)` as a successful local seed even when every key has already been deleted, so the 2D attempt can exit without examining any facets. The D≥3 path now falls back to global seeding via `seed_repair_queues`, so this leaves 2D repair behavior inconsistent and can turn stale `repair_seed_cells` into a silent no-op.
Suggested fix

    - let mut last_applied_flip: Option<LastAppliedFlip> = None;
    - let mut touched_cells = CellKeyBuffer::new();
    - let used_full_reseed = seed_cells.is_none();
    + let mut last_applied_flip: Option<LastAppliedFlip> = None;
    + let mut touched_cells = CellKeyBuffer::new();
    + let mut used_full_reseed = seed_cells.is_none();
    @@
    - if let Some(seeds) = seed_cells {
    + if let Some(seeds) = seed_cells {
    +     let mut present = 0usize;
          for &cell_key in seeds {
    +         if !tds.contains_cell(cell_key) {
    +             continue;
    +         }
    +         present += 1;
              enqueue_cell_facets(
                  tds,
                  cell_key,
    @@
                  &mut stats,
              )?;
          }
    +     if present == 0 && !seeds.is_empty() {
    +         used_full_reseed = true;
    +         for facet in AllFacetsIter::new(tds) {
    +             let handle = FacetHandle::new(facet.cell_key(), facet.facet_index());
    +             enqueue_facet(
    +                 tds,
    +                 handle,
    +                 &mut queue,
    +                 &mut queued,
    +                 &mut facet_handles,
    +                 &mut stats,
    +             );
    +         }
    +     }
      } else {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 3143 - 3171, When seed_cells is Some(...) we currently assume seeding succeeded even if every seed was stale; after the loop that calls enqueue_cell_facets for each seed, check whether any facets were actually enqueued (e.g., queue.is_empty() or queued/facet_handles unchanged) and if none were added, fall back to the global seeding path used in the else-branch: iterate AllFacetsIter::new(tds) and call enqueue_facet(...) with the same queue, queued, facet_handles and stats so 2D repair behaves like the D≥3 code path when all seeds are stale; also ensure used_full_reseed is set appropriately when you trigger this fallback.
🧹 Nitpick comments (1)
tests/large_scale_debug.rs (1)
201-228: Consider degrading to `coords: None` on dimension mismatch instead of dropping the sample.
When `coords_available` is true but `try_into::<[f64; D]>` fails, the whole `SkipSample` is discarded via `return None`. That loses the `index`/`uuid`/`error` — i.e., the very anchors used to replay the skipped vertex — for what is already a "this shouldn't happen" bookkeeping bug between `ConstructionStatistics` and `D`. Preserving the sample with `coords: None` lets `print_insertion_summary` (Lines 543-549) still show `coords=<unavailable>` alongside the uuid/error.
♻️ Proposed refactor

    - .filter_map(|s| {
    + .map(|s| {
          let coords = if s.coords_available {
    -         let Ok(coords) = s.coords.as_slice().try_into() else {
    -             tracing::warn!(
    -                 index = s.index,
    -                 uuid = %s.uuid,
    -                 coords_len = s.coords.len(),
    -                 expected_dim = D,
    -                 "dropping skip sample due to coordinate dimension mismatch"
    -             );
    -             return None;
    -         };
    -         Some(coords)
    +         match s.coords.as_slice().try_into() {
    +             Ok(coords) => Some(coords),
    +             Err(_) => {
    +                 tracing::warn!(
    +                     index = s.index,
    +                     uuid = %s.uuid,
    +                     coords_len = s.coords.len(),
    +                     expected_dim = D,
    +                     "skip sample coordinates dropped due to dimension mismatch"
    +                 );
    +                 None
    +             }
    +         }
          } else {
              None
          };
    -     Some(SkipSample {
    +     SkipSample {
              index: s.index,
              uuid: s.uuid,
              coords,
              attempts: s.attempts,
              error: s.error.clone(),
    -     })
    +     }
      })
      .collect();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/large_scale_debug.rs` around lines 201 - 228, The current filter_map drops an entire SkipSample when coords_available is true but s.coords.as_slice().try_into::<[f64; D]>() fails; instead, keep the SkipSample and set coords: None so index/uuid/error/attempts are preserved for replay and print_insertion_summary. In the closure over stats.skip_samples, replace the early return None on try_into failure with logic that logs the warning (tracing::warn! as is) and sets coords = None, then continue creating and returning Some(SkipSample { index: s.index, uuid: s.uuid, coords, attempts: s.attempts, error: s.error.clone() }). Ensure you still respect the case where coords_available is false (coords stays None) and that the rest of the fields are unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/triangulation/delaunay.rs`:
- Around line 3448-3459: The Ok(stats) branch for escalation currently only
calls self.canonicalize_after_bulk_repair() when stats.flips_performed > 0,
which can leave the TDS in a non-canonical state after a prior failed pass;
change the logic in the match arm handling escalation_result::Ok(stats) (the
LocalRepairEscalationOutcome::Succeeded path) to always call
self.canonicalize_after_bulk_repair() unconditionally (remove the
flips_performed check) and propagate any error (i.e., return Err or use the
existing ? operator) so canonicalization always runs after a successful
escalation.
- Around line 3490-3498: map_hard_repair_error currently maps every
DelaunayRepairError to TriangulationConstructionError::GeometricDegeneracy which
causes build_with_shuffled_retries to keep retrying deterministic hard failures;
change map_hard_repair_error to inspect the DelaunayRepairError and only map
true geometric degeneracies to
TriangulationConstructionError::GeometricDegeneracy, while mapping
deterministic/non-retryable repair failures to the explicit non-retryable
construction error variant (so build_with_shuffled_retries will abort
immediately); update the function handling of DelaunayRepairError cases in
map_hard_repair_error and return the appropriate
DelaunayTriangulationConstructionError variant for non-retryable faults instead
of always using GeometricDegeneracy.
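The retryable-versus-fatal split described in the `map_hard_repair_error` comment can be sketched like this; both enums below are illustrative stand-ins, not the crate's real error types, which carry more variants and payloads:

```rust
// Illustrative stand-ins; the crate's real error enums differ.
#[derive(Debug)]
enum DelaunayRepairError {
    GeometricDegeneracy, // retryable under a reshuffled insertion order
    NonConvergent,       // deterministic: retrying won't help
    InvalidTopology,     // deterministic hard failure
}

#[derive(Debug, PartialEq)]
enum ConstructionError {
    GeometricDegeneracy,        // build_with_shuffled_retries may retry
    NonRetryable(&'static str), // abort immediately
}

/// Map only true geometric degeneracies to the retryable variant;
/// everything deterministic becomes a non-retryable construction error.
fn map_hard_repair_error(err: &DelaunayRepairError) -> ConstructionError {
    match err {
        DelaunayRepairError::GeometricDegeneracy => ConstructionError::GeometricDegeneracy,
        DelaunayRepairError::NonConvergent => {
            ConstructionError::NonRetryable("non-convergent repair")
        }
        DelaunayRepairError::InvalidTopology => {
            ConstructionError::NonRetryable("invalid topology")
        }
    }
}

fn main() {
    assert_eq!(
        map_hard_repair_error(&DelaunayRepairError::GeometricDegeneracy),
        ConstructionError::GeometricDegeneracy
    );
}
```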
---
Outside diff comments:
In `@src/core/algorithms/flips.rs`:
- Around line 4417-4438: The summary log is gated by
DELAUNAY_REPAIR_DEBUG_SUMMARY but currently uses tracing::trace!, so it remains
hidden under the harness's debug-level filter; change the tracing::trace!
invocation that emits the repair summary (the block referencing label,
config.attempt, config.queue_order, stats.*, diagnostics.* and "repair summary")
to tracing::debug! so the summary is visible when the env var is set under the
debug-level workflow.
- Around line 3143-3171: When seed_cells is Some(...) we currently assume
seeding succeeded even if every seed was stale; after the loop that calls
enqueue_cell_facets for each seed, check whether any facets were actually
enqueued (e.g., queue.is_empty() or queued/facet_handles unchanged) and if none
were added, fall back to the global seeding path used in the else-branch:
iterate AllFacetsIter::new(tds) and call enqueue_facet(...) with the same queue,
queued, facet_handles and stats so 2D repair behaves like the D≥3 code path when
all seeds are stale; also ensure used_full_reseed is set appropriately when you
trigger this fallback.
---
Nitpick comments:
In `@tests/large_scale_debug.rs`:
- Around line 201-228: The current filter_map drops an entire SkipSample when
coords_available is true but s.coords.as_slice().try_into::<[f64; D]>() fails;
instead, keep the SkipSample and set coords: None so index/uuid/error/attempts
are preserved for replay and print_insertion_summary. In the closure over
stats.skip_samples, replace the early return None on try_into failure with logic
that logs the warning (tracing::warn! as is) and sets coords = None, then
continue creating and returning Some(SkipSample { index: s.index, uuid: s.uuid,
coords, attempts: s.attempts, error: s.error.clone() }). Ensure you still
respect the case where coords_available is false (coords stays None) and that
the rest of the fields are unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: a77defb9-6cbe-4662-93ad-6d9438258a9d
📒 Files selected for processing (4)
- src/core/algorithms/flips.rs
- src/core/algorithms/locate.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
✅ Files skipped from review due to trivial changes (1)
- src/core/algorithms/locate.rs
Optimize duplicate detection during repair by using a hash set for touched cells. Refine error handling to distinguish between retryable geometric degeneracies and fatal internal inconsistencies, ensuring that hard failures like orientation canonicalization or verification errors abort construction correctly. Additionally, expose a dedicated repair module in the prelude and fix a coordinate mismatch bug in large-scale debug diagnostics. Refs: #204
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/algorithms/flips.rs (1)
3177-3201: ⚠️ Potential issue | 🟠 Major — Don't turn an explicit empty seed set into a global 2D repair.
On Lines 3188-3201, `Some(&[])` falls back to `AllFacetsIter::new(tds)` because the guard is only `queue.is_empty()`. That makes the 2D path inconsistent with `seed_repair_queues()`, where an empty slice means "do not reseed", and it can unexpectedly run a whole-TDS repair from a caller that intentionally passed no local frontier.
Suggested fix

    - if queue.is_empty() {
    + if queue.is_empty() && !seeds.is_empty() {
          used_full_reseed = true;
          for facet in AllFacetsIter::new(tds) {
              let handle = FacetHandle::new(facet.cell_key(), facet.facet_index());
              enqueue_facet(
                  tds,

As per coding guidelines: "Repair paths (`repair_delaunay_with_flips`, `repair_facet_oversharing`, `delaunayize_by_flips`) must bound their work via explicit budgets (`max_flips`, `max_iterations`, `max_cells_removed`) and surface non-convergence as a typed error — never by logging and proceeding".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/algorithms/flips.rs` around lines 3177 - 3201, The code treats Some(&[]) like None and falls back to a global reseed; change the guard so a full TDS reseed only happens when seed_cells is None (not when Some(empty)). Concretely, replace the current check that triggers AllFacetsIter::new(tds) (the block that calls enqueue_facet) with a condition that requires seed_cells.is_none() && queue.is_empty(), so callers who pass Some(&[]) do not trigger a whole-TDS repair; keep references to enqueue_cell_facets, enqueue_facet, AllFacetsIter::new, and ensure higher-level repair functions (repair_delaunay_with_flips / repair_facet_oversharing / delaunayize_by_flips) continue to enforce explicit budgets and return a non-convergence error rather than silently proceeding.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/algorithms/flips.rs`:
- Around line 2021-2040: The current flow only records new_cells via
record_touched_cells and blindly forwards outcome.touched_cells in
repair_run_from_attempt, so full-reseed runs keep a local frontier; modify
repair_run_from_attempt (or the code that builds the RepairAttemptOutcome) so
that if outcome.used_full_reseed is true you replace outcome.touched_cells with
a materialized snapshot of the triangulation’s full frontier (call
tds.cell_keys() and collect/clone those keys into the
DelaunayRepairRun.touched_cells) before constructing DelaunayRepairRun; ensure
you use the same CellKey type and preserve stats and used_full_reseed fields.
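The touched-cells fix described above amounts to one branch at the end of a repair run: if the run escalated to a full reseed, the reported frontier must be a snapshot of every cell, not whatever the local attempt recorded. A minimal sketch, with `RepairAttemptOutcome`, `finish_repair_run`, and `all_cell_keys` as invented stand-ins for the real types (`DelaunayRepairRun`, `tds.cell_keys()`):

```rust
// Sketch: a full-reseed run replaces its partial touched-cell frontier
// with a materialized snapshot of the whole triangulation.
#[derive(Debug)]
struct RepairAttemptOutcome {
    touched_cells: Vec<u64>, // cells recorded during the local attempt
    used_full_reseed: bool,
}

// Stand-in for `tds.cell_keys()`: every cell key in the triangulation.
fn all_cell_keys() -> Vec<u64> {
    (0..10).collect()
}

fn finish_repair_run(mut outcome: RepairAttemptOutcome) -> Vec<u64> {
    if outcome.used_full_reseed {
        // A local frontier would under-report what the run actually
        // touched; snapshot the full set for post-repair validation.
        outcome.touched_cells = all_cell_keys();
    }
    outcome.touched_cells
}

fn main() {
    let local = RepairAttemptOutcome { touched_cells: vec![2, 5], used_full_reseed: false };
    assert_eq!(finish_repair_run(local), vec![2, 5]);

    let full = RepairAttemptOutcome { touched_cells: vec![2, 5], used_full_reseed: true };
    assert_eq!(finish_repair_run(full).len(), 10);
    println!("frontier snapshot ok");
}
```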
---
Outside diff comments:
In `@src/core/algorithms/flips.rs`:
- Around line 3177-3201: The code treats Some(&[]) like None and falls back to a
global reseed; change the guard so a full TDS reseed only happens when
seed_cells is None (not when Some(empty)). Concretely, replace the current check
that triggers AllFacetsIter::new(tds) (the block that calls enqueue_facet) with
a condition that requires seed_cells.is_none() && queue.is_empty(), so callers
who pass Some(&[]) do not trigger a whole-TDS repair; keep references to
enqueue_cell_facets, enqueue_facet, AllFacetsIter::new, and ensure higher-level
repair functions (repair_delaunay_with_flips / repair_facet_oversharing /
delaunayize_by_flips) continue to enforce explicit budgets and return a
non-convergence error rather than silently proceeding.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: a59be2bd-2191-4fc4-992b-fceb137c11ad
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- Cargo.toml
- src/core/algorithms/flips.rs
- src/lib.rs
- src/triangulation/delaunay.rs
- src/triangulation/delaunayize.rs
- tests/delaunayize_workflow.rs
- tests/large_scale_debug.rs
- tests/triangulation_builder.rs
✅ Files skipped from review due to trivial changes (4)
- tests/delaunayize_workflow.rs
- tests/triangulation_builder.rs
- Cargo.toml
- src/triangulation/delaunayize.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib.rs
…reseed

Update Delaunay repair to correctly populate the touched-cell buffer with every cell in the triangulation when a full reseed occurs. Previously, local repair attempts that fell back to a full reseed could return an incomplete frontier for post-repair validation. Additionally, remove the implicit fallback to full reseed when provided with empty local seeds, treating an empty seed slice as a bounded no-op.

Refs: #204
Fixed, but CodeRabbit is time-limited so I didn't re-run the review.
- Thread cavity-touched cells through insertion as `repair_seed_cells` so post-insertion local Delaunay repair widens its frontier beyond the inserted vertex star; cells shrunk out of the conflict region during cavity reduction now participate in the next repair pass.
- Accumulate ridge-fan extras across every fan in a conflict region before returning `RidgeFan`, letting one cavity-reduction step shrink all detected fans at once instead of peeling them iteration by iteration.
- Add release-visible diagnostic hooks routed through `tracing::debug!`: `DELAUNAY_BULK_PROGRESS_EVERY` for periodic batch-construction progress, `DELAUNAY_DEBUG_RETRYABLE_SKIP` for retryable conflict-region skip traces, `DELAUNAY_DEBUG_CAVITY_REDUCTION_ONCE` for the first cavity-reduction chain, `DELAUNAY_DEBUG_RIDGE_FAN_ONCE` for the first detected ridge fan, and `DELAUNAY_REPAIR_DEBUG_POSTCONDITION_FACET` / `DELAUNAY_REPAIR_DEBUG_RIDGE_MIN_MULTIPLICITY` for repair postcondition debugging.
- Thread `last_applied_flip` through repair postcondition verification so unresolved k=2 facet and ridge snapshots can relate the violating local star to the immediately preceding flip.
- Replace `ConflictError::InternalInconsistency { context: String }` with a typed `InternalInconsistencySite` enum carrying structured indices and counts, so callers can `matches!` on specific sites instead of parsing prose.
- Generalize the large-scale incremental prefix bisect over `const D`, add a 4D counterpart targeting the seeded 500-point repro (`0xD225_B8A0_7E27_4AE6`), and expose it via `just debug-large-scale-4d-incremental-bisect`.
- Switch the large-scale debug just recipes to `--release` and document the 2026-04-23 re-verification: historical 35-point 3D (fix: 3D flip-repair cycles at 35+ vertices (conflict-region completeness) #306) and 100-point 4D (fix: 4D negative-orientation cell during batch construction (88% vertex skip rate) #307) correctness repros now pass, while a 500-point 4D seed still fails all shuffled retries with `Ridge fan detected: 4 facets share ridge with 3 vertices`.
- Default the tracing level to `debug` when any of the new release-visible env vars are present so library-side `tracing::debug!` events surface without extra `RUST_LOG` wiring.
- Update `test_perturbation_retry_and_exhaustion_4d` and `test_perturbation_retry_seeded_branch_4d` to iterate over 50 seeds so the retry-path assertions stay robust to insertion-path improvements that make individual well-conditioned seeds less likely to trigger retries.
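The payoff of replacing a stringly-typed `context` with a structured site enum can be shown with a minimal sketch. The variant names and shapes below are invented for illustration and do not mirror the crate's actual `InternalInconsistencySite`:

```rust
// Sketch: matching on a typed inconsistency site instead of parsing
// prose. With `context: String`, callers would need fragile substring
// checks; with an enum, `matches!` does structural dispatch.
#[derive(Debug)]
enum InternalInconsistencySite {
    MissingNeighbor { cell: usize, facet: u8 },
    CavityCellCount { expected: usize, actual: usize },
}

fn is_retryable(site: &InternalInconsistencySite) -> bool {
    // Pattern-match on the specific site; fields stay available for
    // structured diagnostics without any string parsing.
    matches!(site, InternalInconsistencySite::CavityCellCount { .. })
}

fn main() {
    let a = InternalInconsistencySite::MissingNeighbor { cell: 4, facet: 2 };
    let b = InternalInconsistencySite::CavityCellCount { expected: 6, actual: 5 };
    assert!(!is_retryable(&a));
    assert!(is_retryable(&b));
    println!("typed site matching ok");
}
```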