Conversation
The public transposition attribute on DimShuffle was renamed to _transposition in PyTensor v3, but the dropped input dimensions are already exposed directly via DimShuffle.drop.
PyTensor v3 no longer carries the device flag (legacy GPU support via the device knob was dropped), so the skip marker now errors out at collection time. The test runs fine on CPU, which is now the only supported backend.
The pytensor.compile.function submodule was removed in PyTensor v3; the function is still re-exported from the top-level pytensor namespace.
The trace setup compiled function uses trust_input=True, which under PyTensor v3 strictly requires the storage to hold an ndarray for 0-d inputs. The ADVI init paths in init_nuts produce per-chain initial points by indexing the variational MultiTrace, which yields numpy scalars rather than 0-d ndarrays for scalar RVs. Wrap with np.asarray when constructing initial_points, and likewise when bootstrapping the NDArray trace inside Approximation.sample. The BaseTrace.point return type is loosened to dict[str, Any] to make this contract explicit: callers needing strict ndarrays must wrap.
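A minimal sketch of the distinction the wrapping addresses (the `point`/`initial_point` names here are hypothetical, not the codebase's): indexing a 1-d numpy array yields a numpy scalar, not a 0-d ndarray, and np.asarray restores the strict ndarray contract.

```python
import numpy as np

# Indexing a 1-d array yields a numpy scalar (np.float64), not a 0-d ndarray.
draws = np.array([0.1, 0.2, 0.3])
scalar = draws[1]             # np.float64 -- fails a strict ndarray check
wrapped = np.asarray(scalar)  # true 0-d ndarray, shape ()

assert not isinstance(scalar, np.ndarray)
assert isinstance(wrapped, np.ndarray) and wrapped.ndim == 0

# Hypothetical analogue of building initial points from trace values:
# every scalar value is wrapped so strict consumers receive real ndarrays.
point = {"mu": draws[0], "sigma": draws[2]}
initial_point = {name: np.asarray(val) for name, val in point.items()}
assert all(isinstance(v, np.ndarray) for v in initial_point.values())
```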
PyTensor v3 emits a DeprecationWarning from RandomVariable.__call__ unless the caller passes return_next_rng=True. Distribution.dist now always invokes the underlying op via that opt-in API (so the warning never fires from this code path) and exposes a new return_next_rng parameter, defaulting to False, that gives callers explicit access to the (next_rng, rv) tuple. This replaces the awkward .owner.outputs / .make_node().outputs dance previously used internally to grab the next-rng output from RV calls.
Pass return_next_rng=True to every RandomVariable / XRV call that PyTensor v3 emits a "RandomVariable Ops will stop hiding the rng output" deprecation for. Two flavours of call site are covered:
* Sites that already wanted the next-rng output and were doing the awkward .owner.outputs unpacking now take the tuple directly from the new API.
* Sites that did not capture the next-rng (DimDistribution.dist's XRV path, Empirical's integers sampler, the initial-point jitter, and the various pt.random.<dist>(...) call sites scattered across the distribution implementations) now spell the discard explicitly via ``_, rv = ...(..., return_next_rng=True)``.

SymbolicRandomVariable calls (e.g. PrecisionMvNormalRV.rv_op) keep their .owner.outputs access, since the kwarg is specific to plain RandomVariable / XRV Ops.
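The two call-site flavours can be sketched with a pure-Python stand-in (fake_rv is hypothetical; real call sites go through PyTensor's RandomVariable.__call__, and the real outputs are symbolic variables, not tuples):

```python
# Stand-in for a RandomVariable-like op: opting in via return_next_rng=True
# returns the (next_rng, rv) pair instead of hiding the rng output.
def fake_rv(loc, *, rng=0, return_next_rng=False):
    next_rng = rng + 1               # placeholder for the advanced rng
    rv = ("draw", loc, rng)          # placeholder for the drawn variable
    if return_next_rng:
        return next_rng, rv
    return rv                        # legacy behaviour: rng output hidden

# Flavour 1: the call site wants the next rng -- take the tuple directly,
# no .owner.outputs unpacking needed.
next_rng, rv = fake_rv(1.5, rng=7, return_next_rng=True)

# Flavour 2: the call site does not need the next rng -- discard explicitly.
_, rv2 = fake_rv(2.5, rng=7, return_next_rng=True)
```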
PyTensor v3 also emits a "Calling a RandomVariable without an explicit rng" DeprecationWarning. This commit threads an explicit rng (a fresh shared default_rng) through the few entry points where the rng was being left implicit:
* Distribution.dist / DimDistribution.dist (via the _call_rv_op helper and the analogous DimDistribution path).
* change_rv_size in shape_utils, which used to rebuild a resized RV via rv_node.op(*params, size=new_size) without passing an rng.
* The initial-point jitter uniform call.
* CustomDist.rv_op, which now goes through _call_rv_op so the same rng-injection logic applies to ad-hoc CustomDistRV instances.
* The dims xrv_op classmethods that wrap a core RV; they now forward return_next_rng (and any other kwargs) to the underlying as_xrv call, so DimDistribution.dist's opt-in to return_next_rng=True works for them too.

Censored returns (None, rv) when return_next_rng is set, since it has no rng of its own.
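The payoff of passing an rng explicitly rather than leaving it implicit can be illustrated with plain numpy (make_jittered_start is a hypothetical analogue of the initial-point jitter call, not the codebase's function):

```python
import numpy as np

# Hypothetical sketch: threading an explicit generator through an entry
# point, instead of relying on an implicit global rng, makes the draw
# reproducible and keeps the rng state visible to the caller.
def make_jittered_start(mean, rng):
    # analogue of the initial-point jitter uniform call with explicit rng
    return mean + rng.uniform(-1.0, 1.0)

rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)

# Same seed, same explicit rng -> identical jittered start points.
assert make_jittered_start(0.0, rng_a) == make_jittered_start(0.0, rng_b)
```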
The legacy arviz package's dict_to_dataset and from_dict have different signatures from the arviz_base ones that PyMC has been migrating to. Switch the remaining call sites in `pymc/backends/arviz.py`, `pymc/sampling/mcmc.py`, and the `tests/stats/test_log_density.py` test fixtures to import from `arviz_base` directly so the kwargs (`inference_library`, `sample_dims`) line up.
PyTensor v3 deprecated assignment to SharedVariable.default_update without offering a replacement: rng updates should now be threaded through pytensor.function's updates argument or inferred from the graph by collect_default_updates. Remove every remaining call site that mutated default_update:
* change_rv_size no longer tries to replicate the old rng's default_update on the resized RV.
* The Scan logprob rewriter relies on the inferred updates returned by the surrounding construct_scan call instead of mutating each inner rng.
* collect_default_updates no longer respects a user-provided input_rng.default_update.
* The jax fallback no longer rejects shared variables with a default_update set (it now only rejects shared RandomTypes).
* The corresponding test paths are dropped, and test_change_rv_size_default_update is removed entirely, as it exclusively exercised the deprecated mechanism.
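A pure-Python analogue of the updates mechanism (all names here are hypothetical; in PyTensor the updates mapping is passed to pytensor.function and applied to shared variables after each call):

```python
# Sketch: instead of mutating a shared variable's deprecated default_update
# attribute, the new-state expression is passed explicitly as an update that
# the compiled function applies after every call.
def compile_fn(body, shared, updates):
    def fn():
        out = body(shared)
        for var, new_value in updates.items():
            shared[var] = new_value(shared)  # applied after each call
        return out
    return fn

state = {"rng": 0}
step = compile_fn(
    body=lambda s: s["rng"] * 10,             # "draw" depending on rng state
    shared=state,
    updates={"rng": lambda s: s["rng"] + 1},  # explicit update, no default_update
)
results = [step(), step(), step()]  # rng state advances between calls
```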
Codecov Report ❌ Patch coverage is …

Additional details and impacted files:

@@          Coverage Diff          @@
##             v6    #8246   +/-   ##
=====================================
  Coverage      ?   84.83%
=====================================
  Files         ?      125
  Lines         ?    20133
  Branches      ?        0
=====================================
  Hits          ?    17080
  Misses        ?     3053
  Partials      ?        0
Took a look at the compiled logp+dlogp, and we pay some price for the whole matrix construction. [Graph screenshots: full logp+dlogp graph; 1. diagonal gradient routed through an …]

With a few general rewrites, got it down to this form: [graph screenshot]. So as good as I can think of.
Contains commits from #8243
Similar idea to #7380 (but this one is actually simpler). Almost the same rewrite as LKJCholeskyCov, except here we unconstrain to the full dense matrix.