[None][feat] Optimize nemotron-h from python level #13032
Wanli-Jiang merged 1 commit into NVIDIA:main
Conversation
Force-pushed db95bb7 to f9beed0
📝 Walkthrough
Two changes optimize tensor-handling logic: one simplifies the routing-eligibility conditions in a fused MoE implementation by removing an edge-case constraint, and another optimizes the Mamba2 decode path by pre-computing and caching expanded tensors rather than recomputing them on every call.
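The Mamba2 change described above follows a common pattern: hoist per-call `repeat` expansions into a one-time post-load step. A minimal sketch of that pattern is below; the class, method, and attribute names are illustrative assumptions, not the actual `mamba2_mixer.py` code.

```python
import torch
import torch.nn as nn


class CachedDecodeParams(nn.Module):
    """Hypothetical module illustrating pre-computed decode-path tensors."""

    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(num_heads))
        self.dt_bias = nn.Parameter(torch.randn(num_heads))
        self.D = nn.Parameter(torch.randn(num_heads))
        self.head_dim = head_dim

    def post_load_weights(self):
        # Expand once after weights are loaded, instead of on every decode call.
        self._A_expanded = self.A.detach().repeat(self.head_dim).to(torch.float32)
        self._dt_bias_expanded = self.dt_bias.detach().repeat(self.head_dim)
        self._D_expanded = self.D.detach().repeat(self.head_dim)

    def decode_step(self):
        # The decode path now reads the cached tensors directly.
        return self._A_expanded, self._dt_bias_expanded, self._D_expanded


m = CachedDecodeParams(num_heads=4, head_dim=8)
m.post_load_weights()
A_exp, dt_exp, D_exp = m.decode_step()
```

The trade-off is a small, fixed memory cost for the cached copies in exchange for removing redundant `repeat` work from the hot decode loop.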
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 1
Inline comments:
In `@tensorrt_llm/_torch/modules/mamba/mamba2_mixer.py`:
- Around lines 243-251: `post_load_weights` currently assigns plain tensor attributes (`_A_expanded`, `_dt_bias_expanded`, `_D_expanded`), which are not moved by `module.to()`/`.cuda()`. Replace these plain attributes with registered buffers via `self.register_buffer("<name>", tensor, persistent=False)` so they follow module device/dtype moves: after creating the tensors with `repeat`, register `_A_expanded` (keeping the `.to(dtype=torch.float32)` cast as before), `_dt_bias_expanded`, and `_D_expanded` under the same names, so existing code that references `self._A_expanded`, `self._dt_bias_expanded`, and `self._D_expanded` continues to work.
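The fix suggested in the comment above can be sketched as follows. This is a simplified stand-in for the real mixer (class name, shapes, and the plain `repeat` call are assumptions); the point it demonstrates is that `register_buffer(..., persistent=False)` makes the cached tensors follow `module.to()` moves while keeping them out of the `state_dict`.

```python
import torch
import torch.nn as nn


class Mamba2MixerSketch(nn.Module):
    """Illustrative stand-in for the real mixer module."""

    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(num_heads))
        self.dt_bias = nn.Parameter(torch.randn(num_heads))
        self.D = nn.Parameter(torch.randn(num_heads))
        self.head_dim = head_dim

    def post_load_weights(self):
        # Non-persistent buffers: moved by .to()/.cuda(), excluded from state_dict.
        self.register_buffer(
            "_A_expanded",
            self.A.detach().repeat(self.head_dim).to(dtype=torch.float32),
            persistent=False,
        )
        self.register_buffer(
            "_dt_bias_expanded", self.dt_bias.detach().repeat(self.head_dim),
            persistent=False,
        )
        self.register_buffer(
            "_D_expanded", self.D.detach().repeat(self.head_dim),
            persistent=False,
        )


m = Mamba2MixerSketch(num_heads=4, head_dim=8)
m.post_load_weights()
# Buffers now follow dtype/device moves; a .cuda() call would move them likewise.
m = m.to(torch.float64)
```

With plain attributes, `m.to(torch.float64)` (or `.cuda()`) would leave the cached tensors behind on the old device/dtype, which is exactly the bug the review comment flags.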
ℹ️ Review info
⚙️ Run configuration
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: f424cb9b-0111-4ba8-b308-63bc342fce7d
📒 Files selected for processing (2)
tensorrt_llm/_torch/modules/fused_moe/routing.py
tensorrt_llm/_torch/modules/mamba/mamba2_mixer.py
/bot run --disable-fail-fast
Force-pushed f9beed0 to 5f969ea
/bot run --disable-fail-fast
PR_Github #43399 [ run ] triggered by Bot. Commit:
PR_Github #43399 [ run ] completed with state
Force-pushed 5f969ea to a2b418b
/bot run --disable-fail-fast
PR_Github #43610 [ run ] triggered by Bot. Commit:
PR_Github #43610 [ run ] completed with state
/bot run
PR_Github #43730 [ run ] triggered by Bot. Commit:
PR_Github #43730 [ run ] completed with state
Force-pushed a2b418b to 42d00cb
/bot run --disable-fail-fast
PR_Github #43957 [ run ] triggered by Bot. Commit:
* Enable more C++ routing combinations.
* Update mamba tensor operations.
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Force-pushed 42d00cb to b40951e
/bot run --disable-fail-fast
PR_Github #44089 [ run ] triggered by Bot. Commit:
PR_Github #44089 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #44245 [ run ] triggered by Bot. Commit:
PR_Github #44245 [ run ] completed with state
/bot skip --comment "Tests were passed within two CI tests"
PR_Github #44334 [ skip ] triggered by Bot. Commit:
PR_Github #44334 [ skip ] completed with state
Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- Update tava architecture diagram if there is a significant design change in the PR.
- The reviewers assigned automatically/manually are appropriate for the PR.

Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, comment `/bot help`.