[None][fix] Revert backend for Nemotron ViT to TRT-LLM #13191
yechank-nvidia wants to merge 2 commits into NVIDIA:main from
Conversation
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
/bot run
👎 Promotion blocked, new vulnerability found. Vulnerability report
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/modeling_radio.py (1)
1013-1020: ⚠️ Potential issue | 🟡 Minor: Update stale constructor docstring default.
Line 1013 defaults vision_attn_backend to "TRTLLM", but Line 1019 still documents "FLASHINFER". Please sync the docstring with the behavior.

Suggested patch:

- vision_attn_backend: Attention backend to use for the vision tower. Defaults to "FLASHINFER".
+ vision_attn_backend: Attention backend to use for the vision tower. Defaults to "TRTLLM".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/_torch/models/modeling_radio.py` around lines 1013-1020, the constructor docstring for the RADIO model has a stale default: update the docstring text that currently says "Defaults to \"FLASHINFER\"" to reflect the actual parameter default "TRTLLM" for the vision_attn_backend argument (the vision_attn_backend parameter in the model's __init__/constructor in modeling_radio.py); keep the rest of the docstring unchanged.
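The finding above boils down to keeping a parameter's docstring in sync with its actual signature default. As a rough illustration only (the class name and trimmed signature below are hypothetical, not the real modeling_radio.py code), this kind of drift can even be caught mechanically with the standard-library `inspect` module:

```python
import inspect


class RadioVisionModel:
    """Hypothetical, heavily trimmed stand-in for the RADIO vision tower.

    The real class lives in tensorrt_llm/_torch/models/modeling_radio.py and
    takes many more parameters; only the docstring/default sync is shown.
    """

    def __init__(self, vision_attn_backend: str = "TRTLLM"):
        """Construct the vision tower.

        Args:
            vision_attn_backend: Attention backend to use for the vision
                tower. Defaults to "TRTLLM".
        """
        self.vision_attn_backend = vision_attn_backend


# Compare the signature default with what the docstring claims, so a future
# change to one without the other fails loudly.
default = inspect.signature(
    RadioVisionModel.__init__).parameters["vision_attn_backend"].default
doc = inspect.getdoc(RadioVisionModel.__init__)
assert f'Defaults to "{default}"' in doc, "docstring default out of sync"
```

A check like this would normally live in a unit test rather than at import time; it is shown inline here only to make the sketch self-contained.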
📒 Files selected for processing (1)
tensorrt_llm/_torch/models/modeling_radio.py
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
/bot run

PR_Github #44203 [ run ] triggered by Bot. Commit:
PR_Github #44203 [ run ] completed with state
/bot run

PR_Github #44232 [ run ] triggered by Bot. Commit: