[None][fix] Do not leak KV cache quantization into vision encoder #13181
2ez4bz merged 1 commit into NVIDIA:main from
Conversation
/bot run
📝 Walkthrough

This change modifies RADIO vision model quantization handling to clear KV-cache quantization settings when quantization is disabled for the vision encoder.

Changes
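To make the behavior described in the walkthrough concrete, here is a minimal sketch of the sanitization step, not the actual TensorRT-LLM implementation: `QuantConfig` and `sanitize_vision_quant_config` are simplified stand-ins assumed for illustration; only the field names `quant_algo` and `kv_cache_quant_algo` are taken from the review discussion.

```python
# Hypothetical sketch: when the vision encoder is built with quantization
# disabled, drop any quant settings inherited from the parent LLM's config,
# including the KV-cache quant algo, so FP8 KV-cache settings cannot leak in.
from dataclasses import dataclass, replace
from typing import Optional


@dataclass(frozen=True)
class QuantConfig:  # stand-in for the real quant config type
    quant_algo: Optional[str] = None
    kv_cache_quant_algo: Optional[str] = None


def sanitize_vision_quant_config(parent: QuantConfig,
                                 disable_quantization: bool) -> QuantConfig:
    """Return the quant config the vision encoder should actually use."""
    if disable_quantization:
        # Clear both weight/activation and KV-cache quantization.
        return replace(parent, quant_algo=None, kv_cache_quant_algo=None)
    return parent


llm_cfg = QuantConfig(quant_algo="FP8", kv_cache_quant_algo="FP8")
vit_cfg = sanitize_vision_quant_config(llm_cfg, disable_quantization=True)
assert vit_cfg.quant_algo is None and vit_cfg.kv_cache_quant_algo is None
```

The parent config is left untouched (a new object is returned), so the LLM side keeps its FP8 KV-cache settings while the vision encoder sees none.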
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
🧹 Nitpick comments (1)
tests/unittest/_torch/modeling/test_modeling_radio.py (1)
Lines 82-95: Consider asserting the quant config is sanitized before forward. A direct assertion makes the regression intent explicit and catches config leakage earlier than backend execution failures.
Proposed test hardening
```diff
 def test_radio_fp8_parent_kv_cache_does_not_leak_into_vit(tiny_vit_config):
@@
     vision_model = RADIOVisionModel(_make_fp8_model_config(), disable_quantization=True)
+    assert vision_model.model_config.quant_config is not None
+    assert vision_model.model_config.quant_config.quant_algo is None
+    assert vision_model.model_config.quant_config.kv_cache_quant_algo is None
@@
     with torch.inference_mode():
         features = vision_model.forward(pixel_values)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unittest/_torch/modeling/test_modeling_radio.py` around lines 82 - 95, The test should explicitly assert that the model's quantization config was sanitized/disabled before calling forward: after creating vision_model via RADIOVisionModel(_make_fp8_model_config(), disable_quantization=True) add an assertion that the quant/config state reflects disabling (for example assert vision_model.disable_quantization is True or assert getattr(vision_model, "quantization_config", None) is None) so the regression intent is explicit and any config leakage is caught prior to calling vision_model.forward.
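The regression test suggested in the review can be illustrated with a self-contained pytest-style sketch. `RADIOVisionModel` and `_make_fp8_model_config` are names from the PR's test file, but the classes below are simplified stand-ins assumed for illustration, not the actual implementation.

```python
# Stand-in classes that mimic the shape of the real config objects.
class QuantConfig:
    def __init__(self, quant_algo=None, kv_cache_quant_algo=None):
        self.quant_algo = quant_algo
        self.kv_cache_quant_algo = kv_cache_quant_algo


class ModelConfig:
    def __init__(self, quant_config):
        self.quant_config = quant_config


class FakeVisionModel:
    """Stand-in for RADIOVisionModel: keeps a quant_config object but
    clears its algorithms when quantization is disabled."""

    def __init__(self, model_config, disable_quantization=False):
        qc = model_config.quant_config
        if disable_quantization:
            # Drop both quant_algo and kv_cache_quant_algo inherited
            # from the parent (LLM) config.
            qc = QuantConfig()
        self.model_config = ModelConfig(qc)


def test_fp8_parent_kv_cache_does_not_leak():
    # Parent LLM config carries FP8 weight and KV-cache quantization.
    parent = ModelConfig(QuantConfig("FP8", "FP8"))
    vit = FakeVisionModel(parent, disable_quantization=True)
    # Assert sanitization happened before any forward pass is attempted.
    assert vit.model_config.quant_config is not None
    assert vit.model_config.quant_config.quant_algo is None
    assert vit.model_config.quant_config.kv_cache_quant_algo is None


test_fp8_parent_kv_cache_does_not_leak()
```

Asserting on the config directly, rather than waiting for a backend failure during `forward`, is what makes the test's intent explicit.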
ℹ️ Review info
⚙️ Run configuration
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 9cf1ca88-f436-4572-85e1-d22760bd6142
📒 Files selected for processing (3)
- tensorrt_llm/_torch/models/modeling_radio.py
- tests/integration/test_lists/test-db/l0_a10.yml
- tests/unittest/_torch/modeling/test_modeling_radio.py
PR_Github #44153 [ run ] triggered by Bot. Commit:

PR_Github #44153 [ run ] completed with state
/bot run --disable-fail-fast

PR_Github #44207 [ run ] triggered by Bot. Commit:

/bot run --disable-fail-fast

PR_Github #44225 [ run ] triggered by Bot. Commit:

PR_Github #44225 [ run ] completed with state
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Force-pushed from 3c2a3cb to 5643300
/bot run --disable-fail-fast

PR_Github #44491 [ run ] triggered by Bot. Commit:

PR_Github #44491 [ run ] completed with state
Summary by CodeRabbit
Release Notes
Bug Fixes
Tests
Description
We were erroneously passing the KV cache quant config
from the LLM into the vision encoder for Nemotron models.
This commit fixes that and adds a regression test for it.
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.