[None][fix] Handle None tokenizer in OpenAI server #13184

galagam wants to merge 1 commit into NVIDIA:main from
Conversation
[None][fix] Handle None tokenizer in OpenAI server

When num_postprocess_workers > 0, tokenization is delegated to worker processes and self.tokenizer is None on the main server process. Three request handler paths accessed self.tokenizer.tokenizer.vocab_size unconditionally, crashing all requests with AttributeError regardless of whether logit_bias was used.

vocab_size is only needed in _logit_bias_to_embedding_bias, and only when logit_bias is actually provided in the request. For the common case of logit_bias=None the function returns None immediately without touching vocab_size.

- Add an OpenAIServer._vocab_size property returning tokenizer.tokenizer.vocab_size, or None if the tokenizer is absent
- Replace the three direct accesses with self._vocab_size
- Update _logit_bias_to_embedding_bias and to_sampling_params to accept Optional[int] for vocab_size; the previous hardcoded default of 32000 was silently wrong for models with a different vocabulary size
- Raise a clear ValueError if logit_bias is provided but vocab_size is None, instead of crashing with a cryptic TypeError

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
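A minimal sketch of the property pattern those bullets describe, assuming a pared-down OpenAIServer (the constructor and the tokenizer wrapper shape here are illustrative stand-ins, not the real class):

```python
from typing import Optional


class OpenAIServer:
    """Sketch of the tokenizer handling only; the real class holds far more state."""

    def __init__(self, tokenizer=None):
        # With num_postprocess_workers > 0, tokenization runs in worker
        # processes and the main server process holds no tokenizer.
        self.tokenizer = tokenizer

    @property
    def _vocab_size(self) -> Optional[int]:
        # Vocabulary size, or None when no tokenizer is present on this process.
        if self.tokenizer is None:
            return None
        return self.tokenizer.tokenizer.vocab_size
```

Request handlers read self._vocab_size instead of dereferencing self.tokenizer directly, so the common no-logit_bias path never touches the tokenizer at all.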
/bot run
📝 Walkthrough

The changes enable graceful handling of missing tokenizers (e.g., when …).

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/serve/openai_server.py (1)
1-1: ⚠️ Potential issue | 🟠 Major

Add the required NVIDIA SPDX/copyright header to this modified source file.

Line 1 starts with the shebang, but the required repository copyright/SPDX header is missing in this file revision.
Proposed header patch:

 #!/usr/bin/env python
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0

As per coding guidelines: “All TensorRT-LLM source files must contain an NVIDIA copyright header with the year of latest meaningful modification.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/openai_server.py` at line 1, add the required NVIDIA copyright/SPDX header to the top of the modified source file (immediately above or replacing the current shebang in openai_server.py) so the file contains the repository's standard NVIDIA copyright header with the year of latest meaningful modification and the SPDX identifier; ensure the shebang (#!/usr/bin/env python) remains present and that the header formatting matches other TensorRT-LLM files (same block style and exact SPDX tag).

tensorrt_llm/serve/openai_protocol.py (1)
1-3: ⚠️ Potential issue | 🟠 Major

Add the required NVIDIA SPDX/copyright header to this modified source file.

The modified file currently lacks the repository-required copyright/SPDX header block.
Proposed header patch:

+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 # Adapted from
 # https://github.com/vllm-project/vllm/blob/4db5176d9758b720b05460c50ace3c01026eb158/vllm/entrypoints/openai/protocol.py

As per coding guidelines: “Add NVIDIA copyright header on ALL new files and update year on modified files.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/serve/openai_protocol.py` around lines 1 - 3, Add the required NVIDIA SPDX/copyright header block at the very top of the file (above all comments and imports) to satisfy the repository policy; insert the standard NVIDIA header used across the repo including the SPDX-License-Identifier and the appropriate copyright year(s), updating the year if this is a modified file, so the header appears before the existing module comment and the "import base64" line.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tensorrt_llm/serve/openai_protocol.py`:
- Around line 1-3: Add the required NVIDIA SPDX/copyright header block at the
very top of the file (above all comments and imports) to satisfy the repository
policy; insert the standard NVIDIA header used across the repo including the
SPDX-License-Identifier and the appropriate copyright year(s), updating the year
if this is a modified file, so the header appears before the existing module
comment and the "import base64" line.
In `@tensorrt_llm/serve/openai_server.py`:
- Line 1: Add the required NVIDIA copyright/SPDX header to the top of the
modified source file (immediately above or replacing the current shebang in
openai_server.py) so the file contains the repository's standard NVIDIA
copyright header with the year of latest meaningful modification and the SPDX
identifier; ensure the shebang (#!/usr/bin/env python) remains present and that
the header formatting matches other TensorRT-LLM files (same block style and
exact SPDX tag).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 87efe4fa-b5ee-4cd5-9a2a-a11708b35434
📒 Files selected for processing (3)
- tensorrt_llm/serve/openai_protocol.py
- tensorrt_llm/serve/openai_server.py
- tests/unittest/llmapi/apps/test_harmony_parsing.py
PR_Github #44164 [ run ] triggered by Bot. Commit:
PR_Github #44164 [ run ] completed with state
/bot run
PR_Github #44214 [ run ] triggered by Bot. Commit:
PR_Github #44214 [ run ] completed with state
Description
When num_postprocess_workers > 0, tokenization is delegated to worker processes and self.tokenizer is None on the main server process. Three request handler paths accessed self.tokenizer.tokenizer.vocab_size unconditionally, crashing all requests with AttributeError regardless of whether logit_bias was used.
vocab_size is only needed in _logit_bias_to_embedding_bias, and only when logit_bias is actually provided in the request. For the common case of logit_bias=None the function returns None immediately without touching vocab_size.
Test Coverage
tests/unittest/llmapi/apps/test_harmony_parsing.py::test_none_tokenizer_num_postprocess_workers
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.