
[None][fix] Handle None tokenizer in OpenAI server#13184

Open
galagam wants to merge 1 commit into NVIDIA:main from nv-auto-deploy:gagam-none-tokenizer-fix

Conversation

@galagam
Collaborator

@galagam galagam commented Apr 19, 2026

Description

When num_postprocess_workers > 0, tokenization is delegated to worker processes and self.tokenizer is None on the main server process. Three request handler paths accessed self.tokenizer.tokenizer.vocab_size unconditionally, crashing all requests with AttributeError regardless of whether logit_bias was used.

vocab_size is only needed in _logit_bias_to_embedding_bias, and only when logit_bias is actually provided in the request. For the common case of logit_bias=None the function returns None immediately without touching vocab_size.
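The early-return and validation behavior described above can be sketched as follows. This is a simplified illustration, not the actual implementation: the real helper in tensorrt_llm/serve/openai_protocol.py builds a bias tensor, while this sketch uses a plain list; the function name and signature here are assumptions for illustration.

```python
from typing import Dict, List, Optional

def logit_bias_to_embedding_bias(logit_bias: Optional[Dict[str, float]],
                                 vocab_size: Optional[int]) -> Optional[List[float]]:
    # Common case: no logit_bias in the request; return immediately
    # without ever reading vocab_size, so a None tokenizer is harmless.
    if logit_bias is None:
        return None
    # logit_bias was provided, so a real vocab size is required.
    if vocab_size is None:
        raise ValueError("logit_bias requires a tokenizer")
    # Expand the sparse {token_id: bias} mapping into a dense vector.
    bias = [0.0] * vocab_size
    for token_id, value in logit_bias.items():
        bias[int(token_id)] = value
    return bias
```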

  • Add OpenAIServer._vocab_size property returning tokenizer.tokenizer.vocab_size, or None if tokenizer is absent
  • Replace the three direct accesses with self._vocab_size
  • Update _logit_bias_to_embedding_bias and to_sampling_params to accept Optional[int] for vocab_size; the previous hardcoded default of 32000 was silently wrong for models with a different vocabulary size
  • Raise a clear ValueError if logit_bias is provided but vocab_size is None, instead of crashing with a cryptic TypeError

Test Coverage

tests/unittest/llmapi/apps/test_harmony_parsing.py::test_none_tokenizer_num_postprocess_workers

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Summary by CodeRabbit

  • Bug Fixes
    • Added validation for logit_bias parameter to prevent issues when the server is configured without tokenizer access (such as when postprocessing workers are enabled).
    • Enhanced error reporting to clearly communicate when logit_bias cannot be applied due to missing tokenizer configuration.
    • Improved vocabulary size handling to gracefully manage configurations with optional tokenizer availability.

…enAI server

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
@galagam galagam requested a review from a team as a code owner April 19, 2026 06:33
@galagam galagam requested a review from Superjomn April 19, 2026 06:33
@galagam galagam self-assigned this Apr 19, 2026
@galagam galagam changed the title [None][fix] Handle None tokenizer + num_postprocess_workers > 0 in Op… [None][fix] Handle None tokenizer in OpenAI server Apr 19, 2026
@galagam
Collaborator Author

galagam commented Apr 19, 2026

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Apr 19, 2026

📝 Walkthrough

Walkthrough

The changes enable graceful handling of missing tokenizers (e.g., when num_postprocess_workers > 0) by allowing vocab_size to be None in OpenAI protocol methods, adding validation that raises ValueError when logit_bias is used without a tokenizer, introducing a private property to safely access tokenizer vocab size, and adding test coverage for the new behavior.

Changes

  • Protocol Updates — tensorrt_llm/serve/openai_protocol.py
    Updated _logit_bias_to_embedding_bias and both to_sampling_params methods to accept vocab_size: Optional[int] (default None). Added runtime validation that raises ValueError with the message "logit_bias requires a tokenizer" when logit_bias is provided but vocab_size is None.
  • Server Implementation — tensorrt_llm/serve/openai_server.py
    Added a private property _vocab_size that safely returns the Optional[int] tokenizer vocab size (or None if unavailable). Updated all request.to_sampling_params(...) call sites to use vocab_size=self._vocab_size instead of directly accessing tokenizer internals.
  • Test Coverage — tests/unittest/llmapi/apps/test_harmony_parsing.py
    Added imports for OpenAIServer and _logit_bias_to_embedding_bias. Added test test_none_tokenizer_num_postprocess_workers, which verifies that ValueError is raised when logit_bias is used with vocab_size=None.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 27.27%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: the title clearly summarizes the main fix (handling a None tokenizer in the OpenAI server when num_postprocess_workers > 0), which is the primary objective of the PR.
  • Description check — ✅ Passed: the PR description includes all required sections: a clear explanation of the problem (tokenizer=None with num_postprocess_workers > 0), the solution (Optional[int] vocab_size with validation), test coverage, and the completed checklist.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tensorrt_llm/serve/openai_server.py (1)

1-1: ⚠️ Potential issue | 🟠 Major

Add the required NVIDIA SPDX/copyright header to this modified source file.

Line 1 starts with the shebang, but the required repository copyright/SPDX header is missing in this file revision.

Proposed header patch
 #!/usr/bin/env python
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0

As per coding guidelines: “All TensorRT-LLM source files must contain an NVIDIA copyright header with the year of latest meaningful modification.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/serve/openai_server.py` at line 1, Add the required NVIDIA
copyright/SPDX header to the top of the modified source file (immediately above
or replacing the current shebang in openai_server.py) so the file contains the
repository's standard NVIDIA copyright header with the year of latest meaningful
modification and the SPDX identifier; ensure the shebang (#!/usr/bin/env python)
remains present and that the header formatting matches other TensorRT-LLM files
(same block style and exact SPDX tag).
tensorrt_llm/serve/openai_protocol.py (1)

1-3: ⚠️ Potential issue | 🟠 Major

Add the required NVIDIA SPDX/copyright header to this modified source file.

The modified file currently lacks the repository-required copyright/SPDX header block.

Proposed header patch
+# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 # Adapted from
 # https://github.com/vllm-project/vllm/blob/4db5176d9758b720b05460c50ace3c01026eb158/vllm/entrypoints/openai/protocol.py

As per coding guidelines: “Add NVIDIA copyright header on ALL new files and update year on modified files.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/serve/openai_protocol.py` around lines 1 - 3, Add the required
NVIDIA SPDX/copyright header block at the very top of the file (above all
comments and imports) to satisfy the repository policy; insert the standard
NVIDIA header used across the repo including the SPDX-License-Identifier and the
appropriate copyright year(s), updating the year if this is a modified file, so
the header appears before the existing module comment and the "import base64"
line.

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 87efe4fa-b5ee-4cd5-9a2a-a11708b35434

📥 Commits

Reviewing files that changed from the base of the PR and between fc9d130 and ce3b82d.

📒 Files selected for processing (3)
  • tensorrt_llm/serve/openai_protocol.py
  • tensorrt_llm/serve/openai_server.py
  • tests/unittest/llmapi/apps/test_harmony_parsing.py

@tensorrt-cicd
Collaborator

PR_Github #44164 [ run ] triggered by Bot. Commit: ce3b82d Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44164 [ run ] completed with state SUCCESS. Commit: ce3b82d
/LLM/main/L0_MergeRequest_PR pipeline #34591 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@galagam
Copy link
Copy Markdown
Collaborator Author

galagam commented Apr 19, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #44214 [ run ] triggered by Bot. Commit: ce3b82d Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44214 [ run ] completed with state FAILURE. Commit: ce3b82d
/LLM/main/L0_MergeRequest_PR pipeline #34639 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation
