feat: add tool calling support to m serve#850

Open
markstur wants to merge 10 commits into generative-computing:main from
markstur:issue_825

Conversation

@markstur
Contributor

@markstur markstur commented Apr 13, 2026

Misc PR

Type of PR

  • Bug Fix
  • New Feature
  • Documentation
  • Other

Description

Added tool calling support to the m serve CLI with proper type annotations. Here's what was implemented:

Changes Made

1. Updated Models (cli/serve/models.py)

  • Added ToolCallFunction model for function details in tool calls
  • Added ChatCompletionMessageToolCall model for tool call structure
  • Extended ChatCompletionMessage to include optional tool_calls field
  • Updated Choice.finish_reason to support "tool_calls" value

2. Modified Server Logic (cli/serve/app.py)

  • Added json and Literal imports for proper typing
  • Imported new tool-related models
  • Updated _build_model_options() to pass through tools (mapped to ModelOption.TOOLS) and tool_choice parameters
  • Enhanced make_chat_endpoint() to:
    • Extract tool calls from ModelOutputThunk.tool_calls with proper type checking (isinstance(..., dict))
    • Generate unique IDs for each tool call in format call_<24-char-hex>
    • Serialize tool arguments to JSON
    • Set finish_reason with proper Literal type annotation
    • Return tool calls in OpenAI-compatible format
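
Roughly, the extraction step above works like the sketch below. The ModelOutputThunk interface is stubbed here (assumed to expose a tool_calls dict whose values carry .name and .args), so treat this as an illustration of the described behavior rather than the PR's code:

```python
# Hedged sketch of the tool-call extraction described above; the thunk's
# tool_calls shape is an assumption, not the real ModelOutputThunk API.
import json
import secrets


def extract_tool_calls(thunk) -> tuple[list[dict], str]:
    """Return (tool_calls, finish_reason) in OpenAI-compatible form."""
    raw = getattr(thunk, "tool_calls", None)
    if not isinstance(raw, dict) or not raw:
        # No calls (or an empty dict) means a normal completion.
        return [], "stop"
    calls = []
    for tc in raw.values():  # dict keys unused; the name comes from the value
        calls.append({
            "id": f"call_{secrets.token_hex(12)}",  # call_<24-char-hex>
            "type": "function",
            "function": {
                "name": tc.name,
                "arguments": json.dumps(tc.args),  # serialize args to JSON
            },
        })
    return calls, "tool_calls"
```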

3. Comprehensive Tests (test/cli/test_serve_tool_calling.py)

  • 8 new tests covering:
    • Single and multiple tool calls
    • Tool call formatting and serialization
    • Complex nested arguments
    • Tool parameters passed to model_options
    • Backward compatibility (requests without tools)
    • Usage info alongside tool calls

4. Updated Existing Test (test/cli/test_serve.py)

  • Renamed test_tool_params_excluded_from_model_options to test_tool_params_passed_to_model_options
  • Updated assertions to verify tools and tool_choice are now passed through

5. Example Code

  • docs/examples/m_serve/m_serve_example_tool_calling.py: Complete server example with GetWeatherTool and GetStockPriceTool implementations
  • docs/examples/m_serve/client_tool_calling.py: Client demonstrating how to call the tool-enabled server with various scenarios

Key Features

OpenAI-Compatible: Follows OpenAI's tool calling API format
Type-Safe: Proper Literal type annotations for finish_reason
Robust Type Checking: Uses an isinstance(..., dict) check to avoid Mock object issues
Automatic Tool Call Detection: Extracts tool calls from ModelOutputThunk
Proper Finish Reasons: Returns "tool_calls" when tools are invoked, "stop" otherwise
Unique Tool Call IDs: Generates unique IDs in format call_<24-char-hex>
JSON Serialization: Properly serializes tool arguments to JSON strings
Backward Compatible: Works with existing code that doesn't use tools
Fully Tested: All 43 serve tests pass, including 8 new tool-specific tests
Type Checked: Passes mypy type checking

Usage

Start server with tool support:

uv run m serve docs/examples/m_serve/m_serve_example_tool_calling.py

Call with tools from client:

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [...],  # Tool definitions
        "tool_choice": "auto"
    }
)
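
When the model decides to call a tool, the response carries tool_calls and finish_reason "tool_calls". A sketch of client-side handling follows; the helper name and the stubbed tool execution are hypothetical, and the response shape assumed is the OpenAI-compatible one described above:

```python
# Hypothetical client-side handling of a tool-calling response. The assistant
# message is appended once, outside the per-call loop; only tool results are
# appended inside it.
import json


def apply_tool_calls(payload: dict, messages: list[dict]) -> list[dict]:
    """Append the assistant turn and one tool-result message per call."""
    choice = payload["choices"][0]
    if choice["finish_reason"] != "tool_calls":
        return messages
    message = choice["message"]
    messages.append(message)  # append the assistant message exactly once
    for tc in message.get("tool_calls") or []:
        args = json.loads(tc["function"]["arguments"])
        # Execute the named tool here; a stub result stands in for this sketch.
        result = {"ok": True, "args": args}
        messages.append({
            "role": "tool",
            "tool_call_id": tc["id"],
            "content": json.dumps(result),
        })
    return messages
```

The extended messages list can then be posted back to /v1/chat/completions for the next turn.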

The implementation properly handles tool calls from Mellea's ModelOutputThunk and formats them according to OpenAI's API specification with full type safety.

Testing

  • Tests added to the respective file if code was changed
  • New code has 100% coverage if code was added
  • Ensure existing tests and GitHub automation pass (a maintainer will kick off the GitHub automation when the rest of the PR is populated)

@markstur markstur requested a review from a team as a code owner April 13, 2026 23:38
@markstur markstur marked this pull request as draft April 13, 2026 23:38
@github-actions github-actions bot added the enhancement New feature or request label Apr 13, 2026
@github-actions
Contributor

The PR description has been updated. Please fill out the template for your PR to be reviewed.

@planetf1
Contributor

@markstur Do you want review comments yet or still WIP?

@markstur
Contributor Author

@markstur Do you want review comments yet or still WIP?

Comments would be great! It is draft because I need to do more review/test myself on the generated code. I don't want to waste your time but comments early would be very welcome.

Member

@psschwei psschwei left a comment


Code Review: feat: add tool calling support to m serve

Good feature PR — the core plumbing is correct and the OpenAI-compatible response format looks right. A couple of bugs to fix before merge, plus some improvements.

Summary

The implementation correctly wires tool calling through the serve endpoint: tools maps to ModelOption.TOOLS, tool_choice passes through as-is, and the response extracts tool calls from ModelOutputThunk into the OpenAI format. The Pydantic models mirror the OpenAI types well, and tests cover the main paths.

Two bugs need fixing (see inline comments):

  1. Empty tool_calls dict produces incorrect finish_reason: "tool_calls" with an empty array
  2. Client example's multi-turn loop duplicates the assistant message for each tool call

Other improvements (see inline comments):

  • Unused loop variable tool_name
  • eval() in example code with # noqa suppressing the security lint for copy-pasters
  • Missing test for the empty dict edge case
  • hasattr check is always true for ModelOutputThunk — defensive but masks upstream bugs

What's working well

  • Pydantic models (ToolCallFunction, ChatCompletionMessageToolCall) closely match OpenAI types
  • _build_model_options change is clean — tools removed from exclusion set, mapped to ModelOption.TOOLS
  • 8 well-structured tests covering single/multiple tool calls, finish reasons, model_options passthrough, complex args, usage info, and backward compat
  • Existing test updated consistently from "excluded" to "passed"

Comment thread cli/serve/app.py
Comment thread cli/serve/app.py Outdated
Comment thread test/cli/test_serve_tool_calling.py
Comment thread docs/examples/m_serve/client_tool_calling.py Outdated
Contributor

@planetf1 planetf1 left a comment


Two additional items not yet covered in existing review comments.


Comment thread cli/serve/app.py Outdated
    """
    try:
        # In a real implementation, use a safe expression evaluator
        result = eval(expression)  # noqa: S307
Contributor


If this example is intended to run under pytest (the # pytest: ollama, e2e header suggests so), then eval(expression) is reachable with model-controlled input during a CI test run. The # noqa: S307 suppresses the lint but not the exposure. Is the intent for this to be a runnable test or a reference example? If the former, the eval needs replacing with something safe; if the latter, dropping the # pytest: header would stop it being collected.
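
For reference, if the example needed to stay runnable, eval() could be replaced with a small whitelist-based arithmetic evaluator like the sketch below. This is only an illustration of the "safe expression evaluator" idea; the PR ultimately removed the calculator tool instead:

```python
# Illustrative safe alternative to eval() for arithmetic tool arguments:
# walk the AST and allow only numeric constants and basic operators.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}


def safe_eval(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval").body)
```

Anything outside the whitelist (names, calls, attribute access) raises ValueError instead of executing.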

Comment thread docs/examples/m_serve/m_serve_example_tool_calling.py Outdated
markstur added 10 commits April 17, 2026 13:08
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
…y dict

Fixed the bug where an empty tool_calls dict ({}) incorrectly produced finish_reason="tool_calls" with an empty array instead of finish_reason="stop" with tool_calls=None.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
…xample

Issue: The assistant message was being added inside the loop for each tool call, causing duplication when multiple tool calls were present.
Fix: Moved the assistant message append outside the loop (before processing tool calls), so it's only added once. Now the loop only adds tool responses.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
The dict key tool_name is never used — the function name comes from model_tool_call.name. Using .values() instead.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Replaced hasattr() with direct __dict__ membership tests to correctly distinguish:

1. Typed instances (ModelOutputThunk[float](...)) - have __orig_class__ in their instance dict
2. Untyped instances (ModelOutputThunk(...)) - do NOT have __orig_class__ in their instance dict
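
The distinction can be demonstrated with a small generic stand-in class (Thunk below is illustrative, not the real ModelOutputThunk):

```python
# CPython sets __orig_class__ on the instance when a parameterized generic
# alias is instantiated, so membership in the instance __dict__ separates
# typed from untyped instances.
from typing import Generic, TypeVar

T = TypeVar("T")


class Thunk(Generic[T]):
    pass


typed = Thunk[float]()  # parameterized: __orig_class__ lands in __dict__
untyped = Thunk()       # unparameterized: no __orig_class__ on the instance

print("__orig_class__" in typed.__dict__)    # True
print("__orig_class__" in untyped.__dict__)  # False
```

Unlike hasattr(), the __dict__ check cannot be fooled by objects (such as Mocks) that answer True to any attribute probe.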

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Security issue resolved in `m_serve_example_tool_calling.py`:

**Changes made:**
- Replaced `CalculatorTool` (which used unsafe `eval()` with `# noqa: S307`) with `GetStockPriceTool`
- New tool demonstrates API-calling pattern with mock stock prices (AAPL, GOOGL, MSFT, TSLA)
- Updated all references: `calculator_tool` → `stock_price_tool`
- Maintains the same tool calling demonstration with two tools (weather + stock price)

**Why this is better:**
- Eliminates security risk entirely (no `eval()` or suppressed lints)
- Still demonstrates multiple tools effectively
- Uses safe, realistic API-calling pattern that users can copy
- No dangerous code that could be copy-pasted into production

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
The pass-thru behavior was not clear enough, so this adds it to ModelOptions, where important options are known. Most of these are sentinels which are removed (because @@@), but this one behaves like TEMPERATURE, which is passed through to the backends.

No behavior change, but this gives a handy constant and a place to look for these. It does not address all the other possible pass-through args.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
- switch server example to OpenAIBackend
- align tool-calling example with tested Granite model setup
- narrow advertised tools when `tool_choice` selects a specific function
- enable `tool_calls=True` in the serve path
- replace calculator example with stock-price tool
- examples 1/2 as tool-call-only demos
- example 4 as the full tool execution round-trip
- improve client diagnostics for empty/no-tool responses

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Assisted-by: IBM Bob
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
@markstur
Contributor Author

(Quoting @psschwei's review above.)

Fixed all these.
The eval one goes away with the removal of calc (replaced by stock "look-up")

@markstur markstur requested a review from psschwei April 17, 2026 20:49
@markstur markstur marked this pull request as ready for review April 17, 2026 20:49
@markstur markstur requested a review from planetf1 April 17, 2026 20:49

Labels

enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

m serve OpenAI API tool calling round-trip

3 participants