feat: add tool calling support to m serve #850
markstur wants to merge 10 commits into generative-computing:main from
Conversation
The PR description has been updated. Please fill out the template for your PR to be reviewed.
@markstur Do you want review comments yet or still WIP? |
Comments would be great! It is draft because I need to do more review/test myself on the generated code. I don't want to waste your time but comments early would be very welcome. |
psschwei
left a comment
Code Review: feat: add tool calling support to m serve
Good feature PR — the core plumbing is correct and the OpenAI-compatible response format looks right. A couple of bugs to fix before merge, plus some improvements.
Summary
The implementation correctly wires tool calling through the serve endpoint: tools maps to ModelOption.TOOLS, tool_choice passes through as-is, and the response extracts tool calls from ModelOutputThunk into the OpenAI format. The Pydantic models mirror the OpenAI types well, and tests cover the main paths.
Two bugs need fixing (see inline comments):
- Empty `tool_calls` dict produces incorrect `finish_reason: "tool_calls"` with an empty array
- Client example's multi-turn loop duplicates the assistant message for each tool call
Other improvements (see inline comments):
- Unused loop variable `tool_name`
- `eval()` in example code with `# noqa` suppressing the security lint for copy-pasters
- Missing test for the empty dict edge case
- `hasattr` check is always true for `ModelOutputThunk`; defensive, but masks upstream bugs
What's working well
- Pydantic models (`ToolCallFunction`, `ChatCompletionMessageToolCall`) closely match OpenAI types
- `_build_model_options` change is clean: `tools` removed from exclusion set, mapped to `ModelOption.TOOLS`
- 8 well-structured tests covering single/multiple tool calls, finish reasons, model_options passthrough, complex args, usage info, and backward compat
- Existing test updated consistently from "excluded" to "passed"
planetf1
left a comment
Two additional items not yet covered in existing review comments.
```python
    """
    try:
        # In a real implementation, use a safe expression evaluator
        result = eval(expression)  # noqa: S307
```
If this example is intended to run under pytest (the # pytest: ollama, e2e header suggests so), then eval(expression) is reachable with model-controlled input during a CI test run. The # noqa: S307 suppresses the lint but not the exposure. Is the intent for this to be a runnable test or a reference example? If the former, the eval needs replacing with something safe; if the latter, dropping the # pytest: header would stop it being collected.
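If the example is meant to stay runnable, one safe alternative is a tiny AST-based evaluator that only accepts arithmetic nodes. This is a sketch, not the PR's code; the helper name `safe_eval` is hypothetical:

```python
import ast
import operator

# Whitelisted arithmetic operators; any other node type is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))
```

Unlike `eval()`, model-controlled input can at worst raise `ValueError` here, so the `# noqa` suppression becomes unnecessary.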
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
…y dict
Fixed the bug where an empty tool_calls dict ({}) incorrectly produced finish_reason="tool_calls" with an empty array instead of finish_reason="stop" with tool_calls=None.
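The corrected branching can be sketched as follows (hypothetical helper name; the real change lives in the serve endpoint):

```python
def finish_for(tool_calls):
    """Map a thunk's tool_calls attribute to (finish_reason, tool_calls)."""
    if isinstance(tool_calls, dict) and tool_calls:
        return "tool_calls", list(tool_calls.values())
    # Empty dict, None, or anything unexpected: normal completion.
    return "stop", None
```

The key point is the truthiness check alongside the `isinstance` check, so `{}` falls through to `"stop"` with `tool_calls=None`.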
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
…xample

Issue: The assistant message was being added inside the loop for each tool call, causing duplication when multiple tool calls were present.

Fix: Moved the assistant message append outside the loop (before processing tool calls), so it's only added once. Now the loop only adds tool responses.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
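The fixed loop shape looks roughly like this (hypothetical helper and message shapes, not the example's exact code):

```python
def run_turn(messages, assistant_message, tool_calls, execute):
    # Append the assistant message once, before the per-call loop,
    # so multiple tool calls no longer duplicate it.
    messages.append(assistant_message)
    for call in tool_calls:
        # The loop now adds only tool responses.
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": execute(call),
        })
    return messages
```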
The dict key `tool_name` is never used; the function name comes from `model_tool_call.name`. Using `.values()` instead.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
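Illustrated with plain dicts standing in for the real tool-call objects (the actual values expose `.name` as an attribute):

```python
tool_calls = {
    "get_weather": {"name": "get_weather", "args": {"city": "Paris"}},
}

# The dict key duplicates information already inside each value,
# so iterating .values() avoids the unused loop variable.
names = [call["name"] for call in tool_calls.values()]
```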
Replaced hasattr() with direct __dict__ membership tests to correctly distinguish:

1. Typed instances (`ModelOutputThunk[float](...)`) have `__orig_class__` in their instance dict
2. Untyped instances (`ModelOutputThunk(...)`) do NOT have `__orig_class__` in their instance dict

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
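The distinction can be demonstrated with any generic class; `Thunk` below is a stand-in for `ModelOutputThunk`:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Thunk(Generic[T]):
    """Stand-in for ModelOutputThunk; only the generic machinery matters."""

typed = Thunk[float]()   # the parameterized alias sets __orig_class__ on the instance
untyped = Thunk()        # plain construction leaves the instance dict without it

typed_is_parameterized = "__orig_class__" in typed.__dict__
untyped_is_parameterized = "__orig_class__" in untyped.__dict__
```

Checking `__dict__` membership directly keeps the test about the instance itself, with no risk of picking up attributes resolved elsewhere in the lookup chain.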
Security issue resolved in `m_serve_example_tool_calling.py`:

**Changes made:**
- Replaced `CalculatorTool` (which used unsafe `eval()` with `# noqa: S307`) with `GetStockPriceTool`
- New tool demonstrates API-calling pattern with mock stock prices (AAPL, GOOGL, MSFT, TSLA)
- Updated all references: `calculator_tool` → `stock_price_tool`
- Maintains the same tool calling demonstration with two tools (weather + stock price)

**Why this is better:**
- Eliminates security risk entirely (no `eval()` or suppressed lints)
- Still demonstrates multiple tools effectively
- Uses safe, realistic API-calling pattern that users can copy
- No dangerous code that could be copy-pasted into production

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
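A tool of that shape might look like the following sketch; the function name and prices are illustrative, not the example's exact code:

```python
# Mock quotes; a real tool would call a pricing API here.
MOCK_PRICES = {"AAPL": 178.25, "GOOGL": 141.80, "MSFT": 417.10, "TSLA": 244.50}

def get_stock_price(ticker: str) -> str:
    """Return a mock quote instead of evaluating model-controlled input."""
    price = MOCK_PRICES.get(ticker.upper())
    if price is None:
        return f"No price data for {ticker}"
    return f"{ticker.upper()}: ${price:.2f}"
```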
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
The pass-through behavior was not clear enough, so adding it to ModelOptions where important options are known. Most of these are sentinels which are removed, but this one will be like TEMPERATURE, which is passed through to the backends. No behavior change, but this gives a handy constant and a place to look for these. This does not address all the other possible pass-through args.

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
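The sentinel-vs-pass-through split might be sketched like this; the sentinel name is purely illustrative, and the real constants live in Mellea's `ModelOption`:

```python
# Illustrative only: real option names come from Mellea's ModelOption class.
SENTINELS = {"system_prompt"}  # assumption: handled by the serve layer itself

def split_options(options: dict) -> tuple[dict, dict]:
    """Sentinels are consumed locally; everything else (TEMPERATURE-like
    options, TOOLS) is passed through to the backend unchanged."""
    consumed = {k: v for k, v in options.items() if k in SENTINELS}
    passed = {k: v for k, v in options.items() if k not in SENTINELS}
    return consumed, passed
```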
- switch server example to OpenAIBackend
- align tool-calling example with tested Granite model setup
- narrow advertised tools when `tool_choice` selects a specific function
- enable `tool_calls=True` in the serve path
- replace calculator example with stock-price tool
- examples 1/2 as tool-call-only demos
- example 4 as the full tool execution round-trip
- improve client diagnostics for empty/no-tool responses

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Assisted-by: IBM Bob
Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
Fixed all these.
Type of PR: Misc

Description
Successfully added tool calling support to the `m serve` CLI with proper type annotations. Here's what was implemented:

Changes Made

1. Updated Models (`cli/serve/models.py`)
- `ToolCallFunction` model for function details in tool calls
- `ChatCompletionMessageToolCall` model for tool call structure
- `ChatCompletionMessage` extended to include optional `tool_calls` field
- `Choice.finish_reason` extended to support `"tool_calls"` value

2. Modified Server Logic (`cli/serve/app.py`)
- `json` and `Literal` imports for proper typing
- `_build_model_options()` updated to pass through `tools` (mapped to `ModelOption.TOOLS`) and `tool_choice` parameters
- `make_chat_endpoint()` updated to:
  - extract tool calls from `ModelOutputThunk.tool_calls` with proper type checking (`isinstance(dict)`)
  - generate unique tool call IDs in format `call_<24-char-hex>`
  - set `finish_reason` with proper `Literal` type annotation

3. Comprehensive Tests (`test/cli/test_serve_tool_calling.py`)

4. Updated Existing Test (`test/cli/test_serve.py`)
- Renamed `test_tool_params_excluded_from_model_options` to `test_tool_params_passed_to_model_options`

5. Example Code
- `docs/examples/m_serve/m_serve_example_tool_calling.py`: Complete server example with `GetWeatherTool` and `GetStockPriceTool` implementations
- `docs/examples/m_serve/client_tool_calling.py`: Client demonstrating how to call the tool-enabled server with various scenarios

Key Features
✅ OpenAI-Compatible: Follows OpenAI's tool calling API format
✅ Type-Safe: Proper `Literal` type annotations for `finish_reason`
✅ Robust Type Checking: Uses `isinstance(dict)` to avoid Mock object issues
✅ Automatic Tool Call Detection: Extracts tool calls from `ModelOutputThunk`
✅ Proper Finish Reasons: Returns `"tool_calls"` when tools are invoked, `"stop"` otherwise
✅ Unique Tool Call IDs: Generates unique IDs in format `call_<24-char-hex>`
✅ JSON Serialization: Properly serializes tool arguments to JSON strings
✅ Backward Compatible: Works with existing code that doesn't use tools
✅ Fully Tested: All 43 serve tests pass, including 8 new tool-specific tests
✅ Type Checked: Passes mypy type checking
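Taken together, the detection, ID-generation, and serialization behaviors above can be sketched as follows (hypothetical helper name; plain dicts stand in for the real tool-call objects, which expose `.name` as an attribute):

```python
import json
import secrets

def to_openai_tool_calls(thunk_tool_calls):
    """Convert a name -> call mapping into OpenAI-style tool_call entries."""
    if not isinstance(thunk_tool_calls, dict) or not thunk_tool_calls:
        return None  # empty or unexpected input means no tool calls
    return [
        {
            "id": f"call_{secrets.token_hex(12)}",  # 12 bytes -> 24 hex chars
            "type": "function",
            "function": {
                "name": call["name"],
                # OpenAI expects arguments as a JSON string, not an object.
                "arguments": json.dumps(call["args"]),
            },
        }
        for call in thunk_tool_calls.values()
    ]
```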
Usage
Start server with tool support:
Call with tools from client:
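A minimal request-payload sketch for the client side; the model name and tool schema are placeholders, not the exact example code:

```python
# Hypothetical tool schema in OpenAI's function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "mellea",  # placeholder model name
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}
# POST this payload to the server's OpenAI-compatible chat completions endpoint.
```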
The implementation properly handles tool calls from Mellea's `ModelOutputThunk` and formats them according to OpenAI's API specification with full type safety.

Testing