
feat: add multi-provider LLM support with MiniMax M2.7 #105

Open
octo-patch wants to merge 3 commits into NirDiamant:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 17, 2026

Summary

  • Add unified utils/llm_provider.py with provider registry supporting OpenAI and MiniMax
  • Default MiniMax model upgraded to MiniMax-M2.7 (latest, 204K context window)
  • Also supports MiniMax-M2.5 and MiniMax-M2.5-highspeed as explicit model choices
  • Temperature auto-clamping for MiniMax API compatibility
  • Tutorial notebook demonstrating multi-provider conversational agent
  • 23 unit tests + 7 integration tests
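The temperature auto-clamping called out above can be sketched as follows; the 0.01 floor and the helper name are illustrative assumptions, not the PR's actual implementation:

```python
# Hypothetical sketch of MiniMax temperature clamping; the 0.01 floor and
# the helper name are illustrative assumptions, not the PR's actual code.
MINIMAX_MIN_TEMPERATURE = 0.01

def clamp_temperature(provider: str, temperature: float) -> float:
    """Raise non-positive temperatures to a small floor for MiniMax,
    which rejects temperature == 0; other providers pass through."""
    if provider == "minimax" and temperature <= 0:
        return MINIMAX_MIN_TEMPERATURE
    return temperature
```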

Test plan

  • All 23 unit tests pass
  • Integration tests verify M2.7, M2.5, and highspeed model variants
  • Temperature clamping verified for edge cases

Summary by CodeRabbit

  • New Features

    • Multi-provider LLM support enabling seamless switching between OpenAI, MiniMax, and compatible providers
    • Environment-driven provider selection for flexible deployments
  • Documentation

    • New tutorial demonstrating a provider-agnostic conversational agent workflow
    • README expanded with multi-provider examples, updated catalog entries, and configuration guidance
  • Tests

    • Added unit and integration tests covering provider selection, model variants, temperature handling, and conversational history
  • Chores

    • Minor content and consistency edits across listings and descriptions

Add a shared utils/llm_provider.py module that lets any tutorial switch
between OpenAI and MiniMax (M2.5 / M2.5-highspeed, 204K context) via a
single get_llm(provider=...) call.  MiniMax uses an OpenAI-compatible
API so no extra SDK is needed.

Changes:
- utils/llm_provider.py: provider registry + get_llm() helper with
  temperature clamping for MiniMax (rejects 0)
- all_agents_tutorials/multi_provider_conversational_agent.ipynb:
  tutorial showing provider switching, side-by-side comparison, and
  env-driven selection
- tests/test_llm_provider.py: 23 unit tests (mocked)
- tests/test_minimax_integration.py: 6 live integration tests
  (auto-skipped when MINIMAX_API_KEY is absent)
- README.md: new table entry, framework section entry, key-features line
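A minimal sketch of the registry pattern described above. The registry values (default models, base URL) and the plain-dict return type are illustrative assumptions; the real module returns a configured LangChain ChatOpenAI instance.

```python
import os

# Hypothetical sketch of the PROVIDERS registry and get_llm() helper.
# Default models and the MiniMax base URL are assumptions for illustration;
# the real module builds a LangChain ChatOpenAI rather than returning a dict.
PROVIDERS = {
    "openai": {
        "api_key_env": "OPENAI_API_KEY",
        "base_url": None,
        "default_model": "gpt-4o-mini",  # assumption, not from the PR
    },
    "minimax": {
        "api_key_env": "MINIMAX_API_KEY",
        "base_url": "https://api.minimax.io/v1",  # assumption
        "default_model": "MiniMax-M2.7",
    },
}

def get_llm(provider=None, model=None, temperature=0.7, **kwargs):
    # Env-driven selection: fall back to LLM_PROVIDER, then to OpenAI.
    provider = provider or os.environ.get("LLM_PROVIDER", "openai")
    if provider not in PROVIDERS:
        raise ValueError(f"Unknown provider: {provider}")
    cfg = PROVIDERS[provider]
    api_key = os.environ.get(cfg["api_key_env"])
    if not api_key:
        raise EnvironmentError(f"{cfg['api_key_env']} is not set")
    if provider == "minimax" and temperature <= 0:
        temperature = 0.01  # MiniMax rejects 0; the floor is an assumption
    return {
        "model": model or cfg["default_model"],
        "base_url": cfg["base_url"],
        "api_key": api_key,
        "temperature": temperature,
        **kwargs,
    }
```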

coderabbitai bot commented Mar 17, 2026

📝 Walkthrough

Walkthrough

Adds a provider-agnostic LLM utility and provider registry (OpenAI, MiniMax), a new tutorial notebook, unit and integration tests for provider behavior and MiniMax integration, and a small utils package comment.

Changes

Cohort / File(s) Summary
LLM Provider Core
utils/llm_provider.py
New module adding PROVIDERS registry, list_providers(), get_provider_info(), and get_llm() to build LangChain ChatOpenAI instances; loads .env, validates API keys, resolves models, clamps MiniMax temperature, and forwards extra kwargs.
Tutorial & Docs
all_agents_tutorials/multi_provider_conversational_agent.ipynb, README.md
New Jupyter notebook demonstrating provider-agnostic conversational agent (build_agent, in-memory history, env-driven provider selection) and README updates describing multi-provider support and examples.
Tests — Unit
tests/test_llm_provider.py
Adds unit tests for provider catalog, get_provider_info() immutability, get_llm() behaviors across providers (model selection, temperature handling, API key handling, extra-kwargs forwarding) and error cases; uses mocks and env patching.
Tests — Integration
tests/test_minimax_integration.py
Adds MiniMax integration tests (guarded by MINIMAX_API_KEY) covering connectivity, model variants, temperature handling, and multi-turn context retention; skipped when key absent.
Package Init
utils/__init__.py
Adds a single-line comment: "# GenAI Agents - Shared Utilities".
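The MINIMAX_API_KEY guard on the integration tests can be sketched with stdlib unittest; class and method names here are illustrative, and the actual tests may use a different skip mechanism.

```python
import os
import unittest

# Skip the whole live-API test class when no MiniMax key is configured,
# so CI environments without credentials pass instead of failing on auth.
HAS_MINIMAX_KEY = bool(os.environ.get("MINIMAX_API_KEY"))

@unittest.skipUnless(HAS_MINIMAX_KEY, "MINIMAX_API_KEY not set; skipping live tests")
class TestMiniMaxIntegration(unittest.TestCase):
    def test_connectivity(self):
        # A real test would call the live MiniMax endpoint here.
        self.assertTrue(HAS_MINIMAX_KEY)
```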

Sequence Diagram

sequenceDiagram
    participant User
    participant App
    participant Selector as get_llm()
    participant Registry as PROVIDERS
    participant OpenAI
    participant MiniMax

    User->>App: invoke agent (provider explicit or via LLM_PROVIDER)
    App->>Selector: request LLM(provider, model, temperature, ...)
    Selector->>Registry: lookup provider config
    Registry-->>Selector: return api_key_env, base_url, default_model
    Selector->>Selector: load .env, read API key, adjust params (e.g., clamp temperature)
    alt provider == "minimax"
        Selector->>MiniMax: initialize ChatOpenAI with base_url and api_key
        MiniMax-->>Selector: return configured LLM instance
    else provider == "openai"
        Selector->>OpenAI: initialize ChatOpenAI with api_key
        OpenAI-->>Selector: return configured LLM instance
    end
    Selector-->>App: configured LLM returned
    App->>App: build_agent(llm) with history and prompt
    App->>OpenAI: send messages (or)
    App->>MiniMax: send messages
    OpenAI-->>App: response
    MiniMax-->>App: response
    App-->>User: present reply

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hopping through keys and provider trails,

I bind them neat in one quick trail of tails.
OpenAI or MiniMax — one little call,
Swap, run, and chat: I welcome them all! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 11.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the PR title accurately reflects the primary change, adding multi-provider LLM support with MiniMax M2.7 as a new provider, which is the core focus of the changeset.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@all_agents_tutorials/multi_provider_conversational_agent.ipynb`:
- Around line 92-101: The use of os.path.abspath("__file__") in the
sys.path.insert call is invalid in notebooks because __file__ is undefined;
update the sys.path insertion in the startup block (the sys.path.insert(0,
os.path.join(os.path.dirname(os.path.abspath("__file__")), "..")) call) to a
notebook-safe approach such as inserting the repository root explicitly (e.g.,
use "." or os.getcwd()) so imports like utils.llm_provider and the subsequent
imports (get_llm, list_providers, RunnableWithMessageHistory,
ChatMessageHistory, ChatPromptTemplate, MessagesPlaceholder) resolve reliably
when run in Jupyter.
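The notebook-safe fix described above can look like this (assuming the notebook is launched from all_agents_tutorials/, so the repo root is one directory up):

```python
import os
import sys

# __file__ is undefined in Jupyter, so derive the repo root from the
# current working directory instead of os.path.abspath("__file__").
repo_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```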

In `@README.md`:
- Around line 204-214: The README has duplicate numbering: "Multi-Provider
Conversational Agent..." is numbered 6 and the following "ATLAS: Academic Task
and Learning Agent System" is also numbered 6; update the ATLAS entry to 7 and
increment all subsequent numbered items accordingly so the sequence is
continuous; search for the section headings "Multi-Provider Conversational Agent
with MiniMax Support" and "ATLAS: Academic Task and Learning Agent System" to
locate the block and then renumber every following ordered entry (including
sublists) by +1.

In `@utils/llm_provider.py`:
- Around line 10-11: Replace the typographic EN DASH characters (U+2013) in the
module docstring bullets with a standard ASCII hyphen-minus (U+002D): change
entries like "**openai** – OpenAI GPT models (default)" and "**minimax** –
MiniMax M2.5 / M2.5-highspeed (204K context, OpenAI-compatible)" to use "-"
instead of "–", and apply the same replacement for the other EN DASH occurrences
in the same docstring near the other model bullet lines so all hyphens are
ASCII.
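For illustration, the replacement the review asks for is a single str.replace over each affected docstring line:

```python
# Replace the typographic EN DASH (U+2013) with an ASCII hyphen-minus (U+002D).
line = "**openai** \u2013 OpenAI GPT models (default)"
fixed = line.replace("\u2013", "-")
```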

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 00825fed-4cdc-4ae2-a62f-954ada87f003

📥 Commits

Reviewing files that changed from the base of the PR and between cbeab13 and 9a36036.

📒 Files selected for processing (7)
  • README.md
  • all_agents_tutorials/multi_provider_conversational_agent.ipynb
  • tests/__init__.py
  • tests/test_llm_provider.py
  • tests/test_minimax_integration.py
  • utils/__init__.py
  • utils/llm_provider.py

PR Bot and others added 2 commits March 17, 2026 14:11
- Replace os.path.abspath("__file__") with os.getcwd() in notebook
  (os.path.abspath on a literal string doesn't resolve to the notebook's
  directory)
- Fix duplicate section numbering in README.md (renumber ATLAS and all
  subsequent entries after new Multi-Provider Agent section)
- Replace typographic EN DASH (U+2013) with ASCII hyphen-minus (U+002D)
  in llm_provider.py docstring to avoid encoding issues
- Update default_model in PROVIDERS registry to MiniMax-M2.7
- Add M2.7 integration test alongside existing M2.5/highspeed tests
- Update notebook, README, and docstring references
- All 23 unit tests pass

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
@octo-patch changed the title from "feat: add multi-provider LLM support with MiniMax M2.5" to "feat: add multi-provider LLM support with MiniMax M2.7" on Mar 18, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 144-145: The README has a numbering mismatch: the summary table
row for "Multi-Provider Agent (MiniMax)" is labeled 46 while the detailed
section header for the same tutorial is labeled 6; update the summary table
entry for "Multi-Provider Agent (MiniMax)" to use the same index as the detailed
section (change 46 to 6) so the table and the detailed list match, or
alternatively rename the detailed section to 46—make the change in the README
where the table row containing "Multi-Provider Agent (MiniMax)" and the detailed
section header "Multi-Provider Agent (MiniMax)" appear so both indices are
identical.
- Line 422: Update the tutorial title string "Multi agent research team -
Autogen" to use proper compound-adjective hyphenation: change it to "Multi-agent
research team - Autogen" in the README where that link/title text appears so the
hyperlink and surrounding markup remain unchanged.

In `@tests/test_llm_provider.py`:
- Around line 151-166: The tests test_missing_api_key_raises and
test_missing_minimax_key_raises clear os.environ but still call get_llm, which
invokes load_dotenv at runtime; mock utils.llm_provider.load_dotenv by adding
@patch("utils.llm_provider.load_dotenv", return_value=None) above the existing
@patch.dict decorator on both test functions so load_dotenv is a no-op during
the tests, keeping the environment cleared so get_llm raises the expected
EnvironmentError.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: dcb46db6-b339-4fee-931f-8c8e3e849053

📥 Commits

Reviewing files that changed from the base of the PR and between e26e3bc and 0b5d317.

📒 Files selected for processing (5)
  • README.md
  • all_agents_tutorials/multi_provider_conversational_agent.ipynb
  • tests/test_llm_provider.py
  • tests/test_minimax_integration.py
  • utils/llm_provider.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • all_agents_tutorials/multi_provider_conversational_agent.ipynb
  • tests/test_minimax_integration.py

Comment on lines +144 to 145
| 46 | 🔧 **Framework** | [Multi-Provider Agent (MiniMax)](all_agents_tutorials/multi_provider_conversational_agent.ipynb) | LangChain | Multi-provider LLM support, OpenAI/MiniMax switching, env-based config |


⚠️ Potential issue | 🟡 Minor

Table index is inconsistent with the detailed section numbering.

Line 144 lists Multi-Provider Agent (MiniMax) as #46, while the detailed framework section introduces the same tutorial as #6 (Line 204). Please align the numbering scheme between the summary table and detailed list.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 144 - 145, The README has a numbering mismatch: the
summary table row for "Multi-Provider Agent (MiniMax)" is labeled 46 while the
detailed section header for the same tutorial is labeled 6; update the summary
table entry for "Multi-Provider Agent (MiniMax)" to use the same index as the
detailed section (change 46 to 6) so the table and the detailed list match, or
alternatively rename the detailed section to 46—make the change in the README
where the table row containing "Multi-Provider Agent (MiniMax)" and the detailed
section header "Multi-Provider Agent (MiniMax)" appear so both indices are
identical.



27. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
28. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**

⚠️ Potential issue | 🟡 Minor

Fix compound adjective hyphenation in the tutorial title.

Line 422 should use “Multi-agent” instead of “Multi agent”.

🔧 Proposed fix
-28. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
+28. **[Multi-agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
🧰 Tools
🪛 LanguageTool

[grammar] ~422-~422: Use a hyphen to join words.
Context: ... retrieval and synthesis. 28. **[Multi agent research team - Autogen](https://g...

(QB_NEW_EN_HYPHEN)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 422, Update the tutorial title string "Multi agent
research team - Autogen" to use proper compound-adjective hyphenation: change it
to "Multi-agent research team - Autogen" in the README where that link/title
text appears so the hyperlink and surrounding markup remain unchanged.

Comment on lines +151 to +166
    @patch.dict(os.environ, {}, clear=True)
    def test_missing_api_key_raises(self):
        # Remove all env vars to trigger the error
        for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
            os.environ.pop(key, None)
        with self.assertRaises(EnvironmentError) as ctx:
            get_llm(provider="openai")
        self.assertIn("OPENAI_API_KEY", str(ctx.exception))

    @patch.dict(os.environ, {}, clear=True)
    def test_missing_minimax_key_raises(self):
        for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
            os.environ.pop(key, None)
        with self.assertRaises(EnvironmentError) as ctx:
            get_llm(provider="minimax")
        self.assertIn("MINIMAX_API_KEY", str(ctx.exception))

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== get_llm dotenv usage =="
rg -n -C2 'def get_llm|load_dotenv\(' utils/llm_provider.py

echo
echo "== missing-key test decorators =="
rg -n -C3 'test_missing_api_key_raises|test_missing_minimax_key_raises|@patch\("utils\.llm_provider\.load_dotenv"' tests/test_llm_provider.py

Repository: NirDiamant/GenAI_Agents

Length of output: 1040


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Imports in utils/llm_provider.py =="
head -30 utils/llm_provider.py

echo
echo "== Full test methods to verify decorator placement =="
sed -n '151,166p' tests/test_llm_provider.py

Repository: NirDiamant/GenAI_Agents

Length of output: 1757


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Import statements in utils/llm_provider.py =="
rg -n '^from|^import' utils/llm_provider.py | head -20

echo
echo "== Test file imports =="
rg -n '^from|^import' tests/test_llm_provider.py | head -20

Repository: NirDiamant/GenAI_Agents

Length of output: 418


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== Check current state of test methods (full context) =="
sed -n '149,167p' tests/test_llm_provider.py

echo
echo "== All uses of load_dotenv in codebase =="
rg -n 'load_dotenv' --type py

echo
echo "== Check for .env file in repo =="
find . -name ".env*" -type f 2>/dev/null | head -20

Repository: NirDiamant/GenAI_Agents

Length of output: 1219


Mock load_dotenv() to prevent environment-dependent test behavior.

Tests at lines 151–166 use @patch.dict(os.environ, {}, clear=True) to clear the environment, but get_llm() calls load_dotenv() at runtime (line 105). If a .env file exists in the test environment, it will reload environment variables after the test setup, causing these tests to fail inconsistently.

Add @patch("utils.llm_provider.load_dotenv", return_value=None) above @patch.dict() in both test methods to prevent this:

Proposed fix
-    @patch.dict(os.environ, {}, clear=True)
-    def test_missing_api_key_raises(self):
+    @patch("utils.llm_provider.load_dotenv", return_value=None)
+    @patch.dict(os.environ, {}, clear=True)
+    def test_missing_api_key_raises(self, _mock_load_dotenv):
         # Remove all env vars to trigger the error
         for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
             os.environ.pop(key, None)
         with self.assertRaises(EnvironmentError) as ctx:
             get_llm(provider="openai")
         self.assertIn("OPENAI_API_KEY", str(ctx.exception))

-    @patch.dict(os.environ, {}, clear=True)
-    def test_missing_minimax_key_raises(self):
+    @patch("utils.llm_provider.load_dotenv", return_value=None)
+    @patch.dict(os.environ, {}, clear=True)
+    def test_missing_minimax_key_raises(self, _mock_load_dotenv):
         for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
             os.environ.pop(key, None)
         with self.assertRaises(EnvironmentError) as ctx:
             get_llm(provider="minimax")
         self.assertIn("MINIMAX_API_KEY", str(ctx.exception))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_llm_provider.py` around lines 151 - 166, the tests
test_missing_api_key_raises and test_missing_minimax_key_raises clear os.environ
but still call get_llm, which invokes load_dotenv at runtime; mock
utils.llm_provider.load_dotenv by adding
@patch("utils.llm_provider.load_dotenv", return_value=None) above the existing
@patch.dict decorator on both test functions so load_dotenv is a no-op during
the tests, keeping the environment cleared so get_llm raises the expected
EnvironmentError.

@NirDiamant
Owner

Same feedback as the parallel PR on RAG_Techniques - the multi-provider pattern is useful but needs to be genuinely provider-agnostic rather than MiniMax-focused. Please see my comment on NirDiamant/RAG_Techniques#128 for details.
