feat: add multi-provider LLM support with MiniMax M2.7 #105
octo-patch wants to merge 3 commits into NirDiamant:main
Conversation
Add a shared `utils/llm_provider.py` module that lets any tutorial switch between OpenAI and MiniMax (M2.5 / M2.5-highspeed, 204K context) via a single `get_llm(provider=...)` call. MiniMax uses an OpenAI-compatible API, so no extra SDK is needed.

Changes:
- `utils/llm_provider.py`: provider registry + `get_llm()` helper with temperature clamping for MiniMax (rejects 0)
- `all_agents_tutorials/multi_provider_conversational_agent.ipynb`: tutorial showing provider switching, side-by-side comparison, and env-driven selection
- `tests/test_llm_provider.py`: 23 unit tests (mocked)
- `tests/test_minimax_integration.py`: 6 live integration tests (auto-skipped when `MINIMAX_API_KEY` is absent)
- `README.md`: new table entry, framework section entry, key-features line
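For readers skimming the PR, a minimal sketch of the registry-plus-helper pattern described above. The real module returns a configured `ChatOpenAI` instance; here a plain dict stands in so the control flow (registry lookup, API-key check, MiniMax temperature clamp) is visible. The base URL and default model names below are illustrative assumptions, not the PR's exact values.

```python
import os

# Sketch of the provider registry described in the PR; values are assumptions.
PROVIDERS = {
    "openai": {
        "api_key_env": "OPENAI_API_KEY",
        "base_url": None,  # use the SDK default endpoint
        "default_model": "gpt-4o-mini",
    },
    "minimax": {
        "api_key_env": "MINIMAX_API_KEY",
        "base_url": "https://api.minimax.example/v1",  # placeholder endpoint
        "default_model": "MiniMax-M2.5",
    },
}

def get_llm(provider="openai", model=None, temperature=0.7):
    """Resolve provider config and return a (stand-in) configured LLM."""
    if provider not in PROVIDERS:
        raise ValueError(f"Unknown provider: {provider}")
    cfg = PROVIDERS[provider]
    api_key = os.environ.get(cfg["api_key_env"])
    if not api_key:
        raise EnvironmentError(f"{cfg['api_key_env']} is not set")
    if provider == "minimax" and temperature <= 0:
        temperature = 0.01  # MiniMax rejects temperature 0, so clamp upward
    # The real helper would build ChatOpenAI(model=..., base_url=..., ...) here.
    return {
        "model": model or cfg["default_model"],
        "base_url": cfg["base_url"],
        "temperature": temperature,
    }
```

Env-driven selection then reduces to `get_llm(provider=os.environ.get("LLM_PROVIDER", "openai"))`.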
📝 Walkthrough
Adds a provider-agnostic LLM utility and provider registry (OpenAI, MiniMax), a new tutorial notebook, unit and integration tests for provider behavior and MiniMax integration, and a small utils package comment.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant App
    participant Selector as get_llm()
    participant Registry as PROVIDERS
    participant OpenAI
    participant MiniMax
    User->>App: invoke agent (provider explicit or via LLM_PROVIDER)
    App->>Selector: request LLM(provider, model, temperature, ...)
    Selector->>Registry: lookup provider config
    Registry-->>Selector: return api_key_env, base_url, default_model
    Selector->>Selector: load .env, read API key, adjust params (e.g., clamp temperature)
    alt provider == "minimax"
        Selector->>MiniMax: initialize ChatOpenAI with base_url and api_key
        MiniMax-->>Selector: return configured LLM instance
    else provider == "openai"
        Selector->>OpenAI: initialize ChatOpenAI with api_key
        OpenAI-->>Selector: return configured LLM instance
    end
    Selector-->>App: configured LLM returned
    App->>App: build_agent(llm) with history and prompt
    App->>OpenAI: send messages (or)
    App->>MiniMax: send messages
    OpenAI-->>App: response
    MiniMax-->>App: response
    App-->>User: present reply
```
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@all_agents_tutorials/multi_provider_conversational_agent.ipynb`:
- Around line 92-101: The use of os.path.abspath("__file__") in the
sys.path.insert call is invalid in notebooks because __file__ is undefined;
update the sys.path insertion in the startup block (the sys.path.insert(0,
os.path.join(os.path.dirname(os.path.abspath("__file__")), "..")) call) to a
notebook-safe approach such as inserting the repository root explicitly (e.g.,
use "." or os.getcwd()) so imports like utils.llm_provider and the subsequent
imports (get_llm, list_providers, RunnableWithMessageHistory,
ChatMessageHistory, ChatPromptTemplate, MessagesPlaceholder) resolve reliably
when run in Jupyter.
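  A notebook-safe version of that startup block might look like the following sketch. It assumes the notebook is launched from `all_agents_tutorials/`, so the repository root is one directory up from the current working directory; the path logic is an assumption, not the PR's exact fix.

  ```python
  import os
  import sys

  # __file__ is undefined inside Jupyter, so anchor on the current working
  # directory instead. When the notebook runs from all_agents_tutorials/,
  # its parent directory is the repository root containing utils/.
  repo_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
  if repo_root not in sys.path:
      sys.path.insert(0, repo_root)
  ```

  After this, `from utils.llm_provider import get_llm, list_providers` should resolve regardless of how the kernel was started from that directory.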
In `@README.md`:
- Around line 204-214: The README has duplicate numbering: "Multi-Provider
Conversational Agent..." is numbered 6 and the following "ATLAS: Academic Task
and Learning Agent System" is also numbered 6; update the ATLAS entry to 7 and
increment all subsequent numbered items accordingly so the sequence is
continuous; search for the section headings "Multi-Provider Conversational Agent
with MiniMax Support" and "ATLAS: Academic Task and Learning Agent System" to
locate the block and then renumber every following ordered entry (including
sublists) by +1.
In `@utils/llm_provider.py`:
- Around line 10-11: Replace the typographic EN DASH characters (U+2013) in the
module docstring bullets with a standard ASCII hyphen-minus (U+002D): change
entries like "**openai** – OpenAI GPT models (default)" and "**minimax** –
MiniMax M2.5 / M2.5-highspeed (204K context, OpenAI-compatible)" to use "-"
instead of "–", and apply the same replacement for the other EN DASH occurrences
in the same docstring near the other model bullet lines so all hyphens are
ASCII.
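  The fix is a single-character substitution; a quick sanity check (the bullet string is taken from the comment above):

  ```python
  # EN DASH (U+2013) vs ASCII hyphen-minus (U+002D) in a docstring bullet.
  bullet = "**openai** \u2013 OpenAI GPT models (default)"
  fixed = bullet.replace("\u2013", "-")
  print(fixed)  # **openai** - OpenAI GPT models (default)
  ```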
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 00825fed-4cdc-4ae2-a62f-954ada87f003
📒 Files selected for processing (7)
- README.md
- all_agents_tutorials/multi_provider_conversational_agent.ipynb
- tests/__init__.py
- tests/test_llm_provider.py
- tests/test_minimax_integration.py
- utils/__init__.py
- utils/llm_provider.py
- Replace os.path.abspath("__file__") with os.getcwd() in notebook
(os.path.abspath on a literal string doesn't resolve to the notebook's
directory)
- Fix duplicate section numbering in README.md (renumber ATLAS and all
subsequent entries after new Multi-Provider Agent section)
- Replace typographic EN DASH (U+2013) with ASCII hyphen-minus (U+002D)
in llm_provider.py docstring to avoid encoding issues
- Update default_model in PROVIDERS registry to MiniMax-M2.7
- Add M2.7 integration test alongside existing M2.5/highspeed tests
- Update notebook, README, and docstring references
- All 23 unit tests pass

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 144-145: The README has a numbering mismatch: the summary table
row for "Multi-Provider Agent (MiniMax)" is labeled 46 while the detailed
section header for the same tutorial is labeled 6; update the summary table
entry for "Multi-Provider Agent (MiniMax)" to use the same index as the detailed
section (change 46 to 6) so the table and the detailed list match, or
alternatively rename the detailed section to 46—make the change in the README
where the table row containing "Multi-Provider Agent (MiniMax)" and the detailed
section header "Multi-Provider Agent (MiniMax)" appear so both indices are
identical.
- Line 422: Update the tutorial title string "Multi agent research team -
Autogen" to use proper compound-adjective hyphenation: change it to "Multi-agent
research team - Autogen" in the README where that link/title text appears so the
hyperlink and surrounding markup remain unchanged.
In `@tests/test_llm_provider.py`:
- Around line 151-166: The tests test_missing_api_key_raises and
test_missing_minimax_key_raises clear os.environ but still call get_llm which
invokes load_dotenv at runtime; mock utils.llm_provider.load_dotenv to prevent
reloading environment by adding `@patch`("utils.llm_provider.load_dotenv",
return_value=None) above the existing `@patch.dict` decorator on both test
functions so load_dotenv is a no-op during the tests, keeping the environment
cleared for get_llm to raise the expected EnvironmentError.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dcb46db6-b339-4fee-931f-8c8e3e849053
📒 Files selected for processing (5)
- README.md
- all_agents_tutorials/multi_provider_conversational_agent.ipynb
- tests/test_llm_provider.py
- tests/test_minimax_integration.py
- utils/llm_provider.py
🚧 Files skipped from review as they are similar to previous changes (2)
- all_agents_tutorials/multi_provider_conversational_agent.ipynb
- tests/test_minimax_integration.py
| 46 | 🔧 **Framework** | [Multi-Provider Agent (MiniMax)](all_agents_tutorials/multi_provider_conversational_agent.ipynb) | LangChain | Multi-provider LLM support, OpenAI/MiniMax switching, env-based config |
Table index is inconsistent with the detailed section numbering.
Line 144 lists Multi-Provider Agent (MiniMax) as #46, while the detailed framework section introduces the same tutorial as #6 (Line 204). Please align the numbering scheme between the summary table and detailed list.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 144 - 145, The README has a numbering mismatch: the
summary table row for "Multi-Provider Agent (MiniMax)" is labeled 46 while the
detailed section header for the same tutorial is labeled 6; update the summary
table entry for "Multi-Provider Agent (MiniMax)" to use the same index as the
detailed section (change 46 to 6) so the table and the detailed list match, or
alternatively rename the detailed section to 46—make the change in the README
where the table row containing "Multi-Provider Agent (MiniMax)" and the detailed
section header "Multi-Provider Agent (MiniMax)" appear so both indices are
identical.
27. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
28. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
Fix compound adjective hyphenation in the tutorial title.
Line 422 should use “Multi-agent” instead of “Multi agent”.
🔧 Proposed fix

```diff
-28. **[Multi agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
+28. **[Multi-agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
28. **[Multi-agent research team - Autogen](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/research_team_autogen.ipynb)**
🧰 Tools
🪛 LanguageTool
[grammar] ~422-~422: Use a hyphen to join words.
Context: ... retrieval and synthesis. 28. **[Multi agent research team - Autogen](https://g...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 422, Update the tutorial title string "Multi agent
research team - Autogen" to use proper compound-adjective hyphenation: change it
to "Multi-agent research team - Autogen" in the README where that link/title
text appears so the hyperlink and surrounding markup remain unchanged.
```python
@patch.dict(os.environ, {}, clear=True)
def test_missing_api_key_raises(self):
    # Remove all env vars to trigger the error
    for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
        os.environ.pop(key, None)
    with self.assertRaises(EnvironmentError) as ctx:
        get_llm(provider="openai")
    self.assertIn("OPENAI_API_KEY", str(ctx.exception))

@patch.dict(os.environ, {}, clear=True)
def test_missing_minimax_key_raises(self):
    for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
        os.environ.pop(key, None)
    with self.assertRaises(EnvironmentError) as ctx:
        get_llm(provider="minimax")
    self.assertIn("MINIMAX_API_KEY", str(ctx.exception))
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "== get_llm dotenv usage =="
rg -n -C2 'def get_llm|load_dotenv\(' utils/llm_provider.py
echo
echo "== missing-key test decorators =="
rg -n -C3 'test_missing_api_key_raises|test_missing_minimax_key_raises|@patch\("utils\.llm_provider\.load_dotenv"' tests/test_llm_provider.py
```

Repository: NirDiamant/GenAI_Agents
Length of output: 1040
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "== Imports in utils/llm_provider.py =="
head -30 utils/llm_provider.py
echo
echo "== Full test methods to verify decorator placement =="
sed -n '151,166p' tests/test_llm_provider.py
```

Repository: NirDiamant/GenAI_Agents
Length of output: 1757
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "== Import statements in utils/llm_provider.py =="
rg -n '^from|^import' utils/llm_provider.py | head -20
echo
echo "== Test file imports =="
rg -n '^from|^import' tests/test_llm_provider.py | head -20
```

Repository: NirDiamant/GenAI_Agents
Length of output: 418
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "== Check current state of test methods (full context) =="
sed -n '149,167p' tests/test_llm_provider.py
echo
echo "== All uses of load_dotenv in codebase =="
rg -n 'load_dotenv' --type py
echo
echo "== Check for .env file in repo =="
find . -name ".env*" -type f 2>/dev/null | head -20
```

Repository: NirDiamant/GenAI_Agents
Length of output: 1219
Mock load_dotenv() to prevent environment-dependent test behavior.
Tests at lines 151–166 use @patch.dict(os.environ, {}, clear=True) to clear the environment, but get_llm() calls load_dotenv() at runtime (line 105). If a .env file exists in the test environment, it will reload environment variables after the test setup, causing these tests to fail inconsistently.
Add @patch("utils.llm_provider.load_dotenv", return_value=None) above @patch.dict() in both test methods to prevent this:
Proposed fix

```diff
-@patch.dict(os.environ, {}, clear=True)
-def test_missing_api_key_raises(self):
+@patch("utils.llm_provider.load_dotenv", return_value=None)
+@patch.dict(os.environ, {}, clear=True)
+def test_missing_api_key_raises(self, _mock_load_dotenv):
     # Remove all env vars to trigger the error
     for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
         os.environ.pop(key, None)
     with self.assertRaises(EnvironmentError) as ctx:
         get_llm(provider="openai")
     self.assertIn("OPENAI_API_KEY", str(ctx.exception))

-@patch.dict(os.environ, {}, clear=True)
-def test_missing_minimax_key_raises(self):
+@patch("utils.llm_provider.load_dotenv", return_value=None)
+@patch.dict(os.environ, {}, clear=True)
+def test_missing_minimax_key_raises(self, _mock_load_dotenv):
     for key in ("OPENAI_API_KEY", "MINIMAX_API_KEY"):
         os.environ.pop(key, None)
     with self.assertRaises(EnvironmentError) as ctx:
         get_llm(provider="minimax")
     self.assertIn("MINIMAX_API_KEY", str(ctx.exception))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_llm_provider.py` around lines 151 - 166, The tests
test_missing_api_key_raises and test_missing_minimax_key_raises clear os.environ
but still call get_llm which invokes load_dotenv at runtime; mock
utils.llm_provider.load_dotenv to prevent reloading environment by adding
`@patch`("utils.llm_provider.load_dotenv", return_value=None) above the existing
`@patch.dict` decorator on both test functions so load_dotenv is a no-op during
the tests, keeping the environment cleared for get_llm to raise the expected
EnvironmentError.
Same feedback as the parallel PR on RAG_Techniques - the multi-provider pattern is useful but needs to be genuinely provider-agnostic rather than MiniMax-focused. Please see my comment on NirDiamant/RAG_Techniques#128 for details.
Summary
- `utils/llm_provider.py` with provider registry supporting OpenAI and MiniMax

Test plan
Summary by CodeRabbit
New Features
Documentation
Tests
Chores