10 changes: 10 additions & 0 deletions README.md
@@ -88,6 +88,7 @@ Whether you're a novice eager to learn or an expert ready to share your knowledg
- 🛠️ Practical, ready-to-use agent implementations
- 🌟 Regular updates with the latest advancements in GenAI
- 🤝 Share your own agent creations with the community
- 🔌 Multi-provider LLM support — switch between OpenAI, [MiniMax](https://www.minimaxi.com/), and other OpenAI-compatible providers via `utils/llm_provider.py`

## GenAI Agent Implementations

@@ -140,6 +141,7 @@ Below is a comprehensive overview of our GenAI agent implementations, organized
| 43 | 🔍 **QA** | [EU Green Deal Bot](all_agents_tutorials/EU_Green_Compliance_FAQ_Bot.ipynb) | LangGraph | Regulatory compliance, FAQ system |
| 44 | 🔍 **QA** | [Systematic Review](all_agents_tutorials/systematic_review_of_scientific_articles.ipynb) | LangGraph | Academic paper processing, draft generation |
| 45 | 🌟 **Advanced** | [Controllable RAG Agent](https://github.com/NirDiamant/Controllable-RAG-Agent) | Custom | Complex question answering, deterministic graph |
| 46 | 🔧 **Framework** | [Multi-Provider Agent (MiniMax)](all_agents_tutorials/multi_provider_conversational_agent.ipynb) | LangChain | Multi-provider LLM support, OpenAI/MiniMax switching, env-based config |

Comment on lines +144 to 145
⚠️ Potential issue | 🟡 Minor

Table index is inconsistent with the detailed section numbering.

Line 144 lists Multi-Provider Agent (MiniMax) as #46, while the detailed framework section introduces the same tutorial as #6 (Line 204). Please align the numbering scheme between the summary table and detailed list.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 144-145, the summary table row for "Multi-Provider Agent (MiniMax)" is labeled 46, while the detailed section header for the same tutorial is labeled 6. Make the two indices identical: either change the table entry from 46 to 6, or renumber the detailed section header to 46.

Explore our extensive list of GenAI agent implementations, sorted by categories:

@@ -199,6 +201,14 @@ Explore our extensive list of GenAI agent implementations, sorted by categories:
- **[Official MCP Documentation](https://modelcontextprotocol.io/introduction)**
- **[MCP GitHub Repository](https://github.com/modelcontextprotocol)**

6. **[Multi-Provider Conversational Agent with MiniMax Support](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/multi_provider_conversational_agent.ipynb)**

#### Overview 🔎
Demonstrates how to build a conversational agent that works with **multiple LLM providers** through a unified interface. Shows how to use [MiniMax](https://www.minimaxi.com/) M2.5 (204K context window, OpenAI-compatible API) alongside OpenAI, switching providers with a single parameter change.

#### Implementation 🛠️
Introduces a shared `utils/llm_provider.py` module with a provider registry and `get_llm()` helper. Any tutorial notebook can import it to switch between OpenAI and MiniMax without changing agent logic. Includes environment-driven provider selection for production use.
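   As a rough illustration, a minimal version of such a provider registry and `get_llm()` helper might look like the sketch below. The base URLs, default model names, and structure are assumptions for illustration, not the exact contents of `utils/llm_provider.py`:

   ```python
   import os

   # Hypothetical registry sketch; the real utils/llm_provider.py may differ.
   # Base URLs and default model names here are assumptions.
   PROVIDERS = {
       "openai": {
           "base_url": None,  # None means: use the client library's default
           "api_key_env": "OPENAI_API_KEY",
           "default_model": "gpt-4o-mini",
       },
       "minimax": {
           "base_url": "https://api.minimaxi.com/v1",  # assumed endpoint
           "api_key_env": "MINIMAX_API_KEY",
           "default_model": "MiniMax-M2.5",
       },
   }

   def list_providers():
       """Return the names of all registered providers."""
       return sorted(PROVIDERS)

   def get_llm(provider="openai", model=None, **kwargs):
       """Return a ChatOpenAI client configured for the given provider."""
       from langchain_openai import ChatOpenAI  # imported lazily

       cfg = PROVIDERS[provider]
       return ChatOpenAI(
           model=model or cfg["default_model"],
           base_url=cfg["base_url"],
           api_key=os.environ[cfg["api_key_env"]],
           **kwargs,
       )
   ```

   Because every registered provider exposes an OpenAI-compatible API, the same `ChatOpenAI` class can serve all of them; adding a provider is just a new registry entry.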

### 🎓 Educational and Research Agents

6. **[ATLAS: Academic Task and Learning Agent System](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/Academic_Task_Learning_Agent_LangGraph.ipynb)**
343 changes: 343 additions & 0 deletions all_agents_tutorials/multi_provider_conversational_agent.ipynb
@@ -0,0 +1,343 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multi-Provider Conversational Agent with MiniMax Support\n",
"\n",
"## Overview\n",
"This tutorial demonstrates how to build a conversational agent that can work with **multiple LLM providers** through a single, unified interface. In addition to OpenAI, we show how to use [MiniMax](https://www.minimaxi.com/) (M2.5 / M2.5-highspeed) as an alternative LLM backend.\n",
"\n",
"## Motivation\n",
"Most GenAI agent tutorials are hard-wired to a single provider (typically OpenAI). In production you often need to:\n",
"- **Switch providers** without rewriting agent code.\n",
"- **Reduce costs** by routing certain workloads to cheaper models.\n",
"- **Improve resilience** by falling back to a secondary provider when the primary is unavailable.\n",
"\n",
"MiniMax M2.5 is a high-capability model with a **204K token context window** and an OpenAI-compatible API, making it a practical alternative.\n",
"\n",
"## Key Components\n",
"1. **`utils/llm_provider.py`** \u2013 shared helper that returns a LangChain `ChatOpenAI` instance for any registered provider.\n",
"2. **Provider registry** \u2013 a dictionary mapping provider names to their configuration (base URL, API-key env var, default model).\n",
"3. **Conversational chain** \u2013 LangChain prompt + LLM + message history, identical regardless of which provider is active.\n",
"\n",
"## Method Details\n",
"\n",
"### Architecture\n",
"```\n",
"User Input\n",
" \u2502\n",
" \u25bc\n",
"Prompt Template \u2500\u2500\u2500\u2500\u25b6 get_llm(provider) \u2500\u2500\u2500\u2500\u25b6 LLM Response\n",
" \u25b2 \u2502\n",
" \u2502 \u250c\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2510\n",
"History Store \u2502 OpenAI \u2502\n",
" \u2502 MiniMax \u2502\n",
" \u2502 ... \u2502\n",
" \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n",
"```\n",
"\n",
"The `get_llm()` function reads the provider configuration and returns the right `ChatOpenAI` object. The rest of the agent code never touches provider-specific details."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Install required packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %pip install -q langchain langchain-openai langchain-community openai python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure environment variables\n",
"\n",
"Create a `.env` file in the repository root (or export the variables in your shell):\n",
"\n",
"```bash\n",
"# Required for OpenAI provider\n",
"OPENAI_API_KEY=sk-...\n",
"\n",
"# Required for MiniMax provider (get yours at https://www.minimaxi.com/)\n",
"MINIMAX_API_KEY=sk-...\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import the multi-provider helper and LangChain components"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys, os\n",
"\n",
"# Ensure the repository root is on the Python path so we can import utils/\n",
"sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(\"__file__\")), \"..\"))\n",
"\n",
"from utils.llm_provider import get_llm, list_providers\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List available providers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Available LLM providers:\", list_providers())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 \u2013 Conversational Agent with OpenAI (baseline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the LLM via the unified helper"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_openai = get_llm(provider=\"openai\", model=\"gpt-4o-mini\", temperature=0)\n",
"print(f\"Provider: OpenAI | Model: {llm_openai.model_name}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the conversational chain (provider-agnostic)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def build_agent(llm):\n",
" \"\"\"Build a conversational agent chain for any LangChain-compatible LLM.\"\"\"\n",
" prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"You are a helpful AI assistant. Keep answers concise.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ])\n",
"\n",
" store: dict = {}\n",
"\n",
" def get_history(session_id: str):\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
" chain = prompt | llm\n",
" return RunnableWithMessageHistory(\n",
" chain,\n",
" get_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
" ), store"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_openai, history_openai = build_agent(llm_openai)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_openai\"}}\n",
"\n",
"r1 = agent_openai.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n",
"print(\"OpenAI:\", r1.content)\n",
"\n",
"r2 = agent_openai.invoke({\"input\": \"What is my name?\"}, config=config)\n",
"print(\"OpenAI:\", r2.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2 \u2013 Switch to MiniMax M2.5\n",
"\n",
"Switching providers is a one-line change. The rest of the agent code stays exactly the same."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_minimax = get_llm(provider=\"minimax\") # defaults to MiniMax-M2.5\n",
"print(f\"Provider: MiniMax | Model: {llm_minimax.model_name}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_minimax, history_minimax = build_agent(llm_minimax)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_minimax\"}}\n",
"\n",
"r1 = agent_minimax.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n",
"print(\"MiniMax:\", r1.content)\n",
"\n",
"r2 = agent_minimax.invoke({\"input\": \"What is my name?\"}, config=config)\n",
"print(\"MiniMax:\", r2.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the high-speed variant for lower latency"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_fast = get_llm(provider=\"minimax\", model=\"MiniMax-M2.5-highspeed\")\n",
"agent_fast, _ = build_agent(llm_fast)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_fast\"}}\n",
"r = agent_fast.invoke({\"input\": \"Explain quantum computing in two sentences.\"}, config=config)\n",
"print(\"MiniMax (highspeed):\", r.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3 \u2013 Compare providers side-by-side\n",
"\n",
"Run the same prompt through both providers and compare outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt_text = \"What are the three most important factors when choosing a cloud LLM provider?\"\n",
"\n",
"for name, llm in [(\"OpenAI\", llm_openai), (\"MiniMax\", llm_minimax)]:\n",
" agent, _ = build_agent(llm)\n",
" config = {\"configurable\": {\"session_id\": f\"compare_{name}\"}}\n",
" resp = agent.invoke({\"input\": prompt_text}, config=config)\n",
" print(f\"\\n--- {name} ---\")\n",
" print(resp.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4 \u2013 Environment-driven provider selection\n",
"\n",
"In production, you often want the provider to be controlled by an environment variable rather than hard-coded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Set via environment (e.g. LLM_PROVIDER=minimax python app.py)\n",
"provider_name = os.getenv(\"LLM_PROVIDER\", \"minimax\")\n",
"llm_env = get_llm(provider=provider_name)\n",
"\n",
"agent_env, _ = build_agent(llm_env)\n",
"config = {\"configurable\": {\"session_id\": \"env_demo\"}}\n",
"r = agent_env.invoke({\"input\": \"Summarize the benefits of multi-provider LLM setups.\"}, config=config)\n",
"print(f\"[{provider_name}]:\", r.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"By introducing a thin **provider abstraction** (`utils/llm_provider.py`), all tutorials in this repository can:\n",
"\n",
"- Support **OpenAI** and **MiniMax** (and any future OpenAI-compatible provider) without code changes.\n",
"- Switch providers via a single parameter or environment variable.\n",
"- Share the same agent logic regardless of the underlying LLM.\n",
"\n",
"### Next steps\n",
"- Explore more complex agents in this repository using the `get_llm()` helper.\n",
"- Add additional providers by extending `PROVIDERS` in `utils/llm_provider.py`.\n",
"- Try MiniMax-M2.5-highspeed for latency-sensitive workloads.\n",
"\n",
"### References\n",
"- [MiniMax Platform](https://www.minimaxi.com/)\n",
"- [MiniMax API Documentation](https://www.minimaxi.com/document/introduction)\n",
"- [LangChain OpenAI Integration](https://python.langchain.com/docs/integrations/chat/openai/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
Empty file added tests/__init__.py
Empty file.