90 changes: 50 additions & 40 deletions README.md

Large diffs are not rendered by default.

333 changes: 333 additions & 0 deletions all_agents_tutorials/multi_provider_conversational_agent.ipynb
@@ -0,0 +1,333 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multi-Provider Conversational Agent with MiniMax Support\n",
"\n",
"## Overview\n",
"This tutorial demonstrates how to build a conversational agent that can work with **multiple LLM providers** through a single, unified interface. In addition to OpenAI, we show how to use [MiniMax](https://www.minimaxi.com/) (M2.7 / M2.5 / M2.5-highspeed) as an alternative LLM backend.\n",
"\n",
"## Motivation\n",
"Most GenAI agent tutorials are hard-wired to a single provider (typically OpenAI). In production you often need to:\n",
"- **Switch providers** without rewriting agent code.\n",
"- **Reduce costs** by routing certain workloads to cheaper models.\n",
"- **Improve resilience** by falling back to a secondary provider when the primary is unavailable.\n",
"\n",
"MiniMax M2.7 is the latest high-capability model with a **204K token context window** and an OpenAI-compatible API, making it a practical alternative.\n",
"\n",
"## Key Components\n",
"1. **`utils/llm_provider.py`** – shared helper that returns a LangChain `ChatOpenAI` instance for any registered provider.\n",
"2. **Provider registry** – a dictionary mapping provider names to their configuration (base URL, API-key env var, default model).\n",
"3. **Conversational chain** – LangChain prompt + LLM + message history, identical regardless of which provider is active.\n",
"\n",
"## Method Details\n",
"\n",
"### Architecture\n",
"```\n",
"User Input\n",
" │\n",
" ▼\n",
"Prompt Template ────▶ get_llm(provider) ────▶ LLM Response\n",
" ▲ │\n",
" │ ┌────┴────┐\n",
"History Store │ OpenAI │\n",
" │ MiniMax │\n",
" │ ... │\n",
" └─────────┘\n",
"```\n",
"\n",
"The `get_llm()` function reads the provider configuration and returns the right `ChatOpenAI` object. The rest of the agent code never touches provider-specific details."
]
},
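The notebook imports `get_llm` and `list_providers` from `utils/llm_provider.py`, but that file is not included here. A minimal sketch of the registry pattern described above — where the field names and the MiniMax base URL are assumptions, not the actual implementation — might look like:

```python
import os

# Hypothetical registry -- field names and the MiniMax base URL are
# assumptions; the real utils/llm_provider.py may differ.
PROVIDERS = {
    "openai": {
        "base_url": None,  # None -> use the langchain_openai default
        "api_key_env": "OPENAI_API_KEY",
        "default_model": "gpt-4o-mini",
    },
    "minimax": {
        "base_url": "https://api.minimaxi.com/v1",  # assumed endpoint
        "api_key_env": "MINIMAX_API_KEY",
        "default_model": "MiniMax-M2.7",
    },
}


def list_providers():
    """Return the names of all registered providers."""
    return sorted(PROVIDERS)


def get_llm(provider, model=None, **kwargs):
    """Return a ChatOpenAI instance configured for the given provider."""
    from langchain_openai import ChatOpenAI  # imported lazily

    cfg = PROVIDERS[provider]
    return ChatOpenAI(
        model=model or cfg["default_model"],
        base_url=cfg["base_url"],
        api_key=os.environ[cfg["api_key_env"]],
        **kwargs,
    )
```

Because every provider in the registry exposes an OpenAI-compatible API, a single `ChatOpenAI` class can serve all of them; adding a provider is just another dictionary entry.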
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Install required packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %pip install -q langchain langchain_openai openai python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure environment variables\n",
"\n",
"Create a `.env` file in the repository root (or export the variables in your shell):\n",
"\n",
"```bash\n",
"# Required for OpenAI provider\n",
"OPENAI_API_KEY=sk-...\n",
"\n",
"# Required for MiniMax provider (get yours at https://www.minimaxi.com/)\n",
"MINIMAX_API_KEY=sk-...\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import the multi-provider helper and LangChain components"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import sys, os\n\n# Ensure the repository root is on the Python path so we can import utils/\nsys.path.insert(0, os.path.join(os.getcwd(), \"..\"))\n\nfrom utils.llm_provider import get_llm, list_providers\nfrom langchain_core.runnables.history import RunnableWithMessageHistory\nfrom langchain_community.chat_message_histories import ChatMessageHistory\nfrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List available providers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Available LLM providers:\", list_providers())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 – Conversational Agent with OpenAI (baseline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the LLM via the unified helper"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_openai = get_llm(provider=\"openai\", model=\"gpt-4o-mini\", temperature=0)\n",
"print(f\"Provider: OpenAI | Model: {llm_openai.model_name}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the conversational chain (provider-agnostic)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def build_agent(llm):\n",
" \"\"\"Build a conversational agent chain for any LangChain-compatible LLM.\"\"\"\n",
" prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"You are a helpful AI assistant. Keep answers concise.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ])\n",
"\n",
" store: dict = {}\n",
"\n",
" def get_history(session_id: str):\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
" chain = prompt | llm\n",
" return RunnableWithMessageHistory(\n",
" chain,\n",
" get_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
" ), store"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_openai, history_openai = build_agent(llm_openai)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_openai\"}}\n",
"\n",
"r1 = agent_openai.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n",
"print(\"OpenAI:\", r1.content)\n",
"\n",
"r2 = agent_openai.invoke({\"input\": \"What is my name?\"}, config=config)\n",
"print(\"OpenAI:\", r2.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2 – Switch to MiniMax M2.7\n",
"\n",
"Switching providers is a one-line change. The rest of the agent code stays exactly the same."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_minimax = get_llm(provider=\"minimax\") # defaults to MiniMax-M2.7\n",
"print(f\"Provider: MiniMax | Model: {llm_minimax.model_name}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_minimax, history_minimax = build_agent(llm_minimax)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_minimax\"}}\n",
"\n",
"r1 = agent_minimax.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n",
"print(\"MiniMax:\", r1.content)\n",
"\n",
"r2 = agent_minimax.invoke({\"input\": \"What is my name?\"}, config=config)\n",
"print(\"MiniMax:\", r2.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the high-speed variant for lower latency"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_fast = get_llm(provider=\"minimax\", model=\"MiniMax-M2.5-highspeed\")\n",
"agent_fast, _ = build_agent(llm_fast)\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"demo_fast\"}}\n",
"r = agent_fast.invoke({\"input\": \"Explain quantum computing in two sentences.\"}, config=config)\n",
"print(\"MiniMax (highspeed):\", r.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3 – Compare providers side-by-side\n",
"\n",
"Run the same prompt through both providers and compare outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt_text = \"What are the three most important factors when choosing a cloud LLM provider?\"\n",
"\n",
"for name, llm in [(\"OpenAI\", llm_openai), (\"MiniMax\", llm_minimax)]:\n",
" agent, _ = build_agent(llm)\n",
" config = {\"configurable\": {\"session_id\": f\"compare_{name}\"}}\n",
" resp = agent.invoke({\"input\": prompt_text}, config=config)\n",
" print(f\"\\n--- {name} ---\")\n",
" print(resp.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4 – Environment-driven provider selection\n",
"\n",
"In production, you often want the provider to be controlled by an environment variable rather than hard-coded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Set via environment (e.g. LLM_PROVIDER=minimax python app.py)\n",
"provider_name = os.getenv(\"LLM_PROVIDER\", \"minimax\")\n",
"llm_env = get_llm(provider=provider_name)\n",
"\n",
"agent_env, _ = build_agent(llm_env)\n",
"config = {\"configurable\": {\"session_id\": \"env_demo\"}}\n",
"r = agent_env.invoke({\"input\": \"Summarize the benefits of multi-provider LLM setups.\"}, config=config)\n",
"print(f\"[{provider_name}]:\", r.content)"
]
},
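The Motivation section lists resilience via fallback to a secondary provider, but the notebook never demonstrates it. A small, illustrative helper — not part of `utils/llm_provider.py`; the callables would wrap agents built via `get_llm()` — could look like:

```python
def invoke_with_fallback(callers, user_input):
    """Try each (name, call) pair in order; return the first success.

    Illustrative sketch only -- production code would narrow the except
    clause (e.g. to network/auth errors) and add retries with backoff.
    """
    last_error = None
    for name, call in callers:
        try:
            return name, call(user_input)
        except Exception as err:
            print(f"[warn] provider {name!r} failed: {err}")
            last_error = err
    raise RuntimeError("all providers failed") from last_error
```

In this notebook, a caller could be e.g. `("openai", lambda q: agent_openai.invoke({"input": q}, config=config).content)`, with a second entry wrapping the MiniMax agent as the fallback.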
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"By introducing a thin **provider abstraction** (`utils/llm_provider.py`), all tutorials in this repository can:\n",
"\n",
"- Support **OpenAI** and **MiniMax** (and any future OpenAI-compatible provider) without code changes.\n",
"- Switch providers via a single parameter or environment variable.\n",
"- Share the same agent logic regardless of the underlying LLM.\n",
"\n",
"### Next steps\n",
"- Explore more complex agents in this repository using the `get_llm()` helper.\n",
"- Add additional providers by extending `PROVIDERS` in `utils/llm_provider.py`.\n",
"- Try MiniMax-M2.5-highspeed for latency-sensitive workloads, or MiniMax-M2.7 for the latest capabilities.\n",
"\n",
"### References\n",
"- [MiniMax Platform](https://www.minimaxi.com/)\n",
"- [MiniMax API Documentation](https://www.minimaxi.com/document/introduction)\n",
"- [LangChain OpenAI Integration](https://python.langchain.com/docs/integrations/chat/openai/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
Empty file added tests/__init__.py