feat: add multi-provider LLM support with MiniMax M2.5 #105
Open
octo-patch wants to merge 3 commits into NirDiamant:main from octo-patch:feature/add-minimax-provider
all_agents_tutorials/multi_provider_conversational_agent.ipynb: 343 additions, 0 deletions
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,343 @@ | ||
| { | ||
| "cells": [ | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "# Multi-Provider Conversational Agent with MiniMax Support\n", | ||
| "\n", | ||
| "## Overview\n", | ||
| "This tutorial demonstrates how to build a conversational agent that can work with **multiple LLM providers** through a single, unified interface. In addition to OpenAI, we show how to use [MiniMax](https://www.minimaxi.com/) (M2.5 / M2.5-highspeed) as an alternative LLM backend.\n", | ||
| "\n", | ||
| "## Motivation\n", | ||
| "Most GenAI agent tutorials are hard-wired to a single provider (typically OpenAI). In production you often need to:\n", | ||
| "- **Switch providers** without rewriting agent code.\n", | ||
| "- **Reduce costs** by routing certain workloads to cheaper models.\n", | ||
| "- **Improve resilience** by falling back to a secondary provider when the primary is unavailable.\n", | ||
| "\n", | ||
| "MiniMax M2.5 is a high-capability model with a **204K token context window** and an OpenAI-compatible API, making it a practical alternative.\n", | ||
| "\n", | ||
| "## Key Components\n", | ||
| "1. **`utils/llm_provider.py`** \u2013 shared helper that returns a LangChain `ChatOpenAI` instance for any registered provider.\n", | ||
| "2. **Provider registry** \u2013 a dictionary mapping provider names to their configuration (base URL, API-key env var, default model).\n", | ||
| "3. **Conversational chain** \u2013 LangChain prompt + LLM + message history, identical regardless of which provider is active.\n", | ||
| "\n", | ||
| "## Method Details\n", | ||
| "\n", | ||
| "### Architecture\n", | ||
| "```\n", | ||
| "User Input\n", | ||
| " \u2502\n", | ||
| " \u25bc\n", | ||
| "Prompt Template \u2500\u2500\u2500\u2500\u25b6 get_llm(provider) \u2500\u2500\u2500\u2500\u25b6 LLM Response\n", | ||
| " \u25b2 \u2502\n", | ||
| " \u2502 \u250c\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2510\n", | ||
| "History Store \u2502 OpenAI \u2502\n", | ||
| " \u2502 MiniMax \u2502\n", | ||
| " \u2502 ... \u2502\n", | ||
| " \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n", | ||
| "```\n", | ||
| "\n", | ||
| "The `get_llm()` function reads the provider configuration and returns the right `ChatOpenAI` object. The rest of the agent code never touches provider-specific details." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Setup\n", | ||
| "\n", | ||
| "### Install required packages" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "%pip install -q langchain langchain-openai langchain-community openai python-dotenv" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### Configure environment variables\n", | ||
| "\n", | ||
| "Create a `.env` file in the repository root (or export the variables in your shell):\n", | ||
| "\n", | ||
| "```bash\n", | ||
| "# Required for OpenAI provider\n", | ||
| "OPENAI_API_KEY=sk-...\n", | ||
| "\n", | ||
| "# Required for MiniMax provider (get yours at https://www.minimaxi.com/)\n", | ||
| "MINIMAX_API_KEY=sk-...\n", | ||
| "```" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### Import the multi-provider helper and LangChain components" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "import sys, os\n", | ||
| "\n", | ||
| "from dotenv import load_dotenv\n", | ||
| "load_dotenv()  # load OPENAI_API_KEY / MINIMAX_API_KEY from .env if present\n", | ||
| "\n", | ||
| "# Ensure the repository root is on the Python path so we can import utils/\n", | ||
| "sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), \"..\")))\n", | ||
| "\n", | ||
| "from utils.llm_provider import get_llm, list_providers\n", | ||
| "from langchain_core.runnables.history import RunnableWithMessageHistory\n", | ||
| "from langchain_community.chat_message_histories import ChatMessageHistory\n", | ||
| "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### List available providers" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "print(\"Available LLM providers:\", list_providers())" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Part 1 \u2013 Conversational Agent with OpenAI (baseline)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### Initialize the LLM via the unified helper" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "llm_openai = get_llm(provider=\"openai\", model=\"gpt-4o-mini\", temperature=0)\n", | ||
| "print(f\"Provider: OpenAI | Model: {llm_openai.model_name}\")" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### Build the conversational chain (provider-agnostic)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "def build_agent(llm):\n", | ||
| " \"\"\"Build a conversational agent chain for any LangChain-compatible LLM.\"\"\"\n", | ||
| " prompt = ChatPromptTemplate.from_messages([\n", | ||
| " (\"system\", \"You are a helpful AI assistant. Keep answers concise.\"),\n", | ||
| " MessagesPlaceholder(variable_name=\"history\"),\n", | ||
| " (\"human\", \"{input}\"),\n", | ||
| " ])\n", | ||
| "\n", | ||
| " store: dict = {}\n", | ||
| "\n", | ||
| " def get_history(session_id: str):\n", | ||
| " if session_id not in store:\n", | ||
| " store[session_id] = ChatMessageHistory()\n", | ||
| " return store[session_id]\n", | ||
| "\n", | ||
| " chain = prompt | llm\n", | ||
| " return RunnableWithMessageHistory(\n", | ||
| " chain,\n", | ||
| " get_history,\n", | ||
| " input_messages_key=\"input\",\n", | ||
| " history_messages_key=\"history\",\n", | ||
| " ), store" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "agent_openai, history_openai = build_agent(llm_openai)\n", | ||
| "\n", | ||
| "config = {\"configurable\": {\"session_id\": \"demo_openai\"}}\n", | ||
| "\n", | ||
| "r1 = agent_openai.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n", | ||
| "print(\"OpenAI:\", r1.content)\n", | ||
| "\n", | ||
| "r2 = agent_openai.invoke({\"input\": \"What is my name?\"}, config=config)\n", | ||
| "print(\"OpenAI:\", r2.content)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Part 2 \u2013 Switch to MiniMax M2.5\n", | ||
| "\n", | ||
| "Switching providers is a one-line change. The rest of the agent code stays exactly the same." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "llm_minimax = get_llm(provider=\"minimax\") # defaults to MiniMax-M2.5\n", | ||
| "print(f\"Provider: MiniMax | Model: {llm_minimax.model_name}\")" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "agent_minimax, history_minimax = build_agent(llm_minimax)\n", | ||
| "\n", | ||
| "config = {\"configurable\": {\"session_id\": \"demo_minimax\"}}\n", | ||
| "\n", | ||
| "r1 = agent_minimax.invoke({\"input\": \"Hello! My name is Alice.\"}, config=config)\n", | ||
| "print(\"MiniMax:\", r1.content)\n", | ||
| "\n", | ||
| "r2 = agent_minimax.invoke({\"input\": \"What is my name?\"}, config=config)\n", | ||
| "print(\"MiniMax:\", r2.content)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "### Use the high-speed variant for lower latency" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "llm_fast = get_llm(provider=\"minimax\", model=\"MiniMax-M2.5-highspeed\")\n", | ||
| "agent_fast, _ = build_agent(llm_fast)\n", | ||
| "\n", | ||
| "config = {\"configurable\": {\"session_id\": \"demo_fast\"}}\n", | ||
| "r = agent_fast.invoke({\"input\": \"Explain quantum computing in two sentences.\"}, config=config)\n", | ||
| "print(\"MiniMax (highspeed):\", r.content)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Part 3 \u2013 Compare providers side-by-side\n", | ||
| "\n", | ||
| "Run the same prompt through both providers and compare outputs." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "prompt_text = \"What are the three most important factors when choosing a cloud LLM provider?\"\n", | ||
| "\n", | ||
| "for name, llm in [(\"OpenAI\", llm_openai), (\"MiniMax\", llm_minimax)]:\n", | ||
| " agent, _ = build_agent(llm)\n", | ||
| " config = {\"configurable\": {\"session_id\": f\"compare_{name}\"}}\n", | ||
| " resp = agent.invoke({\"input\": prompt_text}, config=config)\n", | ||
| " print(f\"\\n--- {name} ---\")\n", | ||
| " print(resp.content)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Part 4 \u2013 Environment-driven provider selection\n", | ||
| "\n", | ||
| "In production, you often want the provider to be controlled by an environment variable rather than hard-coded." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "import os\n", | ||
| "\n", | ||
| "# Set via environment (e.g. LLM_PROVIDER=minimax python app.py)\n", | ||
| "provider_name = os.getenv(\"LLM_PROVIDER\", \"minimax\")\n", | ||
| "llm_env = get_llm(provider=provider_name)\n", | ||
| "\n", | ||
| "agent_env, _ = build_agent(llm_env)\n", | ||
| "config = {\"configurable\": {\"session_id\": \"env_demo\"}}\n", | ||
| "r = agent_env.invoke({\"input\": \"Summarize the benefits of multi-provider LLM setups.\"}, config=config)\n", | ||
| "print(f\"[{provider_name}]:\", r.content)" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": {}, | ||
| "source": [ | ||
| "## Conclusion\n", | ||
| "\n", | ||
| "By introducing a thin **provider abstraction** (`utils/llm_provider.py`), all tutorials in this repository can:\n", | ||
| "\n", | ||
| "- Support **OpenAI** and **MiniMax** (and any future OpenAI-compatible provider) without code changes.\n", | ||
| "- Switch providers via a single parameter or environment variable.\n", | ||
| "- Share the same agent logic regardless of the underlying LLM.\n", | ||
| "\n", | ||
| "### Next steps\n", | ||
| "- Explore more complex agents in this repository using the `get_llm()` helper.\n", | ||
| "- Add additional providers by extending `PROVIDERS` in `utils/llm_provider.py`.\n", | ||
| "- Try MiniMax-M2.5-highspeed for latency-sensitive workloads.\n", | ||
| "\n", | ||
| "### References\n", | ||
| "- [MiniMax Platform](https://www.minimaxi.com/)\n", | ||
| "- [MiniMax API Documentation](https://www.minimaxi.com/document/introduction)\n", | ||
| "- [LangChain OpenAI Integration](https://python.langchain.com/docs/integrations/chat/openai/)" | ||
| ] | ||
| } | ||
| ], | ||
| "metadata": { | ||
| "kernelspec": { | ||
| "display_name": "Python 3", | ||
| "language": "python", | ||
| "name": "python3" | ||
| }, | ||
| "language_info": { | ||
| "name": "python", | ||
| "version": "3.10.0" | ||
| } | ||
| }, | ||
| "nbformat": 4, | ||
| "nbformat_minor": 4 | ||
| } | ||
Review comment: The table index is inconsistent with the detailed section numbering. Line 144 lists Multi-Provider Agent (MiniMax) as #46, while the detailed framework section introduces the same tutorial as #6 (line 204). Please align the numbering scheme between the summary table and the detailed list.
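The notebook's Motivation section lists "falling back to a secondary provider when the primary is unavailable" as a goal, but no cell demonstrates it. A provider-agnostic sketch of that idea; the `with_fallback` helper is hypothetical and not part of the PR:

```python
def with_fallback(primary, backup):
    """Wrap two callables: try `primary`, fall back to `backup` on any error."""
    def invoke(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception as exc:
            # In production you would likely log this and narrow the except clause
            # to provider/network errors rather than catching everything.
            print(f"primary provider failed ({exc!r}); retrying with backup")
            return backup(*args, **kwargs)
    return invoke
```

With the notebook's agents, something like `ask = with_fallback(lambda q: agent_openai.invoke({"input": q}, config=config).content, lambda q: agent_minimax.invoke({"input": q}, config=config).content)` would transparently retry a failed OpenAI call against MiniMax.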