AI-powered conversational search engine
Multi-model integration | Real-time conversational search | Deep Research support
SearChat is a modern AI-powered conversational search engine built as a Turborepo monorepo, combining a Node.js + Koa backend with a Vue 3 + TypeScript frontend.
🎯 Key Features:
- 🤖 Multi-model Support - Compatible with OpenAI, Anthropic, Gemini APIs
- 🔍 Multiple Search Engines - Support for Bing, Google, SearXNG and more
- 💬 Conversational Search - Multi-turn chat-based search experience
- ⏰ Chat History - Conversation history cached in browser (IndexedDB/LocalStorage)
- 🧠 Deep Research Mode - AI-driven iterative deep research (currently being refactored)
- 🔌 MCP Support - (TODO) Support for external MCP services
- 🖼️ Image Search - (TODO) Support for image and video search
- 📂 File Parsing - (TODO) Support for document upload and content extraction
- Intelligent Research Mode - In-depth research on any topic
- Iterative Exploration - Workflow orchestration built on LangChain + LangGraph
- Comprehensive Report Generation - Automatically generates structured research reports
[!IMPORTANT]
For best results, the model must support Tool Calls (Function Calling).
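For context, "Tool Call" support means the model accepts a `tools` array in the request and can respond with structured function invocations. The sketch below shows the shape of such a request following the OpenAI Chat Completions schema; the `web_search` tool definition itself is hypothetical, not part of SearChat.

```typescript
// Illustrative only: the `web_search` tool is a hypothetical example;
// the `tools` array shape follows the OpenAI Chat Completions schema.
const toolCallRequest = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What happened in AI this week?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "web_search",
        description: "Search the web for up-to-date information",
        parameters: {
          type: "object",
          properties: {
            query: { type: "string", description: "Search query" },
          },
          required: ["query"],
        },
      },
    },
  ],
};
```

A model without this capability cannot decide when to search, which is why it is required for best results.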
- OpenAI API compatible
- Google Gemini API compatible
- Anthropic API compatible
- Google Vertex AI compatible
- SearXNG - Open source aggregated search, no API key required
- Bing Search - Microsoft Bing web search API
- Google Search - Google web search API
- Tavily - Tavily web search API
- Exa - Exa.ai web search API
- Bocha - BochaAI web search API
- ChatGLM Web Search - Zhipu AI free search plugin
- Responsive Design - Perfect adaptation for desktop and mobile
- Dark/Light Theme - Support for automatic system theme switching
- Internationalization - Multi-language interface (i18n)
- Real-time Streaming - Typewriter effect answer display
- Contextual Conversation - Support for multi-turn dialogue and history
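The "Real-time Streaming" feature above typically rides on Server-Sent Events (SSE). As a minimal sketch (SearChat's actual wire format may differ), a client can extract the streamed text deltas from an SSE buffer like this:

```typescript
// Minimal sketch: extract `data:` payloads from an SSE text buffer,
// skipping the conventional "[DONE]" terminator. Illustrative only;
// SearChat's actual streaming protocol may differ.
function parseSSE(buffer: string): string[] {
  return buffer
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}
```

Appending each payload to the displayed answer as it arrives produces the typewriter effect.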
Deep Research mode uses AI-driven iterative search and analysis to generate comprehensive and in-depth research reports on any topic.
Key Features:
- 🔄 Iterative Research - Automatically identifies knowledge gaps and performs follow-up searches
- 📊 Structured Reports - Generates well-organized research reports with citations
- 🔗 Citation Support - Includes source references with configurable formats ([[citation:1]] or clickable URLs)
- 🎯 Multi-Engine Search - Leverages multiple search engines for comprehensive results
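The iterative loop described above (search, identify knowledge gaps, search again, bounded by a loop limit) can be sketched roughly as follows. The `searcher` and `findKnowledgeGaps` functions are hypothetical stand-ins, not SearChat APIs; the loop bound mirrors the `DEEP_MAX_RESEARCH_LOOPS` setting.

```typescript
// Simplified sketch of the iterative research loop. `Searcher` and
// `GapFinder` are hypothetical stand-ins for search and gap-analysis steps.
type Searcher = (query: string) => string[];
type GapFinder = (findings: string[]) => string[]; // returns follow-up queries

function iterativeResearch(
  initialQueries: string[],
  searcher: Searcher,
  findKnowledgeGaps: GapFinder,
  maxLoops = 3, // mirrors DEEP_MAX_RESEARCH_LOOPS
): string[] {
  let queries = initialQueries;
  const findings: string[] = [];
  for (let loop = 0; loop < maxLoops && queries.length > 0; loop++) {
    for (const q of queries) findings.push(...searcher(q));
    queries = findKnowledgeGaps(findings); // an empty array ends the loop early
  }
  return findings;
}
```

The accumulated findings are then handed to the report-generation step.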
If you want to integrate Deep Research capabilities into your own Node.js project:
npm install deepsearcher

Quick Example:
import { DeepResearch } from 'deepsearcher';
const deepResearch = new DeepResearch({
searcher: async ({ query }) => {
// Your search implementation
return searchResults;
},
options: {
type: 'openai',
apiKey: 'your-api-key',
enableCitationUrl: false, // Use [[citation:1]] format
},
});
const agent = await deepResearch.compile();
const result = await agent.invoke({
messages: [{ role: 'user', content: 'Your research question' }],
});

Citation Format Options:
- enableCitationUrl: true (default) - Outputs the <sup>[[1](url)]</sup> format with clickable links
- enableCitationUrl: false - Outputs the simpler [[citation:1]] format
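To illustrate the relationship between the two formats, here is a hedged post-processing sketch (not a SearChat or deepsearcher API) that converts [[citation:N]] markers into the clickable form, given a list of source URLs:

```typescript
// Illustrative only: converts [[citation:N]] markers in model output into
// the clickable <sup>[[N](url)]</sup> format, given the source URL list.
function renderCitations(text: string, sources: string[]): string {
  return text.replace(/\[\[citation:(\d+)\]\]/g, (match, n) => {
    const url = sources[Number(n) - 1]; // citations are 1-indexed
    return url ? `<sup>[[${n}](${url})]</sup>` : match; // keep unknown refs as-is
  });
}
```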
Documentation: DeepResearch NPM Package
- Install Docker and Docker Compose
- Prepare AI model API keys (configured in model.json)
- Optional: configure search engine API keys (in docker-compose.yaml)
- Ensure network access to required services (SearXNG needs Google access)
1. Create docker-compose.yaml file
Please refer to the deploy/docker-compose.yaml file.
Edit the docker-compose.yaml file and modify the corresponding environment variables in the search_chat service:
services:
search_chat:
container_name: search_chat
image: docker.cnb.cool/aigc/aisearch:v1.2.0-alpha
environment:
# Server Configuration
- PORT=3000
# Search Engine API Keys (configure as needed)
- BING_SEARCH_KEY=your_bing_key
- GOOGLE_SEARCH_KEY=your_google_key
- GOOGLE_SEARCH_ID=your_google_cse_id
- TAVILY_KEY=your_tavily_key
- ZHIPU_KEY=your_zhipu_key
- EXA_KEY=your_exa_key
- BOCHA_KEY=your_bocha_key
# Web Content Extraction (optional)
- JINA_KEY=your_jina_key
# SearXNG Configuration (included by default, ready to use)
- SEARXNG_HOSTNAME=http://searxng:8080
- SEARXNG_SAFE=0
- SEARXNG_LANGUAGE=en
- SEARXNG_ENGINES=bing,google
- SEARXNG_IMAGES_ENGINES=bing,google
# DeepResearch Configuration
- DEEP_MAX_RESEARCH_LOOPS=3
- DEEP_NUMBER_OF_INITIAL_QUERIES=3
# Domain Whitelist (optional)
- WHITELIST_DOMAINS=
volumes:
- ./model.json:/app/apps/server/dist/model.json
ports:
- "3000:3000"
restart: always

Create and edit the model.json file in the same directory as docker-compose.yaml to configure AI models and API keys:
[
{
"provider": "openai",
"type": "openai",
"baseURL": "https://api.openai.com/v1",
"apiKey": "sk-your-openai-api-key",
"apiMode": "openai-responses",
"models": [
{
"name": "gpt-4o-mini",
"alias": "GPT-4o Mini",
"description": "OpenAI GPT-4o Mini model",
"maxTokens": 262144,
"intentAnalysis": true
},
{
"name": "gpt-4o",
"alias": "GPT-4o",
"description": "OpenAI GPT-4o model",
"maxTokens": 262144
}
]
},
{
"provider": "anthropic",
"type": "anthropic",
"baseURL": "https://api.anthropic.com/v1",
"apiKey": "sk-your-anthropic-api-key",
"models": [
{
"name": "claude-sonnet-4-5",
"alias": "Claude Sonnet 4.5",
"description": "Anthropic Claude Sonnet 4.5",
"maxTokens": 131072
}
]
}
]

Models with intentAnalysis: true are used for search intent analysis and query rewriting. It is recommended to assign smaller, faster models here to improve response speed.
Configuration Description:
- provider: Model provider name
- type: API type (openai / anthropic / google, etc.)
- baseURL: API base URL
- apiKey: Your API key
- apiMode: Optional for OpenAI-compatible providers. Use openai-completions (default) for Chat Completions, or openai-responses for the OpenAI Responses API. Leave it unset for OpenAI-compatible endpoints that do not support the Responses API.
- models: Model list with name, alias, description, and max tokens
When apiMode is set on an OpenAI provider, the main chat response, search intent analysis, and DeepResearch flows use the same mode.
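As a hedged sketch (not part of SearChat's code), the model.json shape described above can be typed and queried like this, e.g. to pick the model used for intent analysis:

```typescript
// Hedged sketch: types matching the model.json shape documented above,
// plus a helper that picks the first model flagged intentAnalysis: true.
interface ModelEntry {
  name: string;
  alias: string;
  maxTokens: number;
  intentAnalysis?: boolean;
}
interface ProviderEntry {
  provider: string;
  type: string;
  baseURL: string;
  apiKey: string;
  apiMode?: string;
  models: ModelEntry[];
}

function pickIntentModel(config: ProviderEntry[]): ModelEntry | undefined {
  return config.flatMap((p) => p.models).find((m) => m.intentAnalysis === true);
}
```

With the sample configuration above, such a helper would select gpt-4o-mini for intent analysis, falling back to undefined when no model carries the flag.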
docker compose up -d

Open your browser and visit: http://localhost:3000
# Stop services
docker compose down
# Pull latest image
docker pull docker.cnb.cool/aigc/searchchat:latest
# Restart
docker compose up -d

The project supports multiple search engines. Choose the appropriate search source based on your needs; SearXNG is recommended.
Advantages: Completely free, no API key required, aggregates multiple search sources, protects privacy
SearXNG is an open-source metasearch engine that aggregates results from multiple search services without tracking users. Built into Docker deployment, ready to use out of the box.
Configuration Options:
- SEARXNG_ENGINES: Search engines to use (default: bing,google)
- SEARXNG_LANGUAGE: Search language (zh = Chinese, en-US = English, all = all languages)
- SEARXNG_SAFE: Safe search level (0 = off, 1 = moderate, 2 = strict)
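SearXNG exposes results over its search API when the JSON format is enabled. As a sketch, a backend can build a query URL from the options above; the parameter names (q, format, language, safesearch, engines) follow SearXNG's search API, and the default base URL mirrors SEARXNG_HOSTNAME from the compose file.

```typescript
// Builds a SearXNG JSON API query URL from the options described above.
// Defaults mirror the SEARXNG_* environment variables in the compose file.
function buildSearxngUrl(
  query: string,
  {
    hostname = "http://searxng:8080",
    language = "en",
    safe = 0,
    engines = "bing,google",
  } = {},
): string {
  const url = new URL("/search", hostname);
  url.searchParams.set("q", query);
  url.searchParams.set("format", "json"); // requires json enabled in settings.yml
  url.searchParams.set("language", language);
  url.searchParams.set("safesearch", String(safe));
  url.searchParams.set("engines", engines);
  return url.toString();
}
```

Fetching the resulting URL returns a JSON body whose `results` array carries titles, URLs, and snippets.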
[!IMPORTANT]
Make sure to activate the json format to use the API. This can be done by adding the following line to the searxng/settings.yml file:
search:
formats:
- html
- json

- Node.js >= 20
- Package Manager: yarn@3.5.1
- Build Tool: Turborepo
search_with_ai/
├── apps/
│ ├── server/ # Backend service (Koa + TypeScript)
│ │ ├── src/
│ │ │ ├── app.ts # Application entry
│ │ │ ├── controller.ts # Route controllers
│ │ │ ├── interface.ts # Type definitions
│ │ │ └── model.json # Model configuration
│ │ └── package.json
│ └── web/ # Frontend application (Vue 3 + TypeScript)
│ ├── src/
│ │ ├── pages/ # Page components
│ │ ├── stores/ # Pinia state management
│ │ └── components/ # Common components
│ └── package.json
├── deploy/ # Deployment configuration
│ ├── docker-compose.yaml
│ ├── .env.docker
│ └── model.json
└── package.json # Root configuration
# Clone project
git clone https://github.com/sear-chat/SearChat.git
cd SearChat
# Install dependencies (run in root, will install all sub-project dependencies)
yarn install

Copy and edit server environment configuration:
# Copy environment configuration template
cp apps/server/.env apps/server/.env.local
# Edit configuration file
vim apps/server/.env.local

# Start both frontend and backend development servers
yarn dev
# Or use Turborepo command
turbo dev

Access URLs:
- Frontend: http://localhost:5173
- Backend: http://localhost:3000
# Build all applications
yarn build
# Or
turbo build

- Framework: Koa.js + TypeScript
- AI Integration: LangChain + LangGraph
- Search Engines: Multi-engine adapter pattern
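The "multi-engine adapter pattern" mentioned above is typically a single search interface implemented once per engine, with a registry dispatching by name. The names and shapes below are illustrative, not SearChat's actual internals:

```typescript
// Hedged sketch of a multi-engine adapter pattern: each engine implements
// one interface, and a registry dispatches by engine name.
interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}
interface SearchAdapter {
  name: string; // e.g. "bing", "google", "searxng"
  search(query: string): Promise<SearchResult[]>;
}

class SearchRegistry {
  private adapters = new Map<string, SearchAdapter>();

  register(adapter: SearchAdapter): void {
    this.adapters.set(adapter.name, adapter);
  }

  search(engine: string, query: string): Promise<SearchResult[]> {
    const adapter = this.adapters.get(engine);
    if (!adapter) throw new Error(`Unknown search engine: ${engine}`);
    return adapter.search(query);
  }
}
```

Adding a new engine then means writing one adapter and registering it, with no changes to the callers.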
- Framework: Vue 3 + Composition API
- Build: Vite + TypeScript
- UI Library: TDesign Vue Next
- State Management: Pinia + persistence
- Styling: Tailwind CSS + Less
Welcome to contribute to the project! Please follow these steps:
- Fork the project to your GitHub account
- Create a feature branch
git checkout -b feature/amazing-feature
- Commit your changes
git commit -m 'Add amazing feature'
- Push the branch
git push origin feature/amazing-feature
- Create a Pull Request
- GitHub Issues - Report bugs or feature requests
- GitHub Discussions - Technical discussions and Q&A
This project is licensed under the MIT License.
- SearXNG - Open source search engine
- LangChain - AI application development framework
- Tencent EdgeOne - CDN acceleration support
⭐ If this project helps you, please give it a Star!
