---
title: Thinking Model Client
emoji: 🤖
colorFrom: blue
colorTo: blue
sdk: docker
sdk_version: 1.0.0
app_file: Dockerfile
pinned: false
---
A modern React-based chat application that provides a unique interface for interacting with AI models. The application not only displays model responses but also visualizes the thinking process behind each response, giving users insight into how the AI arrives at its conclusions.
- 🧠 Thinking Process Visualization: See the step-by-step reasoning behind each AI response with interactive visualizations
- 🔌 Flexible API Integration: Easily connect to different AI models through configurable API endpoints
- 💾 Conversation Persistence: All chats are automatically saved in local storage for continuity
- 🐳 Docker Deployment: Ready for containerized deployment with included Docker configuration
- ⚙️ Customizable Settings: Adjust API parameters and model configurations through an intuitive settings panel
- 💬 Real-time Chat: Modern interface with smooth animations and multiple conversation tabs
- 🤖 Multiple Models: Support for various AI model integrations through a unified interface
- 🛠️ Modern Stack: Built with React and Vite for optimal performance and development experience
- 🧪 Quality Assured: Comprehensive unit tests ensure reliable functionality
- 🔒 Local Data Storage: All data is stored locally for enhanced privacy and security
- ⚡ xsai Integration: Powered by xsai (extra-small AI SDK) for efficient and lightweight AI model connections
- 🧩 Reasoning Extraction: Automatic extraction and visualization of AI reasoning processes using xsai utilities
Prerequisites:

- Node.js (v14 or higher)
- npm or yarn
- Clone the repository:

  ```bash
  git clone https://github.com/tao12345666333/thinking-model-client.git
  cd thinking-model-client
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm start
  ```

  This will concurrently run both the frontend development server and the backend proxy server.

- Open your browser and navigate to `http://localhost:5173` to use the application.
The application can be configured through the settings panel, which supports multiple profiles:
Each chat profile includes:
- Profile Name: Custom name for the profile
- API Endpoint: The endpoint for the AI model. The URL is normalized automatically (see the sketch after this list):
  - Ends with `/` → `chat/completions` will be appended
  - Ends with `#` → the trailing `#` will be removed and the URL used as-is
  - Other cases → `/v1/chat/completions` will be appended
- API Key: Your authentication key for the API
- Model Name: The model to use (e.g., DeepSeek-R1)
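For illustration, the endpoint rules above could be implemented as in the following sketch. The function name `normalizeEndpoint` and the example URLs are ours, not the app's actual code; check the app's behavior for the exact strings it appends.

```js
// Illustrative implementation of the endpoint normalization rules above.
// normalizeEndpoint is a hypothetical name, not the app's actual function.
function normalizeEndpoint(endpoint) {
  if (endpoint.endsWith('/')) {
    return endpoint + 'chat/completions'; // trailing "/" -> .../chat/completions
  }
  if (endpoint.endsWith('#')) {
    return endpoint.slice(0, -1); // trailing "#" -> use the URL exactly as given
  }
  return endpoint + '/v1/chat/completions'; // default -> append /v1/chat/completions
}

// normalizeEndpoint('https://api.example.com/')        -> 'https://api.example.com/chat/completions'
// normalizeEndpoint('https://api.example.com/custom#') -> 'https://api.example.com/custom'
// normalizeEndpoint('https://api.example.com')         -> 'https://api.example.com/v1/chat/completions'
```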
A separate profile for conversation summarization:
- API Endpoint: Endpoint for the summarization service
- API Key: Authentication key for summarization
- Model Name: The model to use for summarization
All settings are stored locally for privacy and security. You can manage multiple chat profiles and switch between them as needed.
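Because everything stays in the browser, persistence can be a thin wrapper over the standard `localStorage` API. The sketch below is illustrative only: the storage key and the profile field names are assumptions, not the app's actual schema.

```js
// Minimal sketch of local-only profile persistence via localStorage.
// STORAGE_KEY and the profile shape are hypothetical, not the app's schema.
const STORAGE_KEY = 'thinking-model-client:profiles';

function saveProfiles(profiles) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(profiles));
}

function loadProfiles() {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : []; // nothing saved yet -> empty list
}

// Example profile using the fields described above:
saveProfiles([
  {
    name: 'Default',
    apiEndpoint: 'https://api.example.com/',
    apiKey: 'sk-...', // stays in the browser; never sent anywhere but the API
    modelName: 'DeepSeek-R1',
  },
]);
console.log(loadProfiles());
```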
This application now uses xsai - an extra-small AI SDK for efficient LLM connections. The integration provides:
- Lightweight: Minimal dependencies and small bundle size
- Runtime Agnostic: Works in Node.js, Deno, Bun, and browsers
- Streaming Support: Built-in streaming capabilities for real-time responses
- Reasoning Extraction: Automatic extraction of thinking processes from model responses
The client wires xsai in at three points (see the sketch after this list):

- Chat Streaming: Uses `@xsai/stream-text` for real-time message streaming
- Summarization: Uses `@xsai/generate-text` for conversation title generation
- Reasoning Processing: Uses `@xsai/utils-reasoning` to extract and display thinking processes
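To make the division of labor concrete, here is a minimal sketch of how the three packages could be wired together. It assumes the `streamText`/`generateText` call style that xsai documents; the endpoint URL and model name are placeholders, and the `extractReasoning` export name and return shape are assumptions to verify against the xsai docs.

```js
import { streamText } from '@xsai/stream-text';
import { generateText } from '@xsai/generate-text';
import { extractReasoning } from '@xsai/utils-reasoning'; // assumed export name

const common = {
  baseURL: 'https://api.example.com/v1/', // hypothetical normalized endpoint
  apiKey: process.env.API_KEY,
  model: 'DeepSeek-R1',
};

// Chat streaming: render the reply chunk by chunk as it arrives.
const { textStream } = await streamText({
  ...common,
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
});
let full = '';
for await (const chunk of textStream) {
  full += chunk; // a UI would append each chunk to the message bubble here
}

// Reasoning processing: split the thinking trace from the final answer.
// (Return shape assumed; check @xsai/utils-reasoning before relying on it.)
const { reasoning, text } = extractReasoning(full);
console.log('thinking:', reasoning);
console.log('answer:', text);

// Summarization: one-shot title generation for the conversation tab.
const { text: title } = await generateText({
  ...common,
  messages: [{ role: 'user', content: `Give a short title for: ${text}` }],
});
console.log('title:', title);
```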
To test the xsai integration independently:
- Edit the `test-xsai.js` file with your API credentials
- Run the test script:

  ```bash
  node test-xsai.js
  ```

This will test both text generation and streaming with reasoning extraction.
The application has been migrated from calling node-fetch directly to using xsai's abstraction layer (a rough before/after sketch follows the list below). This provides:
- Better error handling
- Consistent API across different model providers
- Built-in streaming utilities
- Simplified reasoning extraction
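For orientation, here is what that migration looks like at a single call site. This is illustrative only: the endpoint and variable names are placeholders, and the actual changes live in the repository history.

```js
import { streamText } from '@xsai/stream-text';

const baseURL = 'https://api.example.com/v1/'; // placeholder endpoint
const apiKey = process.env.API_KEY;
const model = 'DeepSeek-R1';
const messages = [{ role: 'user', content: 'Hello!' }];

// Before: hand-rolled fetch plus manual server-sent-events parsing, roughly:
//   const res = await fetch(baseURL + 'chat/completions', {
//     method: 'POST',
//     headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
//     body: JSON.stringify({ model, messages, stream: true }),
//   });
//   ...then decode res.body, split "data:" lines, and JSON.parse each chunk...

// After: xsai wraps transport, streaming, and error handling in one call.
const { textStream } = await streamText({ baseURL, apiKey, model, messages });
for await (const chunk of textStream) process.stdout.write(chunk);
```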