Using AI Programs
With the rapid development of AI over the past two years, many excellent open-source AI projects have appeared on GitHub, and users can pick the ones that fit their needs. In practice, any program that supports an OpenAI-compatible API can use this site's API, and fortunately most AI programs do. The unified access method is a Base URL (interface address) plus an API Key.
Which interface address to fill in depends on the program. There are generally three forms, and the vast majority of programs use the first one:
https://api.juheai.top
https://api.juheai.top/v1
https://api.juheai.top/v1/chat/completions
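For reference, here is a minimal sketch of how most OpenAI-compatible programs use these two values, written with the official OpenAI Python SDK (the model name and key below are placeholders; substitute your own):
from openai import OpenAI

# Base URL (the /v1 form) plus API Key are the only two values needed.
client = OpenAI(
    base_url="https://api.juheai.top/v1",  # interface address
    api_key="sk-xxx",                      # your API Key (placeholder)
)

# gpt-4o is only an example model name; use any model your key can access.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)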
Below, we will list the configuration methods for some excellent AI applications to help users save time and quickly use these programs:
LibreChat
Introduction: A project from overseas; a chat UI program that mimics the ChatGPT Plus interface. It is currently the most powerful chat UI available. Its most impressive aspect is the rich set of AI functions it supports: conversation, RAG file analysis, plugins, voice, and multi-device synchronization are all covered.
Project Address: https://github.com/danny-avila/LibreChat
Configuration: Generally, after logging into the program, you only need to fill in the API Key. Access portal: https://librechat.aijuhe.top/login. If you want to deploy it yourself, you can refer to the article 《Librechat Quick Deployment Guide》.
NextChat
Introduction: A project by a Chinese developer. It worked so smoothly that the team behind it was acquired. The new team has since released a commercial version, but without the original developer's touch its style has changed completely; the open-source version remains the more comfortable choice.
Project Address: https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web
Configuration: In the settings, enable the custom interface option and fill in the interface address https://api.juheai.top and your API Key. Access portal: https://nextchat.aijuhe.top. If you want to deploy it yourself, you can refer to the article 《Deploy a Low-Cost GPT Program for Yourself or Clients Through NextChat (ChatGPT-Next-Web)》
Dooy-AI
Introduction: The original name on GitHub is chatgpt-web-midjourney-proxy, and the author is Dooy. The project name was long and not very memorable, so we simply named it after the author, which is catchier. Beyond chat, the project supports visual workflows for Midjourney drawing, Suno music, and Luma video creation, making it very fun to play with.
Project Address: https://github.com/Dooy/chatgpt-web-midjourney-proxy
Configuration: You need to fill in the interface address: https://api.juheai.top and API Key in the settings. Access portal: https://dooy.aijuhe.top. If you want to deploy it yourself, you can refer to the article 《Having a Private GPT: Complete Deployment Guide for chatgpt-web-midjourney-proxy》
LobeChat
Introduction: A modern-design open-source ChatGPT/LLM chat application and development framework. It supports speech synthesis, multimodal input, and an extensible (function-calling) plugin system, with one-click free deployment of your own ChatGPT/Gemini/Claude/Ollama application.
Project Address: https://github.com/lobehub/lobe-chat
Configuration: You need to fill in the interface address: https://api.juheai.top/v1 and API Key in the settings. Access portal: https://lobe.aijuhe.top/.
Chatbox
Introduction: Chatbox AI is an AI client application and intelligent assistant that supports many advanced AI models and APIs, available for Windows, macOS, Android, iOS, Linux, and the web.
Project Address: https://github.com/Bin-Huang/chatbox
Configuration: Download and install the client for local use, then in the settings select OPENAI API and fill in the API domain https://api.juheai.top and your API Key.
Immersive Translation
Introduction: A well-known bilingual, side-by-side web page translation plugin. It is completely free to use for real-time translation of foreign-language web pages, PDFs, EPUB e-books, bilingual video subtitles, and more, and you can freely choose OpenAI (ChatGPT), DeepL, Gemini, and other AI engines to do the translating. It also works on your phone, anytime and anywhere, truly helping you break down information barriers.
Project Address: https://github.com/immersive-translate/immersive-translate
Configuration: After installing the Immersive Translation browser plugin, select OpenAI in the settings and fill in your API Key and the custom API interface address https://api.juheai.top/v1/chat/completions as shown in the figure; then you can start using it.
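Programs like Immersive Translation that ask for the full interface address call the chat completions endpoint directly. As a rough sketch of what such a request looks like over plain HTTP (using the Python requests library; the model and key are placeholders):
import requests

# The full-path form of the interface address, as entered in the plugin.
url = "https://api.juheai.top/v1/chat/completions"
headers = {"Authorization": "Bearer sk-xxx"}  # your API Key (placeholder)
payload = {
    "model": "gpt-4o",  # example model; use any chat model your key supports
    "messages": [{"role": "user", "content": "Translate 'hello' into Chinese."}],
}

resp = requests.post(url, headers=headers, json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])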
ChatGPTBox
Introduction: Deeply integrates ChatGPT into your browser; everything you need is here. At first I thought it was just a simple page-translation plugin, but once I started using it I found it is much more than that: its page summarization and conversation capabilities are excellent, and there are plenty of other features waiting to be discovered.
Project Address: https://github.com/josStorer/chatGPTBox
Configuration: After installing the ChatGPTBox plugin, fill in the custom OpenAI API address https://api.juheai.top under the Advanced - API Address menu, then go to the General menu and, in API mode, enter the API Key and the corresponding model to start using it.
chatgpt-on-wechat
Introduction: A chatbot based on large models, supporting access through WeChat Official Accounts, Enterprise WeChat applications, Feishu, DingTalk, etc. You can choose GPT-3.5/GPT-4o/GPT-4/Claude/Wenxin Yiyan/Xunfei Spark/Tongyi Qianwen/Gemini/GLM-4/Kimi/LinkAI. It can process text, voice, and images, access the operating system and the internet, and supports customized enterprise intelligent customer service based on your own knowledge base.
Project Address: https://github.com/zhayujie/chatgpt-on-wechat
Configuration Method 1: If you deploy using docker-compose, add the following environment variables in the docker/docker-compose.yml file:
# Other parameter items
environment:
  OPEN_AI_API_KEY: 'sk-xxx'
  OPEN_AI_API_BASE: 'https://api.juheai.top/v1'
# Other parameter items
Configuration Method 2: If deploying directly with Python, add the following two items to the config.json file, keeping the other existing parameters:
"open_ai_api_key": "sk-xxx",
"open_ai_api_base": "https://api.juheai.top/v1",
SillyTavern
Introduction: The image speaks for itself: this is a program for role-playing on top of large models. I haven't played with it myself, but if you're interested you can explore it; JuheNext fully supports API access for it.
Project Address: https://github.com/SillyTavern/SillyTavern
Configuration Method: Enter the custom endpoint: https://api.juheai.top/v1 and custom API key in the settings.
openai-translator
Introduction: A word-translation browser plugin and cross-platform desktop application based on the ChatGPT API.
Project Address: https://github.com/openai-translator/openai-translator
Configuration Method: Enter the API URL: https://api.juheai.top and API key in the settings.
continue
Introduction: Continue is the leading open-source AI code assistant. You can connect any model and any context to build custom autocomplete and chat experiences in VS Code and JetBrains.
Project Address: https://github.com/continuedev/continue
Configuration Method: Install the continue plugin for your IDE, and enter the following content in the config.json file (delete the original content):
{
  "models": [
    {
      "title": "JuheAI",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.juheai.top/v1",
      "apiType": "openai",
      "apiKey": "sk-xxx"
    }
  ]
}
fastgpt
Introduction: A knowledge-base question-answering system based on large language models (LLMs), providing out-of-the-box data processing, model calling, and other capabilities. Workflows can also be orchestrated visually through Flow to handle complex question-answering scenarios!
Project Address: https://github.com/labring/FastGPT
Configuration Method: Modify the interface address and API Key in the corresponding configuration file before deployment and startup; for a Docker Compose deployment, this is the files/docker/docker-compose.yml file. There is no need to deploy the one-api program.
# Other configurations
# AI model API address. Be sure to include /v1. (The default here is the OneAPI access address.)
- OPENAI_BASE_URL=https://api.juheai.top/v1
# AI model API Key. (The default is OneAPI's default key; be sure to change it.)
- CHAT_API_KEY=sk-xxx
# Other configurations
dify
Introduction: Dify is an open-source LLM application development platform. Its intuitive interface combines AI workflows, RAG pipelines, Agents, model management, observability features, etc., allowing you to quickly go from prototype to production.
Project Address: https://github.com/langgenius/dify
Configuration Method: After logging into the program, select openai-api-compatible in the settings and set it up as shown in the figure.
gpt_academic
Introduction: Provides practical interaction interfaces for GPT/GLM and other LLMs, specially optimized for paper reading/polishing/writing. It features a modular design, custom shortcut buttons & function plugins, Python and C++ project analysis & self-translation, PDF/LaTeX paper translation & summarization, parallel querying of multiple LLMs, and support for chatglm3 and other local models. It connects to Tongyi Qianwen, deepseekcoder, Xunfei Spark, Wenxin Yiyan, llama2, rwkv, claude2, moss, etc.
Project Address: https://github.com/binary-husky/gpt_academic
Configuration Method: Before starting the program, modify the following two configuration parameters in config.py:
API_KEY = "sk-xxx"
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://api.juheai.top/v1/chat/completions"}
AnythingLLM
Introduction: The all-in-one AI application you've been looking for. Chat with your documents, use AI agents, highly configurable, multi-user, without cumbersome setup.
Project Address: https://github.com/Mintplex-Labs/anything-llm
Configuration Method: In the settings you can configure the LLM preference and the Embedder preference. Set the LLM provider to Generic OpenAI, the Base URL to https://api.juheai.top/v1, the Chat Model Name (recommended) to gpt-4o, and the Embedding Model (recommended) to text-embedding-ada-002. Set the Token context window, Max Tokens, and Max embedding chunk length according to the numbers shown in the figure:
OpenWebUI
Introduction: Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to run completely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, please refer to our Open WebUI documentation.
Project Address: https://github.com/open-webui/open-webui
Configuration Method: Replace OPENAI_API_BASE_URLS in the environment variables with https://api.juheai.top/v1.
Docker Run
docker run -d -p 3000:8080 \
-v open-webui:/app/backend/data \
-e OPENAI_API_BASE_URLS="https://api.juheai.top/v1" \
-e OPENAI_API_KEYS="sk-xxx" \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
Docker Compose
services:
  open-webui:
    environment:
      - 'OPENAI_API_BASE_URLS=https://api.juheai.top/v1'
      - 'OPENAI_API_KEYS=sk-xxx'
BotGem
Introduction: BotGem is an intelligent chat assistant application that supports multiple systems on PC and mobile. It uses advanced natural language processing technology to understand and respond to your text messages. You can use BotGem to ask questions, share ideas, seek advice, or just chat casually.
Project Address: https://botgem.com/
Configuration Method: Open settings and modify the API Server and API Key.
DrawAI
Introduction: An AI image-generation tool that can batch-generate DALL·E images and supports OpenAI-compatible endpoints (proxy API).
Project Address: https://github.com/sunsky89757/DrawAI
Configuration Method: Open settings and set the API interface address and API Key.
cline(claude-dev)
Introduction: An autonomous coding agent inside your Integrated Development Environment (IDE) that can create/edit files, execute commands, and more, asking for your permission at every step.
Project Address: https://github.com/cline/cline
Configuration Method: Install this plugin in VS Code, open the settings, select OpenAI Compatible, and set the Base URL, API Key, and Model ID.
SiYuan Note
Introduction: SiYuan Note is a privacy-first personal knowledge management system that supports completely offline use while also supporting end-to-end encrypted synchronization. It integrates blocks, outlines, and bidirectional links to restructure your thinking.
Project Address: https://github.com/siyuan-note/siyuan
Configuration Method: In SiYuan Note settings, find the AI section, and complete the three items of model, API Key, and API base address, filling in gpt-4o, your purchased Key, and https://api.juheai.top/v1 respectively.
Jan
Introduction: Jan is an open-source ChatGPT alternative that runs 100% offline on your computer. It supports multiple engines (llama.cpp, TensorRT-LLM).
Project Address: https://github.com/janhq/jan
Configuration Method: Find OpenAI in the settings and enter the API Key and interface information: https://api.juheai.top/v1/chat/completions
Cursor
Introduction: AI code editor built for highly efficient work. Cursor is the best way to code collaboratively with AI.
Project Address: https://www.cursor.com/
Configuration Method: In the settings, find Models - use OpenAI API Key, enter your API Key, fill in the Override OpenAI Base URL field with https://api.juheai.top/v1, then add the required models.
Note: When adding new models, if you need to use Claude models, do not fill in the original model names, as they will report an error. Fill in the following names instead and they will work smoothly (each alias is mapped to the original model behind the scenes): fill in c-3-5-h-20241022 for claude-3-5-haiku-20241022; fill in c-3-5-s-20240620 for claude-3-5-sonnet-20240620; fill in c-3-5-s-20241022 for claude-3-5-sonnet-20241022.
If you have other models that need mapping, please contact our customer service.
Ragflow
Introduction: RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding, the best AI knowledge base construction tool.
Project Address: https://github.com/infiniflow/ragflow
Configuration Method: Open the Ragflow program, click the avatar in the upper right corner, enter the Model Providers option, and under Models to be added select OpenAI-API-Compatible - Add Model.
You need to set up four types of models: chat (regular conversation model), embedding (vector model), rerank (reranking model), and image2text (image-to-text model). First, you can check all the models from here, and then fill in the target models in Ragflow.
chat model settings (using claude-3-7-sonnet-20250219 as an example; whether the model is from OpenAI, Claude, or Gemini, JuheNext has unified the interface format, so all models are connected through the OpenAI-API-Compatible format):
- Model type: chat
- Model name: claude-3-7-sonnet-20250219
- Base url: https://api.juheai.top/v1
- API-Key: sk-xx (fill in your own key)
- Maximum tokens: 16000
- Support Vision: checked (this model supports visual recognition; after checking, the image2text model type is added automatically)
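To illustrate what checking Support Vision enables, below is a minimal sketch of an OpenAI-style multimodal request against the same base url, assuming the standard image_url message format (the image URL and key are placeholders, and whether a given model accepts image input depends on the model itself):
from openai import OpenAI

client = OpenAI(base_url="https://api.juheai.top/v1", api_key="sk-xx")

# A vision-capable chat model can receive text and image parts in one message.
response = client.chat.completions.create(
    model="claude-3-7-sonnet-20250219",  # the example chat model above
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample.png"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)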
embedding model settings (text-embedding-ada-002 is recommended: high concurrency, good results):
- Model type: embedding
- Model name: text-embedding-ada-002
- Base url: https://api.juheai.top/v1
- API-Key: sk-xx (fill in your own key)
- Maximum tokens: 1500
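If you want to verify the embedding endpoint and key before wiring them into Ragflow, a quick check with the OpenAI Python SDK looks roughly like this (the key is a placeholder):
from openai import OpenAI

client = OpenAI(base_url="https://api.juheai.top/v1", api_key="sk-xx")

# Request an embedding for a short text with the recommended model.
result = client.embeddings.create(
    model="text-embedding-ada-002",
    input="embedding connectivity test",
)
print(len(result.data[0].embedding))  # vector dimension (1536 for ada-002)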
rerank model settings (bge-reranker-v2-m3 is recommended):
- Model type: rerank
- Model name: bge-reranker-v2-m3
- Base url: https://api.juheai.top/v1/rerank
- API-Key: sk-xx (fill in your own key)
- Maximum tokens: 1500
TIP
Open-source programs are hard to come by. After using them, we hope you will visit the project page and give the author a star. We will keep adding more supported open-source AI programs; if you have used other good open-source AI projects, you are welcome to contact us so we can add them.