---
title: CodeDebug Multi-LLM Debugger
emoji: ⚡
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

# ⚡ CodeDebug — Multi-LLM Debugging Engine (LangChain)

A code debugging assistant that queries **3 LLMs in parallel** using LangChain LCEL chains, then uses **Qwen3 as a judge** to synthesize the best final answer.

## Architecture

```
Your query
 ├──▶ Trinity Large (arcee-ai) ─╮
 ├──▶ StepFun Flash (stepfun)  ─┼──▶ Qwen3 Judge ──▶ Reasoning + Final Answer
 └──▶ Nemotron Nano (nvidia)   ─╯

LangChain Stack:
  PANEL_PROMPT | ChatOpenAI | StrOutputParser   × 3  (asyncio.gather)
  JUDGE_PROMPT | ChatOpenAI | StrOutputParser   × 1
```

## Setup on Hugging Face Spaces

1. Fork this Space
2. **Settings → Variables and secrets → New secret**
   - Name: `OPENROUTER_API_KEY`
   - Value: your key from [openrouter.ai](https://openrouter.ai)
3. The Space rebuilds automatically (~2 min)

## Local development

```bash
# 1. Create .env
echo "OPENROUTER_API_KEY=sk-or-v1-..." > .env

# 2. Install
pip install -r requirements.txt

# 3. Run
uvicorn app.main:app --reload --port 7860
```

Open http://localhost:7860

## File structure

```
├── app/
│   ├── __init__.py
│   ├── config.py        # pydantic-settings
│   ├── prompts.py       # LangChain ChatPromptTemplates
│   ├── llm_factory.py   # ChatOpenAI factory (OpenRouter)
│   ├── llm_chain.py     # LCEL pipeline (panel + judge)
│   └── main.py          # FastAPI routes
├── frontend/
│   └── index.html       # Single-file UI
├── requirements.txt
├── Dockerfile
└── README.md
```

## API

`POST /api/v1/debug`

```json
{
  "question": "Why does my Python function return None?",
  "temperature": 0.3
}
```

Response:

```json
{
  "question": "...",
  "panel": [
    {
      "model": "...",
      "label": "Trinity Large",
      "response": "...",
      "latency_ms": 1200,
      "error": null
    },
    ...
  ],
  "judge": {
    "reasoning": "...",
    "final_answer": "...",
    "latency_ms": 3100,
    "error": null
  },
  "total_ms": 4300
}
```
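As a sketch of how a client might use the request/response shapes above — the payload fields match the API section, but the response values here are hypothetical sample data, not live output:

```python
import json

# Build the request body for POST /api/v1/debug
payload = {
    "question": "Why does my Python function return None?",
    "temperature": 0.3,
}
body = json.dumps(payload)  # send this with Content-Type: application/json

# Parse a hypothetical response (values are illustrative, not real model output)
sample_response = json.loads("""
{
  "question": "Why does my Python function return None?",
  "panel": [
    {"model": "...", "label": "Trinity Large",
     "response": "The function is missing a return statement.",
     "latency_ms": 1200, "error": null}
  ],
  "judge": {"reasoning": "...",
            "final_answer": "Add a return statement.",
            "latency_ms": 3100, "error": null},
  "total_ms": 4300
}
""")

# The synthesized answer lives under judge.final_answer
final = sample_response["judge"]["final_answer"]

# Panel entries carry per-model latency, so clients can compare them
fastest = min(sample_response["panel"], key=lambda p: p["latency_ms"])

print(final)             # Add a return statement.
print(fastest["label"])  # Trinity Large
```

Panel entries whose `error` field is non-null should be skipped when reading `response`; the judge output follows the same convention.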