umairali64488 committed
Commit 7999da7 · verified · 1 Parent(s): 8c99647

Upload 3 files

Files changed (3)
  1. Dockerfile +15 -0
  2. README.md +87 -6
  3. requirements.txt +10 -0
Dockerfile ADDED
@@ -0,0 +1,15 @@
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ # Install dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy app source
+ COPY app/ ./app/
+ COPY frontend/ ./frontend/
+
+ EXPOSE 7860
+
+ CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
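For reference, the image above can be built and run locally along these lines. The image name `codedebug` is arbitrary, and the key value is a placeholder:

```shell
# Build the image from the repository root
docker build -t codedebug .

# Run it, passing the OpenRouter key and publishing the Space port
docker run --rm \
  -e OPENROUTER_API_KEY=sk-or-v1-... \
  -p 7860:7860 \
  codedebug
```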
README.md CHANGED
@@ -1,11 +1,92 @@
  ---
- title: Multi Llm Debugging Engine
- emoji: 👀
- colorFrom: green
- colorTo: green
  sdk: docker
  pinned: false
- license: mit
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: CodeDebug Multi-LLM Debugger
+ emoji:
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
+ app_port: 7860
  pinned: false
  ---

+ # CodeDebug Multi-LLM Debugging Engine (LangChain)
12
+
13
+ A code debugging assistant that queries **3 LLMs in parallel** using LangChain LCEL chains,
14
+ then uses **Qwen3 as a judge** to synthesize the best final answer.
15
+
16
+ ## Architecture
17
+
18
+ ```
19
+ Your query
20
+ ├──▶ Trinity Large (arcee-ai) ─╮
21
+ ├──▶ StepFun Flash (stepfun) ─┼──▶ Qwen3 Judge ──▶ Reasoning + Final Answer
22
+ └──▶ Nemotron Nano (nvidia) ─╯
23
+
24
+ LangChain Stack:
25
+ PANEL_PROMPT | ChatOpenAI | StrOutputParser × 3 (asyncio.gather)
26
+ JUDGE_PROMPT | ChatOpenAI | StrOutputParser × 1
27
+ ```
+ ## Setup on Hugging Face Spaces
+
+ 1. Fork this Space
+ 2. **Settings → Variables and secrets → New secret**
+    - Name: `OPENROUTER_API_KEY`
+    - Value: your key from [openrouter.ai](https://openrouter.ai)
+ 3. The Space rebuilds automatically (~2 min)
+
+ ## Local development
+
+ ```bash
+ # 1. Create .env
+ echo "OPENROUTER_API_KEY=sk-or-v1-..." > .env
+
+ # 2. Install
+ pip install -r requirements.txt
+
+ # 3. Run
+ uvicorn app.main:app --reload --port 7860
+ ```
+ Open http://localhost:7860
+
+ ## File structure
+
+ ```
+ ├── app/
+ │   ├── __init__.py
+ │   ├── config.py       # pydantic-settings
+ │   ├── prompts.py      # LangChain ChatPromptTemplates
+ │   ├── llm_factory.py  # ChatOpenAI factory (OpenRouter)
+ │   ├── llm_chain.py    # LCEL pipeline (panel + judge)
+ │   └── main.py         # FastAPI routes
+ ├── frontend/
+ │   └── index.html      # Single-file UI
+ ├── requirements.txt
+ ├── Dockerfile
+ └── README.md
+ ```
+ ## API
+
+ `POST /api/v1/debug`
+ ```json
+ { "question": "Why does my Python function return None?", "temperature": 0.3 }
+ ```
+
+ Response:
+ ```json
+ {
+   "question": "...",
+   "panel": [
+     { "model": "...", "label": "Trinity Large", "response": "...", "latency_ms": 1200, "error": null },
+     ...
+   ],
+   "judge": {
+     "reasoning": "...",
+     "final_answer": "...",
+     "latency_ms": 3100,
+     "error": null
+   },
+   "total_ms": 4300
+ }
+ ```
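A minimal Python client sketch for the endpoint, assuming a server running locally on port 7860 as in the Local development section. It uses only the standard library (`urllib`), so no extra dependency is needed:

```python
import json
import urllib.request

def build_request(question: str, temperature: float = 0.3) -> urllib.request.Request:
    """Build the POST /api/v1/debug request with the payload shape shown above."""
    payload = json.dumps({"question": question, "temperature": temperature}).encode()
    return urllib.request.Request(
        "http://localhost:7860/api/v1/debug",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Why does my Python function return None?")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The judge's synthesized answer, per the response schema above.
    print(body["judge"]["final_answer"])
```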
requirements.txt ADDED
@@ -0,0 +1,10 @@
+ fastapi==0.115.5
+ uvicorn[standard]==0.32.1
+ python-multipart==0.0.17
+ pydantic==2.10.3
+ pydantic-settings==2.7.0
+
+ # ── LangChain stack ────────────────────────────────────────────────────────
+ langchain==0.3.13
+ langchain-core==0.3.28
+ langchain-openai==0.3.0