# Kernelbot Data Processing Skills

This document describes how to extract and process submission data from the Kernelbot database.

## Database Connection

The production database is hosted on Heroku. **NEVER run write operations (INSERT, UPDATE, DELETE) on this database.**

```bash
# Get DATABASE_URL from Heroku
heroku config:get DATABASE_URL --app discord-cluster-manager
```
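
When connecting from Python, it is worth forcing the session to read-only as an extra guard against accidental writes. A minimal sketch using `psycopg2`, fetching the URL the same way as the command above:

```python
import subprocess

import psycopg2

# Fetch the connection string exactly as the CLI command above does.
DATABASE_URL = subprocess.check_output(
    ["heroku", "config:get", "DATABASE_URL", "--app", "discord-cluster-manager"],
    text=True,
).strip()

conn = psycopg2.connect(DATABASE_URL)
conn.set_session(readonly=True)  # any INSERT/UPDATE/DELETE will now fail
```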

## Database Schema

The relevant tables are in the `leaderboard` schema:

| Table | Description |
|-------|-------------|
| `leaderboard.leaderboard` | Problem definitions (id, name, deadline, task, description) |
| `leaderboard.submission` | User submissions (id, leaderboard_id, user_id, code_id, submission_time, status) |
| `leaderboard.runs` | Execution results (submission_id, score, passed, mode, runner, result) |
| `leaderboard.user_info` | User details (id, user_name) |
| `leaderboard.gpu_type` | GPU types per problem (leaderboard_id, gpu_type) |
| `leaderboard.code_files` | Actual submission code content (old_code text, code bytea) |
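
To double-check this table against the live database, the column layout can be read from the standard `information_schema` views. A sketch, reusing the connection pattern above:

```python
import psycopg2

DATABASE_URL = "..."  # from heroku config:get
conn = psycopg2.connect(DATABASE_URL)

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT table_name, column_name, data_type
        FROM information_schema.columns
        WHERE table_schema = 'leaderboard'
        ORDER BY table_name, ordinal_position
        """
    )
    for table, column, dtype in cur.fetchall():
        print(f"{table}.{column}: {dtype}")
```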

## Key Problem IDs

### NVFP4 Problems
- **595**: nvfp4_gemv
- **597**: nvfp4_gemm
- **598**: nvfp4_dual_gemm
- **730**: nvfp4_group_gemm (not released yet)

### AMD Problems
- **398**: amd-identity
- **399**: amd-fp8-mm
- **430**: amd-mixture-of-experts
- **463**: amd-mla-decode
- **563**: amd-all2all
- **564**: amd-gemm-rs
- **565**: amd-ag-gemm
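
For scripting, the IDs above can be kept in a small lookup; the constant name below is just a suggestion:

```python
# Problem name -> leaderboard_id, taken from the lists above.
PROBLEM_IDS = {
    # NVFP4
    "nvfp4_gemv": 595,
    "nvfp4_gemm": 597,
    "nvfp4_dual_gemm": 598,
    "nvfp4_group_gemm": 730,  # not released yet
    # AMD
    "amd-identity": 398,
    "amd-fp8-mm": 399,
    "amd-mixture-of-experts": 430,
    "amd-mla-decode": 463,
    "amd-all2all": 563,
    "amd-gemm-rs": 564,
    "amd-ag-gemm": 565,
}
```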

## Run Modes

| Mode | Description | Has Score? |
|------|-------------|------------|
| `test` | Correctness tests | No |
| `benchmark` | Performance benchmarks (internal) | No |
| `leaderboard` | Official leaderboard runs | **Yes** |
| `profile.0-3` | Profiling runs | No |

**Important:**
- Use `mode = 'leaderboard'` when joining runs to get scores (see the sketch below).
- **Lower scores are better** (scores are execution time in seconds).
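
Put together, a scoring query joins `runs` to `submission` and `user_info`, filters to `mode = 'leaderboard'`, and orders ascending. A sketch, assuming `passed` is a boolean flag; the problem ID is an example:

```python
import pandas as pd
import psycopg2

DATABASE_URL = "..."  # from heroku config:get
conn = psycopg2.connect(DATABASE_URL)

query = """
SELECT u.user_name, s.submission_time, r.score
FROM leaderboard.runs r
JOIN leaderboard.submission s ON s.id = r.submission_id
JOIN leaderboard.user_info u ON u.id = s.user_id
WHERE r.mode = 'leaderboard'
  AND r.passed
  AND s.leaderboard_id = 595  -- nvfp4_gemv
ORDER BY r.score ASC          -- lower (faster) is better
"""
df = pd.read_sql(query, conn)
print(df.head(10))
```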

## SQL Queries

All SQL queries are in `queries.sql`. Key queries:
- List all problems
- Check submission counts
- Export deduplicated submissions with code
- Get top N submissions
- Get user progression over time

## Adding Support for a New Problem

### Step 1: Find the Problem ID
Use the "LIST ALL PROBLEMS" query from `queries.sql`.

### Step 2: Check Submission Counts
Use the "CHECK SUBMISSION COUNTS" query from `queries.sql`.

### Step 3: Export Deduplicated Submissions
Use the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE" query from `queries.sql`.

```python
import pandas as pd
import psycopg2

DATABASE_URL = "..."  # from heroku config:get
conn = psycopg2.connect(DATABASE_URL)

# Read queries.sql, copy out the "EXPORT DEDUPLICATED SUBMISSIONS WITH CODE"
# section, and update the problem IDs for the new problem.
with open('queries.sql') as f:
    sql_text = f.read()

query = "..."  # the export query section from sql_text, with IDs adjusted

df = pd.read_sql(query, conn)
df.to_parquet('new_problem_submissions.parquet', index=False)
```

### Step 4: Verify Data Quality
```python
from analyze_submissions import load_submissions, leaderboard_summary

df = load_submissions('new_problem_submissions.parquet')
print(leaderboard_summary(df))
```

## Accessing Submission Code

The parquet files include the full code content for each submission:

```python
from analyze_submissions import load_submissions

df = load_submissions()

# Get a specific user's best submission
user_subs = df[(df['user_name'] == 'gau.nernst') & (df['problem_name'] == 'nvfp4_gemv')]
best = user_subs.sort_values('score').head(1)

# Access the code
code = best['code'].values[0]
print(code)
```

## Helper Functions

Use `analyze_submissions.py`:

```python
from analyze_submissions import (
    load_submissions,      # Load parquet file
    author_progression,    # See user's submissions over time
    top_contestants,       # Get leaderboard rankings
    leaderboard_summary,   # Summary stats per problem
    user_stats,            # Stats for a specific user
    format_score           # Format score with units (us, ms, s)
)
```
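
A usage sketch follows; the argument names and order are assumptions, so check the actual signatures in `analyze_submissions.py` before relying on them:

```python
from analyze_submissions import load_submissions, top_contestants, format_score

df = load_submissions('nvidia_nvfp4_submissions.parquet')

# Assumed call shape: rankings for one problem, best (lowest) scores first.
rankings = top_contestants(df, 'nvfp4_gemv')
print(rankings.head(10))

# Assumed call shape: pretty-print a runtime given in seconds.
print(format_score(df['score'].min()))
```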

## Environment Setup

```bash
uv venv .venv
source .venv/bin/activate
uv pip install pandas pyarrow psycopg2-binary
```

## Files

| File | Description |
|------|-------------|
| `nvidia_nvfp4_submissions.parquet` | Deduplicated NVIDIA NVFP4 submissions with code (~1.4 GB) |
| `queries.sql` | All SQL queries for data extraction |
| `scripts/nvfp4/analyze_submissions.py` | Helper functions library |
| `scripts/nvfp4/get_fastest_submission.py` | Print user's fastest submission |
| `scripts/nvfp4/query_submissions.py` | List submission IDs or query specific ID |

## Review Checklist Before Pushing

1. Verify submission counts match expectations
2. Check for any anomalies in scores (negative, extremely large, etc.); see the sketch after this list
3. Confirm deduplication worked correctly
4. Test helper functions work with the new data
5. Run `python scripts/nvfp4/query_submissions.py` to verify
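
Checks 1-3 can be scripted in a few lines (a sketch against the parquet columns used above; the runtime threshold is illustrative):

```python
from analyze_submissions import load_submissions

df = load_submissions('new_problem_submissions.parquet')

# 1. Submission count matches what the export query reported.
print(f"rows: {len(df)}")

# 2. Score anomalies: negative or implausibly large runtimes.
bad = df[(df['score'] < 0) | (df['score'] > 3600)]
print(f"anomalous scores: {len(bad)}")

# 3. Deduplication: no repeated code blobs should remain.
print(f"duplicate code entries: {df['code'].duplicated().sum()}")
```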