Category | Name | URL | Description |
|---|---|---|---|
Other Useful AI DevTools | Firecrawl | https://firecrawl.dev/ | Turn websites into LLM-ready data |
Other Useful AI DevTools | Agents.md | https://agents.md/ | A simple, open format for guiding coding agents, used by over 20k open-source projects |
Other Useful AI DevTools | Vercel AI Gateway | https://vercel.com/blog/ai-gateway-is-now-generally-available | A gateway to access hundreds of models with zero markup on tokens (including BYOK) |
Other Useful AI DevTools | OpenRouter | https://openrouter.ai | A unified API providing access to hundreds of AI models through a single endpoint |
Other Useful AI DevTools | Fabric | https://github.com/danielmiessler/Fabric | An open-source modular system for solving specific problems using crowdsourced AI prompts that can be used anywhere |
Other Useful AI DevTools | Vibetunnel | https://vibetunnel.sh/ | VibeTunnel proxies your terminals right into the browser so you can vibe-code anywhere |
Other Useful AI DevTools | Anannas | https://anannas.ai/ | Single API to access any LLM - Seamlessly connect to multiple models through a single gateway with failproof routing, cost control, and instant usage insights |
Other Useful AI DevTools | CodeRabbit | https://www.coderabbit.ai | AI code reviews - cut code review time & bugs in half |
Other Useful AI DevTools | Giga AI | https://gigamind.dev/context | Giga's context engineering improves quality and understanding, so your AI works right the first time and you build faster |
Other Useful AI DevTools | Gas Town | https://github.com/steveyegge/gastown | Multi-agent orchestrator for Claude Code. Track work with convoys; sling to agents |
Coding Leaderboards | GDPval-AA | https://artificialanalysis.ai/evaluations/gdpval-aa | Artificial Analysis evaluation of OpenAI's GDPval dataset; tests AI agents on real-world knowledge work tasks across 44 occupations and 9 industries; Elo-ranked via blind pairwise comparisons |
Coding Leaderboards | Artificial Analysis Intelligence Index | https://artificialanalysis.ai/evaluations | Composite leaderboard independently measuring AI models across agents, coding, scientific reasoning, and general knowledge; updated daily with live API performance data |
Coding Leaderboards | MirrorCode | https://epoch.ai/blog/mirrorcode-preliminary-results/ | From Epoch AI × METR: a new long-horizon SWE benchmark measuring AI performance on weeks-long coding tasks |
Coding Leaderboards | Code Arena | https://arena.ai/leaderboard/code | Community-voted coding leaderboard with 200k+ votes ranking models on agentic coding tasks; covers web dev (React, HTML), game development, data visualization, and image-to-code generation |
Coding Leaderboards | SWE-Bench Pro (Commercial Dataset) | https://scale.com/leaderboard/swe_bench_pro_commercial | A new benchmark designed to provide a rigorous and realistic evaluation of AI agents for software engineering |
Coding Leaderboards | SWE-Bench Pro (Public Dataset) | https://scale.com/leaderboard/swe_bench_pro_public | Designed to provide a rigorous and realistic evaluation of AI agents for software engineering; addresses data contamination, limited task diversity, and unreliable testing |
Coding Leaderboards | SWE-bench Verified (Deprecated) | https://www.swebench.com/ | SWE-bench evaluates LLM performance on real-world software issues collected from GitHub (Verified subset - deprecated) |
Coding Leaderboards | SWE-bench | https://www.swebench.com/ | SWE-bench evaluates LLM performance on real-world software issues collected from GitHub |
Coding Leaderboards | SWE-bench Multilingual | https://swe-bench.com/ | 300 curated SWE-bench style tasks from 42 repositories representing 9 programming languages |
Coding Leaderboards | SWE-rebench | https://swe-rebench.com/ | A Continuously Evolving and Decontaminated Benchmark for Software Engineering LLMs |
Coding Leaderboards | Aider | https://aider.chat/docs/leaderboards/ | Aider polyglot coding leaderboard |
Coding Leaderboards | OpenRouter | https://openrouter.ai/rankings | Model Market Share, Use Case Categories, and App Rankings |
Coding Leaderboards | ARC-AGI-2 | https://arcprize.org/leaderboard | Stress testing the efficiency and capability of state-of-the-art AI reasoning systems |
Coding Leaderboards | Terminal-Bench@2.0 | https://www.tbench.ai/leaderboard/terminal-bench/2.0 | A benchmark measuring the capabilities of AI agents in a terminal environment |
Coding Leaderboards | Terminal-Bench | https://www.tbench.ai/leaderboard | A benchmark measuring the capabilities of AI agents in a terminal environment |
Coding Leaderboards | OSWorld | https://os-world.github.io/ | Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments |
Coding Leaderboards | PR Arena | https://prarena.ai/ | Software engineering agents head to head |
Coding Leaderboards | Multi-SWE-bench | https://multi-swe-bench.github.io/#/ | A Multilingual Benchmark for Issue Resolving |
Coding Leaderboards | SWE-DEV | https://github.com/DorothyDUUU/SWE-Dev | Evaluating and Training Autonomous Feature-Driven Software Development |
Coding Leaderboards | LiveCodeBench Pro | https://livecodebenchpro.com/ | A benchmark composed of problems from Codeforces, ICPC, and IOI, continuously updated to reduce data contamination |
Coding Leaderboards | LiveCodeBench | https://livecodebench.github.io/leaderboard.html | Holistic and Contamination Free Evaluation of Large Language Models for Code |
Coding Leaderboards | BigCodeArena | https://huggingface.co/spaces/bigcode/arena | A human-in-the-loop platform for evaluating code through execution |
Coding Leaderboards | Modu Merge Rate Leaderboard | https://www.askmodu.com/rankings | Real-world success rates: Ranking top coding agents by their pull request merge performance on Modu |
Coding Leaderboards | OpenBench Coding | https://openbench.dev/benchmarks/coding | An open-source framework for standardized reproducible benchmarking of large language models (LLMs) |
Coding Leaderboards | Context-Bench | https://leaderboard.letta.com/ | A benchmark for agentic context engineering |
Coding Leaderboards | Repo Bench | https://repoprompt.com/bench | Measuring large-context reasoning, file-editing precision, and instruction adherence |
Coding Leaderboards | Vending-Bench 2 | https://andonlabs.com/evals/vending-bench-2 | Measuring AI model performance on running a business over long time horizons |
Coding Leaderboards | τ-bench / τ2-bench | https://taubench.com | Benchmarking AI agents in collaborative real-world scenarios |
Coding Leaderboards | Live-SWE-agent | https://live-swe-agent.github.io/ | Can Software Engineering Agents Self-Evolve on the Fly? |
Coding Leaderboards | MCP Atlas | https://scale.com/leaderboard/mcp_atlas | Evaluates how well language models handle real-world tool use through the Model Context Protocol (MCP) |
Coding Leaderboards | CORE-Bench Hard | https://hal.cs.princeton.edu/corebench_hard | Agent is given the codebase of a published scientific paper and must install dependencies, run the code, and answer questions about the paper |
Coding Leaderboards | APEX-Agents | https://www.mercor.com/apex/apex-agents-leaderboard/ | The AI Productivity Index for Agents measures whether frontier AI agents can execute long-horizon, cross-application tasks across three jobs in professional services |
Developer Surveys | The State of AI Coding in 2025: Adoption, Proficiency, and Transformation | https://stateof.themodernsoftware.dev/ | The Modern Software Developer, December 2025 |
Developer Surveys | AI in Practice Survey 2025 | https://theoryvc.com/ai-in-practice-survey-2025 | Theory Ventures, December 2025 |
SOURCE | joylarkin/AI-Coding-Landscape | https://github.com/joylarkin/AI-Coding-Landscape | The 2026 AI Coding Landscape - Coding agents, CLIs, IDEs, AI app builders, devtools, and more |
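Several gateways listed above (OpenRouter, Vercel AI Gateway, Anannas) expose many models behind one OpenAI-compatible chat endpoint, so switching models means changing only the model slug. A minimal sketch of that pattern against OpenRouter's `https://openrouter.ai/api/v1/chat/completions` route, using only the Python standard library; the model slug is illustrative, and an `OPENROUTER_API_KEY` environment variable is assumed if you actually send the request:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a unified gateway.

    The payload shape is the same for every model behind the gateway;
    only the `model` slug changes.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Illustrative model slug; any slug the gateway supports works the same way.
    req = build_chat_request("openai/gpt-4o-mini", "Say hello")
    print(req.full_url)
    # Uncomment to actually send (requires a valid API key):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request builder works against any OpenAI-compatible gateway by swapping the base URL and key, which is the portability these services advertise.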