Introducing GRM2, a powerful 3-billion-parameter model designed for long-term reasoning and high performance on complex tasks.
Despite having only 3 billion parameters, it outperforms Qwen3-32B on several benchmarks and complex reasoning tasks.
It can also generate extensive, complex code of over 1,000 lines, use tools comparably to much larger models, and is well suited to agentic tasks.
GRM2 is licensed under Apache 2.0, making it an ideal base for fine-tuning on other tasks. You can see more here: OrionLLM/GRM2-3b
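Since GRM2 is Apache 2.0 licensed and pitched as a fine-tuning base, here is a minimal sketch of preparing chat-style instruction data for it. The JSONL "messages" layout and field names below are assumptions following a common fine-tuning convention, not a format documented by the GRM2 model card.

```python
# Minimal sketch: serializing instruction-tuning examples as chat-style JSONL,
# a common convention for fine-tuning frameworks. Field names are assumptions,
# not a format specified for GRM2 itself.
import json

def to_jsonl_record(instruction: str, response: str) -> str:
    """Serialize one (instruction, response) training example as one JSONL line."""
    record = {
        "messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": response},
        ]
    }
    return json.dumps(record, ensure_ascii=False)

def write_dataset(pairs: list[tuple[str, str]], path: str) -> None:
    """Write (instruction, response) pairs to a JSONL training file."""
    with open(path, "w", encoding="utf-8") as f:
        for instruction, response in pairs:
            f.write(to_jsonl_record(instruction, response) + "\n")
```

One line per example keeps the file streamable, so large fine-tuning sets never need to be loaded into memory at once.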
Introducing GRM-Coder, a 14-billion-parameter code model based on Qwen3-14B.
On LiveCodeBench v6 (01/08/2024 - 01/05/2025), it achieves a Pass@1 accuracy of 67.87%, up 7.08 percentage points from the Qwen3-14B baseline Pass@1 of 60.79%. You can see more here: OrionLLM/GRM-Coder-14b
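For context on the metric: Pass@1 is the k = 1 case of the standard unbiased pass@k estimator used by code benchmarks such as LiveCodeBench. The sketch below shows the general formulation (from the HumanEval evaluation methodology); it is the standard estimator, not anything specific to GRM-Coder.

```python
# Unbiased pass@k estimator for code benchmarks: given n generated samples per
# problem of which c pass the tests, estimate P(at least one of k randomly
# chosen samples passes). Standard formulation, not specific to GRM-Coder.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """1 - C(n-c, k) / C(n, k); with k = 1 this reduces to c / n."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_pass_at_k(results: list[tuple[int, int]], k: int) -> float:
    """Average pass@k over per-problem (n, c) tallies."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)
```

With one sample per problem, the benchmark score is simply the fraction of problems whose single generated solution passes.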