⚡ Each donation = another big MoE quantized

I host 25+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory), enough for ~30-50B-class MoEs; bigger ones (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage; much appreciated.

Nemotron-3-Nano-30B-A3B APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of NVIDIA-Nemotron-3-Nano-30B-A3B.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, middle layers get more aggressive compression. I-variants use diverse imatrix calibration (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
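As a minimal sketch of the idea (not the actual APEX implementation), the snippet below classifies tensors by their llama.cpp-style GGUF names and keys precision off layer position. The quant-type choices and the edge width are illustrative stand-ins; the real mapping lives in the APEX scripts.

```python
# Illustrative APEX-style precision assignment. The quant types below are
# hypothetical placeholders, not the published APEX recipe.
N_LAYERS = 52
EDGE = 5  # "5+5 symmetric edge gradient": first/last 5 layers kept at higher precision

def tensor_role(name: str) -> str:
    """Classify a GGUF tensor by role from its llama.cpp-style name."""
    if "exps" in name and "shexp" not in name:
        return "routed_expert"
    if "shexp" in name:
        return "shared_expert"
    if "attn" in name:
        return "attention"
    return "other"

def pick_quant(name: str, layer: int) -> str:
    """Edge layers get higher precision; middle layers are compressed harder."""
    role = tensor_role(name)
    is_edge = layer < EDGE or layer >= N_LAYERS - EDGE
    if role == "routed_expert":
        return "Q4_K" if is_edge else "Q3_K"  # hypothetical type choices
    if role in ("shared_expert", "attention"):
        return "Q6_K" if is_edge else "Q5_K"
    return "Q8_0"

for layer in (0, 26, 51):
    print(layer, pick_quant(f"blk.{layer}.ffn_down_exps.weight", layer))
```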

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: NVIDIA-Nemotron-3-Nano-30B-A3B (NemotronH)
  • Layers: 52 (23 Mamba-2, 23 MoE, 6 GQA attention)
  • Experts: 128 routed + 1 shared (6 active per token)
  • Total Parameters: 30B
  • Active Parameters: ~3.5B per token (rough arithmetic after this list)
  • APEX Config: 5+5 symmetric edge gradient across 52 layers
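For intuition on how a 30B-total model runs at only ~3.5B active parameters, here is back-of-the-envelope arithmetic. The routed-expert weight share used below is a hypothetical split chosen to roughly reproduce the stated figure, not a published breakdown.

```python
# Back-of-the-envelope check of the active-parameter figure (illustrative only;
# the exact per-tensor breakdown comes from the model config, not from here).
total_params = 30e9
routed_experts = 128
active_experts = 6       # routed experts active per token

# Hypothetical split: suppose routed-expert FFNs hold ~93% of all weights and
# the rest (Mamba-2 blocks, attention, shared expert, embeddings) is always on.
routed_frac = 0.93
always_on = total_params * (1 - routed_frac)
active_routed = total_params * routed_frac * active_experts / routed_experts

print(f"always-on: {always_on/1e9:.1f}B, active routed: {active_routed/1e9:.2f}B")
print(f"total active per token: {(always_on + active_routed)/1e9:.2f}B")
```

This prints roughly 3.4B active per token, consistent with the stated ~3.5B.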

Run with LocalAI

```bash
local-ai run mudler/Nemotron-3-Nano-30B-A3B-APEX-GGUF@Nemotron-3-Nano-30B-A3B-APEX-I-Balanced.gguf
```
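Once LocalAI is serving, the model is reachable through its OpenAI-compatible API (port 8080 by default). A quick smoke test with the official `openai` Python client, assuming the model name matches the GGUF file loaded above:

```python
from openai import OpenAI

# LocalAI speaks the OpenAI API; no real key is required for a local instance.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Nemotron-3-Nano-30B-A3B-APEX-I-Balanced.gguf",  # name from the run command above
    messages=[{"role": "user", "content": "Explain MoE routing in one paragraph."}],
)
print(resp.choices[0].message.content)
```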

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
