chromadb/context-1 MLX MXFP4

This model was converted from chromadb/context-1 to MLX format with MXFP4 (4-bit) quantization for efficient inference on Apple Silicon.

Model Description

  • Base Model: chromadb/context-1 (fine-tuned from openai/gpt-oss-20b)
  • Architecture: 20B parameter Mixture of Experts (MoE) with 32 experts, 4 active per token
  • Format: MLX with MXFP4 quantization
  • Quantization: 4.504 bits per weight
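The 4.504 bits-per-weight figure is close to what grouped 4-bit quantization predicts. As a rough sanity check, assuming MLX's default affine scheme (groups of 64 weights sharing a 16-bit scale and a 16-bit bias; this grouping is an assumption, not confirmed by the conversion logs):

```python
# Back-of-envelope bits-per-weight for grouped 4-bit quantization.
# Assumed parameters (not from the conversion logs): group_size=64,
# fp16 scale and fp16 bias per group.
bits = 4
group_size = 64
scale_bits = 16
bias_bits = 16

bpw = bits + (scale_bits + bias_bits) / group_size
print(bpw)  # 4.5 -- the reported 4.504 is slightly higher because
            # some layers are left unquantized
```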

Performance (Apple M1 Max, 64GB)

Metric              Value
------------------  ---------------
Model Size          11 GB
Peak Memory         12 GB
Generation Speed    ~69 tokens/sec
Prompt Processing   ~70 tokens/sec
Latency             ~14.5 ms/token
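The latency and generation-speed rows are reciprocals of each other, so the table is internally consistent:

```python
# Per-token latency implied by the measured generation speed.
tokens_per_sec = 69
ms_per_token = 1000 / tokens_per_sec
print(round(ms_per_token, 1))  # 14.5
```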

Usage

from mlx_lm import load, generate

model, tokenizer = load("foadmk/context-1-MLX-MXFP4")
response = generate(model, tokenizer, prompt="What is the capital of France?", max_tokens=100, verbose=True)

Conversion Notes

The chromadb/context-1 model uses a different weight format than the original openai/gpt-oss-20b, which required custom conversion logic:

Key Differences from Original Format

  • Dense BF16 tensors (not quantized blocks with _blocks suffix)
  • gate_up_proj shape: (experts, hidden, intermediate*2) with interleaved gate/up weights

Weight Transformations Applied

  1. gate_up_proj (32, 2880, 5760):

    • Transpose to (32, 5760, 2880)
    • Interleaved split: [:, ::2, :] for gate, [:, 1::2, :] for up
    • Result: gate_proj.weight and up_proj.weight each (32, 2880, 2880)
  2. down_proj (32, 2880, 2880):

    • Transposed to the weight layout MLX expects
  3. Bypass mlx_lm sanitize: weights are pre-named with the .weight suffix so that mlx_lm's sanitize step skips its automatic splitting, which is incorrect for this layout
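The transpose-and-de-interleave steps above can be sketched with NumPy on a scaled-down example (tiny dimensions stand in for the real (32, 2880, 5760) tensor; variable names are illustrative, not taken from the actual conversion script):

```python
import numpy as np

experts, hidden, intermediate = 2, 4, 3  # real model: 32, 2880, 2880

# gate_up_proj as stored: (experts, hidden, intermediate*2),
# with gate and up weights interleaved along the last axis.
gate_up = np.arange(experts * hidden * intermediate * 2,
                    dtype=np.float32).reshape(experts, hidden, intermediate * 2)

# Step 1: transpose to (experts, intermediate*2, hidden).
w = gate_up.transpose(0, 2, 1)

# Step 2: de-interleave -- even rows become gate, odd rows become up.
gate_proj = w[:, ::2, :]   # (experts, intermediate, hidden)
up_proj   = w[:, 1::2, :]  # (experts, intermediate, hidden)

# down_proj (square in the real model) is likewise moved to MLX's
# expected layout with a .transpose(0, 2, 1).

print(gate_proj.shape, up_proj.shape)
```

Because the split happens after the transpose, row k of gate_proj for a given expert corresponds to column 2k of the original tensor, and row k of up_proj to column 2k+1.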

Conversion Script

A conversion script is included in this repo: convert_context1_to_mlx.py

python convert_context1_to_mlx.py --output ./context1-mlx-mxfp4

Intended Use

This model is optimized for:

  • Context-aware retrieval and search tasks
  • Running locally on Apple Silicon Macs
  • Low-latency inference without GPU requirements

Limitations

  • Requires Apple Silicon Mac with MLX support
  • Best performance on M1 Pro/Max/Ultra or newer with 32GB+ RAM
  • Model outputs structured JSON-like responses (inherited from base model training)

Citation

If you use this model, please cite the original:

@misc{chromadb-context-1,
  author = {Chroma},
  title = {Context-1: A Fine-tuned GPT-OSS Model for Retrieval},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/chromadb/context-1}
}

