Experimental prune of michaelwaves' Amoral GPT OSS 120B from 128 experts down to 112. No GGUF quants are provided because I can't run a model this size on my laptop; maybe someone else can make them (I hope!).
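For anyone curious what an expert prune looks like mechanically, here is a minimal toy sketch (hypothetical tensor names and shapes, not the actual GPT OSS safetensors layout): each MoE layer stacks its expert weights along an expert dimension, and pruning keeps a subset of expert indices, slicing both the expert tensors and the matching router projection.

```python
import numpy as np

# Toy stand-in for one MoE layer: 128 experts, each a small FFN weight.
# Real checkpoints store these as safetensors shards with different
# names and shapes; this only illustrates the slicing.
n_experts, hidden, ffn = 128, 8, 16
expert_w = np.random.randn(n_experts, hidden, ffn)  # stacked expert weights
router_w = np.random.randn(hidden, n_experts)       # router logits projection

# Keep 112 experts (here simply the first 112; a real prune would pick
# experts by some importance score, e.g. routing frequency).
keep = np.arange(112)
pruned_expert_w = expert_w[keep]      # slice the expert dimension
pruned_router_w = router_w[:, keep]   # router must match the kept experts

print(pruned_expert_w.shape)  # (112, 8, 16)
print(pruned_router_w.shape)  # (8, 112)
```

Dropping 16 of 128 experts removes roughly an eighth of the MoE parameters, which lines up roughly with the 120B → 102B drop in total parameter count.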

- Format: Safetensors
- Model size: 102B params
- Tensor type: BF16

Model tree for blascotobasco/michaelwaves-Amoral-GPT-OSS-112E

- Finetuned: this model
- Quantizations: 2 models