Does this mean we will have GGUF quants of models as they release, or at least out-of-the-box GGUF support for new models in the future?
rombodawg
AI & ML interests
My Patreon: https://www.patreon.com/c/Rombodawg
My Twitter: https://x.com/dudeman6790
Recent Activity
New activity 6 days ago on MiniMaxAI/MiniMax-M2.5: "Minimax 2.7???"
New activity 22 days ago on microsoft/Phi-4-reasoning-vision-15B: "Typo in model card?"
Commented about 1 month ago on an article: "GGML and llama.cpp join HF to ensure the long-term progress of Local AI"