no gguf support

#4
by kalle07 - opened

Error converting to fp16:

INFO:hf-to-gguf:Loading model: granite-embedding-english-r2
INFO:hf-to-gguf:Model architecture: ModernBertModel
ERROR:hf-to-gguf:Model ModernBertModel is not supported

IBM Granite org

Hi @kalle07 ! GGUF support for this model is actively in the works over in llama.cpp: https://github.com/ggml-org/llama.cpp/pull/15641. Once it's merged, we will put up official GGUF conversions in the Granite Quantized Models collection. In the meantime, you're welcome to follow the steps on the PR to try it today. We'd love any feedback you have!
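For anyone who wants to try it before the merge, the general shape of the steps is sketched below. This is only a rough outline, not the official instructions on the PR: the local branch name, the model path, and the output filename are placeholders, and the conversion script's exact flags may differ on the PR branch, so defer to the PR itself if anything disagrees.

```bash
# Get llama.cpp and check out the in-progress ModernBert/GGUF support from the PR
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
git fetch origin pull/15641/head:modernbert-gguf   # local branch name is arbitrary
git checkout modernbert-gguf

# Install the Python dependencies used by the conversion script
pip install -r requirements.txt

# Convert the downloaded HF checkpoint to an f16 GGUF file
# (paths and output filename are placeholders)
python convert_hf_to_gguf.py /path/to/granite-embedding-english-r2 \
    --outtype f16 \
    --outfile granite-embedding-english-r2-f16.gguf
```

The resulting .gguf file can then be exercised locally (for example with llama.cpp's llama-embedding tool) until the official quantized uploads land in the collection.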

@gabegoodhart
Any update on the GGUF progress for Ollama?

IBM Granite org

The story here matches the other thread: https://huggingface.co/ibm-granite/granite-embedding-english-r2/discussions/2#698630216b569af5ef1c09a4

Close, but not quite there!
