GGUF Quantized: leesplank-municipal

This repository contains GGUF format model files for uaebn/leesplank-municipal.
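The individual GGUF files can be downloaded directly. As a sketch, Hugging Face serves repository files at a predictable `resolve/<revision>` URL, so a direct link can be built from the repo id and filename (the repo id `uaebn/leesplank-municipal-GGUF` and the filenames below are taken from this card; the URL layout is an assumption about Hugging Face's standard file hosting):

```python
# Build a direct download URL for a file in a Hugging Face repository.
# Assumes the standard https://huggingface.co/<repo>/resolve/<rev>/<file>
# layout used by the Hub; repo id and filename come from this model card.

def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct download URL for one file in a Hub repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_file_url("uaebn/leesplank-municipal-GGUF",
                  "leesplank-municipal.Q4_K_M.gguf")
print(url)
# https://huggingface.co/uaebn/leesplank-municipal-GGUF/resolve/main/leesplank-municipal.Q4_K_M.gguf
```

The `huggingface_hub` library's `hf_hub_download` helper does the same job with caching, if a Python dependency is acceptable.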

Due to a tokenizer compatibility issue between this model family and llama.cpp's GGUF format, some complex Dutch compound words may be truncated during generation. If this affects your use case, use the original safetensors model instead.

Files Available

| Filename | Quant Type | Size | Description |
|---|---|---|---|
| leesplank-municipal.Q4_K_M.gguf | Q4_K_M | 1.05 GB | Recommended. Balanced quality and speed; best for most standard laptops (8 GB+ RAM). |
| leesplank-municipal.Q5_K_M.gguf | Q5_K_M | 1.20 GB | High quality. Slight accuracy increase over Q4, but uses more RAM. |
| leesplank-municipal.f16.gguf | F16 | 3.32 GB | Uncompressed. Maximum precision. |
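The file sizes roughly track the bits stored per weight. Taking the F16 file as a 16-bits-per-parameter baseline, the effective bit width of each quant can be estimated from the size ratio; a minimal sketch using the sizes from the table above:

```python
# Estimate effective bits per weight from the file sizes listed above,
# using the F16 file (16 bits/param) as the reference point.
sizes_gb = {"Q4_K_M": 1.05, "Q5_K_M": 1.20, "F16": 3.32}

def effective_bits(quant: str) -> float:
    """Effective bits per weight, inferred from size relative to F16."""
    return 16.0 * sizes_gb[quant] / sizes_gb["F16"]

for q in ("Q4_K_M", "Q5_K_M", "F16"):
    print(f"{q}: ~{effective_bits(q):.1f} bits/weight")
```

The estimates land above the nominal 4 and 5 bits because K-quant files also carry block scales, some higher-precision tensors, and GGUF metadata.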

Format: GGUF
Model size: 2B params
Architecture: llama
