---
base_model: zai-org/GLM-4.6V
---

# GLM-4.6V-GGUF

This model is converted from [zai-org/GLM-4.6V](https://huggingface.co/zai-org/GLM-4.6V) to GGUF using `convert_hf_to_gguf.py`.

To use it:

```
llama-server -hf ggml-org/GLM-4.6V-GGUF
```
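
For reference, the conversion step looks roughly like the following. This is a minimal sketch, assuming a local checkout of llama.cpp (which provides `convert_hf_to_gguf.py`) and a downloaded copy of the original model in `./GLM-4.6V`; the local path, output filename, and the `f16` output type are illustrative, not the exact settings used for this repository.

```
# minimal sketch: paths, output filename, and output type are illustrative
python convert_hf_to_gguf.py ./GLM-4.6V --outfile GLM-4.6V-f16.gguf --outtype f16
```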