# GGUF to MLX Conversion
- Generated: 2026-02-12T23:25:33.024611+00:00
- Source repo: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
- Source filename: glm-4.7-flash-claude-4.5-opus.bf16.gguf
- Source revision: main
- Tensor count: 844
## Weights
- weights-00001.safetensors
- weights-00002.safetensors
- weights-00003.safetensors
- weights-00004.safetensors
- weights-00005.safetensors
- weights-00006.safetensors
- weights-00007.safetensors
- weights-00008.safetensors
- weights-00009.safetensors
- weights-00010.safetensors
- weights-00011.safetensors
- weights-00012.safetensors
- weights-00013.safetensors
- weights-00014.safetensors
- weights-00015.safetensors
- weights-00016.safetensors
- weights-00017.safetensors
- weights-00018.safetensors
- weights-00019.safetensors
- weights-00020.safetensors
- weights-00021.safetensors
- weights-00022.safetensors
- weights-00023.safetensors
- weights-00024.safetensors
- weights-00025.safetensors
- weights-00026.safetensors
- weights-00027.safetensors
- weights-00028.safetensors
- weights-00029.safetensors
- weights-00030.safetensors
- weights-00031.safetensors
- weights-00032.safetensors
- weights-00033.safetensors
- weights-00034.safetensors
- weights-00035.safetensors
- weights-00036.safetensors
- weights-00037.safetensors
- weights-00038.safetensors
- weights-00039.safetensors
- weights-00040.safetensors
- weights-00041.safetensors
- weights-00042.safetensors
- weights-00043.safetensors
- weights-00044.safetensors
- weights-00045.safetensors
- weights-00046.safetensors
- weights-00047.safetensors
- weights-00048.safetensors
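The 48 shard files follow a zero-padded numbering scheme, so the full listing above can be enumerated programmatically. A minimal sketch (the five-digit index width is inferred from the filenames):

```python
# Enumerate the shard filenames weights-00001.safetensors .. weights-00048.safetensors.
shards = [f"weights-{i:05d}.safetensors" for i in range(1, 49)]
```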
## Config

```json
{
  "bos_token_id": 154822,
  "eos_token_id": 154820,
  "hidden_size": 2048,
  "intermediate_size": 10240,
  "max_position_embeddings": 202752,
  "model_type": "deepseek2",
  "moe_intermediate_size": 1536,
  "num_attention_heads": 20,
  "num_experts": 64,
  "num_experts_per_tok": 4,
  "num_experts_used": 1,
  "num_hidden_layers": 47,
  "num_key_value_heads": 1,
  "num_local_experts": 1,
  "pad_token_id": 154821,
  "rms_norm_eps": 9.999999747378752e-06,
  "rope_theta": 1000000.0,
  "unk_token_id": 154820,
  "vocab_size": 154880
}
```
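A few of these fields can be cross-checked against each other. A minimal sketch using only values from the config above (embedded here as a literal for illustration):

```python
import json

# Subset of the config above, inlined for a self-contained check.
config = json.loads("""{
  "num_experts": 64,
  "num_experts_per_tok": 4,
  "vocab_size": 154880,
  "bos_token_id": 154822,
  "eos_token_id": 154820,
  "pad_token_id": 154821
}""")

# All special-token ids must fall inside the vocabulary.
for key in ("bos_token_id", "eos_token_id", "pad_token_id"):
    assert 0 <= config[key] < config["vocab_size"]

# Each token is routed to 4 of the 64 experts, i.e. 1/16 of them are active.
active_fraction = config["num_experts_per_tok"] / config["num_experts"]
```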
## Tokenizer
- Model: gpt2
- Pre-tokenizer: glm4
Generated files:
- tokenizer.json
- tokenizer_config.json
- special_tokens_map.json
## Provenance
Converted with `gguf_to_mlx.py`, which reads the source tensors with the `gguf` Python library and writes MLX-compatible safetensors output.
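The `gguf_to_mlx.py` script itself is not published with this card, so its internals are not documented here. One step such a converter must perform is grouping the 844 tensors into the 48 shard files; the greedy grouping below is a hypothetical sketch of that step (the byte budget and the name → size mapping are illustrative, not taken from the actual script):

```python
from typing import Dict, List

def plan_shards(tensor_bytes: Dict[str, int], max_shard_bytes: int) -> List[List[str]]:
    """Greedily group tensor names into shards of at most max_shard_bytes.

    Hypothetical sketch of how a converter might split tensors across
    weights-000NN.safetensors files; not the actual gguf_to_mlx.py logic.
    """
    shards: List[List[str]] = []
    current: List[str] = []
    used = 0
    for name, size in tensor_bytes.items():
        # Start a new shard when adding this tensor would exceed the budget.
        if current and used + size > max_shard_bytes:
            shards.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        shards.append(current)
    return shards
```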