jiaxwang committed
Commit 9c67489 · verified · 1 Parent(s): 2f7065e

Update README.md

Files changed (1): README.md +11 -38
README.md CHANGED
@@ -13,11 +13,10 @@ base_model:
  - **ROCm:** 7.0
  - **Operating System(s):** Linux
  - **Inference Engine:** [vLLM](https://docs.vllm.ai/en/latest/)
- - **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (V0.11)
+ - **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (V0.11.1)
  - **MoE**
  - **Weight quantization:** MOE-only, OCP MXFP4, Static
  - **Activation quantization:** MOE-only, OCP MXFP4, Dynamic
- - **KV cache quantization:** OCP FP8, Static
  - **Calibration Dataset:** [Pile](https://huggingface.co/datasets/mit-han-lab/pile-val-backup)

  This model was built from GLM-4.7 by applying [AMD-Quark](https://quark.docs.amd.com/latest/index.html) for MXFP4 quantization.
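For reference, OCP MXFP4 (the microscaling format named in the bullets above) packs values into blocks of 32 FP4 (E2M1) elements that share a single power-of-two (E8M0) scale. A minimal numpy sketch of the quantize/dequantize round-trip, assuming the usual floor-of-log2 choice for the shared scale:

```
import numpy as np

# Magnitudes representable by FP4 E2M1 (sign is handled separately)
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_roundtrip(block: np.ndarray) -> np.ndarray:
    """Quantize and dequantize one 32-element MX block with a shared power-of-two scale."""
    assert block.size == 32, "MXFP4 blocks hold 32 elements"
    amax = np.abs(block).max()
    if amax == 0.0:
        return np.zeros_like(block)
    # Shared E8M0 scale: 2^(floor(log2(amax)) - emax), with emax = 2 for E2M1
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2.0)
    scaled = block / scale
    # Snap each element to the nearest representable E2M1 magnitude (clamps at 6.0)
    nearest = np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * E2M1_GRID[nearest] * scale

block = np.random.randn(32)
print("max abs round-trip error:", np.abs(block - mxfp4_roundtrip(block)).max())
```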
@@ -25,48 +24,22 @@ This model was built with GLM-4.7 model by applying [AMD-Quark](https://quark.do
  # Model Quantization

  The model was quantized from [zai-org/GLM-4.7](https://huggingface.co/zai-org/GLM-4.7) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). The weights and activations are quantized to MXFP4.
- AMD-Quark was installed from source inside the Docker image `rocm/vllm-private:vllm_dev_base_mxfp4_20260122`.

  **Quantization scripts:**

- Note that GLM-4.7 is not in the built-in model template list in Quark V0.11, so it has to be registered before quantization.
-
- - **Step 1:** Register the model template: create the file `Quark/examples/torch/language_modeling/llm_ptq/quantize_glm.py`
- ```
- import runpy
- from quark.torch import LLMTemplate
-
- # Register a template describing GLM-4 MoE's layer layout for Quark
- glm4_moe_template = LLMTemplate(
-     model_type="glm4_moe",
-     kv_layers_name=["*k_proj", "*v_proj"],
-     q_layer_name="*q_proj",
-     exclude_layers_name=["lm_head", "*mlp.gate", "*self_attn*", "*shared_experts.*", "*mlp.down_proj", "*mlp.gate_proj", "*mlp.up_proj"],
- )
- LLMTemplate.register_template(glm4_moe_template)
- print(f"[INFO]: Registered template '{glm4_moe_template.model_type}'")
-
- # Hand off to Quark's stock quantization entry point; running it via runpy
- # keeps the template registration above in effect in the same process
- quantize_script = "/app/Quark/examples/torch/language_modeling/llm_ptq/quantize_quark.py"
- runpy.run_path(quantize_script, run_name="__main__")
- ```
- - **Step 2:** Quantize with `quantize_glm.py`:
  ```
  export CUDA_VISIBLE_DEVICES=0,1,2,3
  export MODEL_DIR=zai-org/GLM-4.7
  export output_dir=amd/GLM-4.7-MXFP4

  exclude_layers="*self_attn* *mlp.gate lm_head *mlp.gate_proj *mlp.up_proj *mlp.down_proj *shared_experts.*"
- python3 quantize_glm.py --model_dir $MODEL_DIR \
-     --quant_scheme mxfp4 \
-     --num_calib_data 128 \
-     --exclude_layers $exclude_layers \
-     --kv_cache_dtype fp8 \
-     --model_export hf_format \
-     --output_dir $output_dir \
-     --multi_gpu
+ python3 quantize_quark.py --model_dir $MODEL_DIR \
+     --quant_scheme mxfp4 \
+     --num_calib_data 128 \
+     --exclude_layers $exclude_layers \
+     --model_export hf_format \
+     --output_dir $output_dir \
+     --multi_gpu
  ```

  # Deployment
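As a quick sanity check after export, the HF-format checkpoint can be inspected for its quantization metadata. A minimal sketch, assuming Quark's `hf_format` export records the scheme under a `quantization_config` entry in the checkpoint's `config.json`:

```
import json
from pathlib import Path

# Directory written by --output_dir in the quantization step above
ckpt = Path("amd/GLM-4.7-MXFP4")

config = json.loads((ckpt / "config.json").read_text())
# The key below is an assumption about the exporter's layout; adjust if it differs
qcfg = config.get("quantization_config", {})
print("quantization_config keys:", sorted(qcfg))
```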
@@ -104,7 +77,8 @@ The model was evaluated on GSM8K benchmarks.
  ### Reproduction

- The GSM8K results were obtained using the `lm-evaluation-harness` framework, based on the Docker image `rocm/vllm-private:vllm_dev_base_mxfp4_20260122`, with vLLM, lm-eval, and amd-quark compiled and installed from source inside the image.
+ The GSM8K results were obtained using the `lm-evaluation-harness` framework, based on the Docker image `rocm/vllm-private:vllm_dev_base_mxfp4_20260122`, with vLLM and lm-eval compiled and installed from source inside the image.
+ The Docker image contains the necessary vLLM code modifications to support this model.

  #### Launching server
  ```
@@ -112,8 +86,7 @@ vllm serve amd/GLM-4.7-MXFP4 \
  --tensor-parallel-size 4 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
- --enable-auto-tool-choice \
- --kv_cache_dtype fp8
+ --enable-auto-tool-choice
  ```

  #### Evaluating model in a new terminal
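The evaluation command itself falls outside the changed hunks above and is unchanged from the original README. Before running the harness, a quick request against the server confirms it is up; a minimal sketch, assuming vLLM's default port 8000 and its OpenAI-compatible endpoint:

```
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; no real key is needed for a local server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="amd/GLM-4.7-MXFP4",
    messages=[{"role": "user", "content": "What is 12 * 7? Reply with just the number."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```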
 