runtime error

Exit code: 1. Reason: in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

config.json: 100%|██████████| 679/679 [00:00<00:00, 2.70MB/s]
model.safetensors: 100%|██████████| 3.55G/3.55G [00:10<00:00, 342MB/s]
generation_config.json: 100%|██████████| 181/181 [00:00<00:00, 1.11MB/s]

Traceback (most recent call last):
  File "/app/app.py", line 22, in <module>
    model = PeftModel.from_pretrained(model, LORA_PATH)
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 430, in from_pretrained
    model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/peft/peft_model.py", line 1025, in load_adapter
    dispatch_model(
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 374, in dispatch_model
    raise ValueError(
ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: base_model.model.model.layers.18, base_model.model.model.layers.19, base_model.model.model.layers.20, base_model.model.model.layers.21, base_model.model.model.layers.22, base_model.model.model.layers.23, base_model.model.model.layers.24, base_model.model.model.layers.25, base_model.model.model.layers.26, base_model.model.model.layers.27, base_model.model.model.norm, base_model.model.lm_head.
