runtime error

Exit code: 1. Reason:

chat_template.jinja: 100%|██████████| 1.30k/1.30k [00:00<00:00, 4.79MB/s]
Loading model...
AutoModel failed: Unrecognized model in LiquidAI/LFM2.5-Audio-1.5B. Should have a `model_type` key in its config.json. Trying AutoModelForCausalLM...

Traceback (most recent call last):
  File "/app/app.py", line 25, in <module>
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float32)
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 319, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
        pretrained_model_name_or_path,
        ...<4 lines>...
        **kwargs,
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1471, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized model in LiquidAI/LFM2.5-Audio-1.5B. Should have a `model_type` key in its config.json.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/app.py", line 29, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float32)
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 319, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
        pretrained_model_name_or_path,
        ...<4 lines>...
        **kwargs,
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1471, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized model in LiquidAI/LFM2.5-Audio-1.5B. Should have a `model_type` key in its config.json.
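Both load attempts fail at the same place: `AutoConfig.from_pretrained` reads the repo's `config.json` and cannot map it to a registered architecture because the file has no `model_type` key (the installed `transformers` version does not recognize this model family, and the repo ships no remote code that registers one). A minimal, stdlib-only sketch of the check that `transformers` is effectively performing (the `check_model_type` helper and file paths below are illustrative, not part of the `transformers` API):

```python
import json
from pathlib import Path

def check_model_type(config_path: str) -> str:
    """Return the `model_type` declared in a config.json, or raise a
    ValueError mirroring the error in the log above.
    Illustrative helper only -- not the actual transformers internals."""
    config = json.loads(Path(config_path).read_text())
    if "model_type" not in config:
        raise ValueError(
            f"Unrecognized model in {config_path}. "
            "Should have a `model_type` key in its config.json."
        )
    return config["model_type"]

# A config with model_type would resolve; one without reproduces the error.
Path("ok_config.json").write_text(json.dumps({"model_type": "llama"}))
print(check_model_type("ok_config.json"))  # -> llama
```

If the key really is absent from the hub repo's `config.json`, upgrading `transformers` alone will not help for the generic `AutoModel` path; the model likely needs whatever loading mechanism its model card prescribes.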
