runtime error

Exit code: 139. Reason:

/usr/local/lib/python3.13/site-packages/diffusers/models/transformers/transformer_kandinsky.py:168: UserWarning: CUDA is not available or torch_xla is imported. Disabling autocast.
  @torch.autocast(device_type="cuda", dtype=torch.float32)
/usr/local/lib/python3.13/site-packages/diffusers/models/transformers/transformer_kandinsky.py:272: UserWarning: CUDA is not available or torch_xla is imported. Disabling autocast.
  @torch.autocast(device_type="cuda", dtype=torch.float32)
[2026-02-14 09:49:30] CPU threads set: 16
[2026-02-14 09:49:30] Background model loading started...
[2026-02-14 09:49:30] Downloading model GGUF from Hugging Face...
Traceback (most recent call last):
  File "/app/app.py", line 158, in <module>
    ).queue(concurrency_count=1)
    ^^^^^
AttributeError: 'Dependency' object has no attribute 'queue'
z-image-turbo-Q5_0.gguf:   0%|          | 0.00/5.26G [00:00<?, ?B/s]
Fatal Python error: PyInterpreterState_Delete: remaining subinterpreters
Python runtime state: preinitialized
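The AttributeError is the actual crash: in gradio, binding an event (e.g. `.click(...)`) returns a `Dependency` object, not the `Blocks` app, so chaining `.queue(...)` onto the binding's return value fails. The usual fix is to call `.queue()` on the `Blocks`/`Interface` object itself (and note that in Gradio 4+ the `concurrency_count` argument was replaced by `default_concurrency_limit`). Below is a minimal stand-in sketch of the pattern, not gradio itself; the class and method names mimic the real API only for illustration.

```python
# Stand-in classes (NOT gradio) illustrating why the chained call fails.
class Dependency:
    """Mimics the object a gradio event binding returns: it has no queue()."""

class Blocks:
    """Mimics gr.Blocks for this sketch."""
    def __init__(self):
        self.queued = False

    def click(self, fn):
        # Binding an event returns a Dependency, NOT self.
        return Dependency()

    def queue(self, default_concurrency_limit=1):
        # queue() lives on the app object, so call it here.
        self.queued = True
        return self

demo = Blocks()
binding = demo.click(lambda: None)

# Broken pattern from the traceback: queue() called on the binding's
# return value raises AttributeError.
try:
    binding.queue()
except AttributeError as exc:
    print(f"AttributeError: {exc}")

# Working pattern: queue the app object itself.
demo.queue(default_concurrency_limit=1)
print(demo.queued)
```

In the real app this would mean restructuring `app.py` so the `).queue(...)` call hangs off the `gr.Blocks()` instance (e.g. `demo.queue(...).launch()`) rather than off the last event binding in the chain.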
