flash-attention-3 (#16) 8a13284 · littlebird13, multimodalart (HF Staff) · committed 15 days ago
Update qwen_tts/core/models/modeling_qwen3_tts.py 9bdb7e2 · littlebird13 · committed 15 days ago
Apply flash-attention-3 and pre-load all models (no dynamic reloading) (#10) 361cf94 · littlebird13, multimodalart (HF Staff) · committed 15 days ago
ZeroGPU duration to 60s instead of 180s (#4) 254e5a2 · littlebird13, victor (HF Staff) · committed 21 days ago
No need to add queue (#1) 7f42f0e · littlebird13, multimodalart (HF Staff) · committed 21 days ago