FLUX.2 Klein LoRA Studio 🔥 · Running on Zero (MCP) · 18 likes · Demo of a collection of FLUX.2-Klein model LoRAs
Post: We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8 GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
Qwen3-TTS-Daggr-UI 🔥 · Running on T4 · 21 likes · Custom voice, voice design, voice cloning, and ASR nodes
LTX-2 Video [Turbo] 🔥 · Running on Zero (MCP) · 202 likes · Fast, high-quality video generation with audio, using FA3
Qwen-Image-Edit-2511-LoRAs-Fast · Running on Zero (MCP) · Featured · 813 likes · Demo of the collection of Qwen Image Edit LoRAs
fancyfeast/llama-joycaption-beta-one-hf-llava · Image-Text-to-Text · Updated May 16, 2025 · 60.2k downloads · 307 likes
stepfun-ai/GELab-Zero-4B-preview · Image-Text-to-Text · 4B params · Updated Dec 19, 2025 · 5.29k downloads · 149 likes