KuraKura AI
Community
AI & ML interests: Sea Turtles
adds Q4_K_M GGUF file for mobile compatibility
#1 opened 2 months ago by Tonic
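The point of the Q4_K_M GGUF discussion above is file size: 4-bit quantization shrinks the weights enough to fit on a phone. A rough sketch, assuming llama.cpp's Q4_K_M averages about 4.85 bits per weight (an approximation; real files vary with tensor layout and metadata):

```python
def q4_k_m_size_bytes(n_params: float, bits_per_weight: float = 4.85) -> float:
    """Rough on-disk size of a Q4_K_M GGUF file.

    ~4.85 bits/weight is an approximation for llama.cpp's Q4_K_M
    mixed-precision scheme, not an exact figure.
    """
    return n_params * bits_per_weight / 8

# A 1.2B-parameter model quantized to Q4_K_M lands around 0.7 GB,
# small enough for most modern phones.
size_gb = q4_k_m_size_bytes(1.2e9) / 1e9
print(f"~{size_gb:.2f} GB")
```

By the same estimate, the 16-bit original would be roughly 2.4 GB, which is why the quantized file is the one that ships for mobile.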
mlabonne authored 2 papers 3 months ago
Post
10316
New family of 1B models just dropped!
> LiquidAI/LFM2.5-1.2B-Base: 10T → 28T tokens
> LiquidAI/LFM2.5-1.2B-Instruct: new large-scale multi-stage RL
> LiquidAI/LFM2.5-1.2B-JP: our most polite model
> LiquidAI/LFM2.5-VL-1.6B: multi-image multilingual
> LiquidAI/LFM2.5-Audio-1.5B: 8x faster, no quality loss
Super proud of this release 🤗
MaxLSB authored a paper 6 months ago
GAD-cell updated 2 datasets 7 months ago
GAD-cell updated 5 models 7 months ago
kurakurai/Luth-0.6B-Instruct
Text Generation • 0.6B • Updated • 424 • 9
kurakurai/Luth-1.7B-Instruct
Text Generation • 2B • Updated • 335 • 14
kurakurai/Luth-LFM2-350M
Text Generation • 0.4B • Updated • 172 • 15
kurakurai/Luth-LFM2-700M
Text Generation • 0.7B • Updated • 79 • 16
kurakurai/Luth-LFM2-1.2B
Text Generation • 1B • Updated • 110 • 24
GAD-cell authored a paper 7 months ago
Post
8430
LiquidAI/LFM2-8B-A1B just dropped!
8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
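The speed claim in the post comes from the mixture-of-experts split: only a fraction of the 8.3B weights participate in each token's forward pass. A minimal sketch of that arithmetic, using only the figures quoted above:

```python
# Why a sparse MoE like LFM2-8B-A1B runs fast: per-token compute scales
# with the *active* parameters, not the total.
TOTAL_PARAMS = 8.3e9    # all experts combined (from the post)
ACTIVE_PARAMS = 1.5e9   # weights actually used per token (from the post)

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"~{active_fraction:.0%} of weights active per token")
# Throughput is therefore closer to a ~1.5B dense model, while quality
# draws on the full 8.3B parameter pool.
```

This is also why a quality comparison against 3–4B dense models alongside a speed comparison against Qwen3-1.7B is not a contradiction: the two axes depend on different parameter counts.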
Post
3884
⚛️ New drop of tiny task-specific models!
Want to do data extraction, translation, RAG, tool use, or math on a Raspberry Pi? We got you covered! ✅
These tiny models were fine-tuned to perform narrow tasks extremely well, making them competitive with much larger models.
You can deploy them today on-device or even on GPUs for big data operations!
LiquidAI/liquid-nanos-68b98d898414dd94d4d5f99a
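Whether a model can run "on a Raspberry Pi," as the post claims, mostly comes down to whether its quantized weights fit in device RAM. A first-pass sketch, assuming 4-bit weights and ignoring KV cache, activations, and OS overhead (the helper name and the 50% headroom figure are illustrative choices, not from the release):

```python
def fits_in_ram(n_params: float, ram_gb: float,
                bits_per_weight: float = 4.0, headroom: float = 0.5) -> bool:
    """Very rough check: do quantized weights fit within `headroom`
    of device RAM? Ignores KV cache and runtime overhead, so treat
    it as a coarse filter only."""
    weight_gb = n_params * bits_per_weight / 8 / 1e9
    return weight_gb <= ram_gb * headroom

# A 350M-parameter nano model at 4-bit needs ~0.18 GB of weights,
# comfortably inside a 4 GB Raspberry Pi; a 70B model clearly does not fit.
print(fits_in_ram(350e6, ram_gb=4))
print(fits_in_ram(70e9, ram_gb=4))
```

The margin left by such small weights is what makes room for longer contexts and batch workloads even on constrained hardware.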
MaxLSB updated 3 models 7 months ago