SmolVLM is now available on PocketPal — you can run it offline on your smartphone to interpret the world around you. 🌍📱
And check out this real-time camera demo by @ngxson, powered by llama.cpp:
https://github.com/ngxson/smolvlm-realtime-webcam
https://x.com/pocketpal_ai
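For context on how a demo like this talks to the model: below is a minimal sketch (not the demo's actual code) of sending a single captured frame to a locally running llama.cpp server hosting SmolVLM through its OpenAI-compatible endpoint. The port, file name, and prompt are illustrative assumptions.

```python
# Minimal sketch: send one camera frame to a local llama.cpp server running SmolVLM
# and ask it to describe the scene.
# Assumptions (not from the post): the server exposes an OpenAI-compatible endpoint
# on localhost:8080, and "frame.jpg" is a captured frame already saved to disk.
import base64
import requests

with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "max_tokens": 100,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what you see in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
}

response = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
print(response.json()["choices"][0]["message"]["content"])
```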
Featured Space: The Smol Training Playbook 📚 • The secrets to building world-class LLMs • 2.98k
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B Text Generation • 2B • Updated Feb 24, 2025 • 879k • 1.45k
A few days ago, Thinking Machines Lab released “LoRA Without Regret”, showing that LoRA can match full fine-tuning performance when configured right. Naturally, we decided to reproduce the results with TRL and release a guide!
https://huggingface.co/docs/trl/main/en/lora_without_regret
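As a rough illustration of the kind of setup the guide walks through, here is a minimal LoRA fine-tuning sketch with TRL and PEFT. The model, dataset, and hyperparameters below are placeholders for illustration, not the guide's recommended configuration — see the linked docs for the settings that actually reproduce the paper's results.

```python
# Minimal sketch: LoRA fine-tuning with TRL's SFTTrainer.
# Assumptions (not from the post): the model id, dataset, and LoRA hyperparameters
# are illustrative choices, not the guide's exact configuration.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

peft_config = LoraConfig(
    r=16,                         # LoRA rank; a key knob discussed in the guide
    lora_alpha=32,
    target_modules="all-linear",  # apply LoRA adapters to all linear layers
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",    # small model for illustration
    train_dataset=dataset,
    args=SFTConfig(output_dir="lora-without-regret-repro"),
    peft_config=peft_config,
)
trainer.train()
```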
Toward Efficient Agents: Memory, Tool learning, and Planning Paper • 2601.14192 • Published 23 days ago • 54
DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF Text Generation • 30B • Updated 15 days ago • 66.6k • 190
Uncensored, Heretic GGUF quants of GLM 4.7 Flash (30B-A3B), made with a correct, fully updated llama.cpp; NEO-CODE Imatrix with 16-bit output tensors. Also includes specialized quants (balanced for this model), and all quants use the NEO-CODE Imatrix with a 16-bit output tensor: DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF
Regular, non-heretic quants, also with 16-bit output tensor, NEO-CODE Imatrix, and specialized variants: DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
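If you want to try one of these quants locally, a minimal sketch with llama-cpp-python is shown below. The quant filename pattern and generation settings are assumptions for illustration — check the repo's file list and substitute the quant you actually want.

```python
# Minimal sketch: download and run one of the GGUF quants with llama-cpp-python.
# Assumptions (not from the post): the filename pattern below is a placeholder;
# pick a real quant file from the repo before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF",
    filename="*Q4_K_M.gguf",  # placeholder pattern; match it to an actual file in the repo
    n_ctx=4096,               # context window for this session
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```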
Article: Atlaset Dataset for Moroccan Darija: From Data Collection, Analysis, to Model Trainings • Mar 6, 2025 • 27