Right now image gen is a bit slow, but that's my computer not being good enough, not the app.
Nico Latham (NicoBBQ1)
replied to their post 4 months ago
hehe thanks
posted an update 4 months ago
What do you think of my LLM Chat app so far?
Here are some of the features already included (and more are coming):
- Chat with AI models – Local inference via Ollama
- Reasoning support – View model thinking process (DeepSeek-R1, Qwen-QwQ, etc.)
- Vision models – Analyze images with llava, bakllava, moondream
- Image generation – Local GGUF models with GPU acceleration (CUDA)
- Fullscreen images – Click generated images to view in fullscreen
- Image attachments – File picker or clipboard paste (Ctrl+V)
- DeepSearch – Web search with tool use
- Inference Stats – Token counts, speed, duration (like Ollama verbose)
- Regenerate – Re-run any AI response
- Copy – One-click copy AI responses
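For anyone curious how the Ollama side of a chat app like this can work: a single turn is just an HTTP POST to Ollama's local `/api/chat` endpoint. A minimal sketch (the model name `llama3` and the helper names here are placeholders, not the app's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model, prompt, stream=False):
    # Request body shape expected by Ollama's /api/chat endpoint.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

def chat(model, prompt):
    # Send one chat turn to a locally running Ollama server and
    # return the assistant's reply text.
    body = json.dumps(build_chat_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Requires `ollama serve` running locally with the model pulled.
    print(chat("llama3", "Say hello in one word."))
```

With `stream` set to `True`, Ollama instead returns one JSON object per line as tokens arrive, which is how a UI can show the reply (and a reasoning model's thinking) incrementally.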