Diner Burger (dinerburger)

AI & ML interests: None yet

Recent Activity:
- updated a collection about 9 hours ago
- updated a model (dinerburger/Qwen3.5-27B-GGUF) about 23 hours ago
- updated a collection 3 days ago

Organizations: None yet
Discussions:
- MXFP4 vs other 4-bit quant algos? (#3, opened 5 days ago by dinerburger)
- Tool Calling? (#1, opened 22 days ago by dinerburger)
- Ablation studies on effects of quantization on SSM weights? (#15, opened about 1 month ago by dinerburger)
- Keep ssm_ba.weight and ssm_out.weight in BF16? (#1, opened about 1 month ago by dinerburger)
- CPU-only inference broken with latest llama.cpp? (#4, opened about 2 months ago by dinerburger)
- QuIP: 2-bit quantised as good as 16-bit (#5, opened 2 months ago by infinityai)
- Thanks (#2, opened 2 months ago by dinerburger)
- Thanks bartowski for the GGUFs! (#7, opened 3 months ago by ubergarm)
- vLLM fails to serve (#2, opened 12 months ago by dinerburger)
- Thanks. (#1, opened about 1 year ago by dinerburger)
- Failing with RooCode (#1, opened about 1 year ago by minyor25)
- Suggested command fails to start with vLLM 0.8.1 (#51, opened 12 months ago by dinerburger)
- SillyTavern thinking FIX in Text Completion (#1, opened about 1 year ago by Undi95)
- Phi-4 mini does not work inside of unsloth (#1, opened about 1 year ago by Pinkstack)
- Different number of attention heads makes rotary_ndims vs rope scaling factors wrong? (#1, opened about 1 year ago by bartowski)