UI-Venus-1.5-8B-qx86-hi-mlx

Brainwaves

          arc    arc/e  boolq  hswag  obkqa  piqa   wino
qx86-hi   0.586  0.777  0.876  0.732  0.478  0.796  0.702
qx64-hi   0.573  0.752  0.870  0.724  0.468  0.794  0.714

Qwen3-VLTO-8B-Instruct
qx86x-hi  0.455  0.601  0.878  0.546  0.424  0.739  0.595

Qwen3-VLTO-8B-Thinking
qx86x-hi  0.475  0.599  0.706  0.638  0.402  0.765  0.684
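To summarize the gap between the two UI-Venus quants in one number, the seven per-task scores can simply be averaged. A quick sketch using the rows above (plain Python, no dependencies):

```python
# Per-task scores copied from the table above
scores = {
    "qx86-hi": [0.586, 0.777, 0.876, 0.732, 0.478, 0.796, 0.702],
    "qx64-hi": [0.573, 0.752, 0.870, 0.724, 0.468, 0.794, 0.714],
}

# Mean score per quant: qx86-hi averages ~0.707, qx64-hi ~0.699
for name, vals in scores.items():
    print(f"{name}: {sum(vals) / len(vals):.3f}")
```

The averages land close together (roughly 0.707 vs 0.699), with qx86-hi ahead on most tasks but qx64-hi taking winogrande.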

The closest comparison I could find was the text model extracted from the VLTO model.

-G

Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and tokenizer from the Hub
model, tokenizer = load("nightmedia/UI-Venus-1.5-8B-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
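For context on what `apply_chat_template` produces here: Qwen-family models use ChatML-style markup, wrapping each message in `<|im_start|>`/`<|im_end|>` tokens and appending an assistant header when `add_generation_prompt=True`. A minimal illustrative re-implementation (for understanding only; use the tokenizer's own template in practice):

```python
def chatml_prompt(messages, add_generation_prompt=True):
    # Hypothetical sketch of a ChatML-style chat template, as used by
    # Qwen-family models. Not the tokenizer's actual implementation.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn for the model to complete
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(chatml_prompt([{"role": "user", "content": "hello"}]))
```

This prints the user turn wrapped in ChatML tokens followed by an open `<|im_start|>assistant` header, which is the text the model actually completes.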
Model tree for nightmedia/UI-Venus-1.5-8B-qx86-hi-mlx

Quantized
(6)
this model