---
base_model:
- kyutai/helium-1-2b
datasets:
- HuggingFaceM4/FineVision
- mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
language:
- en
license: cc-by-nc-sa-4.0
pipeline_tag: image-text-to-text
library_name: transformers
---
# Helium1-VL-2B
`Helium1-VL-2B` is an instruct-tuned vision-language model (VLM) built on the [Helium1-2B](https://huggingface.co/kyutai/helium-1-2b) text-only language model and the pretrained vision encoder from [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
This model is released as part of the **CASA** project. While the CASA architecture focuses on cross-attention fusion, `Helium1-VL-2B` serves as a high-performance **token insertion** baseline, achieving state-of-the-art results among models of comparable size trained on publicly available datasets.
- **Paper:** [CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion](https://huggingface.co/papers/2512.19535)
- **Project Page:** [https://kyutai.org/casa](https://kyutai.org/casa)
- **GitHub Repository:** [https://github.com/kyutai-labs/casa](https://github.com/kyutai-labs/casa)
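For intuition, token insertion fuses modalities by splicing projected image embeddings into the text embedding sequence at reserved placeholder positions, after which the language model processes the merged sequence with ordinary self-attention (in contrast to CASA's cross-attention fusion). The sketch below is illustrative only: the hidden size, image-token count, and placeholder convention are assumptions, not this model's actual internals.
```python
import torch

# Hypothetical shapes for illustration; the real values come from the model config/processor.
hidden_size = 2048
text_embeds = torch.randn(1, 32, hidden_size)   # embedded text tokens
image_embeds = torch.randn(1, 4, hidden_size)   # vision-encoder outputs projected to the LM width

# Positions in the text sequence reserved for the image (assumed placeholder scheme).
image_placeholder_mask = torch.zeros(1, 32, dtype=torch.bool)
image_placeholder_mask[0, 8:12] = True

# Token insertion: overwrite the placeholder positions with the image embeddings,
# then feed the merged sequence to the LM as usual.
merged = text_embeds.clone()
merged[image_placeholder_mask] = image_embeds.reshape(-1, hidden_size)
```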
## Sample Usage
You can run inference using the following code snippet. This model requires `trust_remote_code=True` to load the custom architecture.
```python
import torch
from transformers import AutoModel, AutoProcessor

model_id = "kyutai/Helium1-VL-2B"

# The custom architecture ships with the repository, so trust_remote_code is required.
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
).cuda()

processor = AutoProcessor.from_pretrained(
    model_id,
    trust_remote_code=True,
)

# A single-turn conversation with one image and one text prompt.
conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.png",
            },
            {
                "type": "text",
                "text": "Describe this image.",
            },
        ],
    },
]

# Tokenize the conversation and move the tensors to the model's device.
inputs = processor.tokenize_messages(messages=conversation)
inputs = inputs.to(model.device)
input_len = inputs["input_ids"].shape[1]

# Generate a response and keep only the newly generated tokens.
output_ids = model.generate_from_image(
    **inputs,
    max_new_tokens=512,
    pre_image_tokens=processor.pre_image_tokens,
    post_image_tokens=processor.post_image_tokens,
    eos_token_id=model.generation_config.eos_token_id,
)[0, input_len:]
response = processor.tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)
```
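If `flash-attn` is not installed in your environment, you can try loading the model with a different attention backend. Whether the custom architecture supports PyTorch's SDPA kernel is an assumption here, so fall back to `"eager"` if it does not:
```python
import torch
from transformers import AutoModel

# Assumption: the remote code accepts the standard `attn_implementation` values.
model = AutoModel.from_pretrained(
    "kyutai/Helium1-VL-2B",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # or "eager" if SDPA is not supported
    trust_remote_code=True,
).cuda()
```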
## Citation
If you use this model or the CASA fusion paradigm in your research, please cite:
```bibtex
@article{kyutai2025casa,
  author  = {Moritz B\"ohle and Am\'elie Royer and Juliette Marrie and Edouard Grave and Patrick P\'erez},
  title   = {CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion},
  journal = {arXiv preprint arXiv:2512.19535},
  year    = {2025},
  url     = {https://arxiv.org/abs/2512.19535}
}
```