ameroyer and nielsr (HF Staff) committed · Commit 59204a1 · verified · 1 Parent(s): eb26251

Improve model card: add library name, links, and sample usage (#1)


- Improve model card: add library name, links, and sample usage (763aa63d7bf2b72f564feabdbe2c23bb33bfc452)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +84 -6
README.md CHANGED
@@ -1,14 +1,92 @@
 ---
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-VL-3B-Instruct
-pipeline_tag: image-text-to-text
-license: cc-by-nc-sa-4.0
 datasets:
 - HuggingFaceM4/FineVision
 - mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
+language:
+- en
+license: cc-by-nc-sa-4.0
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
-Please refer to the [main model card](https://huggingface.co/kyutai/CASA-Helium1-VL-2B) for more information and instructions to run.
 
-This model page contains model weights for `CASA-Qwen2_5-VL-3B`, a Qwen-2.5VL model adapted from token insertion to cross-attention based using CASA layers. We provide model weights for other CASA models in the associated model collection.
+# CASA-Qwen2_5-VL-3B
+
+This repository contains the model weights for **CASA-Qwen2_5-VL-3B**, introduced in the paper [CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion](https://huggingface.co/papers/2512.19535).
+
+CASA is a vision-language fusion paradigm that improves on cross-attention while preserving its scalability. This model is a [Qwen-2.5VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) model adapted from token insertion to a cross-attention-based architecture using CASA layers.
+
+- **Paper:** [CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion](https://arxiv.org/abs/2512.19535)
+- **Project Page:** [kyutai.org/casa](https://kyutai.org/casa)
+- **Code:** [github.com/kyutai-labs/casa](https://github.com/kyutai-labs/casa)
+
+## Sample Usage
+
+This model requires `trust_remote_code=True` to load the custom architecture. Below is a snippet to run inference using `transformers`.
+
+```python
+import torch
+from transformers.models.auto.modeling_auto import AutoModel
+from transformers.models.auto.processing_auto import AutoProcessor
+
+model_id = "kyutai/CASA-Qwen2_5-VL-3B"
+model = AutoModel.from_pretrained(
+    model_id,
+    torch_dtype=torch.bfloat16,
+    attn_implementation="flash_attention_2",
+    trust_remote_code=True,
+).cuda()
+
+processor = AutoProcessor.from_pretrained(
+    model_id,
+    trust_remote_code=True,
+)
+
+conversation = [
+    {
+        "role": "user",
+        "content": [
+            {
+                "type": "image",
+                "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.png",
+            },
+            {
+                "type": "text",
+                "text": "Describe this image.",
+            },
+        ],
+    },
+]
+
+inputs = processor.tokenize_messages(messages=conversation)
+inputs = inputs.to(model.device)
+input_len = inputs["input_ids"].shape[1]
+
+output_ids = model.generate_from_image(
+    **inputs,
+    max_new_tokens=512,
+    pre_image_tokens=processor.pre_image_tokens,
+    post_image_tokens=processor.post_image_tokens,
+    eos_token_id=model.generation_config.eos_token_id,
+)[0, input_len:]
+
+response = processor.tokenizer.decode(output_ids, skip_special_tokens=True)
+print(response)
+```
+
+## Citation
+
+```bibtex
+@article{kyutai2025casa,
+  author  = {Moritz B\"ohle and Am\'elie Royer and Juliette Marrie and Edouard Grave and Patrick P\'erez},
+  year    = {2025},
+  title   = {CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion},
+  journal = {ArXiv},
+  url     = {https://arxiv.org/abs/2512.19535}
+}
+```
+
+## License
+
+The code in the official repository is provided under the **MIT license**. The weights for this model are released under the **CC-BY-NC-SA 4.0 license**. Additionally, as this model includes weights from Qwen2.5-VL-3B, it is subject to the [Qwen RESEARCH LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE).
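The new model card above describes adapting Qwen2.5-VL from token-insertion fusion to a cross-attention-based design built from CASA layers. As rough background only, the sketch below contrasts those two generic fusion styles in PyTorch; it is not the CASA mechanism itself (that lives in this repository's custom code), and every module name, shape, and hyperparameter in it is hypothetical.

```python
# Illustrative contrast between the two fusion styles named in the model card.
# NOT the CASA implementation; all names and shapes are hypothetical.
import torch
import torch.nn as nn


def token_insertion(text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """Token insertion: splice image embeddings into the text sequence, so the
    language model's self-attention runs over T_img + T_txt tokens."""
    return torch.cat([image_emb, text_emb], dim=1)  # (B, T_img + T_txt, D)


class CrossAttentionFusion(nn.Module):
    """Cross-attention fusion: text tokens query a fixed set of image embeddings
    in a dedicated layer, so the text sequence length stays T_txt."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=text_emb, key=image_emb, value=image_emb)
        return text_emb + fused  # residual connection; shape (B, T_txt, D)


if __name__ == "__main__":
    B, T_txt, T_img, D = 2, 16, 64, 128
    text_emb = torch.randn(B, T_txt, D)
    image_emb = torch.randn(B, T_img, D)

    print(token_insertion(text_emb, image_emb).shape)           # torch.Size([2, 80, 128])
    print(CrossAttentionFusion(D)(text_emb, image_emb).shape)   # torch.Size([2, 16, 128])
```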