---
license: mit
tags:
- arxiv:2507.18405
pipeline_tag: image-feature-extraction
library_name: transformers
---
# Iwin Transformer  

<div style="display:flex;justify-content: left">
  <a href="https://arxiv.org/abs/2507.18405"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:Iwin&color=red&logo=arxiv"></a> &ensp;
   <a href="https://github.com/Cominder/Iwin-Transformer/"><img src="https://img.shields.io/static/v1?label=Repository&message=Github&color=blue&logo=github-pages"></a> &ensp;
</div>


## Model Source
Pre-trained Iwin Transformer models on ImageNet-1k and ImageNet-22k. For research use, we recommend our GitHub repository (https://github.com/Cominder/Iwin-Transformer), which is better suited for both training and inference.

## Model Description
We introduce Iwin Transformer, a novel position-embedding-free hierarchical vision transformer that can be fine-tuned directly from low to high resolution through the collaboration of innovative interleaved window attention and depthwise separable convolution. This approach uses attention to connect distant tokens and applies convolution to link neighboring tokens, enabling global information exchange within a single module and overcoming the Swin Transformer's limitation of requiring two consecutive blocks to approximate global attention. Extensive experiments on visual benchmarks demonstrate that Iwin Transformer is highly competitive in tasks such as image classification (87.4% top-1 accuracy on ImageNet-1K), semantic segmentation, and video action recognition. We also validate the effectiveness of the core component of Iwin as a standalone module that can seamlessly replace the self-attention module in class-conditional image generation. The concepts and methods introduced by the Iwin Transformer have the potential to inspire future research, such as Iwin 3D Attention in video generation. The code and models are available at https://github.com/cominder/Iwin-Transformer.
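
The interleaved-window idea can be illustrated with a 1-D toy example (a conceptual sketch only, not the authors' implementation, which operates on 2-D feature maps): a contiguous (Swin-style) window contains only neighboring tokens, whereas an interleaved window gathers tokens strided across the whole sequence, so attention within a single window already connects distant positions.

```python
def contiguous_partition(tokens, num_windows):
    """Swin-style: window w holds a contiguous chunk of tokens."""
    size = len(tokens) // num_windows
    return [tokens[w * size:(w + 1) * size] for w in range(num_windows)]

def interleaved_partition(tokens, num_windows):
    """Iwin-style (toy): window w holds tokens w, w + num_windows, ...
    so each window's tokens are spread across the whole sequence."""
    return [tokens[w::num_windows] for w in range(num_windows)]

tokens = list(range(8))
print(contiguous_partition(tokens, 2))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(interleaved_partition(tokens, 2))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```

Attention inside each interleaved window mixes distant tokens; the depthwise separable convolution then links the immediate neighbors that the interleaved grouping separates.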

## Usage
```python
from huggingface_hub import hf_hub_download

filepath = hf_hub_download(repo_id="cominder/Iwin-Transformer", filename="iwin_base_patch4_window12_384.pth")
```
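
The downloaded checkpoint can then be inspected with plain PyTorch. A minimal sketch, assuming the `.pth` file holds a state dict (possibly nested under a `"model"` key, a common but unverified convention for this repository):

```python
import torch

def load_state_dict(filepath):
    """Load a .pth checkpoint on CPU and return its state dict."""
    # map_location="cpu" avoids requiring a GPU just to inspect weights
    ckpt = torch.load(filepath, map_location="cpu")
    # unwrap a nested {"model": ...} checkpoint if present
    if isinstance(ckpt, dict) and "model" in ckpt:
        return ckpt["model"]
    return ckpt
```

For example, `load_state_dict(filepath).keys()` lists the parameter names, which is useful for matching the checkpoint against a model definition from the GitHub repository.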


## License
The code and model weights are released under the MIT License for researchers and developers.
For commercial use, see the [License-Agreement](LICENSE-AGREEMENT) and contact [bestcallsimin@gmail.com](mailto:bestcallsimin@gmail.com).


## Citation
```bibtex
@misc{huo2025iwin,
      title={Iwin Transformer: Hierarchical Vision Transformer using Interleaved Windows}, 
      author={Simin Huo and Ning Li},
      year={2025},
      eprint={2507.18405},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.18405}, 
}
```