dawidope committed on
Commit 44aaef5 · verified · 1 Parent(s): 6179f42

Initial upload of prebuilt wheels
.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ace_step-1.6.0-py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
+block_sparse_attn-0.0.2-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
+flash_attn-2.8.2+cu128torch2.8-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
+flash_attn-2.8.3+cu128torch2.8-cp311-cp311-linux_x86_64.whl filter=lfs diff=lfs merge=lfs -text
+flash_attn-2.8.3+cu130torch2.10-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
+q8_kernels-0.0.5-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,50 @@
+---
+license: other
+tags:
+- wheels
+- cuda
+- pytorch
+- windows
+- linux
+---
+
+# image-server-wheels
+
+Prebuilt Python wheels used by the [image-server](https://github.com/) install scripts
+(`install.ps1` on Windows, `install.sh` on Linux). Hosted here because some wheels
+exceed GitHub's 100 MB per-file limit.
+
+All wheels target **Python 3.11**. The install scripts fall back to this repo
+automatically when the wheel is not found locally in `scripts/publish/whl/`.
+
+## Contents
+
+| File | OS | CUDA | Torch | Source | Notes |
+|---|---|---|---|---|---|
+| `ace_step-1.6.0-py3-none-any.whl` | any | — | — | built by us | Pure-Python, cross-platform |
+| `block_sparse_attn-0.0.2-cp311-cp311-win_amd64.whl` | Windows x64 | 12.8 | 2.8 | built by us | Used by video pipeline |
+| `q8_kernels-0.0.5-cp311-cp311-win_amd64.whl` | Windows x64 | 12.8 | 2.8 | built by us | Used by LTX video |
+| `flash_attn-2.8.2+cu128torch2.8-cp311-cp311-win_amd64.whl` | Windows x64 | 12.8 | 2.8 | [mjun0812/flash-attention-prebuild-wheels](https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/tag/v0.4.10) | Mirror of upstream release |
+| `flash_attn-2.8.3+cu130torch2.10-cp311-cp311-win_amd64.whl` | Windows x64 | 13.0 | 2.10 | [mjun0812/flash-attention-prebuild-wheels](https://github.com/mjun0812/flash-attention-prebuild-wheels) | Mirror of upstream release |
+| `flash_attn-2.8.3+cu128torch2.8-cp311-cp311-linux_x86_64.whl` | Linux x86_64 | 12.8 | 2.8 | [mjun0812/flash-attention-prebuild-wheels](https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/tag/v0.7.16) | Mirror of upstream release |
+
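The OS and Python columns above can be read straight off each filename, since wheel names follow PEP 427 (`name-version-pythontag-abitag-platformtag.whl`). A small illustrative helper, not part of the install scripts:

```shell
# Illustrative only: extract the platform tag (the last dash-separated
# field before .whl) from a wheel filename.
wheel_platform() {
  stem=${1%.whl}
  echo "${stem##*-}"
}

wheel_platform flash_attn-2.8.3+cu128torch2.8-cp311-cp311-linux_x86_64.whl
# -> linux_x86_64
```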
+## Direct install
+
+```bash
+BASE=https://huggingface.co/deAPI-ai/image-server-wheels/resolve/main
+
+# Windows
+pip install $BASE/q8_kernels-0.0.5-cp311-cp311-win_amd64.whl
+pip install $BASE/block_sparse_attn-0.0.2-cp311-cp311-win_amd64.whl
+pip install $BASE/flash_attn-2.8.2+cu128torch2.8-cp311-cp311-win_amd64.whl
+pip install --no-deps $BASE/ace_step-1.6.0-py3-none-any.whl
+
+# Linux
+pip install --no-deps $BASE/flash_attn-2.8.3+cu128torch2.8-cp311-cp311-linux_x86_64.whl
+```
+
+## Credits
+
+`flash_attn` wheels are mirrored from [mjun0812/flash-attention-prebuild-wheels](https://github.com/mjun0812/flash-attention-prebuild-wheels); all credit for those builds goes to the upstream author. We mirror them here so the install scripts have a single source of truth and do not break if upstream release URLs change.
+
+The remaining wheels (`ace_step`, `block_sparse_attn`, `q8_kernels`) were built in-house.
ace_step-1.6.0-py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6b08dc35287e4bb6b519bedbb8de6f184bd283cd4c26713c5d9d732eb6c14ef
+size 2477912
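Each `.whl` in this repo is stored as a Git LFS object, and the pointer's `oid sha256:` line records the digest of the actual wheel, so a download can be verified against it. A minimal sketch, assuming `sha256sum` is available:

```shell
# Verify that a downloaded file matches the sha256 recorded in its
# Git LFS pointer (pass the hex digest without the "sha256:" prefix).
verify_lfs_oid() {
  actual=$(sha256sum "$1" | cut -d' ' -f1)
  [ "$actual" = "$2" ]
}

# e.g.: verify_lfs_oid ace_step-1.6.0-py3-none-any.whl \
#         c6b08dc35287e4bb6b519bedbb8de6f184bd283cd4c26713c5d9d732eb6c14ef
```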
block_sparse_attn-0.0.2-cp311-cp311-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5518b9a92c53ff7540b0a091f2d35e4cf717fef4aec319599328856e0f0f3408
+size 181538163
flash_attn-2.8.2+cu128torch2.8-cp311-cp311-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe9b1b901ea3b60c10dac362a86d490d6385bdce0714b1be7311df7708c5e16f
+size 250764791
flash_attn-2.8.3+cu128torch2.8-cp311-cp311-linux_x86_64.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3139fa0a11a517057336890376a00ae55ab6e87278c02a68387c24626e597bd9
+size 253476553
flash_attn-2.8.3+cu130torch2.10-cp311-cp311-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3fb6b6981d601b0c8d2f33ba2d4c69b8954490cbc152b9d8cdc2b6f9616f5a2
+size 239527638
q8_kernels-0.0.5-cp311-cp311-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95401ac0874c6b924ee0a6e6a7b3aa2cbfff260bd47ea4f0e0d92f887ae29f81
+size 24962066