Fabrice-TIERCELIN committed on
Commit e2935c9 · verified · 1 Parent(s): c80c226

Upload 10 files

Files changed (10)
  1. README.md +7 -8
  2. app.py +465 -464
  3. bird.webp +3 -0
  4. cat_window.webp +0 -0
  5. diffusers.zip +3 -0
  6. optimization.py +43 -122
  7. person1.webp +0 -0
  8. requirements.txt +4 -4
  9. woman1.webp +0 -0
  10. woman2.webp +0 -0
README.md CHANGED
@@ -1,13 +1,12 @@
1
  ---
2
- title: Qwen Image Edit 2511
3
- emoji: 🏆
4
- colorFrom: pink
5
- colorTo: red
6
  sdk: gradio
7
- sdk_version: 6.2.0
8
  app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
  ---
12
 
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
1
  ---
2
+ title: FLUX.2 [Klein] 4B
3
+ emoji: 💻
4
+ colorFrom: blue
5
+ colorTo: gray
6
  sdk: gradio
7
+ sdk_version: 5.29.1
8
  app_file: app.py
9
+ pinned: true
 
10
  ---
11
 
12
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py CHANGED
@@ -1,464 +1,465 @@
1
- import gradio as gr
2
- import numpy as np
3
- import random
4
- import math
5
- import time
6
- import torch
7
- import spaces
8
-
9
- from datetime import datetime
10
- import tempfile
11
- import zipfile
12
- from pathlib import Path
13
- from PIL import Image
14
- from diffusers import QwenImageEditPlusPipeline
15
-
16
- import os
17
- import base64
18
- import json
19
-
20
- SYSTEM_PROMPT = '''
21
- # Edit Prompt Enhancer
22
- You are a professional edit prompt enhancer. Your task is to generate a direct and specific edit prompt based on the user-provided instruction and the image input conditions.
23
-
24
- Please strictly follow the enhancing rules below:
25
-
26
- ## 1. General Principles
27
- - Keep the enhanced prompt **direct and specific**.
28
- - If the instruction is contradictory, vague, or unachievable, prioritize reasonable inference and correction, and supplement details when necessary.
29
- - Keep the core intention of the original instruction unchanged, only enhancing its clarity, rationality, and visual feasibility.
30
- - All added objects or modifications must align with the logic and style of the edited input image’s overall scene.
31
-
32
- ## 2. Task-Type Handling Rules
33
- ### 1. Add, Delete, Replace Tasks
34
- - If the instruction is clear (already includes task type, target entity, position, quantity, attributes), preserve the original intent and only refine the grammar.
35
- - If the description is vague, supplement with minimal but sufficient details (category, color, size, orientation, position, etc.). For example:
36
- > Original: "Add an animal"
37
- > Rewritten: "Add a light-gray cat in the bottom-right corner, sitting and facing the camera"
38
- - Remove meaningless instructions: e.g., "Add 0 objects" should be ignored or flagged as invalid.
39
- - For replacement tasks, specify "Replace Y with X" and briefly describe the key visual features of X.
40
-
41
- ### 2. Text Editing Tasks
42
- - All text content must be enclosed in English double quotes `" "`. Keep the original language of the text, and keep the capitalization.
43
- Both adding new text and replacing existing text are text replacement tasks. For example:
44
- - Replace "xx" to "yy"
45
- - Replace the mask / bounding box to "yy"
46
- - Replace the visual object to "yy"
47
- Specify text position, color, and layout only if the user has requested them.
48
- - If font is specified, keep the original language of the font.
49
-
50
- ### 3. Human (ID) Editing Tasks
51
- - Emphasize maintaining the person’s core visual consistency (ethnicity, gender, age, hairstyle, expression, outfit, etc.).
52
- - If modifying appearance (e.g., clothes, hairstyle), ensure the new element is consistent with the original style.
53
- - **For expression changes / beauty / make up changes, they must be natural and subtle, never exaggerated.**
54
- - Example:
55
- > Original: "Change the person’s hat"
56
- > Rewritten: "Replace the man’s hat with a dark brown beret; keep smile, short hair, and gray jacket unchanged"
57
-
58
- ### 4. Style Conversion or Enhancement Tasks
59
- - If a style is specified, describe it concisely using key visual features. For example:
60
- > Original: "Disco style"
61
- > Rewritten: "1970s disco style: flashing lights, disco ball, mirrored walls, colorful tones"
62
- - For style reference, analyze the original image and extract key characteristics (color, composition, texture, lighting, artistic style, etc.), integrating them into the instruction.
63
- - **Colorization tasks (including old photo restoration) must use the fixed template:**
64
- "Restore and colorize the photo."
65
- - Clearly specify the object to be modified. For example:
66
- > Original: Modify the subject in Picture 1 to match the style of Picture 2.
67
- > Rewritten: Change the girl in Picture 1 to the ink-wash style of Picture 2 — rendered in black-and-white watercolor with soft color transitions.
68
-
69
- - If there are other changes, place the style description at the end.
70
-
71
- ### 5. Content Filling Tasks
72
- - For inpainting tasks, always use the fixed template: "Perform inpainting on this image. The original caption is: ".
73
- For outpainting tasks, always use the fixed template: "Extend the image beyond its boundaries using outpainting. The original caption is: ".
74
-
75
- ### 6. Multi-Image Tasks
76
- - Rewritten prompts must clearly point out which image’s element is being modified. For example:
77
- > Original: "Replace the subject of picture 1 with the subject of picture 2"
78
- > Rewritten: "Replace the girl of picture 1 with the boy of picture 2, keeping picture 2’s background unchanged"
79
- - For stylization tasks, describe the reference image’s style in the rewritten prompt, while preserving the visual content of the source image.
80
-
81
- ## 3. Rationale and Logic Checks
82
- - Resolve contradictory instructions: e.g., "Remove all trees but keep all trees" should be logically corrected.
83
- - Add missing key information: e.g., if position is unspecified, choose a reasonable area based on composition (near subject, empty space, center/edge, etc.).
84
-
85
- # Output Format Example
86
- ```json
87
- {
88
- "Rewritten": "..."
89
- }
90
- '''
91
-
92
- DEFAULT_TRUE_GUIDANCE_SCALE = 4.0
93
- DEFAULT_NUM_INFERENCE_STEPS = 40
94
-
95
- prompt_debug_value = [None]
96
- input_images_debug_value = [None]
97
- number_debug_value = [None]
98
-
99
-
100
- def encode_image(pil_image):
101
- import io
102
- buffered = io.BytesIO()
103
- pil_image.save(buffered, format="PNG")
104
- return base64.b64encode(buffered.getvalue()).decode("utf-8")
105
-
106
- # --- Model Loading ---
107
- dtype = torch.bfloat16
108
- device = "cuda" if torch.cuda.is_available() else "cpu"
109
-
110
- # Load the model pipeline
111
- pipe = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2511", torch_dtype=dtype).to(device)
112
-
113
- # --- UI Constants and Helpers ---
114
- MAX_SEED = np.iinfo(np.int32).max
115
-
116
- # --- Main Inference Function (with hardcoded negative prompt) ---
117
- def infer(
118
- images,
119
- prompt,
120
- seed=42,
121
- randomize_seed=True,
122
- true_guidance_scale=DEFAULT_TRUE_GUIDANCE_SCALE,
123
- num_inference_steps=DEFAULT_NUM_INFERENCE_STEPS,
124
- height=None,
125
- width=None,
126
- rewrite_prompt=False,
127
- num_images_per_prompt=1,
128
- progress=gr.Progress(track_tqdm=True),
129
- ):
130
- """
131
- Generates an image using the local Qwen-Image diffusers pipeline.
132
- """
133
- # Hardcode the negative prompt as requested
134
- negative_prompt = " "
135
-
136
- # Load input images into PIL Images
137
- pil_images = []
138
- if images is not None:
139
- for item in images:
140
- try:
141
- if isinstance(item[0], Image.Image):
142
- pil_images.append(item[0].convert("RGB"))
143
- elif isinstance(item[0], str):
144
- pil_images.append(Image.open(item[0]).convert("RGB"))
145
- elif hasattr(item, "name"):
146
- pil_images.append(Image.open(item.name).convert("RGB"))
147
- except Exception:
148
- continue
149
-
150
- if height==256 and width==256:
151
- height, width = None, None
152
-
153
- if randomize_seed:
154
- seed = random.randint(0, MAX_SEED)
155
-
156
- print(f"Calling pipeline with prompt: '{prompt}'")
157
- print(f"Negative Prompt: '{negative_prompt}'")
158
- print(f"Seed: {seed}, Steps: {num_inference_steps}, Guidance: {true_guidance_scale}, Size: {width}x{height}")
159
-
160
- return infer_on_gpu(
161
- pil_images if len(pil_images) > 0 else None,
162
- prompt,
163
- negative_prompt,
164
- seed,
165
- randomize_seed,
166
- true_guidance_scale,
167
- num_inference_steps,
168
- height,
169
- width,
170
- rewrite_prompt,
171
- num_images_per_prompt,
172
- progress
173
- ), seed
174
-
175
- def get_duration(
176
- pil_images,
177
- prompt,
178
- negative_prompt,
179
- seed,
180
- randomize_seed,
181
- true_guidance_scale,
182
- num_inference_steps,
183
- height,
184
- width,
185
- rewrite_prompt,
186
- num_images_per_prompt,
187
- progress,
188
- ):
189
- return 180 + ((len(pil_images) if pil_images is not None else 0) * 60)
190
-
191
- @spaces.GPU(duration=get_duration)
192
- def infer_on_gpu(
193
- pil_images,
194
- prompt,
195
- negative_prompt,
196
- seed,
197
- randomize_seed,
198
- true_guidance_scale,
199
- num_inference_steps,
200
- height,
201
- width,
202
- rewrite_prompt,
203
- num_images_per_prompt,
204
- progress,
205
- ):
206
- # Set up the generator for reproducibility
207
- generator = torch.Generator(device=device).manual_seed(seed)
208
-
209
- # Generate the image
210
- output_images = pipe(
211
- image=pil_images,
212
- prompt=prompt,
213
- height=height,
214
- width=width,
215
- negative_prompt=negative_prompt,
216
- num_inference_steps=num_inference_steps,
217
- generator=generator,
218
- true_cfg_scale=true_guidance_scale,
219
- num_images_per_prompt=num_images_per_prompt,
220
- ).images
221
-
222
- return output_images
223
-
224
- def export_images_to_zip(gallery) -> str:
225
- """
226
- Bundle the images in the gallery into a zip file and return the file path.
227
- """
228
-
229
- tmp_zip = tempfile.NamedTemporaryFile(suffix=".zip", delete=False)
230
- tmp_zip.close()
231
-
232
- with zipfile.ZipFile(tmp_zip.name, "w", compression=zipfile.ZIP_DEFLATED) as zf:
233
- for i in range(len(gallery)):
234
- image_path = gallery[i]
235
- zf.write(image_path, arcname=os.path.basename(image_path))
236
-
237
- print(str(len(gallery)) + " images zipped")
238
- return tmp_zip.name
239
-
240
- def save_on_path(img: Image, filename: str, format_: str = None) -> Path:
241
- tmp_dir = Path(tempfile.mkdtemp(prefix="pil_tmp_"))
242
- file_path = tmp_dir / filename
243
- if isinstance(img, np.ndarray):
244
- img = Image.fromarray(img)
245
- img.save(file_path, format=format_ or img.format)
246
-
247
- return file_path
248
-
249
- def infer_example(
250
- input_images,
251
- prompt,
252
- seed=42,
253
- randomize_seed=True,
254
- true_guidance_scale=DEFAULT_TRUE_GUIDANCE_SCALE,
255
- num_inference_steps=DEFAULT_NUM_INFERENCE_STEPS,
256
- height=None,
257
- width=None,
258
- rewrite_prompt=False,
259
- num_images_per_prompt=1,
260
- ):
261
- start = time.time()
262
- number=1
263
- if prompt_debug_value[0] is not None or input_images_debug_value[0] is not None or number_debug_value[0] is not None:
264
- prompt=prompt_debug_value[0]
265
- input_images=input_images_debug_value[0]
266
- number=number_debug_value[0]
267
-
268
- gallery = []
269
- for i in range(number):
270
- try:
271
- print("Generating #" + str(i + 1) + " image...")
272
- seed = random.randint(0, MAX_SEED)
273
- [output_images, seed] = infer(
274
- input_images,
275
- prompt,
276
- seed,
277
- randomize_seed,
278
- true_guidance_scale,
279
- num_inference_steps,
280
- height,
281
- width,
282
- rewrite_prompt,
283
- num_images_per_prompt
284
- )
285
- for output_image in output_images:
286
- image_filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S.%f") + '.webp'
287
- path = save_on_path(output_image, image_filename, format_="WEBP")
288
- print("Image #" + str(i + 1) + " generated: " + str(path))
289
- gallery.append(path)
290
- except Exception as e:
291
- print('Error: ' + str(e))
292
- #raise e
293
- zip_path = export_images_to_zip(gallery)
294
- print("ZIP path: " + str(zip_path))
295
-
296
- end = time.time()
297
- secondes = int(end - start)
298
- minutes = math.floor(secondes / 60)
299
- secondes = secondes - (minutes * 60)
300
- hours = math.floor(minutes / 60)
301
- minutes = minutes - (hours * 60)
302
- information = ("Start the process again if you want a different result. " if randomize_seed else "") + \
303
- "The images have been generated in " + \
304
- ((str(hours) + " h, ") if hours != 0 else "") + \
305
- ((str(minutes) + " min, ") if hours != 0 or minutes != 0 else "") + \
306
- str(secondes) + " sec (including " + str("allocation_time") + " seconds of GPU). " + \
307
- "The images have " + str(num_images_per_prompt) + " num_images_per_prompt. " + \
308
- "The image resolution is " + str(width) + \
309
- " pixels large and " + str(height) + \
310
- " pixels high" + ((", so a resolution of " + f'{width * height:,}' + " pixels") if width is not None and height is not None else "") + "."
311
- return [seed, gallery, zip_path, information]
312
-
313
- # --- Examples and UI Layout ---
314
- examples = [
315
- [["kill_bill.jpeg"], "A brunette woman"]
316
- ]
317
-
318
- css = """
319
- #col-container {
320
- margin: 0 auto;
321
- max-width: 1024px;
322
- }
323
- #edit_text{margin-top: -62px !important}
324
- #default_examples {
325
- display:none;
326
- }
327
- """
328
-
329
- js = """
330
- function afterGeneration() {
331
- document.getElementById('download_btn').click();
332
- return 0;
333
- }
334
- """
335
-
336
- with gr.Blocks(css=css, js=js) as demo:
337
- with gr.Column(elem_id="col-container"):
338
- gr.HTML('<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" alt="Qwen-Image Logo" width="400" style="display: block; margin: 0 auto;">')
339
- gr.Markdown("[Learn more](https://github.com/QwenLM/Qwen-Image) about the Qwen-Image series. Try on [Qwen Chat](https://chat.qwen.ai/), or [download model](https://huggingface.co/Qwen/Qwen-Image-Edit) to run locally with ComfyUI or diffusers.")
340
- with gr.Row():
341
- with gr.Column():
342
- input_images = gr.Gallery(label="Input Images", show_label=False, type="pil", interactive=True)
343
-
344
- # result = gr.Image(label="Result", show_label=False, type="pil")
345
- result = gr.Gallery(label="Result", show_label=False, type="pil")
346
- with gr.Row():
347
- prompt = gr.Text(
348
- label="Prompt",
349
- show_label=False,
350
- placeholder="describe the edit instruction",
351
- container=False,
352
- )
353
- run_button = gr.Button("🚀 Edit!", variant="primary")
354
-
355
- with gr.Accordion("Advanced Settings", open=False):
356
- # Negative prompt UI element is removed here
357
-
358
- seed = gr.Slider(
359
- label="Seed",
360
- minimum=0,
361
- maximum=MAX_SEED,
362
- step=1,
363
- value=0,
364
- )
365
-
366
- randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
367
-
368
- with gr.Row():
369
-
370
- true_guidance_scale = gr.Slider(
371
- label="True guidance scale",
372
- minimum=1.0,
373
- maximum=10.0,
374
- step=0.1,
375
- value=DEFAULT_TRUE_GUIDANCE_SCALE
376
- )
377
-
378
- num_inference_steps = gr.Slider(
379
- label="Number of inference steps",
380
- minimum=1,
381
- maximum=60,
382
- step=1,
383
- value=DEFAULT_NUM_INFERENCE_STEPS
384
- )
385
-
386
- height = gr.Slider(
387
- label="Height",
388
- minimum=256,
389
- maximum=2048,
390
- step=8,
391
- value=1024,
392
- )
393
-
394
- width = gr.Slider(
395
- label="Width",
396
- minimum=256,
397
- maximum=2048,
398
- step=8,
399
- value=1024,
400
- )
401
-
402
-
403
- rewrite_prompt = gr.Checkbox(label="Rewrite prompt", value=True)
404
-
405
- with gr.Row(elem_id="default_examples"):
406
- prompt_debug = gr.Text(
407
- max_lines=2,
408
- container=False,
409
- scale=3
410
- )
411
- input_images_debug = gr.Gallery(
412
- label="Input Image(s)",
413
- type="pil",
414
- columns=3,
415
- rows=1,
416
- )
417
- number_debug=gr.Slider(minimum=1, maximum=50, step=1, value=50)
418
- download_button = gr.DownloadButton(elem_id="download_btn", interactive = True)
419
- info_debug = gr.HTML(value = "")
420
- gr.Examples(
421
- examples=examples,
422
- inputs=[input_images, prompt],
423
- outputs=[seed, result, download_button, info_debug],
424
- fn=infer_example,
425
- cache_examples=True
426
- )
427
-
428
- def handle_field_debug_change(prompt_debug_data, input_images_debug_data, number_debug_data):
429
- prompt_debug_value[0] = prompt_debug_data
430
- input_images_debug_value[0] = input_images_debug_data
431
- number_debug_value[0] = number_debug_data
432
- return []
433
-
434
- inputs_debug=[prompt_debug, input_images_debug, number_debug]
435
-
436
- prompt_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
437
- input_images_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
438
- number_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
439
-
440
- gr.on(
441
- triggers=[run_button.click, prompt.submit],
442
- fn=infer,
443
- inputs=[
444
- input_images,
445
- prompt,
446
- seed,
447
- randomize_seed,
448
- true_guidance_scale,
449
- num_inference_steps,
450
- height,
451
- width,
452
- rewrite_prompt,
453
- ],
454
- outputs=[result, seed],
455
- )
456
- result.change(
457
- fn=lambda text : [],
458
- inputs=[prompt],
459
- outputs=[],
460
- js="afterGeneration()"
461
- )
462
-
463
- if __name__ == "__main__":
464
- demo.launch(mcp_server=True, share=True)
 
 
1
+ import os
2
+ from datetime import datetime
3
+ import tempfile
4
+ import zipfile
5
+ from pathlib import Path
6
+ import subprocess
7
+ import sys
8
+ import io
9
+ import gradio as gr
10
+ import numpy as np
11
+ import random
12
+ import spaces
13
+ import torch
14
+ from diffusers import Flux2KleinPipeline
15
+ import requests
16
+ from PIL import Image
17
+ import json
18
+ import base64
19
+ from huggingface_hub import InferenceClient
20
+
21
+ dtype = torch.bfloat16
22
+ device = "cuda" if torch.cuda.is_available() else "cpu"
23
+
24
+ MAX_SEED = np.iinfo(np.int32).max
25
+ MAX_IMAGE_SIZE = 1024
26
+
27
+ hf_client = InferenceClient(
28
+ api_key=os.environ.get("HF_TOKEN"),
29
+ )
30
+ VLM_MODEL = "baidu/ERNIE-4.5-VL-424B-A47B-Base-PT"
31
+
32
+ SYSTEM_PROMPT_TEXT_ONLY = """You are an expert prompt engineer for FLUX.2 by Black Forest Labs. Rewrite user prompts to be more descriptive while strictly preserving their core subject and intent.
33
+
34
+ Guidelines:
35
+ 1. Structure: Keep structured inputs structured (enhance within fields). Convert natural language to detailed paragraphs.
36
+ 2. Details: Add concrete visual specifics - form, scale, textures, materials, lighting (quality, direction, color), shadows, spatial relationships, and environmental context.
37
+ 3. Text in Images: Put ALL text in quotation marks, matching the prompt's language. Always provide explicit quoted text for objects that would contain text in reality (signs, labels, screens, etc.) - without it, the model generates gibberish.
38
+
39
+ Output only the revised prompt and nothing else."""
40
+
41
+ SYSTEM_PROMPT_WITH_IMAGES = """You are FLUX.2 by Black Forest Labs, an image-editing expert. You convert editing requests into one concise instruction (50-80 words, ~30 for brief requests).
42
+
43
+ Rules:
44
+ - Single instruction only, no commentary
45
+ - Use clear, analytical language (avoid "whimsical," "cascading," etc.)
46
+ - Specify what changes AND what stays the same (face, lighting, composition)
47
+ - Reference actual image elements
48
+ - Turn negatives into positives ("don't change X" → "keep X")
49
+ - Make abstractions concrete ("futuristic" → "glowing cyan neon, metallic panels")
50
+ - Keep content PG-13
51
+
52
+ Output only the final instruction in plain text and nothing else."""
53
+
54
+ # Model repository IDs for 4B
55
+ REPO_ID_REGULAR = "black-forest-labs/FLUX.2-klein-base-4B"
56
+ REPO_ID_DISTILLED = "black-forest-labs/FLUX.2-klein-4B"
57
+
58
+ # Load both 4B models
59
+ print("Loading 4B Regular model...")
60
+ pipe_regular = Flux2KleinPipeline.from_pretrained(REPO_ID_REGULAR, torch_dtype=dtype)
61
+ pipe_regular.to("cuda")
62
+
63
+ print("Loading 4B Distilled model...")
64
+ pipe_distilled = Flux2KleinPipeline.from_pretrained(REPO_ID_DISTILLED, torch_dtype=dtype)
65
+ pipe_distilled.to("cuda")
66
+
67
+ # Dictionary for easy access
68
+ pipes = {
69
+ "Distilled (4 steps)": pipe_distilled,
70
+ "Base (50 steps)": pipe_regular,
71
+ }
72
+
73
+ # Default steps for each mode
74
+ DEFAULT_STEPS = {
75
+ "Distilled (4 steps)": 4,
76
+ "Base (50 steps)": 50,
77
+ }
78
+
79
+ # Default CFG for each mode
80
+ DEFAULT_CFG = {
81
+ "Distilled (4 steps)": 1.0,
82
+ "Base (50 steps)": 4.0,
83
+ }
84
+
85
+ prompt_debug_value = [None]
86
+ input_images_debug_value = [None]
87
+ number_debug_value = [None]
88
+
89
+ def image_to_data_uri(img):
90
+ buffered = io.BytesIO()
91
+ img.save(buffered, format="PNG")
92
+ img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
93
+ return f"data:image/png;base64,{img_str}"
94
+
95
+
96
+ def upsample_prompt_logic(prompt, image_list):
97
+ try:
98
+ if image_list and len(image_list) > 0:
99
+ # Image + Text Editing Mode
100
+ system_content = SYSTEM_PROMPT_WITH_IMAGES
101
+
102
+ # Construct user message with text and images
103
+ user_content = [{"type": "text", "text": prompt}]
104
+
105
+ for img in image_list:
106
+ data_uri = image_to_data_uri(img)
107
+ user_content.append({
108
+ "type": "image_url",
109
+ "image_url": {"url": data_uri}
110
+ })
111
+
112
+ messages = [
113
+ {"role": "system", "content": system_content},
114
+ {"role": "user", "content": user_content}
115
+ ]
116
+ else:
117
+ # Text Only Mode
118
+ system_content = SYSTEM_PROMPT_TEXT_ONLY
119
+ messages = [
120
+ {"role": "system", "content": system_content},
121
+ {"role": "user", "content": prompt}
122
+ ]
123
+
124
+ completion = hf_client.chat.completions.create(
125
+ model=VLM_MODEL,
126
+ messages=messages,
127
+ max_tokens=1024
128
+ )
129
+
130
+ return completion.choices[0].message.content
131
+ except Exception as e:
132
+ print(f"Upsampling failed: {e}")
133
+ return prompt
134
+
135
+
136
+ def update_dimensions_from_image(image_list):
137
+ """Update width/height sliders based on uploaded image aspect ratio.
138
+ Keeps one side at 1024 and scales the other proportionally, with both sides as multiples of 8."""
139
+ if image_list is None or len(image_list) == 0:
140
+ return 1024, 1024 # Default dimensions
141
+
142
+ # Get the first image to determine dimensions
143
+ img = image_list[0][0] # Gallery returns list of tuples (image, caption)
144
+ img_width, img_height = img.size
145
+
146
+ aspect_ratio = img_width / img_height
147
+
148
+ if aspect_ratio >= 1: # Landscape or square
149
+ new_width = 1024
150
+ new_height = int(1024 / aspect_ratio)
151
+ else: # Portrait
152
+ new_height = 1024
153
+ new_width = int(1024 * aspect_ratio)
154
+
155
+ # Round to nearest multiple of 8
156
+ new_width = round(new_width / 8) * 8
157
+ new_height = round(new_height / 8) * 8
158
+
159
+ # Ensure within valid range (minimum 256, maximum 1024)
160
+ new_width = max(256, min(1024, new_width))
161
+ new_height = max(256, min(1024, new_height))
162
+
163
+ return new_width, new_height
164
+
165
+
166
+ def update_steps_from_mode(mode_choice):
167
+ """Update the number of inference steps based on the selected mode."""
168
+ return DEFAULT_STEPS[mode_choice], DEFAULT_CFG[mode_choice]
169
+
170
+
171
+ @spaces.GPU(duration=85)
172
+ def infer(prompt, input_images=None, mode_choice="Distilled (4 steps)", seed=42, randomize_seed=False, width=1024, height=1024, num_inference_steps=4, guidance_scale=4.0, prompt_upsampling=False, progress=gr.Progress(track_tqdm=True)):
173
+
174
+ if randomize_seed:
175
+ seed = random.randint(0, MAX_SEED)
176
+
177
+ # Select the appropriate pipeline based on mode choice
178
+ pipe = pipes[mode_choice]
179
+
180
+ # Prepare image list (convert None or empty gallery to None)
181
+ image_list = None
182
+ if input_images is not None and len(input_images) > 0:
183
+ image_list = []
184
+ for item in input_images:
185
+ image_list.append(item[0])
186
+
187
+ # 1. Upsampling (Network bound)
188
+ final_prompt = prompt
189
+ if prompt_upsampling:
190
+ progress(0.1, desc="Upsampling prompt...")
191
+ final_prompt = upsample_prompt_logic(prompt, image_list)
192
+ print(f"Original Prompt: {prompt}")
193
+ print(f"Upsampled Prompt: {final_prompt}")
194
+
195
+ # 2. Image Generation
196
+ progress(0.2, desc=f"Generating image with 4B {mode_choice}...")
197
+
198
+ generator = torch.Generator(device=device).manual_seed(seed)
199
+
200
+ pipe_kwargs = {
201
+ "prompt": final_prompt,
202
+ "height": height,
203
+ "width": width,
204
+ "num_inference_steps": num_inference_steps,
205
+ "guidance_scale": guidance_scale,
206
+ "generator": generator,
207
+ }
208
+
209
+ # Add images if provided
210
+ if image_list is not None:
211
+ pipe_kwargs["image"] = image_list
212
+
213
+ image = pipe(**pipe_kwargs).images[0]
214
+
215
+ return image, seed
216
+
217
+ def export_images_to_zip(gallery) -> str:
218
+ """
219
+ Bundle the images in the gallery into a zip file and return the file path.
220
+ """
221
+
222
+ tmp_zip = tempfile.NamedTemporaryFile(suffix=".zip", delete=False)
223
+ tmp_zip.close()
224
+
225
+ with zipfile.ZipFile(tmp_zip.name, "w", compression=zipfile.ZIP_DEFLATED) as zf:
226
+ for i in range(len(gallery)):
227
+ image_path = gallery[i]
228
+ zf.write(image_path, arcname=os.path.basename(image_path))
229
+
230
+ print(str(len(gallery)) + " images zipped")
231
+ return tmp_zip.name
232
+
233
+ def save_on_path(img: Image, filename: str, format_: str = None) -> Path:
234
+ """
235
+ Save `img` in a unique temporary folder under the given `filename`
236
+ and return its absolute path.
237
+ """
238
+ # 1) unique temporary folder
239
+ tmp_dir = Path(tempfile.mkdtemp(prefix="pil_tmp_"))
240
+
241
+ # 2) full path of the future file
242
+ file_path = tmp_dir / filename
243
+
244
+ # 3) save
245
+ img.save(file_path, format=format_ or img.format)
246
+
247
+ return file_path
248
+
249
+ def infer_example(prompt, input_images=None, mode_choice="Distilled (4 steps)", seed=42, randomize_seed=False, width=1024, height=1024, num_inference_steps=4, guidance_scale=4.0, prompt_upsampling=False):
250
+ number=1
251
+ if prompt_debug_value[0] is not None or input_images_debug_value[0] is not None or number_debug_value[0] is not None:
252
+ prompt=prompt_debug_value[0]
253
+ input_images=input_images_debug_value[0]
254
+ number=number_debug_value[0]
255
+
256
+ gallery = []
257
+ for i in range(number):
258
+ try:
259
+ print("Generating #" + str(i + 1) + " image...")
260
+ seed = random.randint(0, MAX_SEED)
261
+ [image, seed] = infer(prompt, input_images, mode_choice, seed, True, width, height, num_inference_steps, guidance_scale, prompt_upsampling)
262
+ image_filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S.%f") + '.webp'
263
+ path = save_on_path(image, image_filename, format_="WEBP")
264
+ gallery.append(path)
265
+ except Exception as e:
266
+ print('Error: ' + str(e))
267
+ #raise e
268
+ zip_path = export_images_to_zip(gallery)
269
+ return [seed, zip_path, "Done!"]
270
+
271
+
272
+ examples = [
273
+ ["Create a vase on a table in living room, the color of the vase is a gradient of color, starting with #02eb3c color and finishing with #edfa3c. The flowers inside the vase have the color #ff0088"],
274
+ ["Photorealistic infographic showing the complete Berlin TV Tower (Fernsehturm) from ground base to antenna tip, full vertical view with entire structure visible including concrete shaft, metallic sphere, and antenna spire. Slight upward perspective angle looking up toward the iconic sphere, perfectly centered on clean white background. Left side labels with thin horizontal connector lines: the text '368m' in extra large bold dark grey numerals (#2D3748) positioned at exactly the antenna tip with 'TOTAL HEIGHT' in small caps below. The text '207m' in extra large bold with 'TELECAFÉ' in small caps below, with connector line touching the sphere precisely at the window level. Right side label with horizontal connector line touching the sphere's equator: the text '32m' in extra large bold dark grey numerals with 'SPHERE DIAMETER' in small caps below. Bottom section arranged in three balanced columns: Left - Large text '986' in extra bold dark grey with 'STEPS' in caps below. Center - 'BERLIN TV TOWER' in bold caps with 'FERNSEHTURM' in lighter weight below. Right - 'INAUGURATED' in bold caps with 'OCTOBER 3, 1969' below. All typography in modern sans-serif font (such as Inter or Helvetica), color #2D3748, clean minimal technical diagram style. Horizontal connector lines are thin, precise, and clearly visible, touching the tower structure at exact corresponding measurement points. Professional architectural elevation drawing aesthetic with dynamic low angle perspective creating sense of height and grandeur, poster-ready infographic design with perfect visual hierarchy."],
275
+ ["Soaking wet capybara taking shelter under a banana leaf in the rainy jungle, close up photo"],
276
+ ["A kawaii die-cut sticker of a chubby orange cat, featuring big sparkly eyes and a happy smile with paws raised in greeting and a heart-shaped pink nose. The design should have smooth rounded lines with black outlines and soft gradient shading with pink cheeks."],
277
+ ]
278
+
279
+ examples_images = [
280
+ ["The person from image 1 is petting the cat from image 2, the bird from image 3 is next to them", ["woman1.webp", "cat_window.webp", "bird.webp"]]
281
+ ]
282
+
283
+ css = """
284
+ #col-container {
285
+ margin: 0 auto;
286
+ max-width: 1200px;
287
+ }
288
+ .gallery-container img{
289
+ object-fit: contain;
290
+ }
291
+ """
292
+
293
+ with gr.Blocks(css=css) as demo:
294
+
295
+ with gr.Column(elem_id="col-container"):
296
+ gr.Markdown(f"""# FLUX.2 [Klein] - 4B (Apache 2.0)
297
+ FLUX.2 [klein] is a unified image generation and editing model designed for fast inference [[model](https://huggingface.co/black-forest-labs/FLUX.2-klein-4B)], [[blog](https://bfl.ai/blog/flux-2)]
298
+ """)
299
+ with gr.Row():
300
+ with gr.Column():
301
+ with gr.Row():
302
+ prompt = gr.Text(
303
+ label="Prompt",
304
+ show_label=False,
305
+ max_lines=2,
306
+ placeholder="Enter your prompt",
307
+ container=False,
308
+ scale=3
309
+ )
310
+
311
+ run_button = gr.Button("Run", scale=1)
312
+
313
+ with gr.Accordion("Input image(s) (optional)", open=False):
314
+ input_images = gr.Gallery(
315
+ label="Input Image(s)",
316
+ type="pil",
317
+ columns=3,
318
+ rows=1,
319
+ )
320
+ input_image = gr.Image(label="Upload the image for editing", type="pil")
321
+
322
+ mode_choice = gr.Radio(
323
+ label="Mode",
324
+ choices=["Distilled (4 steps)", "Base (50 steps)"],
325
+ value="Distilled (4 steps)",
326
+ )
327
+
328
+ with gr.Accordion("Advanced Settings", open=False):
329
+
330
+ prompt_upsampling = gr.Checkbox(
331
+ label="Prompt Upsampling",
332
+ value=False,
333
+ info="Automatically enhance the prompt using a VLM"
334
+ )
335
+
336
+ seed = gr.Slider(
337
+ label="Seed",
338
+ minimum=0,
339
+ maximum=MAX_SEED,
340
+ step=1,
341
+ value=0,
342
+ )
343
+
344
+ randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
345
+
346
+ with gr.Row():
347
+
348
+ width = gr.Slider(
349
+ label="Width",
350
+ minimum=256,
351
+ maximum=MAX_IMAGE_SIZE,
352
+ step=8,
353
+ value=1024,
354
+ )
355
+
356
+ height = gr.Slider(
357
+ label="Height",
358
+ minimum=256,
359
+ maximum=MAX_IMAGE_SIZE,
360
+ step=8,
361
+ value=1024,
362
+ )
363
+
364
+ with gr.Row():
365
+
366
+ num_inference_steps = gr.Slider(
367
+ label="Number of inference steps",
368
+ minimum=1,
369
+ maximum=100,
370
+ step=1,
371
+ value=4,
372
+ )
373
+
374
+ guidance_scale = gr.Slider(
375
+ label="Guidance scale",
376
+ minimum=0.0,
377
+ maximum=10.0,
378
+ step=0.1,
379
+ value=1.0,
380
+ )
381
+
382
+
383
+ with gr.Column():
384
+ result = gr.Image(label="Result", show_label=False)
385
+
386
+
387
+ gr.Examples(
388
+ examples=examples,
389
+ fn=infer,
390
+ inputs=[prompt],
391
+ outputs=[result, seed],
392
+ cache_examples=True,
393
+ cache_mode="lazy"
394
+ )
395
+
396
+ gr.Examples(
397
+ examples=examples_images,
398
+ fn=infer,
399
+ inputs=[prompt, input_images],
400
+ outputs=[result, seed],
401
+ cache_examples=True,
402
+ cache_mode="lazy"
403
+ )
404
+
405
+ # Auto-update dimensions when images are uploaded
406
+ input_images.upload(
407
+ fn=update_dimensions_from_image,
408
+ inputs=[input_images],
409
+ outputs=[width, height]
410
+ )
411
+
412
+ # Auto-update steps when mode changes
413
+ mode_choice.change(
414
+ fn=update_steps_from_mode,
415
+ inputs=[mode_choice],
416
+ outputs=[num_inference_steps, guidance_scale]
417
+ )
418
+
419
+ gr.on(
420
+ triggers=[run_button.click, prompt.submit],
421
+ fn=infer,
422
+ inputs=[prompt, input_images, mode_choice, seed, randomize_seed, width, height, num_inference_steps, guidance_scale, prompt_upsampling],
423
+ outputs=[result, seed]
424
+ )
425
+
426
+ with gr.Row(visible=False):
427
+ download_button = gr.DownloadButton(elem_id="download_btn", interactive = True)
428
+ info_debug = gr.HTML(value = "")
429
+ prompt_debug = gr.Text(
430
+ max_lines=2,
431
+ container=False,
432
+ scale=3
433
+ )
434
+ input_images_debug = gr.Gallery(
435
+ label="Input Image(s)",
436
+ type="pil",
437
+ columns=3,
438
+ rows=1,
439
+ )
440
+ gr.Examples(
441
+ examples=[
442
+ ["A dog", "woman1.webp"]
443
+ ],
444
+ fn=infer_example,
445
+ inputs=[prompt, input_image],
446
+ outputs=[seed, download_button, info_debug],
447
+ run_on_click=True,
448
+ cache_examples=True,
449
+ cache_mode='lazy'
450
+ )
451
+ number_debug=gr.Slider(minimum=1, maximum=50, step=1, value=50)
452
+
453
+ def handle_field_debug_change(prompt_debug_data, input_images_debug_data, number_debug_data):
454
+ prompt_debug_value[0] = prompt_debug_data
455
+ input_images_debug_value[0] = input_images_debug_data
456
+ number_debug_value[0] = number_debug_data
457
+ return []
458
+
459
+ inputs_debug=[prompt_debug, input_images_debug, number_debug]
460
+
461
+ prompt_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
462
+ input_images_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
463
+ number_debug.change(fn=handle_field_debug_change, inputs=inputs_debug, outputs=[])
464
+
465
+ demo.launch()
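
For reference, a minimal standalone sketch of the pipeline call the new app.py makes (outside Gradio). It assumes the `Flux2KleinPipeline` class imported above is available in the pinned diffusers build and that a CUDA device is present; the argument values mirror the distilled-mode defaults defined in this diff and the prompt is one of the listed examples.

```python
import torch
from diffusers import Flux2KleinPipeline

# Distilled 4B checkpoint used by the Space (REPO_ID_DISTILLED above).
pipe = Flux2KleinPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4B",
    torch_dtype=torch.bfloat16,
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    prompt="Soaking wet capybara taking shelter under a banana leaf in the rainy jungle, close up photo",
    height=1024,
    width=1024,
    num_inference_steps=4,   # distilled-mode default (DEFAULT_STEPS)
    guidance_scale=1.0,      # distilled-mode default (DEFAULT_CFG)
    generator=generator,
).images[0]
image.save("capybara.webp")
```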
bird.webp ADDED

Git LFS Details

  • SHA256: b9728196fd7c7a90cba78764fa66e909fb1bce298307f312e61b833545afe6f4
  • Pointer size: 131 Bytes
  • Size of remote file: 208 kB
cat_window.webp ADDED
diffusers.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f21e689d4807674000c1da8d3a084f7d521e961ee26ebd77e7a55a8efc3b95d
3
+ size 5269929
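
The three lines above are a Git LFS pointer: `oid` is the SHA-256 of the actual `diffusers.zip` and `size` is its byte count. A minimal standard-library sketch for checking a locally downloaded copy against such a pointer; the local path is an assumption.

```python
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the local file matches the oid/size recorded in a Git LFS pointer."""
    data = Path(path).read_bytes()
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == expected_oid

# oid and size copied from the pointer above; "diffusers.zip" is assumed to exist locally.
print(matches_lfs_pointer(
    "diffusers.zip",
    "4f21e689d4807674000c1da8d3a084f7d521e961ee26ebd77e7a55a8efc3b95d",
    5269929,
))
```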
optimization.py CHANGED
@@ -4,37 +4,34 @@
4
  from typing import Any
5
  from typing import Callable
6
  from typing import ParamSpec
7
-
8
- import os
9
  import spaces
10
  import torch
11
- from torch.utils._pytree import tree_map_only
12
- from torchao.quantization import quantize_
13
- from torchao.quantization import Float8DynamicActivationFloat8WeightConfig
14
- from torchao.quantization import Int8WeightOnlyConfig
15
- from huggingface_hub import hf_hub_download
16
-
17
- from optimization_utils import capture_component_call
18
- from optimization_utils import aoti_compile
19
- from optimization_utils import drain_module_parameters
20
- from optimization_utils import ZeroGPUCompiledModelFromDict # NEW
21
-
22
 
23
  P = ParamSpec('P')
24
 
25
- # Expose compiled models so app.py can offer them for download
26
- COMPILED_TRANSFORMER_1 = None
27
- COMPILED_TRANSFORMER_2 = None
28
-
29
- LATENT_FRAMES_DIM = torch.export.Dim('num_latent_frames', min=8, max=81)
30
- LATENT_PATCHED_HEIGHT_DIM = torch.export.Dim('latent_patched_height', min=30, max=52)
31
- LATENT_PATCHED_WIDTH_DIM = torch.export.Dim('latent_patched_width', min=30, max=52)
32
 
33
  TRANSFORMER_DYNAMIC_SHAPES = {
34
- 'hidden_states': {
35
- 2: LATENT_FRAMES_DIM,
36
- 3: 2 * LATENT_PATCHED_HEIGHT_DIM,
37
- 4: 2 * LATENT_PATCHED_WIDTH_DIM,
 
 
 
 
 
 
 
 
 
 
 
 
 
38
  },
39
  }
40
 
@@ -47,110 +44,34 @@ INDUCTOR_CONFIGS = {
47
  'triton.cudagraphs': True,
48
  }
49
 
50
-
51
- def load_compiled_transformers_from_hub(
52
- repo_id: str,
53
- filename_1: str = "compiled_transformer_1.pt",
54
- filename_2: str = "compiled_transformer_2.pt",
55
- device: str = "cuda",
56
- ):
57
- """
58
- Loads the payload dicts (created via ZeroGPUCompiledModel.to_serializable_dict() and torch.save)
59
- and rebuilds callable models that will move constants to CUDA on first call.
60
- """
61
- path_1 = hf_hub_download(repo_id=repo_id, filename=filename_1)
62
- path_2 = hf_hub_download(repo_id=repo_id, filename=filename_2)
63
-
64
- payload_1 = torch.load(path_1, map_location="cpu", weights_only=False)
65
- payload_2 = torch.load(path_2, map_location="cpu", weights_only=False)
66
-
67
- if not isinstance(payload_1, dict) or not isinstance(payload_2, dict):
68
- raise TypeError("Precompiled files are not payload dicts. Please re-export them with to_serializable_dict().")
69
-
70
- compiled_1 = ZeroGPUCompiledModelFromDict(payload_1, device=device)
71
- compiled_2 = ZeroGPUCompiledModelFromDict(payload_2, device=device)
72
- return compiled_1, compiled_2
73
-
74
-
75
- def _strtobool(v: str | None, default: bool = True) -> bool:
76
- if v is None:
77
- return default
78
- return v.strip().lower() in ("1", "true", "yes", "y", "on")
79
-
80
-
81
  def optimize_pipeline_(pipeline: Callable[P, Any], *args: P.args, **kwargs: P.kwargs):
82
- global COMPILED_TRANSFORMER_1, COMPILED_TRANSFORMER_2
83
 
84
- @spaces.GPU(duration=1500)
85
- def compile_transformer():
86
- pipeline.load_lora_weights(
87
- "Kijai/WanVideo_comfy",
88
- weight_name="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors",
89
- adapter_name="lightx2v",
90
- )
91
- kwargs_lora = {"load_into_transformer_2": True}
92
- pipeline.load_lora_weights(
93
- "Kijai/WanVideo_comfy",
94
- weight_name="Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors",
95
- adapter_name="lightx2v_2",
96
- **kwargs_lora,
97
- )
98
- pipeline.set_adapters(["lightx2v", "lightx2v_2"], adapter_weights=[1.0, 1.0])
99
- pipeline.fuse_lora(adapter_names=["lightx2v"], lora_scale=3.0, components=["transformer"])
100
- pipeline.fuse_lora(adapter_names=["lightx2v_2"], lora_scale=1.0, components=["transformer_2"])
101
- pipeline.unload_lora_weights()
102
 
103
- with capture_component_call(pipeline, "transformer") as call:
 
 
 
104
  pipeline(*args, **kwargs)
105
 
106
- dynamic_shapes = tree_map_only((torch.Tensor, bool), lambda t: None, call.kwargs)
107
- dynamic_shapes |= TRANSFORMER_DYNAMIC_SHAPES
108
-
109
- quantize_(pipeline.transformer, Float8DynamicActivationFloat8WeightConfig())
110
- quantize_(pipeline.transformer_2, Float8DynamicActivationFloat8WeightConfig())
111
-
112
- exported_1 = torch.export.export(
113
- mod=pipeline.transformer,
114
- args=call.args,
115
- kwargs=call.kwargs,
116
- dynamic_shapes=dynamic_shapes,
117
- )
118
- exported_2 = torch.export.export(
119
- mod=pipeline.transformer_2,
120
- args=call.args,
121
- kwargs=call.kwargs,
122
- dynamic_shapes=dynamic_shapes,
123
- )
124
 
125
- compiled_1 = aoti_compile(exported_1, INDUCTOR_CONFIGS)
126
- compiled_2 = aoti_compile(exported_2, INDUCTOR_CONFIGS)
127
- return compiled_1, compiled_2
128
-
129
- # Quantize text encoder
130
- quantize_(pipeline.text_encoder, Int8WeightOnlyConfig())
131
-
132
- use_precompiled = False
133
- precompiled_repo = os.getenv("WAN_PRECOMPILED_REPO", "Fabrice-TIERCELIN/Wan_2.2_compiled")
134
-
135
- if use_precompiled:
136
- try:
137
- compiled_transformer_1, compiled_transformer_2 = load_compiled_transformers_from_hub(
138
- repo_id=precompiled_repo,
139
- device="cuda",
140
  )
141
- except Exception as e:
142
- # fallback if payload format is wrong / outdated
143
- print(f"[WARN] Failed to load precompiled artifacts ({e}). Falling back to GPU compilation.")
144
- compiled_transformer_1, compiled_transformer_2 = compile_transformer()
145
- else:
146
- compiled_transformer_1, compiled_transformer_2 = compile_transformer()
147
-
148
- # expose for downloads
149
- COMPILED_TRANSFORMER_1 = compiled_transformer_1
150
- COMPILED_TRANSFORMER_2 = compiled_transformer_2
151
 
152
- pipeline.transformer.forward = compiled_transformer_1
153
- drain_module_parameters(pipeline.transformer)
154
 
155
- pipeline.transformer_2.forward = compiled_transformer_2
156
- drain_module_parameters(pipeline.transformer_2)
 
 
 
 
4
  from typing import Any
5
  from typing import Callable
6
  from typing import ParamSpec
 
 
7
  import spaces
8
  import torch
9
+ from spaces.zero.torch.aoti import ZeroGPUCompiledModel
10
+ from spaces.zero.torch.aoti import ZeroGPUWeights
11
+ from torch.utils._pytree import tree_map
 
 
 
 
 
 
 
 
12
 
13
  P = ParamSpec('P')
14
 
15
+ TRANSFORMER_IMAGE_DIM = torch.export.Dim('image_seq_length', min=4096, max=16384) # min: 0 images, max: 3 (1024x1024) images
 
 
 
 
 
 
16
 
17
  TRANSFORMER_DYNAMIC_SHAPES = {
18
+ 'double': {
19
+ 'hidden_states': {
20
+ 1: TRANSFORMER_IMAGE_DIM,
21
+ },
22
+ 'image_rotary_emb': (
23
+ {0: TRANSFORMER_IMAGE_DIM + 512},
24
+ {0: TRANSFORMER_IMAGE_DIM + 512},
25
+ ),
26
+ },
27
+ 'single': {
28
+ 'hidden_states': {
29
+ 1: TRANSFORMER_IMAGE_DIM + 512,
30
+ },
31
+ 'image_rotary_emb': (
32
+ {0: TRANSFORMER_IMAGE_DIM + 512},
33
+ {0: TRANSFORMER_IMAGE_DIM + 512},
34
+ ),
35
  },
36
  }
37
 
 
44
  'triton.cudagraphs': True,
45
  }
46
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
47
  def optimize_pipeline_(pipeline: Callable[P, Any], *args: P.args, **kwargs: P.kwargs):
 
48
 
49
+ blocks = {
50
+ 'double': pipeline.transformer.transformer_blocks,
51
+ 'single': pipeline.transformer.single_transformer_blocks,
52
+ }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
53
 
54
+ @spaces.GPU(duration=1200)
55
+ def compile_block(blocks_kind: str):
56
+ block = blocks[blocks_kind][0]
57
+ with spaces.aoti_capture(block) as call:
58
  pipeline(*args, **kwargs)
59
 
60
+ dynamic_shapes = tree_map(lambda t: None, call.kwargs)
61
+ dynamic_shapes |= TRANSFORMER_DYNAMIC_SHAPES[blocks_kind]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
62
 
63
+ with torch.no_grad():
64
+ exported = torch.export.export(
65
+ mod=block,
66
+ args=call.args,
67
+ kwargs=call.kwargs,
68
+ dynamic_shapes=dynamic_shapes,
 
 
 
 
 
 
 
 
 
69
  )
 
 
 
 
 
 
 
 
 
 
70
 
71
+ return spaces.aoti_compile(exported, INDUCTOR_CONFIGS).archive_file
 
72
 
73
+ for blocks_kind in ('double', 'single'):
74
+ archive_file = compile_block(blocks_kind)
75
+ for block in blocks[blocks_kind]:
76
+ weights = ZeroGPUWeights(block.state_dict())
77
+ block.forward = ZeroGPUCompiledModel(archive_file, weights)
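
The rewritten `optimize_pipeline_` ahead-of-time compiles one `double` block and one `single` block, then reuses each compiled archive for every block of that kind. A hedged sketch of how it could be wired into app.py before `demo.launch()`; the call below is illustrative only, since this commit's app.py does not invoke it, and the warm-up arguments are assumptions.

```python
# Hypothetical wiring (not part of this commit's app.py):
from optimization import optimize_pipeline_

# One warm-up call lets spaces.aoti_capture record a double block and a single
# block, which are then exported and AoT-compiled; every block of each kind is
# afterwards routed through the shared compiled forward.
optimize_pipeline_(
    pipe_distilled,            # the distilled pipeline loaded in app.py
    prompt="warm-up prompt",   # illustrative arguments only
    height=1024,
    width=1024,
    num_inference_steps=1,
    guidance_scale=1.0,
)
```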
person1.webp ADDED
requirements.txt CHANGED
@@ -1,8 +1,8 @@
1
- git+https://github.com/huggingface/diffusers.git@973a077c6a4e7e7a7ea61a84bedd29ac24fb609a
2
  transformers
3
  accelerate
4
  safetensors
5
- sentencepiece
6
- dashscope
7
  kernels
8
- torchvision
 
1
+ git+https://github.com/huggingface/diffusers.git
2
  transformers
3
  accelerate
4
  safetensors
5
+ bitsandbytes
6
+ torchao
7
  kernels
8
+ spaces==0.43.0
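
A small sketch for sanity-checking the updated environment at startup, assuming the packages listed above are installed; `importlib.metadata` is standard library and the package names are taken directly from the new requirements.txt.

```python
from importlib.metadata import PackageNotFoundError, version

# Package names mirror the new requirements.txt entries.
for pkg in ("diffusers", "transformers", "accelerate", "safetensors",
            "bitsandbytes", "torchao", "kernels", "spaces"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```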
woman1.webp ADDED
woman2.webp ADDED