TL;DR
This repo provides usage guidance and all data samples for our proposed dataset, ReD Bench, a new image drag-editing dataset supporting both region-based and point-based operations.
Introduction
We introduce Region-based Dragging (ReD) Bench, a benchmark of 120 sample images annotated with precise drag instructions at both the point and region levels. Each manipulation in the dataset carries an intention label: relocation, deformation, or rotation.
For every image, we provide two complementary instruction sets corresponding to point-based and region-based dragging. The region-based annotations are supplied as multiple PNG masks, with each region uniquely represented by its centroid for cross-reference. The drag annotations include multiple start-to-target point pairs, which can be directly aligned with the region annotations, ensuring consistency in task intention. Additionally, we provide background prompts and editing intention prompts for each image to facilitate multimodal tasks, along with masks generated using the DragFlow automatic masker. More details can be found in our technical paper.
Each sample follows the storage structure below; a description of each file appears on the right-hand side.
```
sample
├── temp                   # (Optional) Extra information for operation masks:
│   ├── mask_orig__0.png   # Before-affine-transformation operation mask;
│   ├── mask_proc__0.png   # After-affine-transformation operation mask;
│   ├── mask_overlap.png   # Demo with both before- and after-affine-transformation operation masks;
│   └── user_editing.png   # Demo with both region-based and point-based user operations;
├── instruction.json       # Operation guidance (coordinates, labels, and text prompts);
├── mask.png               # Mask for the background;
├── operation.png          # Mask for the operation regions;
├── original_image.png     # The given raw image;
├── user_dragging.png      # Demo for the point-based user operations;
└── user_operation.png     # Demo for the region-based user operations;
```
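Before loading, it can help to confirm that each sample folder is complete. The helper below is a minimal sketch of such a check (the `verify_sample` name is ours, not part of the dataset tooling); the optional `temp/` folder is not checked:

```python
import os

# Required files per sample, following the storage structure above;
# `temp/` is optional and therefore omitted.
REQUIRED_FILES = [
    "instruction.json",
    "mask.png",
    "operation.png",
    "original_image.png",
    "user_dragging.png",
    "user_operation.png",
]


def verify_sample(folder_path):
    """Return the list of required files missing from a sample folder."""
    return [f for f in REQUIRED_FILES
            if not os.path.isfile(os.path.join(folder_path, f))]
```

An empty return value means the sample folder contains every expected file.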
Instructions
This section provides sample code for loading the dataset. First, clone the dataset repo into your local folder ./datasets:
```
git lfs install
cd ./datasets
git clone https://huggingface.co/datasets/Edennnnn/ReD_Bench
```
Then use the following Python script to load the dataset, passing the dataset directory path (./datasets/ReD_Bench in this example):
```python
import json
import os
from pathlib import Path

import cv2
import numpy as np
import torch
from einops import rearrange
from PIL import Image


def _get_independent_regions(instruction, operation_region_path):
    # Split the operation mask into connected regions, one per external contour.
    operation_region = cv2.imread(operation_region_path, cv2.IMREAD_GRAYSCALE)
    _, binary_mask = cv2.threshold(operation_region, 1, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    region_info = []
    for contour in contours:
        region_mask = np.zeros_like(operation_region)
        cv2.drawContours(region_mask, [contour], -1, 255, thickness=cv2.FILLED)
        region_info.append({
            "contour": contour,
            "tensor": torch.tensor(region_mask),
        })

    # Match each annotated centroid to the region that contains it.
    for key in instruction["region_operations"]:
        pt = tuple(instruction["region_operations"][key]["centroids"][0])
        for region in region_info:
            if cv2.pointPolygonTest(region["contour"], pt, False) >= 0:
                region_tensor = region["tensor"].float() / 255.0
                region_tensor[region_tensor > 0.0] = 1.0
                region_tensor = rearrange(region_tensor, "h w -> 1 1 h w")
                instruction["region_operations"][key]["region"] = region_tensor
                break
    return instruction


def load_data(folder_path):
    raw_image_path = os.path.join(folder_path, 'original_image.png')
    mask_path = os.path.join(folder_path, 'mask.png')
    operation_region_path = os.path.join(folder_path, 'operation.png')
    instruction_path = os.path.join(folder_path, 'instruction.json')

    image = Image.open(raw_image_path).convert('RGB')
    mask = Image.open(mask_path).convert('L')
    with open(instruction_path, 'r') as f:
        instruction = json.load(f)

    # Operation regions are extracted and appended to the instruction,
    # following the existing records.
    instruction = _get_independent_regions(instruction=instruction,
                                           operation_region_path=operation_region_path)
    print("\t> Sample Loaded.")
    return image, mask, instruction


if __name__ == '__main__':
    # TODO: replace dataset_path with your dataset path if needed; default: `./datasets/ReD_Bench`.
    dataset_path = Path("./datasets/ReD_Bench")
    folders = sorted([f for f in dataset_path.iterdir() if f.is_dir()],
                     key=lambda f: int(f.name[1:]))
    for folder in folders:
        print(f"\n>> Loading `{folder}`...")
        image, mask, instruction = load_data(folder_path=folder)
```
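The region tensors appended by `_get_independent_regions` have shape `(1, 1, H, W)` with values in {0, 1}. As one way to consume them, the self-contained sketch below computes a region's bounding box; the `region_bbox` helper and the synthetic mask are our own illustration, not part of the dataset tooling:

```python
import torch


def region_bbox(region_tensor):
    """Bounding box (top, left, bottom, right) of a (1, 1, H, W) binary mask."""
    mask = region_tensor[0, 0] > 0.5
    ys, xs = torch.nonzero(mask, as_tuple=True)
    return (ys.min().item(), xs.min().item(),
            ys.max().item() + 1, xs.max().item() + 1)


# Synthetic 1x1x8x8 mask with a 3x4 foreground block, standing in for a
# real `instruction["region_operations"][key]["region"]` tensor.
demo = torch.zeros(1, 1, 8, 8)
demo[0, 0, 2:5, 1:5] = 1.0
print(region_bbox(demo))  # (2, 1, 5, 5)
```

The same helper can be applied directly to the tensors attached by the loader, e.g. to crop the dragged region from `original_image.png`.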
Citation
If you find our work useful in your research, please consider citing our paper:
```
@article{zhou2025dragflow,
  title={DragFlow: Unleashing DiT Priors with Region-Based Supervision for Drag Editing},
  author={Zhou, Zihan and Lu, Shilin and Leng, Shuli and Zhang, Shaocong and Lian, Zhuming and Yu, Xinlei and Kong, Adams Wai-Kin},
  journal={arXiv preprint arXiv:2510.02253},
  year={2025}
}
```