---
license: mit
language:
- ase
- bfi
- gsg
- sgd
- fsl
- lsf
- lse
- lis
- lgp
- ngt
- asf
- jsl
- kvk
- csl
- aed
- tsm
- pjm
- rsl
- swl
- dsl
- fse
- nsl
- lsc
- lsm
- bzs
task_categories:
- other
tags:
- sign-language
- pose-estimation
- dwpose
- multilingual
- keypoint
- video-understanding
- sign-language-generation
- sign-language-recognition
- pose-native
size_categories:
- 1M<n<10M
---

# SignVerse-2M: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages

> **NeurIPS 2026 Evaluations & Datasets Track submission.**
>
> [Project Page](https://signerx.github.io/SignVerse-2M) · [GitHub / Benchmark](https://github.com/SignerX/SignVerse-2M)

---

## Why pose-native?

Existing large-scale sign language datasets are built around video–text alignment for recognition and translation tasks. Over the past three years, the broader vision community has converged on DWPose as a de-facto control interface for pose-driven image/video generation (ControlNet, MimicMotion, Wan, etc.). Sign language research still lacked a resource that plugs directly into this paradigm. SignVerse-2M fills that gap:

| Property | SignVerse-2M | Typical video–text SL corpus |
|---|---|---|
| Representation | DWPose keypoints (body + hands + face) | Raw RGB video |
| Scale | ~2 M clips, 39 K videos | Hundreds to tens of thousands of clips |
| Languages | 25+ | 1–3 |
| Compatibility w/ modern generation | Direct | Requires re-processing |
| Style-agnostic | Background / clothing removed | Mixed in |

---

## Dataset at a glance

| Stat | Value |
|---|---|
| Videos | ~39,000 |
| Clips / segments | ~2,000,000 |
| Sign languages | 25+ |
| Pose backend | DWPose (RTMPose-based) |
| Keypoints per frame | 18 body + 21×2 hands + 68 face = 128 total |
| Frame rate | 24 FPS |
| Annotation | Automatic (no manual keypoint labeling) |
| Supervision signal | Auto-structured subtitles (segment-level + document-level) |
| Release format | Per-video `.tar` shards containing `poses.npz` + caption JSON |

---

## Languages covered

The corpus inherits the language distribution of YouTube-SL-25 and other public multilingual sign language video sources. The table below lists the major languages; a long tail of additional languages is also present.

| Code | Language | Code | Language |
|---|---|---|---|
| `ase` | American Sign Language (ASL) | `lsf` | French Sign Language (LSF) |
| `bfi` | British Sign Language (BSL) | `lse` | Spanish Sign Language (LSE) |
| `gsg` | German Sign Language (DGS) | `lis` | Italian Sign Language (LIS) |
| `sgd` | Swiss German Sign Language (DSGS) | `lgp` | Portuguese Sign Language (LGP) |
| `asf` | Australian Sign Language (Auslan) | `ngt` | Sign Language of the Netherlands (NGT) |
| `jsl` | Japanese Sign Language (JSL) | `kvk` | Korean Sign Language (KSL) |
| `csl` | Chinese Sign Language (CSL) | `bzs` | Brazilian Sign Language (Libras) |
| `lsm` | Mexican Sign Language (LSM) | `pjm` | Polish Sign Language (PJM) |

---

## Data format

Each video is packaged into numbered `.tar` shards with a per-video directory structure:

```
Sign_DWPose_NPZ_XXXXXX.tar
└── {video_id}/
    ├── poses.npz              # DWPose keypoints for all frames
    ├── caption.json           # Structured subtitles + English supervision
    └── {video_id}.complete
```

### `poses.npz` schema

```python
poses.npz = {
    "video_id": str,        # YouTube video ID
    "fps": float,           # sampling rate (24.0)
    "num_frames": int,
    "frame_ids": int[T],    # 0-indexed frame indices
    "width": int,
    "height": int,
    "frames": [             # list of T per-frame payloads
        {
            "num_people": int,
            "frame_id": int,
            "width": int,
            "height": int,
            "person_0": {
                "body":       float[18, 3],  # (x, y, score)
                "face":       float[68, 3],
                "left_hand":  float[21, 3],
                "right_hand": float[21, 3],
            },
            # "person_1": { ... },  # if multiple signers detected
        },
        ...
    ]
}
```

Keypoint coordinates are in **pixel space** (not normalized). Confidence scores are in `[0, 1]`.
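Because coordinates are stored in pixel space, most downstream models will want to normalize them first. The sketch below is a minimal example and not part of the release: it rescales one detected person's keypoints to `[0, 1]` using the per-frame `width`/`height` and NaN-masks low-confidence points. The `normalize_person` helper and the `0.3` threshold are illustrative assumptions, not pipeline defaults.

```python
import numpy as np

def normalize_person(person: dict, width: int, height: int,
                     conf_thresh: float = 0.3) -> dict:
    """Rescale pixel-space DWPose keypoints to [0, 1] and NaN-mask
    low-confidence points. `conf_thresh` is an illustrative choice."""
    out = {}
    for part in ("body", "face", "left_hand", "right_hand"):
        kps = np.asarray(person[part], dtype=np.float32).copy()  # (K, 3)
        kps[:, 0] /= width    # x -> [0, 1]
        kps[:, 1] /= height   # y -> [0, 1]
        kps[kps[:, 2] < conf_thresh, :2] = np.nan  # drop unreliable points
        out[part] = kps
    return out

# Usage on a loaded poses.npz (see "Quick start" below):
#   frame = frames[0]
#   norm = normalize_person(frame["person_0"], frame["width"], frame["height"])
```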
### `caption.json` schema

```json
{
  "video_id": "...",
  "sign_language": "ase",
  "title": "...",
  "duration_s": 312.4,
  "segments": [
    {"start": 0.0, "end": 4.2, "text": "Hello, welcome to ..."},
    ...
  ],
  "document_text": "Hello, welcome to ...",
  "english_source": "native"
}
```

`english_source` is either `"native"` or `"translated_from:<lang>"`, indicating whether the English supervision came from a native English track or an auto-selected translation.

---

## Quick start

### Load a single shard

```python
import json
import tarfile

import numpy as np

# Extract one shard
with tarfile.open("Sign_DWPose_NPZ_000001.tar") as tar:
    tar.extractall("./extracted/")

video_id = "..."  # a video ID contained in the shard

# Read poses for a video
npz = np.load(f"extracted/{video_id}/poses.npz", allow_pickle=True)
frames = npz["frames"].tolist()            # list of per-frame dicts
body_kps = frames[0]["person_0"]["body"]   # shape (18, 3) → (x, y, score)

# Read caption
with open(f"extracted/{video_id}/caption.json") as f:
    caption = json.load(f)
print(caption["segments"][0]["text"])
```

### Visualize poses

```bash
python scripts/visualize_dwpose_npz.py \
  --npz extracted/{video_id}/poses.npz \
  --style openpose \
  --out viz/
```

### Reproduce the full pipeline

```bash
# Single machine
bash reproduce_independently.sh

# SLURM cluster
bash reproduce_independently_slurm.sh
```

See `scripts/` for the three pipeline stages: (1) video download + caption structuring, (2) DWPose extraction, (3) upload to HuggingFace.

---

## Benchmark tasks and baseline

The paper defines a **text-to-pose** sign language generation task evaluated via back-translation:

> generated DWPose sequence → pose-space SL translator → spoken text → BLEU / ROUGE vs. original input

A **SignDW Transformer** baseline (40M and 1.2B parameter variants) is provided. Benchmark code and model weights are at [github.com/SignerX/SignVerse-2M](https://github.com/SignerX/SignVerse-2M).
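For reference, here is a minimal sketch of the scoring step only, assuming `sacrebleu` and `rouge_score` as metric backends (an illustrative choice; the released benchmark code is authoritative) and that a pose-space translator has already produced the back-translated `hypotheses`:

```python
import sacrebleu
from rouge_score import rouge_scorer

def back_translation_scores(hypotheses: list[str],
                            references: list[str]) -> dict:
    """Compare back-translated texts against the original input texts.

    `hypotheses` are assumed to come from a pose-space SL translator
    applied to the generated DWPose sequences.
    """
    # Corpus-level BLEU over all clips
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score

    # Mean sentence-level ROUGE-L F1
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = sum(scorer.score(ref, hyp)["rougeL"].fmeasure
                  for ref, hyp in zip(references, hypotheses)) / len(references)

    return {"BLEU": bleu, "ROUGE-L": rouge_l}
```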
---

## Limitations

- **Automatic pose extraction.** DWPose can produce missing or jittery keypoints on challenging frames (fast motion, partial occlusion, multiple signers).
- **Fine-grained hand detail.** The 21-keypoint hand model does not fully capture handshape distinctions needed for lexical discrimination.
- **Non-manual features.** Facial expressions and mouth patterns carry linguistic meaning in many sign languages; the 68-point face model is a partial proxy.
- **Language imbalance.** The corpus follows a long-tail distribution; ASL and a few other languages dominate total hours.
- **Subtitle quality.** Captions are automatically structured from platform exports; mistranslations and alignment errors propagate into the supervision signal.
- **Primary-signer assumption.** The pipeline indexes `person_0` as the primary signer; multi-signer frames may be misattributed.
- **No manual annotation.** No human-verified keypoints or signer identity metadata are included.

---

## Intended use

**In scope:**

- Multilingual sign language generation (text → pose → video)
- Pose-space sign language recognition and translation research
- Cross-lingual transfer and adaptation studies
- Compatibility and benchmarking with modern pose-driven video generation models

**Out of scope:**

- Safety-critical deployment (medical or legal sign language interpretation) without additional validation
- Re-identification of individual signers
- Definitive linguistic completeness claims for any specific sign language

---

## Responsible AI

**Personal information.** The corpus is derived from publicly posted YouTube videos. Only DWPose keypoint sequences and structured subtitle text are released; raw RGB frames are **not** distributed. Pose sequences may nonetheless allow re-identification of signers when combined with metadata.

**Bias.** Content skews toward online educational and interpreter videos; spontaneous or conversational signing is underrepresented. High-resource languages (ASL, BSL) dominate total hours.

**Consent.** Videos were publicly posted under terms that permit academic reuse. No raw video is redistributed.

---

## Citation

```bibtex
@inproceedings{fang2026signverse2m,
  title     = {{SignVerse-2M}: A Two-Million-Clip Pose-Native Universe of 25+ Sign Languages},
  author    = {Fang, Sen and Zhong, Hongbin and Zhang, Yanxin and Metaxas, Dimitris N.},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2026},
  note      = {Evaluations \& Datasets Track}
}
```

---

## License

Released under the [MIT License](LICENSE). The underlying video content remains subject to YouTube's Terms of Service and the respective creators' rights. Only pose keypoints and structured caption text are distributed.