TEDWB1k is derived from TED talks (https://www.ted.com), which are licensed under CC-BY-NC-ND 4.0 by TED Conferences, LLC. The dataset author claims no rights over the underlying TED content; this dataset is distributed for non-commercial academic research only and may be removed at any time at TED's request. By accessing TEDWB1k you agree to the terms below.
# TEDWB1k
Want to browse before requesting access? A public preview lives at initialneil/TEDWB1k-preview. It has the same 5 split tabs in the HF Dataset Viewer with thumbnails for all 1,431 subjects, plus full downloadable data for 12 sample subjects, with no agreement required. Use that to explore; use this (gated) repo when you want the full training data.
1,431 TED-talk speaker videos with per-frame SMPL-X + FLAME tracking, ready for 3D human / avatar research.
TEDWB1k is a tracked subset of TED talks built with the HolisticTracker (ehm-tracker) pipeline. Every subject is shot-segmented, background-matted, and fitted with whole-body SMPL-X (body + hands), FLAME (face + jaw + eyes), and per-shot whole-image keypoints. It is the dataset used to train the HolisticAvatar feed-forward Gaussian avatar model.
License: TED talks on ted.com are CC-BY-NC-ND 4.0. This dataset matches the upstream license: CC-BY-NC-ND 4.0. Non-commercial research only, attribution required, no redistribution of modified or derivative versions.
## At a glance
| Split | Subjects | Approx. download | Notes |
|---|---|---|---|
| `train_subset_x1` | 1 | ~80 MB | tiny single-subject overfit (subset of `train`) |
| `train_subset_x12` | 12 | ~1 GB | 12-subject overfit (subset of `train`) |
| `train_val` | 20 | ~2 GB | monitored during training (subset of `train`) |
| `test` | 70 | ~10 GB | identity-disjoint evaluation set |
| `train` | 1,361 | ~190 GB | full training pool |
| **total** | **1,431** | **~200 GB** | |
`train` (1,361) and `test` (70) are identity-disjoint and together cover all 1,431 subjects. `train_subset_x1`, `train_subset_x12`, and `train_val` are all subsets of `train`: `train_val` contains the 20 subjects whose frames the original training run reserved for validation monitoring (see `dataset_frames.json`); the small overfit subsets are intended for debugging.
The HF Dataset Viewer above renders one row per subject with a thumbnail of the final tracked SMPL-X overlay (track_smplx.jpg) and the per-subject frame and shot counts. Switch between the 5 split tabs to browse each subset.
## Quick start
```bash
pip install huggingface_hub

# Smallest possible test (1 subject):
python load_tedwb1k.py --split train_subset_x1 --out ./tedwb1k_x1

# 12-subject overfit set:
python load_tedwb1k.py --split train_subset_x12 --out ./tedwb1k_x12

# 20-subject training-monitor set (subset of train):
python load_tedwb1k.py --split train_val --out ./tedwb1k_train_val

# 70-subject test set:
python load_tedwb1k.py --split test --out ./tedwb1k_test

# Full training pool (1,361 subjects):
python load_tedwb1k.py --split train --out ./tedwb1k_train
```
load_tedwb1k.py is included in this repo (or grab it from ehm-tracker/release/load_tedwb1k.py). It downloads only the matching subjects, merges per-subject tracking pickles into the format HolisticAvatar's TrackedData expects, extracts frames + mattes, and writes a fresh extra_info.json with absolute paths to the user's local data dir.
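If you only want one subject's raw files for inspection, `huggingface_hub.snapshot_download` with `allow_patterns` is a lightweight alternative to the loader script; note it skips the loader's merge step, so you get the flat per-subject pickles. A minimal sketch (the helper names are illustrative; pick any subject id from the split `.txt` files):

```python
def subject_patterns(subject_id: str) -> list[str]:
    """Glob patterns matching one subject's directory in the repo."""
    return [f"subjects/{subject_id}/*"]

def download_subject(subject_id: str, out_dir: str) -> str:
    # Imported lazily so subject_patterns stays usable without the package.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id="initialneil/TEDWB1k",
        repo_type="dataset",
        local_dir=out_dir,
        allow_patterns=subject_patterns(subject_id),
    )
```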
After it finishes, point your training config at `--out`:

```yaml
DATASET:
  data_path: ./tedwb1k_test
```

…and you can train / fine-tune HolisticAvatar with zero code changes to that codebase.
## Repository layout
```
TEDWB1k/
├── README.md                 this file
├── train.txt                 1,361 subject ids (full training pool)
├── train_subset_x1.txt       1 subject id (single-subject overfit, subset of train)
├── train_subset_x12.txt      12 subject ids (small overfit, subset of train)
├── train_val.txt             20 subject ids (training monitor, subset of train)
├── test.txt                  70 subject ids (identity-disjoint evaluation)
├── dataset_frames.json       frame-level train/valid/test split used by HolisticAvatar
├── metadata/
│   ├── subjects_train.parquet    per-split subject tables w/ embedded source-frame previews (HF Viewer)
│   ├── subjects_train_subset_x1.parquet
│   ├── subjects_train_subset_x12.parquet
│   ├── subjects_train_val.parquet
│   ├── subjects_test.parquet
│   ├── subjects.csv          all rows in one CSV (programmatic use)
│   ├── skipped.txt           (empty for the public release)
│   ├── previews/<id>.jpg     1024×1024 first source frame per subject (also embedded in parquets)
│   ├── ehm/<id>.jpg          full-res SMPL-X overlay grid (final tracking stage, ~13 MB)
│   ├── flame/<id>.jpg        full-res FLAME overlay grid (intermediate stage, ~6 MB)
│   └── base/<id>.jpg         full-res PIXIE+Sapiens overlay grid (stage 1, ~4 MB)
└── subjects/<video_id>/
    ├── tracking/
    │   ├── optim_tracking_ehm.pkl   per-frame SMPL-X + FLAME parameters
    │   ├── id_share_params.pkl      per-video shape / scale / joint offsets
    │   └── videos_info.json         frame-key listing for this video
    ├── frames.tar            per-shot RGB JPGs (no audio, no video)
    └── mattes.tar            per-shot RMBG-v2 alpha mattes
```
Per-subject visualizations: each subject has 4 standalone files under `metadata/`:

- `metadata/previews/<id>.jpg`: a clean 1024×1024 source frame (the first frame of the first shot). These are what the HF Dataset Viewer renders in the `preview` column of the per-split parquets.
- `metadata/ehm/<id>.jpg`: the full-resolution SMPL-X overlay grid from the final tracking stage (large vertical contact sheet).
- `metadata/flame/<id>.jpg`: the FLAME overlay grid from the intermediate face-fitting stage.
- `metadata/base/<id>.jpg`: the stage-1 PIXIE+Sapiens overlay grid.

You can fetch a single subject's QC visualizations without downloading the heavy `frames.tar`/`mattes.tar` by hitting any of those paths directly via `huggingface_hub.hf_hub_download`.
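For example, a minimal sketch of grabbing all four QC images for one subject (the helper names here are illustrative, not part of the released loader):

```python
QC_KINDS = ("previews", "ehm", "flame", "base")

def qc_filenames(subject_id: str) -> list[str]:
    """Repo-relative paths of the four QC images for one subject."""
    return [f"metadata/{kind}/{subject_id}.jpg" for kind in QC_KINDS]

def fetch_qc(subject_id: str) -> list[str]:
    # Imported lazily: only needed when actually downloading.
    from huggingface_hub import hf_hub_download
    return [
        hf_hub_download(repo_id="initialneil/TEDWB1k",
                        repo_type="dataset",
                        filename=fn)
        for fn in qc_filenames(subject_id)
    ]
```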
Each `frames.tar` unpacks to:

```
<shot_id>/000000.jpg, 000001.jpg, ..., 0000NN.jpg
```

where `<shot_id>` is `NNNNNN_NNNNNN` encoding `start_frame_end_frame`: the inclusive keyframe indices of the shot inside the source TED talk, sampled at 0.5 fps (one keyframe every 2 seconds). For example `000015_000019` is keyframes 15..19 (5 frames, covering seconds 30..38 in the source video). The JPGs inside the directory are indexed locally per shot starting at `000000.jpg`.
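The encoding can be decoded in a few lines; a sketch assuming the 0.5 fps sampling described above (the function name is illustrative):

```python
def parse_shot_id(shot_id: str) -> dict:
    """Decode 'NNNNNN_NNNNNN' into keyframe indices and source-video seconds."""
    start_str, end_str = shot_id.split("_")
    start, end = int(start_str), int(end_str)
    return {
        "start_keyframe": start,
        "end_keyframe": end,
        "num_frames": end - start + 1,  # the keyframe range is inclusive
        "start_second": start * 2.0,    # one keyframe every 2 seconds
        "end_second": end * 2.0,
    }
```

`parse_shot_id("000015_000019")` gives keyframes 15..19, 5 frames, seconds 30..38, matching the example above.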
We do not redistribute the original `.mp4` clips or the per-shot `audio.wav` extracts; those are direct excerpts of TED's source content. Only the per-frame JPGs (plus their alpha mattes) and the SMPL-X / FLAME tracking parameters are included.
## Tracking format
Per-frame data inside `optim_tracking_ehm.pkl` (after the loader merge, keyed by `{video_id: {frame_key: ...}}`):

```python
{
    'smplx_coeffs': {
        'global_pose': (3,),        # axis-angle
        'body_pose': (21, 3),       # axis-angle per joint
        'left_hand_pose': (15, 3),
        'right_hand_pose': (15, 3),
        'exp': (50,),               # SMPL-X expression
        'body_cam': (3,),
        'camera_RT_params': (3, 4),
    },
    'flame_coeffs': {
        'pose_params': (3,),
        'jaw_params': (3,),
        'neck_pose_params': (3,),   # all zero (not optimized)
        'eye_pose_params': (6,),    # optimized
        'eyelid_params': (2,),
        'expression_params': (50,),
        'cam': (3,),
        'camera_RT_params': (3, 4),
    },
    'body_crop': {'M_o2c': (3, 3), 'M_c2o': (3, 3), ...},
    'head_crop': {'M_o2c': (3, 3), 'M_c2o': (3, 3)},
    'left_hand_crop': {...},
    'right_hand_crop': {...},
    'body_lmk_rlt': {'keypoints': (133, 2), 'scores': (133,)},
    'dwpose_raw': {'keypoints': (133, 2), 'scores': (133,), 'bbox': (4,)},
    'head_lmk_203': {...},
    'head_lmk_70': {...},
    'head_lmk_mp': {...},
    'left_mano_coeffs': {...},
    'right_mano_coeffs': {...},
}
```
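A quick sanity check of a loaded frame against the documented shapes can be sketched as follows (illustrative helper, not part of the released loader; assumes NumPy arrays or anything `np.asarray` accepts):

```python
import numpy as np

# Documented shapes of the two main per-frame coefficient groups.
EXPECTED_SHAPES = {
    "smplx_coeffs": {
        "global_pose": (3,), "body_pose": (21, 3),
        "left_hand_pose": (15, 3), "right_hand_pose": (15, 3),
        "exp": (50,), "body_cam": (3,), "camera_RT_params": (3, 4),
    },
    "flame_coeffs": {
        "pose_params": (3,), "jaw_params": (3,),
        "neck_pose_params": (3,), "eye_pose_params": (6,),
        "eyelid_params": (2,), "expression_params": (50,),
        "cam": (3,), "camera_RT_params": (3, 4),
    },
}

def check_frame(frame: dict) -> list[str]:
    """Return a list of shape mismatches (empty list = frame looks good)."""
    errors = []
    for group, fields in EXPECTED_SHAPES.items():
        for name, shape in fields.items():
            actual = np.asarray(frame[group][name]).shape
            if actual != shape:
                errors.append(f"{group}.{name}: {actual} != {shape}")
    return errors
```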
Per-video identity data inside `id_share_params.pkl` (keyed by `{video_id: ...}`):

```python
{
    'smplx_shape': (1, 200),
    'flame_shape': (1, 300),
    'left_mano_shape': (1, 10),
    'right_mano_shape': (1, 10),
    'head_scale': (1, 3),
    'hand_scale': (1, 3),
    'joints_offset': (1, 55, 3),
}
```
## Pipeline
Tracking was produced by ehm-tracker (a fork of LHM_Track) in three stages:
1. `track_base` (per-frame perception):
   - PIXIE for SMPL-X body initialization
   - Sapiens 1B for 133 whole-body keypoints
   - HaMeR for per-hand MANO regression on left/right hand crops
   - MediaPipe FaceMesh for 478-point face landmarks
   - additional 70- and 203-point face landmark models for face fitting
   - face / hand crop computation from the keypoints
2. `flame`: 2-stage FLAME optimization for face, jaw, expression, eyes, eyelids.
3. `smplx`: 2-stage whole-body SMPL-X optimization (body, hands, expression) consistent with the FLAME face fit.
Each stage produces a sanity-check overlay grid (`track_base.jpg`, `track_flame.jpg`, `track_smplx.jpg`) that you can browse via the HF Dataset Viewer thumbnail column or in the per-subject directory.
## Known issues / caveats
Please read these before training; they affect what is and isn't reliable in the data.
- `neck_pose_params` is all zero. It is not optimized by the pipeline; relying on neck rotation from FLAME will give you a static neck.
- Eyes only live in `flame_coeffs`. `smplx_coeffs` has no `eye_pose` field; `flame_coeffs.eye_pose_params` is the source of truth. The values are non-zero (range roughly `[-0.54, 0.53]`).
- Per-subject pickles are flat. If you skip the loader and read `subjects/<id>/tracking/optim_tracking_ehm.pkl` directly, the top-level keys are frame keys (e.g. `'000015_000019/000000'`), NOT video ids. The loader wraps them under `{video_id: ...}` so the merged file matches the format HolisticAvatar's `dataset/data_loader.py::TrackedData` expects.
- The `dataset_frames.json` train/valid/test split is shot-limited. During the original training run we limited val and test to the first 2 shots of each video to keep evaluation fast. The per-subject `videos_info.json` retains every shot, so the per-subject `optim_tracking_ehm.pkl` has all frames; only the merged `dataset_frames.json` is restricted.
- No videos, no audio. We do not redistribute the original TED `.mp4` clips or the per-shot `audio.wav` extracts. Only per-frame JPGs (and their alpha mattes) plus the SMPL-X / FLAME tracking parameters are shipped.
- The original frames are stored as JPG at the source resolution from `yt-dlp` of the TED talks. We did not re-encode.
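If you do read the flat per-subject pickle directly, the loader's nesting step amounts to a one-line wrap; a sketch (the function names are illustrative):

```python
import pickle

def wrap_tracking(flat: dict, video_id: str) -> dict:
    """Nest a flat frame-key dict under its video id, mimicking the
    merge step that load_tedwb1k.py performs."""
    return {video_id: flat}

def load_merged(pkl_path: str, video_id: str) -> dict:
    # Per-subject pickle: top-level keys are frame keys like
    # '000015_000019/000000', NOT video ids.
    with open(pkl_path, "rb") as f:
        return wrap_tracking(pickle.load(f), video_id)
```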
## License
CC-BY-NC-ND 4.0. Non-commercial research use only. Attribution required. No derivatives: you may not distribute modified or remixed versions of this dataset.
The tracking parameters, JPG frames, and mattes are all derived works of TED talk videos that are themselves CC-BY-NC-ND on ted.com. This dataset matches the upstream license to remain compatible with TED's source restrictions.
## Links
- Tracking pipeline: https://github.com/initialneil/HolisticTracker
- HolisticAvatar (downstream model): https://github.com/initialneil/HolisticAvatar
- HF dataset: https://huggingface.co/datasets/initialneil/TEDWB1k