EgoExOR-HQ: An Ego-Exo-Centric Operating Room Dataset for Surgical Activity Understanding
EgoExOR-HQ — This repository hosts the enriched high-quality release of the EgoExOR dataset. For scene graph generation code, benchmarks, and pretrained models, see the main EgoExOR repository.
Authors: Ege Özsoy, Arda Mamur, Felix Tristram, Chantal Pellegrini, Magdalena Wysocki, Benjamin Busam, Nassir Navab
✨ What's New in EgoExOR-HQ
This release adds:
- High-quality images — 1344×1344 resolution (instead of 336×336)
- Raw depth images — From external RGB-D cameras (instead of pre-merged point clouds), so you can build merged or per-camera point clouds for your use case
- Per-device audio — Separate audio streams per microphone (a minimal access sketch follows this list)
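The exact microphone group names under audio/per_device are not listed here, so a robust way to consume them is to iterate whatever keys are present. A minimal sketch, assuming the take-level layout described in the HDF5 schema section further below; the take path used here is illustrative, not a guaranteed group name:

```python
import h5py

# Illustrative take path; the real group names follow the schema described below.
take = "procedures/miss/phases/1/takes/0"
with h5py.File("miss_1.h5", "r") as f:
    per_device = f[f"{take}/audio/per_device"]
    for mic in per_device:                         # one subgroup per microphone
        print(mic, list(per_device[mic].keys()))   # per-microphone waveform/snippets datasets
```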
Overview
Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets provide either partial egocentric views or sparse exocentric multi-view context, but none combines the two comprehensively.
We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures—Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery—EgoExOR integrates:
- Egocentric: RGB, gaze, hand tracking, audio from wearable glasses
- Exocentric: RGB and depth from RGB-D cameras, ultrasound imagery
- Annotations: 36 entities, 22 relations (568,235 triplets) for scene graph generation
EgoExOR sets a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical applications.
🌟 Key Features
- Multiple modalities — RGB video, audio (full waveform + per-frame snippets, per-device), eye gaze, hand tracking, raw depth, and scene graph annotations
- Time-synchronized streams — All modalities aligned on a common timeline for precise cross-modal correlation
- High-resolution RGB — 1344×1344 frames for fine-grained visual analysis
- Raw depth — Build custom point clouds or depth-based models; depth is provided by the external RGB-D cameras only (see the back-projection sketch after this list)
- Per-device audio — Separate microphone streams for spatial or multi-channel audio processing
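Because only raw depth is shipped, point clouds are built on the consumer side. The following is a minimal back-projection sketch using a standard pinhole model; the intrinsics (fx, fy, cx, cy), the image size, and the depth scale are placeholder assumptions rather than values taken from this dataset, so substitute your cameras' calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1.0):
    """Back-project an (H, W) depth image into an (N, 3) point cloud in the
    camera frame using a pinhole model; zero-depth pixels are dropped
    (e.g. the zero-filled entries of non-external cameras)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy call with a synthetic depth map; in practice `depth` comes from one frame
# and one external camera of /point_cloud/depth/values.
depth = np.random.uniform(0.5, 3.0, size=(480, 640)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (N, 3)
```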
📂 Dataset Structure
The dataset is distributed as phase-level HDF5 files for efficient download:
| File | Description |
|---|---|
| `miss_1.h5` | MISS procedure, phase 1 |
| `miss_2.h5` | MISS procedure, phase 2 |
| `miss_3.h5` | MISS procedure, phase 3 |
| `miss_4.h5` | MISS procedure, phase 4 |
To obtain a single merged file (including splits), use the merge utility from the main EgoExOR repository (see data/README.md).
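Individual phase files can also be fetched without cloning the whole repository, for example with huggingface_hub. This is a minimal sketch; the repository ID below is a placeholder assumption, so replace it with this dataset's actual ID on the Hub.

```python
from huggingface_hub import hf_hub_download

# Download a single phase-level HDF5 file from the Hub.
# NOTE: "your-org/EgoExOR-HQ" is a placeholder repo ID, not the real one.
path = hf_hub_download(
    repo_id="your-org/EgoExOR-HQ",
    filename="miss_1.h5",
    repo_type="dataset",
)
print(path)  # local cache path to miss_1.h5
```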
HDF5 Schema
```
/metadata
  /vocabulary/entity          — Entity names and IDs (instruments, anatomy, etc.)
  /vocabulary/relation        — Relation names and IDs (holding, cutting, etc.)
  /sources/sources            — Camera/source names and IDs (head_surgeon, external_1, etc.)
  /dataset                    — version, creation_date, title
/procedures/{procedure}/phases/{phase}/takes/{take}/
  /sources                    — source_count, source_0, source_1, … (camera roles)
  /frames/rgb                 — (num_frames, num_cameras, H, W, 3) uint8 — 1344×1344
  /eye_gaze/coordinates       — (num_frames, num_ego_cameras, 3) float32 — 2D gaze + camera ID
  /eye_gaze_depth/values      — (num_frames, num_ego_cameras) float32
  /hand_tracking/positions    — (num_frames, num_ego_cameras, 17) float32
  /audio/waveform             — Full stereo waveform
  /audio/snippets             — 1-second snippets aligned to frames
  /audio/per_device/          — Per-microphone waveform and snippets
  /point_cloud/depth/values   — Raw depth images (external cameras; others zero-filled)
  /point_cloud/merged/        — Not populated; use raw depth to build point clouds yourself
  /annotations/               — Scene graph annotations (frame_idx, rel_annotations, scene_graph)
/splits
  train, validation, test     — Split tables (procedure, phase, take, frame_id)
```
Note: Camera/source IDs in eye_gaze/coordinates map to metadata/sources for correct source names.
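A minimal h5py walkthrough of the layout above. The procedure/phase/take group names are discovered at runtime rather than hard-coded, since their exact keys are not spelled out here; only the dataset paths from the schema are assumed.

```python
import h5py

with h5py.File("miss_1.h5", "r") as f:
    # Discover one take instead of hard-coding group names.
    proc = next(iter(f["procedures"]))
    phase = next(iter(f[f"procedures/{proc}/phases"]))
    take = next(iter(f[f"procedures/{proc}/phases/{phase}/takes"]))
    base = f"procedures/{proc}/phases/{phase}/takes/{take}"

    rgb = f[f"{base}/frames/rgb"]              # (num_frames, num_cameras, 1344, 1344, 3) uint8
    gaze = f[f"{base}/eye_gaze/coordinates"]   # (num_frames, num_ego_cameras, 3) float32
    depth = f[f"{base}/point_cloud/depth/values"]

    # HDF5 datasets are lazy: indexing reads only the requested slice.
    frame0_cam0 = rgb[0, 0]                    # one 1344x1344x3 image
    gaze0 = gaze[0]                            # 2D gaze + camera ID per ego camera
    print(rgb.shape, gaze.shape, depth.shape, frame0_cam0.dtype)
```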
⚙️ Efficiency and Usability
- HDF5 — Hierarchical structure, partial loading, gzip compression
- Chunking — Efficient access to frame ranges for sequence-based training (see the slicing sketch after this list)
- Logical layout — procedures → phases → takes → modality for easy navigation
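Because the frame datasets are chunked, a contiguous clip can be sliced out without reading the whole recording. A small sketch, reusing the same illustrative take path as above:

```python
import h5py

# Illustrative take path; substitute the groups present in your file.
take = "procedures/miss/phases/1/takes/0"
with h5py.File("miss_1.h5", "r") as f:
    clip = f[f"{take}/frames/rgb"][100:116, 0]       # 16 frames from camera 0
    snippets = f[f"{take}/audio/snippets"][100:116]  # the matching 1-second audio snippets
print(clip.shape)  # (16, 1344, 1344, 3)
```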
📜 License
Released under the Apache 2.0 License. Free for academic and commercial use with attribution.
📚 Citation
```bibtex
@misc{özsoy2025egoexoregoexocentricoperatingroom,
  title={EgoExOR: An Ego-Exo-Centric Operating Room Dataset for Surgical Activity Understanding},
  author={Ege Özsoy and Arda Mamur and Felix Tristram and Chantal Pellegrini and Magdalena Wysocki and Benjamin Busam and Nassir Navab},
  year={2025},
  eprint={2505.24287},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.24287},
}
```
🔗 Related Resources
- Original EgoExOR (v1) — ardamamur/EgoExOR — 336×336 images, pre-merged point clouds, merged audio
- Code, benchmarks, pretrained model — github.com/ardamamur/EgoExOR