Add paper link and citation, and update code snippets

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +28 -146
README.md CHANGED
@@ -8,9 +8,11 @@ configs:
  - config_name: default
  data_files: FlattenFold/base/data/chunk-000/episode_000000.parquet
  ---
- # KAI0
- <div align="center">
- <a href="">
  <img src="https://img.shields.io/badge/GitHub-grey?logo=GitHub" alt="GitHub Badge">
  </a>
  <a href="https://huggingface.co/OpenDriveLab-org/Kai0">
@@ -19,8 +21,13 @@ configs:
  <a href="https://mmlab.hk/research/kai0">
  <img src="https://img.shields.io/badge/Research_Blog-grey?style=flat" alt="Research Blog Badge">
  </a>
  </div>

  # TODO
  - [ ] The advantage label will be coming soon.
@@ -34,7 +41,7 @@ configs:
  - [License and Citation](#license-and-citation)

  ## [About the Dataset](#contents)
- - **~134 hours** real world scenarios
  - **Main Tasks**
  - ***FlattenFold***
  - Single task
@@ -63,6 +70,7 @@ configs:
  ## [Load the dataset](#contents)
  - This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
  - The dataset's version is LeRobotDataset v2.1
  ### For LeRobot version < 0.4.0
  Choose the appropriate import based on your version:
@@ -79,7 +87,7 @@ from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
  from lerobot.datasets.lerobot_dataset import LeRobotDataset

  # Load the dataset
- dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
  ```

  ### For LeRobot version >= 0.4.0
@@ -87,7 +95,7 @@ dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
  You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate the dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)

  ```bash
- python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
  ```

  ## [Download the Dataset](#contents)
@@ -141,42 +149,16 @@ hf download OpenDriveLab-org/kai0 \

  ### [Folder hierarchy](#contents)
  Under each task directory, data is partitioned into two subsets: base and dagger.
- - base
- contains
- original demonstration trajectories of robotic arm manipulation for garment arrangement tasks.
- - dagger
- contains on-policy recovery trajectories collected via iterative DAgger, designed to populate failure recovery modes absent in static demonstrations.
  ```text
  Kai0-data/
  ├── FlattenFold/
  │ ├── base/
  │ │ ├── data/
- │ │ │ ├── chunk-000/
- │ │ │ │ ├── episode_000000.parquet
- │ │ │ │ ├── episode_000001.parquet
- │ │ │ │ └── ...
- │ │ │ └── ...
  │ │ ├── videos/
- │ │ │ ├── chunk-000/
- │ │ │ │ ├── observation.images.hand_left/
- │ │ │ │ │ ├── episode_000000.mp4
- │ │ │ │ │ ├── episode_000001.mp4
- │ │ │ │ │ └── ...
- │ │ │ │ ├── observation.images.hand_right/
- │ │ │ │ │ ├── episode_000000.mp4
- │ │ │ │ │ ├── episode_000001.mp4
- │ │ │ │ │ └── ...
- │ │ │ │ ├── observation.images.top_head/
- │ │ │ │ │ ├── episode_000000.mp4
- │ │ │ │ │ ├── episode_000001.mp4
- │ │ │ │ │ └── ...
- │ │ │ │ └── ...
- │ │ │ └── ...
  │ │ └── meta/
- │ │ ├── info.json
- │ │ ├── episodes.jsonl
- │ │ ├── tasks.jsonl
- │ │ └── episodes_stats.jsonl
  │ └── dagger/
  ├── HangCloth/
  │ ├── base/
@@ -190,105 +172,7 @@ Kai0-data/
  <a id='Details'></a>
  ### [Details](#contents)
  #### info.json
- the basic struct of the [info.json](#meta/info.json)
- ```json
- {
-   "codebase_version": "v2.1",
-   "robot_type": "agilex",
-   "total_episodes": ...,   # the total number of episodes in the dataset
-   "total_frames": ...,     # the total number of video frames in any single camera perspective
-   "total_tasks": ...,      # the total number of tasks
-   "total_videos": ...,     # the total number of videos from all camera perspectives in the dataset
-   "total_chunks": ...,     # the number of chunks in the dataset
-   "chunks_size": ...,      # the max number of episodes in a chunk
-   "fps": ...,              # video frame rate (frames per second)
-   "splits": {              # how the dataset is split
-     "train": ...
-   },
-   "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
-   "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
-   "features": {
-     "observation.images.top_head": {   # the camera perspective
-       "dtype": "video",
-       "shape": [480, 640, 3],
-       "names": ["height", "width", "channel"],
-       "info": {
-         "video.height": 480,
-         "video.width": 640,
-         "video.codec": "av1",
-         "video.pix_fmt": "yuv420p",
-         "video.is_depth_map": false,
-         "video.fps": 30,
-         "video.channels": 3,
-         "has_audio": false
-       }
-     },
-     "observation.images.hand_left": { ... },    # the camera perspective
-     "observation.images.hand_right": { ... },   # the camera perspective
-     "observation.state": { "dtype": "float32", "shape": [14], "names": null },
-     "action":            { "dtype": "float32", "shape": [14], "names": null },
-     "timestamp":         { "dtype": "float32", "shape": [1],  "names": null },
-     "frame_index":       { "dtype": "int64",   "shape": [1],  "names": null },
-     "episode_index":     { "dtype": "int64",   "shape": [1],  "names": null },
-     "index":             { "dtype": "int64",   "shape": [1],  "names": null },
-     "task_index":        { "dtype": "int64",   "shape": [1],  "names": null }
-   }
- }
- ```

  #### [Parquet file format](#contents)
  | Field Name | shape | Meaning |
@@ -301,16 +185,14 @@ the basic struct of the [info.json](#meta/info.json)
  | index | [N, 1] | Global unique index across all frames in the dataset |
  | task_index | [N, 1] | Index identifying the task type being performed |

- ### [tasks.jsonl](#FlattenFold/meta/tasks.jsonl)
- Contains task language prompts (natural language instructions) that specify the manipulation task to be performed. Each entry maps a task_index to its corresponding task description, which can be used for language-conditioned policy training.

- # License and Citation
- All the data and code within this repo are under [](). Please consider citing our project if it helps your research.
-
- ```BibTeX
- @misc{,
-   title={},
-   author={},
-   howpublished={\url{}},
-   year={}
- }
  - config_name: default
  data_files: FlattenFold/base/data/chunk-000/episode_000000.parquet
  ---
+
+ # χ₀ (KAI0)
+
+ <div align="center">
+ <a href="https://github.com/OpenDriveLab/KAI0">
  <img src="https://img.shields.io/badge/GitHub-grey?logo=GitHub" alt="GitHub Badge">
  </a>
  <a href="https://huggingface.co/OpenDriveLab-org/Kai0">
 
21
  <a href="https://mmlab.hk/research/kai0">
22
  <img src="https://img.shields.io/badge/Research_Blog-grey?style=flat" alt="Research Blog Badge">
23
  </a>
24
+ <a href="https://huggingface.co/papers/2602.09021">
25
+ <img src="https://img.shields.io/badge/Paper-blue?logo=arxiv" alt="Paper Badge">
26
+ </a>
27
  </div>
28
 
29
+ χ₀ (**kai0**) is a resource-efficient framework for achieving production-level robustness in robotic manipulation by taming distributional inconsistencies. It enables long-horizon garment manipulation tasks such as flattening, folding, and hanging using dual-arm robots.
30
+
31
  # TODO
32
  - [ ] The advantage label will be coming soon.
33
 
 
  - [License and Citation](#license-and-citation)

  ## [About the Dataset](#contents)
+ - **~181 hours** of real-world scenarios
  - **Main Tasks**
  - ***FlattenFold***
  - Single task
  ## [Load the dataset](#contents)
  - This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
  - The dataset's version is LeRobotDataset v2.1
+
  ### For LeRobot version < 0.4.0
  Choose the appropriate import based on your version:
  from lerobot.datasets.lerobot_dataset import LeRobotDataset

  # Load the dataset
+ dataset = LeRobotDataset(repo_id='OpenDriveLab-org/kai0')
  ```
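If a script needs to work across LeRobot versions, the import choice above can be automated; a minimal sketch (the helper names are ours, not part of the LeRobot API, and the 0.4.0 boundary follows the section headings in this README):

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple: '0.3.2' -> (0, 3, 2)."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def dataset_module(lerobot_version: str) -> str:
    """Pick the module path that exports LeRobotDataset for a given LeRobot version."""
    if parse_version(lerobot_version) >= (0, 4, 0):
        return "lerobot.datasets.lerobot_dataset"
    # Older releases kept the class under the lerobot.common namespace.
    return "lerobot.common.datasets.lerobot_dataset"
```

`importlib.import_module(dataset_module(...))` then fetches the class; tuple comparison handles double-digit minors correctly, e.g. `(0, 10, 0) > (0, 4, 0)`.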
  ### For LeRobot version >= 0.4.0
 
  You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate the dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)

  ```bash
+ python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=OpenDriveLab-org/kai0
  ```

  ## [Download the Dataset](#contents)
 

  ### [Folder hierarchy](#contents)
  Under each task directory, data is partitioned into two subsets: base and dagger.
+ - base contains original demonstration trajectories.
+ - dagger contains on-policy recovery trajectories collected via iterative DAgger.
+
  ```text
  Kai0-data/
  ├── FlattenFold/
  │ ├── base/
  │ │ ├── data/
  │ │ ├── videos/
  │ │ └── meta/
  │ └── dagger/
  ├── HangCloth/
  │ ├── base/
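Given this layout, a local copy of the dataset can be enumerated with a plain glob; a sketch assuming a downloaded copy (the helper `episode_files` is ours, and `root` is wherever the data was stored):

```python
from pathlib import Path

def episode_files(root, task="FlattenFold", subset="base"):
    """List the episode parquet files under <root>/<task>/<subset>/data/chunk-*/, sorted."""
    return sorted(Path(root, task, subset, "data").glob("chunk-*/episode_*.parquet"))
```

The same pattern with `videos/chunk-*/{video_key}/episode_*.mp4` enumerates the per-camera recordings.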
 
  <a id='Details'></a>
  ### [Details](#contents)
  #### info.json
+ The basic structure of `info.json` includes metadata about robot types, frames, tasks, and data features like camera perspectives (`top_head`, `hand_left`, `hand_right`).
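The `data_path` and `video_path` templates stored in the v2.1 metadata expand to concrete file names via Python's format spec; a sketch (the template strings are taken from the metadata, while the `chunks_size` value here is illustrative; the real one lives in `info.json`):

```python
# Templates as stored in info.json (LeRobotDataset v2.1).
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000          # max episodes per chunk; read the real value from info.json
episode_index = 1042        # arbitrary example episode
episode_chunk = episode_index // chunks_size  # episodes are grouped into fixed-size chunks

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
# data/chunk-001/episode_001042.parquet
print(video_path.format(episode_chunk=episode_chunk,
                        video_key="observation.images.top_head",
                        episode_index=episode_index))
# videos/chunk-001/observation.images.top_head/episode_001042.mp4
```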
 
  #### [Parquet file format](#contents)
  | Field Name | shape | Meaning |

  | index | [N, 1] | Global unique index across all frames in the dataset |
  | task_index | [N, 1] | Index identifying the task type being performed |
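The three index fields relate as follows; a short sketch reconstructing them from per-episode lengths (`build_index_columns` is our illustration, not dataset code):

```python
def build_index_columns(episode_lengths):
    """Rebuild the per-frame index fields described in the table above:
    frame_index restarts at 0 inside each episode, episode_index tags the
    episode, and index is globally unique across the whole dataset."""
    frame_index, episode_index, index = [], [], []
    global_i = 0
    for ep, length in enumerate(episode_lengths):
        for f in range(length):
            frame_index.append(f)
            episode_index.append(ep)
            index.append(global_i)
            global_i += 1
    return frame_index, episode_index, index

# Two episodes of 3 and 2 frames:
# frame_index   -> [0, 1, 2, 0, 1]
# episode_index -> [0, 0, 0, 1, 1]
# index         -> [0, 1, 2, 3, 4]
```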
 
+ ## License and Citation
+ The data and checkpoints are licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Please consider citing our project if it helps your research.
+
+ ```bibtex
+ @article{sima2026kai0,
+   title={$\chi_{0}$: Resource-Aware Robust Manipulation via Taming Distributional Inconsistencies},
+   author={Yu, Checheng and Sima, Chonghao and Jiang, Gangcheng and Zhang, Hai and Mai, Haoguang and Li, Hongyang and Wang, Huijie and Chen, Jin and Wu, Kaiyang and Chen, Li and Zhao, Lirui and Shi, Modi and Luo, Ping and Bu, Qingwen and Peng, Shijia and Li, Tianyu and Yuan, Yibo},
+   journal={arXiv preprint arXiv:2602.09021},
+   year={2026}
+ }
+ ```