
Local dataset • Reachy Mini Moves

Community-contributed Marionette recordings captured on Reachy Mini.

  • Moves uploaded: 1
  • Total motion time: 5.0 seconds
  • Audio tracks: 0
  • Last updated: 2026-04-08T13:51:11Z

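The headline numbers above can be recomputed from a local checkout. The sketch below is a minimal, hypothetical helper (not part of the Marionette app) that assumes each move ships as one JSON file under data/ whose records carry duration_seconds and has_audio fields, per the Reachy Mini emotions schema:

```python
import json
from pathlib import Path

def dataset_stats(data_dir):
    """Tally moves, total motion time, and audio tracks from data/*.json.

    Assumption: each file holds a JSON array of frame records with
    optional duration_seconds and has_audio fields.
    """
    moves, total_seconds, audio_tracks = 0, 0.0, 0
    for path in sorted(Path(data_dir).glob("*.json")):
        frames = json.loads(path.read_text())
        if not frames:
            continue  # skip empty takes
        moves += 1
        # A move's duration is the largest duration_seconds seen on its frames.
        total_seconds += max(f.get("duration_seconds", 0.0) for f in frames)
        if any(f.get("has_audio") for f in frames):
            audio_tracks += 1
    return {"moves": moves, "total_seconds": total_seconds, "audio_tracks": audio_tracks}
```

Running this over the current data/ directory should reproduce the card's stats (1 move, 5.0 seconds, 0 audio tracks) if the layout assumption holds.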
Files live under data/ — each move ships as a JSON trajectory (Reachy Mini emotions schema) plus an optional WAV recorded directly from the robot.
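One trajectory file can be read with the standard library alone. The loader below is a sketch under two assumptions: each file is a JSON array of timestamped records, and target poses live in a set_target_data struct with head, antennas, and body_yaw fields (field names taken from this dataset's files, not from a published spec):

```python
import json

def load_move(path):
    """Read one Marionette trajectory into a list of target poses.

    Assumption: the file is a JSON array of records; frames that carry
    a set_target_data struct describe a commanded pose at time `time`.
    """
    with open(path) as fh:
        frames = json.load(fh)
    return [
        {
            "t": frame["time"],
            "head": frame["set_target_data"]["head"],          # nested list (pose matrix)
            "antennas": frame["set_target_data"]["antennas"],  # per-antenna targets
            "body_yaw": frame["set_target_data"]["body_yaw"],
        }
        for frame in frames
        if "set_target_data" in frame
    ]
```

Records without set_target_data (for example, pure metadata entries) are skipped rather than raising a KeyError.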

How this dataset was produced

These takes were recorded with the Marionette Reachy Mini app. Pick the moves to share, set your Hugging Face username, run huggingface-cli login once locally, then press "Synchronize to Hugging Face dataset" inside Marionette. The app packages the selected files, generates this README, and uploads everything to cdeplanne/local-dataset.

Selected moves

Move                 | Duration | Audio | Recorded at (UTC)
take-20260408-135103 | 5.0s     | No    | 2026-04-08 13:51

Reuse

  • Cite this dataset as cdeplanne/local-dataset.
  • Keep the reachy_mini_community_moves tag when sharing derivatives so the community can discover related sets.