Abstract
Level-of-Semantics Tokenization (LoST) improves 3D shape generation by ordering tokens based on semantic salience and using a novel relational alignment loss for better reconstruction and efficiency.
Tokenization is a fundamental technique in the generative modeling of various modalities. In particular, it plays a critical role in autoregressive (AR) models, which have recently emerged as a compelling option for 3D generation. However, optimal tokenization of 3D shapes remains an open question. State-of-the-art (SOTA) methods primarily rely on geometric level-of-detail (LoD) hierarchies, originally designed for rendering and compression. These spatial hierarchies are often token-inefficient and lack semantic coherence for AR modeling. We propose Level-of-Semantics Tokenization (LoST), which orders tokens by semantic salience, such that early prefixes decode into complete, plausible shapes that possess principal semantics, while subsequent tokens refine instance-specific geometric and semantic details. To train LoST, we introduce Relational Inter-Distance Alignment (RIDA), a novel 3D semantic alignment loss that aligns the relational structure of the 3D shape latent space with that of the semantic DINO feature space. Experiments show that LoST achieves SOTA reconstruction, surpassing previous LoD-based 3D shape tokenizers by large margins on both geometric and semantic reconstruction metrics. Moreover, LoST achieves efficient, high-quality AR 3D generation and enables downstream tasks like semantic retrieval, while using only 0.1%-10% of the tokens needed by prior AR models.
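The abstract describes RIDA as aligning the relational structure (pairwise relationships) of the shape latent space with that of the DINO feature space. The paper's exact formulation is not given here, so the following is a minimal hypothetical sketch of one plausible instantiation: match mean-normalized pairwise-distance matrices of a batch of shape latents and their corresponding DINO features. The function names and the choice of Euclidean distance with MSE matching are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def pairwise_dist(x):
    # Euclidean distance matrix for the rows of x, shape (B, B)
    sq = np.sum(x * x, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    return np.sqrt(np.maximum(d2, 0.0))

def rida_loss_sketch(latents, dino_feats, eps=1e-8):
    """Hypothetical sketch of a relational inter-distance alignment loss:
    penalize mismatch between the pairwise-distance structures of the 3D
    latent space and the semantic (DINO) feature space. The actual RIDA
    loss may use a different metric or normalization."""
    d_lat = pairwise_dist(latents)     # relations among shape latents
    d_sem = pairwise_dist(dino_feats)  # relations among DINO features
    # Normalize each matrix by its mean so the two scales are comparable
    d_lat = d_lat / (d_lat.mean() + eps)
    d_sem = d_sem / (d_sem.mean() + eps)
    return float(np.mean((d_lat - d_sem) ** 2))
```

Because only relative distances are compared, a sketch like this is invariant to the overall scale of either embedding space, which is one reason relational alignment is attractive for matching spaces of different dimensionality.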
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- VAR-3D: View-aware Auto-Regressive Model for Text-to-3D Generation via a 3D Tokenizer (2026)
- Semantic One-Dimensional Tokenizer for Image Reconstruction and Generation (2026)
- Cog2Gen3D: Sculpturing 3D Semantic-Geometric Cognition for 3D Generation (2026)
- OneWorld: Taming Scene Generation with 3D Unified Representation Autoencoder (2026)
- EvoTok: A Unified Image Tokenizer via Residual Latent Evolution for Visual Understanding and Generation (2026)
- CG-MLLM: Captioning and Generating 3D content via Multi-modal Large Language Models (2026)
- Soft Tail-dropping for Adaptive Visual Tokenization (2026)