---
viewer: false
task_categories:
- image-to-text
tags:
- uv-script
- ocr
- vision-language-model
- document-processing
- hf-jobs
---
# OCR UV Scripts
> Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
19 OCR scripts covering models from 0.9B to 8B parameters. Pick a model, point it at your dataset, and get markdown back – no setup required.
## Quick Start
Run OCR on any dataset without needing your own GPU:
```bash
# Quick test with 10 samples
hf jobs uv run --flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
your-input-dataset your-output-dataset \
--max-samples 10
```
That's it! The script will:
- Process the first 10 images from your dataset
- Add OCR results as a new `markdown` column
- Push the results to a new dataset

You can then view the results at `https://huggingface.co/datasets/[your-output-dataset]`.
## All scripts at a glance (sorted by model size)
| Script | Model | Size | Backend | Notes |
|--------|-------|------|---------|-------|
| `smoldocling-ocr.py` | [SmolDocling](https://huggingface.co/ds4sd/SmolDocling-256M-preview) | 256M | Transformers | DocTags structured output |
| `glm-ocr.py` | [GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) | 0.9B | vLLM | 94.62% OmniDocBench V1.5 |
| `paddleocr-vl.py` | [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) | 0.9B | Transformers | 4 task modes (ocr/table/formula/chart) |
| `paddleocr-vl-1.5.py` | [PaddleOCR-VL-1.5](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) | 0.9B | Transformers | 94.5% OmniDocBench, 6 task modes |
| `lighton-ocr.py` | [LightOnOCR-1B](https://huggingface.co/lightonai/LightOnOCR-1B-1025) | 1B | vLLM | Fast, 3 vocab sizes |
| `lighton-ocr2.py` | [LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) | 1B | vLLM | 7× faster than v1, RLVR trained |
| `hunyuan-ocr.py` | [HunyuanOCR](https://huggingface.co/tencent/HunyuanOCR) | 1B | vLLM | Lightweight VLM |
| `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
| `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
| `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
| `dots-mocr.py` | [dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) | 3B | vLLM | 8 prompt modes incl. SVG generation, layout + bbox, 100+ languages |
| `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
| `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
| `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
| `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
| `qianfan-ocr.py` | [Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) | 4.7B | vLLM | #1 OmniDocBench v1.5 (93.12), Layout-as-Thought, 192 languages |
| `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
| `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
| `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |
## Common Options
All scripts accept the same core flags. Model-specific defaults (batch size, context length, temperature) are tuned per model based on model card recommendations and can be overridden.
| Option | Description |
|--------|-------------|
| `--image-column` | Column containing images (default: `image`) |
| `--output-column` | Output column name (default: `markdown`) |
| `--split` | Dataset split (default: `train`) |
| `--max-samples` | Limit number of samples (useful for testing) |
| `--private` | Make output dataset private |
| `--shuffle` | Shuffle dataset before processing |
| `--seed` | Random seed for shuffling (default: `42`) |
| `--batch-size` | Images per batch (default varies per model) |
| `--max-model-len` | Max context length (default varies per model) |
| `--max-tokens` | Max output tokens (default varies per model) |
| `--gpu-memory-utilization` | GPU memory fraction (default: `0.8`) |
| `--config` | Config name for Hub push (for benchmarking) |
| `--create-pr` | Push as PR instead of direct commit |
| `--verbose` | Log resolved package versions after run |
Every script supports `--help` to see all available options:
```bash
uv run glm-ocr.py --help
```
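The same flags can be assembled programmatically when launching jobs from Python. A minimal sketch (the `build_args` helper is hypothetical, not part of the scripts) showing how keyword options map onto the CLI argument list the scripts expect:

```python
def build_args(input_ds: str, output_ds: str, **options) -> list[str]:
    """Hypothetical helper: turn keyword options into the scripts' CLI args.

    Boolean True values become bare switches (--shuffle, --private);
    everything else becomes a flag/value pair.
    """
    args = [input_ds, output_ds]
    for flag, value in options.items():
        cli_flag = "--" + flag.replace("_", "-")
        if value is True:
            args.append(cli_flag)
        else:
            args.extend([cli_flag, str(value)])
    return args

print(build_args("my-docs", "my-ocr", max_samples=10, shuffle=True, batch_size=16))
# → ['my-docs', 'my-ocr', '--max-samples', '10', '--shuffle', '--batch-size', '16']
```

The resulting list can be passed directly as `args=` to `run_uv_job` (see the Python API section below).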
## Example: GLM-OCR
[GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) (0.9B) scores 94.62% on OmniDocBench V1.5 and supports OCR, formula, and table extraction:
```bash
# Basic OCR
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
my-documents my-ocr-output
# Table extraction
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
my-documents my-tables --task table
# Test on 10 samples first
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
my-documents my-test --max-samples 10
```
## Detailed per-model documentation
### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`) – 6 task modes
OCR using [PaddlePaddle/PaddleOCR-VL-1.5](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) with 94.5% accuracy:
- **94.5% on OmniDocBench v1.5** with only 0.9B parameters
- **OCR mode** - General text extraction to markdown
- **Table mode** - HTML table recognition
- **Formula mode** - LaTeX mathematical notation
- **Chart mode** - Chart and diagram analysis
- **Spotting mode** - Text spotting with localization (higher resolution)
- **Seal mode** - Seal and stamp recognition
- **Multilingual** - Support for multiple languages
**Task Modes:**
- `ocr`: General text extraction (default)
- `table`: Table extraction to HTML
- `formula`: Mathematical formula to LaTeX
- `chart`: Chart and diagram analysis
- `spotting`: Text spotting with localization
- `seal`: Seal and stamp recognition
**Quick start:**
```bash
# Basic OCR mode
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
your-input-dataset your-output-dataset \
--max-samples 100
# Table extraction
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
documents tables-extracted \
--task-mode table
# Seal recognition
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
documents seals-extracted \
--task-mode seal
```
### PaddleOCR-VL (`paddleocr-vl.py`) – smallest model with task-specific modes
Ultra-compact OCR using [PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) with only 0.9B parameters:
- **Smallest model** - Only 0.9B parameters (even smaller than LightOnOCR!)
- **OCR mode** - General text extraction to markdown
- **Table mode** - HTML table recognition and extraction
- **Formula mode** - LaTeX mathematical notation
- **Chart mode** - Structured chart and diagram analysis
- **Multilingual** - Support for multiple languages
- **Fast initialization** - Tiny model size for quick startup
- **ERNIE-4.5 based** - Different architecture from Qwen models
**Task Modes:**
- `ocr`: General text extraction (default)
- `table`: Table extraction to HTML
- `formula`: Mathematical formula to LaTeX
- `chart`: Chart and diagram analysis
**Quick start:**
```bash
# Basic OCR mode
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
your-input-dataset your-output-dataset \
--max-samples 100
# Table extraction
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
documents tables-extracted \
--task-mode table \
--batch-size 32
```
### GLM-OCR (`glm-ocr.py`) – SOTA on OmniDocBench V1.5
Compact high-performance OCR using [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) with 0.9B parameters:
- **94.62% on OmniDocBench V1.5** - #1 overall ranking
- **Multi-Token Prediction** - MTP loss + stable full-task RL for quality
- **Text recognition** - Clean markdown output
- **Formula recognition** - LaTeX mathematical notation
- **Table recognition** - Structured table extraction
- **Multilingual** - zh, en, fr, es, ru, de, ja, ko
- **Compact** - Only 0.9B parameters, MIT licensed
- **CogViT + GLM** - Visual encoder with efficient token downsampling
**Task Modes:**
- `ocr`: Text recognition (default)
- `formula`: LaTeX formula recognition
- `table`: Table extraction
**Quick start:**
```bash
# Basic OCR
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
your-input-dataset your-output-dataset \
--max-samples 100
# Formula recognition
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
scientific-papers formulas-extracted \
--task formula
# Table extraction
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
documents tables-extracted \
--task table
```
### LightOnOCR (`lighton-ocr.py`) – a good one to test first since it's small and fast
Fast and compact OCR using [lightonai/LightOnOCR-1B-1025](https://huggingface.co/lightonai/LightOnOCR-1B-1025):
- **Fast**: 5.71 pages/sec on H100, ~6.25 images/sec on A100 with `batch_size=4096`
- **Compact**: Only 1B parameters - quick to download and initialize
- **Multilingual**: 3 vocabulary sizes for different use cases
- **LaTeX formulas**: Mathematical notation in LaTeX format
- **Table extraction**: Markdown table format
- **Document structure**: Preserves hierarchy and layout
- **Production-ready**: 76.1% on olmOCR-Bench, used in production
**Vocabulary sizes:**
- `151k`: Full vocabulary, all languages (default)
- `32k` and `16k`: European languages, ~12% faster decoding
**Quick start:**
```bash
# Test on 100 samples with English text (32k vocab is fastest for European languages)
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
your-input-dataset your-output-dataset \
--vocab-size 32k \
--batch-size 32 \
--max-samples 100
# Full production run on A100 (can handle huge batches!)
hf jobs uv run --flavor a100-large \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
your-input-dataset your-output-dataset \
--vocab-size 32k \
--batch-size 4096 \
--temperature 0.0
```
### LightOnOCR-2 (`lighton-ocr2.py`) – fastest OCR model in this collection
Next-generation fast OCR using [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) with RLVR training:
- ⥠**7à faster than v1**: 42.8 pages/sec on H100 (vs 5.71 for v1)
- ðŊ **Higher accuracy**: 83.2% on OlmOCR-Bench (+7.1% vs v1)
- ð§ **RLVR trained**: Eliminates repetition loops and formatting errors
- ð **Better dataset**: 2.5Ã larger training data with cleaner annotations
- ð **Multilingual**: Optimized for European languages
- ð **LaTeX formulas**: Mathematical notation support
- ð **Table extraction**: Markdown table format
- ðŠ **Production-ready**: Outperforms models 9à larger
**Quick start:**
```bash
# Test on 100 samples
hf jobs uv run --flavor a100-large \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
your-input-dataset your-output-dataset \
--batch-size 32 \
--max-samples 100
# Full production run
hf jobs uv run --flavor a100-large \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
your-input-dataset your-output-dataset \
--batch-size 32
```
### DeepSeek-OCR (`deepseek-ocr-vllm.py`)
Advanced document OCR using [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) with visual-text compression:
- **LaTeX equations** - Mathematical formulas in LaTeX format
- **Tables** - Extracted as HTML/markdown
- **Document structure** - Headers, lists, formatting preserved
- **Image grounding** - Spatial layout with bounding boxes
- **Complex layouts** - Multi-column and hierarchical structures
- **Multilingual** - Multiple language support
- **Resolution modes** - 5 presets for speed/quality trade-offs
- **Prompt modes** - 5 presets for different OCR tasks
- **Fast batch processing** - vLLM acceleration
**Resolution Modes:**
- `tiny` (512×512): Fast, 64 vision tokens
- `small` (640×640): Balanced, 100 vision tokens
- `base` (1024×1024): High quality, 256 vision tokens
- `large` (1280×1280): Maximum quality, 400 vision tokens
- `gundam` (dynamic): Adaptive multi-tile (default)
**Prompt Modes:**
- `document`: Convert to markdown with grounding (default)
- `image`: OCR any image with grounding
- `free`: Fast OCR without layout
- `figure`: Parse figures from documents
- `describe`: Detailed image descriptions
### RolmOCR (`rolm-ocr.py`)
Fast general-purpose OCR using [reducto/RolmOCR](https://huggingface.co/reducto/RolmOCR) based on Qwen2.5-VL-7B:
- **Fast extraction** - Optimized for speed and efficiency
- **Plain text output** - Clean, natural text representation
- **General-purpose** - Works well on various document types
- **Large context** - Handles up to 16K tokens
- **Batch optimized** - Efficient processing with vLLM
### Nanonets OCR (`nanonets-ocr.py`)
State-of-the-art document OCR using [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) that handles:
- **LaTeX equations** - Mathematical formulas preserved
- **Tables** - Extracted as HTML
- **Document structure** - Headers, lists, formatting maintained
- **Images** - Captions and descriptions included
- **Forms** - Checkboxes rendered as ☐/☑
### Nanonets OCR2 (`nanonets-ocr2.py`)
Next-generation Nanonets OCR using [nanonets/Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-3B) with improved accuracy:
- **Enhanced quality** - 3.75B parameters for superior OCR accuracy
- **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- **Advanced tables** - Improved HTML table extraction
- **Document structure** - Headers, lists, formatting maintained
- **Smart image captions** - Intelligent descriptions and captions
- **Forms** - Checkboxes rendered as ☐/☑
- **Multilingual** - Enhanced language support
- **Based on Qwen2.5-VL** - Built on a state-of-the-art vision-language model
### SmolDocling (`smoldocling-ocr.py`)
Ultra-compact document understanding using [ds4sd/SmolDocling-256M-preview](https://huggingface.co/ds4sd/SmolDocling-256M-preview) with only 256M parameters:
- **DocTags format** - Efficient XML-like representation
- **Code blocks** - Preserves indentation and syntax
- **Formulas** - Mathematical expressions with layout
- **Tables & charts** - Structured data extraction
- **Layout preservation** - Bounding boxes and spatial info
- **Ultra-fast** - Tiny model size for quick inference
### NuMarkdown (`numarkdown-ocr.py`)
Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggingface.co/numind/NuMarkdown-8B-Thinking) that analyzes documents before converting to markdown:
- **Reasoning process** - Thinks through document layout before generation
- **Complex tables** - Superior table extraction and formatting
- **Mathematical formulas** - Accurate LaTeX/math notation preservation
- **Multi-column layouts** - Handles complex document structures
- **Thinking traces** - Optional inclusion of the reasoning process with `--include-thinking`
### dots.mocr (`dots-mocr.py`) – SVG generation + SOTA OCR
Advanced multilingual OCR and SVG generation using [rednote-hilab/dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) with 3B parameters:
- **100+ languages** - Extensive multilingual support
- **Document OCR** - Clean text extraction (default mode)
- **Layout analysis** - Structured output with bboxes and categories
- **Formula recognition** - LaTeX format support
- **SVG generation** - Convert charts, UI layouts, and figures to editable SVG code
- **8 prompt modes** - OCR, layout-all, layout-only, web-parsing, scene-spotting, grounding-ocr, svg, general
- **[Paper](https://arxiv.org/abs/2603.13032)** - 83.9% on olmOCR-Bench
**SVG variant:** Use `--model rednote-hilab/dots.mocr-svg` with `--prompt-mode svg` for best SVG results.
**Quick start:**
```bash
# Basic OCR
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
your-input-dataset your-output-dataset \
--max-samples 100
# SVG generation from charts/figures
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
your-charts svg-output \
--prompt-mode svg --model rednote-hilab/dots.mocr-svg
# Layout analysis with bounding boxes
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
your-documents layout-output \
--prompt-mode layout-all
```
### DoTS.ocr v1 (`dots-ocr.py`)
Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:
- **100+ languages** - Extensive multilingual support
- **Simple OCR** - Clean text extraction (default mode)
- **Layout analysis** - Optional structured output with bboxes and categories
- **Formula recognition** - LaTeX format support
- **Compact** - Only 1.7B parameters, efficient on smaller GPUs
- **Flexible prompts** - Switch between OCR, layout-all, and layout-only modes
### FireRed-OCR (`firered-ocr.py`)
Document OCR using [FireRedTeam/FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR), a 2.1B model fine-tuned from Qwen3-VL-2B-Instruct:
- **Structured Markdown** - Preserves headings, paragraphs, lists
- **LaTeX formulas** - Inline and block math support
- **HTML tables** - Table extraction with `<table>` tags
- **Lightweight** - 2.1B parameters, runs on an L4 GPU
- **Apache 2.0** - Permissive license
**Quick start:**
```bash
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/firered-ocr.py \
your-input-dataset your-output-dataset \
--max-samples 100
```
### Qianfan-OCR (`qianfan-ocr.py`) – #1 on OmniDocBench v1.5
End-to-end document intelligence using [baidu/Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) with 4.7B parameters:
- **93.12 on OmniDocBench v1.5** – #1 end-to-end model
- **79.8 on OlmOCR Bench** – #1 end-to-end model
- **Layout-as-Thought** – Optional reasoning phase for complex layouts (`--think`)
- **192 languages** – Latin, CJK, Arabic, Cyrillic, and more
- **OCR mode** – Document parsing to markdown (default)
- **Table mode** – HTML table extraction
- **Formula mode** – LaTeX recognition
- **Chart mode** – Chart understanding and analysis
- **Scene mode** – Scene text extraction
- **KIE mode** – Key information extraction with custom prompts
**Prompt Modes:**
- `ocr`: Document parsing to markdown (default)
- `table`: Table extraction to HTML
- `formula`: Formula recognition to LaTeX
- `chart`: Chart understanding
- `scene`: Scene text extraction
- `kie`: Key information extraction (requires `--custom-prompt`)
**Quick start:**
```bash
# Basic OCR
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
your-input-dataset your-output-dataset \
--max-samples 100
# Layout-as-Thought for complex documents
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
your-input-dataset your-output-dataset \
--think --max-samples 50
# Key information extraction
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
invoices extracted-fields \
--prompt-mode kie --custom-prompt "Extract: name, date, total. Output as JSON."
```
### olmOCR2 (`olmocr2-vllm.py`)
High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
- **High accuracy** - 82.4 ± 1.1 on olmOCR-Bench (84.9% on math)
- **LaTeX equations** - Mathematical formulas in LaTeX format
- **Table extraction** - Structured table recognition
- **Multi-column layouts** - Complex document structures
- **FP8 quantized** - Efficient 7B model for faster inference
- **Degraded scans** - Works well on old/historical documents
- **Long text extraction** - Headers, footers, and full document content
- **YAML metadata** - Structured front matter (language, rotation, content type)
- **Based on Qwen2.5-VL-7B** - Fine-tuned with reinforcement learning
## New Features
### Multi-Model Comparison Support
All scripts now include `inference_info` tracking for comparing multiple OCR models:
```bash
# First model
uv run rolm-ocr.py my-dataset my-dataset --max-samples 100
# Second model (appends to same dataset)
uv run nanonets-ocr.py my-dataset my-dataset --max-samples 100
# View all models used
python -c "import json; from datasets import load_dataset; ds = load_dataset('my-dataset', split='train'); print(json.loads(ds[0]['inference_info']))"
```
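Since `inference_info` is stored as a JSON string per row, it can be inspected with the standard library alone. A small sketch, assuming each record carries a `model_id` field (the exact schema may differ between script versions, so check one row's raw value first):

```python
import json

def models_used(inference_info_json: str) -> list[str]:
    """Return the model ids recorded in a row's inference_info column.

    Assumes each record has a "model_id" key; adjust to the schema
    your script version actually writes.
    """
    return [entry["model_id"] for entry in json.loads(inference_info_json)]

# Hypothetical row value after two runs against the same output dataset
row_value = json.dumps([
    {"model_id": "reducto/RolmOCR", "column_name": "markdown"},
    {"model_id": "nanonets/Nanonets-OCR-s", "column_name": "markdown_2"},
])
print(models_used(row_value))
# → ['reducto/RolmOCR', 'nanonets/Nanonets-OCR-s']
```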
### Random Sampling
Get representative samples with the new `--shuffle` flag:
```bash
# Random 50 samples instead of first 50
uv run rolm-ocr.py ordered-dataset output --max-samples 50 --shuffle
# Reproducible random sampling
uv run nanonets-ocr.py dataset output --max-samples 100 --shuffle --seed 42
```
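Semantically, `--shuffle --seed` is a seeded shuffle followed by taking the first `--max-samples` rows. A minimal stdlib sketch of that behavior (illustrative only; the scripts themselves operate on `datasets` objects):

```python
import random

def select_samples(n_total: int, max_samples: int,
                   shuffle: bool = False, seed: int = 42) -> list[int]:
    """Pick the row indices a run would process: optionally shuffle
    with a fixed seed, then take the first max_samples."""
    indices = list(range(n_total))
    if shuffle:
        random.Random(seed).shuffle(indices)
    return indices[:max_samples]

print(select_samples(1000, 5))                # without --shuffle: the first 5 rows
print(select_samples(1000, 5, shuffle=True))  # with --shuffle --seed 42: a reproducible random 5
```

Because the seed is fixed, rerunning with the same `--seed` selects the same rows, which is what makes side-by-side model comparisons on a sample meaningful.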
### Automatic Dataset Cards
Every OCR run now generates comprehensive dataset documentation including:
- Model configuration and parameters
- Processing statistics
- Column descriptions
- Reproduction instructions
## Usage Examples
### Run on HuggingFace Jobs (Recommended)
No GPU? No problem! Run on HF infrastructure:
```bash
# PaddleOCR-VL - Smallest model (0.9B) with task modes
hf jobs uv run --flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
your-input-dataset your-output-dataset \
--task-mode ocr \
--max-samples 100
# PaddleOCR-VL - Extract tables from documents
hf jobs uv run --flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
documents tables-dataset \
--task-mode table
# PaddleOCR-VL - Formula recognition
hf jobs uv run --flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
scientific-papers formulas-extracted \
--task-mode formula \
--batch-size 32
# GLM-OCR - SOTA 0.9B model (94.62% OmniDocBench)
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
your-input-dataset your-output-dataset \
--batch-size 16 \
--max-samples 100
# DeepSeek-OCR - Real-world example (National Library of Scotland handbooks)
hf jobs uv run --flavor a100-large \
-s HF_TOKEN \
-e UV_TORCH_BACKEND=auto \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
davanstrien/handbooks-deep-ocr \
--max-samples 100 \
--shuffle \
--resolution-mode large
# DeepSeek-OCR - Fast testing with tiny mode
hf jobs uv run --flavor l4x1 \
-s HF_TOKEN \
-e UV_TORCH_BACKEND=auto \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
your-input-dataset your-output-dataset \
--max-samples 10 \
--resolution-mode tiny
# DeepSeek-OCR - Parse figures from scientific papers
hf jobs uv run --flavor a100-large \
-s HF_TOKEN \
-e UV_TORCH_BACKEND=auto \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
scientific-papers figures-extracted \
--prompt-mode figure
# Basic OCR job with Nanonets
hf jobs uv run --flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
your-input-dataset your-output-dataset
# DoTS.ocr - Multilingual OCR with compact 1.7B model
hf jobs uv run --flavor a100-large \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
davanstrien/ufo-ColPali \
your-username/ufo-ocr \
--batch-size 256 \
--max-samples 1000 \
--shuffle
# Real example with UFO dataset
hf jobs uv run \
--flavor a10g-large \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
davanstrien/ufo-ColPali \
your-username/ufo-ocr \
--image-column image \
--max-model-len 16384 \
--batch-size 128
# Nanonets OCR2 - Next-gen quality with 3B model
hf jobs uv run \
--flavor l4x1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \
your-input-dataset \
your-output-dataset \
--batch-size 16
# NuMarkdown with reasoning traces for complex documents
hf jobs uv run \
--flavor l4x4 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \
your-input-dataset your-output-dataset \
--max-samples 50 \
--include-thinking \
--shuffle
# olmOCR2 - High-quality OCR with YAML metadata
hf jobs uv run \
--flavor a100-large \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \
your-input-dataset your-output-dataset \
--batch-size 16 \
--max-samples 100
# Private dataset with custom settings
hf jobs uv run --flavor l40sx1 \
--secrets HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
private-input private-output \
--private \
--batch-size 32
```
### Python API
```python
from huggingface_hub import run_uv_job
job = run_uv_job(
"https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
args=["input-dataset", "output-dataset", "--batch-size", "16"],
flavor="l4x1"
)
```
### Run Locally (Requires GPU)
```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/ocr
cd ocr
uv run nanonets-ocr.py input-dataset output-dataset
# Or run directly from URL
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
input-dataset output-dataset
# PaddleOCR-VL for task-specific OCR (smallest model!)
uv run paddleocr-vl.py documents extracted --task-mode ocr
uv run paddleocr-vl.py papers tables --task-mode table # Extract tables
uv run paddleocr-vl.py textbooks formulas --task-mode formula # LaTeX formulas
# RolmOCR for fast text extraction
uv run rolm-ocr.py documents extracted-text
uv run rolm-ocr.py images texts --shuffle --max-samples 100 # Random sample
# Nanonets OCR2 for highest quality
uv run nanonets-ocr2.py documents ocr-results
```
Works with any HuggingFace dataset containing images – documents, forms, receipts, books, handwriting.
## Citation
```bibtex
@misc{zheng2026multimodalocrparsedocuments,
title={Multimodal OCR: Parse Anything from Documents},
author={Handong Zheng and Yumeng Li and Kaile Zhang and Liang Xin and Guangwei Zhao and Hao Liu and Jiayu Chen and Jie Lou and Jiyu Qiu and Qi Fu and Rui Yang and Shuo Jiang and Weijian Luo and Weijie Su and Weijun Zhang and Xingyu Zhu and Yabin Li and Yiwei ma and Yu Chen and Zhaohui Yu and Guang Yang and Colin Zhang and Lei Zhang and Yuliang Liu and Xiang Bai},
year={2026},
eprint={2603.13032},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2603.13032},
}
@misc{li2025dotsocrmultilingualdocumentlayout,
title={dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model},
author={Yumeng Li and Guang Yang and Hao Liu and Bowen Wang and Colin Zhang},
year={2025},
eprint={2512.02498},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.02498},
}
```