This is a copy of the model weights from https://huggingface.co/deepseek-ai/DeepSeek-OCR-2. These weights should not be used for any other purpose; if you wish to do so, please visit the original model page.
Previously, inference with https://huggingface.co/deepseek-ai/DeepSeek-OCR-2 ran smoothly on transformers==4.46.3, but newer versions of transformers caused compatibility issues. We have identified and fixed the problem, and the model now works with the latest transformers (v4.57.1) and other compatible versions.
This page provides the updated model weights and corrected configuration that resolve the issue and allow inference with transformers to run smoothly.
Last updated: 11:50 AM (IST), February 12, 2026.
Quick Start with Transformers
Install the required packages
```
torch
torchvision
transformers==4.57.1
accelerate
matplotlib
einops
addict
easydict
```
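After installing the packages (for example via pip), you can optionally confirm that the environment matches what this repository was validated against. This is a minimal sketch; it only assumes that transformers resolves to v4.57.1 as pinned above and that a CUDA GPU is available for the usage example below.

```python
import torch
import transformers

# Optional sanity check: this repository was validated against transformers v4.57.1.
print("transformers:", transformers.__version__)  # expected: 4.57.1
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # the usage example below runs on GPU
```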
Usage
```python
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

model_name = 'strangervisionhf/deepseek-ocr-2-transformers-v4.57.1'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    # _attn_implementation='flash_attention_2',
    trust_remote_code=True,
    use_safetensors=True,
)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

res = model.infer(
    tokenizer,
    prompt=prompt,
    image_file=image_file,
    output_path=output_path,
    base_size=1024,
    image_size=768,
    crop_mode=True,
    save_results=True,
)
```
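Because the call above passes save_results=True, the model's custom infer method writes its results under output_path. The exact file names are determined by the model's remote code, so a simple follow-up is to list whatever was produced. This is a minimal sketch, assuming infer has already run and created the directory:

```python
import os

output_path = 'your/output/dir'  # same directory passed to model.infer above

# List the files written by infer(); their names and formats are defined
# by the model's remote code.
for name in sorted(os.listdir(output_path)):
    print(os.path.join(output_path, name))
```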