Documenting my experience with running glm-ocr-v2.py

#2
by willwhim - opened

Overall, there have been fewer hiccups than I expected. I am trying to improve the OCR on the collected works of Isaac Watts, about 5.52 pages in five PDFs. I have a lot of experience with Python but not much with Hugging Face. The PDFs were downloaded from the Internet Archive.

Dataset: https://huggingface.co/datasets/willwhim/wattsocr-md

  • I wanted to include the original OCR'd text and other metadata, so I vibecoded a script to do this and create the dataset locally so I could inspect it (a rough sketch of that flow follows this list)
  • I vibecoded a simple uploader, too
  • The test sample run looked great and worked on the first try. It took a while for the data to show up; I'm sure this is normal, but it's all new to me
  • I didn't set a timeout, and so the run timed out after a couple of hours
  • The glm-ocr.py script DOES NOT have a --timeout parameter, so either the script or the docs should be updated (a possible workaround is sketched after this list)
  • I'm still a little unclear on the best way to set HF_TOKEN (see the note after this list)
  • I didn't set a batch size, and it defaults to 16. Perhaps I should have set it to 64, but apparently you can't --resume and also change the batch size
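
Here's a rough sketch of the build-locally-then-upload flow those first two bullets describe. The field names, filenames, and the page-loading helper are illustrative guesses, not the actual schema behind wattsocr-md:

```python
from datasets import Dataset

def load_pages(pdf_path: str) -> list[str]:
    # Placeholder: the real script would return the Internet Archive OCR
    # text for each page of the given PDF.
    return ["page one text", "page two text"]

records = []
for pdf_path in ["watts_vol1.pdf", "watts_vol2.pdf"]:  # hypothetical filenames
    for page_num, text in enumerate(load_pages(pdf_path), start=1):
        records.append({
            "source_pdf": pdf_path,
            "page": page_num,
            "original_ocr_text": text,  # keep the old OCR alongside for comparison
        })

ds = Dataset.from_list(records)
ds.save_to_disk("wattsocr-local")  # write locally so I can inspect it first

# the "simple uploader": push the inspected dataset to the Hub
ds.push_to_hub("willwhim/wattsocr-md")
```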
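
Since the script itself has no --timeout, one possible workaround (just a sketch, and only if you're launching the script locally rather than as a hosted job; the script name and arguments below are placeholders) is to wrap it in a subprocess with a hard time limit:

```python
import subprocess

# Placeholder command line: substitute whatever arguments your run actually uses.
cmd = ["python", "glm-ocr-v2.py", "--resume"]

try:
    # Hard four-hour limit enforced by the wrapper, not by the OCR script.
    subprocess.run(cmd, timeout=4 * 60 * 60, check=True)
except subprocess.TimeoutExpired:
    print("Hit the wrapper's time limit; rerun with --resume to pick up where it left off.")
```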
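
On HF_TOKEN, the two approaches I'm aware of are an environment variable or a one-time cached login; which one the script actually reads is something I'd still want to confirm:

```python
import os
from huggingface_hub import login

# Option 1: export the token in the shell before running, e.g.
#   export HF_TOKEN=hf_xxxxxxxx
# huggingface_hub and datasets pick it up from the environment.
token = os.environ.get("HF_TOKEN")

# Option 2: log in once so the token is cached locally (prompts if token is None).
login(token=token)
```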

I'm now on my first working --resume run. More details as they arise.
