ArXiv TLDR

Adapting TrOCR for Printed Tigrinya Text Recognition: Word-Aware Loss Weighting for Cross-Script Transfer Learning

arXiv: 2604.20813

Yonatan Haile Medhanie, Yuanhua Ni

cs.CV

TLDR

This paper presents the first adaptation of TrOCR to printed Tigrinya in the Ge'ez script, extending the tokenizer and introducing Word-Aware Loss Weighting to reach a 0.22% character error rate.

Key contributions

  • First adaptation of TrOCR for printed Tigrinya (Ge'ez script), addressing limitations for African syllabic systems.
  • Introduces Word-Aware Loss Weighting to resolve systematic word-boundary failures in cross-script transfer learning.
  • Achieves 0.22% Character Error Rate and 97.20% exact match accuracy on printed Tigrinya text.
  • Ablation study confirms Word-Aware Loss Weighting is critical, reducing CER by two orders of magnitude.
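For reference, the Character Error Rate (CER) quoted above is the character-level Levenshtein (edit) distance between prediction and ground truth, divided by the reference length. A minimal sketch (the paper's own evaluation script may differ):

```python
def cer(ref: str, hyp: str) -> float:
    """Character Error Rate: Levenshtein distance / len(ref)."""
    m, n = len(ref), len(hyp)
    # Single-row dynamic-programming edit distance.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (ref[i - 1] != hyp[j - 1]) # substitution
            )
            prev = cur
    return dp[n] / m if m else float(n > 0)
```

Because the comparison is per Unicode character, this works unchanged on Ge'ez text such as "ሰላም".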

Why it matters

This paper provides a crucial breakthrough for OCR of African syllabic writing systems, specifically Tigrinya, where existing models struggle. The novel Word-Aware Loss Weighting offers a generalizable solution for cross-script transfer learning, enabling high-accuracy, efficient OCR for under-resourced languages.

Original Abstract

Transformer-based OCR models have shown strong performance on Latin and CJK scripts, but their application to African syllabic writing systems remains limited. We present the first adaptation of TrOCR for printed Tigrinya using the Ge'ez script. Starting from a pre-trained model, we extend the byte-level BPE tokenizer to cover 230 Ge'ez characters and introduce Word-Aware Loss Weighting to resolve systematic word-boundary failures that arise when applying Latin-centric BPE conventions to a new script. The unmodified model produces no usable output on Ge'ez text. After adaptation, the TrOCR-Printed variant achieves 0.22% Character Error Rate and 97.20% exact match accuracy on a held-out test set of 5,000 synthetic images from the GLOCR dataset. An ablation study confirms that Word-Aware Loss Weighting is the critical component, reducing CER by two orders of magnitude compared to vocabulary extension alone. The full pipeline trains in under three hours on a single 8 GB consumer GPU. All code, model weights, and evaluation scripts are publicly released.
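The abstract does not give the exact formulation of Word-Aware Loss Weighting, but the underlying idea, upweighting the token-level cross-entropy at word-boundary positions so the model is penalized more for boundary mistakes, can be sketched as follows. The function name, `boundary_mask`, and `boundary_weight` are illustrative assumptions, not the paper's API:

```python
import torch
import torch.nn.functional as F

def word_aware_loss(logits, targets, boundary_mask,
                    boundary_weight=2.0, pad_id=-100):
    """Cross-entropy with extra weight on word-boundary tokens.

    logits: (B, T, V) decoder outputs; targets: (B, T) token ids;
    boundary_mask: (B, T) True where a target token sits at a word boundary.
    """
    # Per-token cross-entropy; padding positions contribute zero loss.
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets,
        ignore_index=pad_id, reduction="none",
    )  # shape (B, T)
    # Weight 1.0 for ordinary tokens, boundary_weight at word boundaries.
    weights = 1.0 + (boundary_weight - 1.0) * boundary_mask.float()
    valid = (targets != pad_id).float()
    return (per_token * weights * valid).sum() / (weights * valid).sum()
```

Normalizing by the summed weights keeps the loss scale comparable to plain cross-entropy, so the learning rate does not need retuning when the weighting is toggled in an ablation.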
