ArXiv TLDR

Confidence-Guided Diffusion Augmentation for Enhanced Bangla Compound Character Recognition

arXiv: 2605.10916

Md. Sultan Al Rayhan, Maheen Islam

cs.CV cs.AI

TLDR

A new confidence-guided diffusion augmentation framework significantly boosts Bangla compound character recognition by synthesizing and filtering high-quality data.

Key contributions

  • Proposes a confidence-guided diffusion augmentation framework for Bangla compound character recognition.
  • Synthesizes high-quality samples using class-conditional diffusion with classifier guidance and SE-enhanced U-Net.
  • Introduces a confidence-based filtering mechanism to retain only highly class-consistent synthetic samples.
  • Achieves 89.2% accuracy on AIBangla, surpassing the previously published benchmark by a substantial margin, with consistent gains across ResNet50, DenseNet121, VGG16, and Vision Transformer architectures.
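The filtering step in the contributions above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, threshold value, and toy inputs are not from the paper): a pre-trained classifier acts as a quality gate, and a synthetic sample is kept only if the classifier's top prediction matches the class the diffusion model was conditioned on and that probability clears a confidence threshold.

```python
def filter_synthetic(probs, intended_labels, threshold=0.9):
    """Quality gate for synthetic samples.

    probs           : list of per-sample softmax outputs from a pre-trained classifier
    intended_labels : the class each sample was conditioned on during generation
    threshold       : minimum confidence required to keep a sample (assumed value)
    Returns a keep/discard mask.
    """
    keep = []
    for p, label in zip(probs, intended_labels):
        predicted = max(range(len(p)), key=p.__getitem__)  # argmax class
        # keep only if the intended class is both top-ranked and high-confidence
        keep.append(predicted == label and p[label] >= threshold)
    return keep

# Toy example: three synthetic samples over two classes.
probs = [[0.95, 0.05], [0.40, 0.60], [0.10, 0.90]]
labels = [0, 0, 1]
print(filter_synthetic(probs, labels))  # → [True, False, True]
```

Samples passing the gate are then fused with the real training data before the classifiers are retrained.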

Why it matters

This paper addresses the challenge of recognizing complex compound characters in Bangla, a low-resource script. By leveraging quality-aware diffusion augmentation, it provides a robust method to overcome data scarcity and improve generalization across diverse writing styles. The approach has broader implications for digitizing and preserving under-resourced scripts.

Original Abstract

Recognition of handwritten Bangla compound characters remains a challenging problem due to complex character structures, large intra-class variation, and limited availability of high-quality annotated data. Existing Bangla handwritten character recognition systems often struggle to generalize across diverse writing styles, particularly for compound characters containing intricate ligatures and diacritical variations. In this work, we propose a confidence-guided diffusion augmentation framework for low-resolution Bangla compound character recognition. Our framework combines class-conditional diffusion modeling with classifier guidance to synthesize high-quality handwritten compound character samples. To further improve generation quality, we introduce Squeeze-and-Excitation enhanced residual blocks within the diffusion model's U-Net backbone. We additionally propose a confidence-based filtering mechanism where pre-trained classifiers act as quality gates to retain only highly class-consistent synthetic samples. The filtered synthetic images are fused with the original training data and used to retrain multiple classification architectures. Experiments conducted on the AIBangla compound character dataset demonstrate consistent performance improvements across ResNet50, DenseNet121, VGG16, and Vision Transformer architectures. Our best-performing model achieves 89.2% classification accuracy, surpassing the previously published AIBangla benchmark by a substantial margin. The results demonstrate that quality-aware diffusion augmentation can effectively enhance handwritten character recognition performance in low-resource script domains.
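The Squeeze-and-Excitation blocks mentioned in the abstract recalibrate channel responses: squeeze each channel to a scalar by global average pooling, pass those scalars through a small two-layer bottleneck, and use the resulting sigmoid gates to rescale the channels. The following is a toy, pure-Python sketch of that idea on a `[C][H][W]` feature map; the weight matrices, shapes, and function name are illustrative assumptions, not the paper's trained U-Net blocks.

```python
import math

def se_recalibrate(feature_map, w1, w2, hidden):
    """Squeeze-and-Excitation channel recalibration (toy sketch).

    feature_map : nested list with shape [C][H][W]
    w1          : weights of the reduction layer, shape [hidden][C]
    w2          : weights of the expansion layer, shape [C][hidden]
    hidden      : bottleneck width (C // reduction_ratio in the usual SE design)
    """
    C = len(feature_map)
    # Squeeze: per-channel global average pool -> one scalar per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]
    # Excitation: FC -> ReLU -> FC -> sigmoid, producing one gate per channel.
    h = [max(0.0, sum(w1[j][c] * z[c] for c in range(C))) for j in range(hidden)]
    s = [1.0 / (1.0 + math.exp(-sum(w2[c][j] * h[j] for j in range(hidden))))
         for c in range(C)]
    # Scale: multiply every value in channel c by its gate s[c].
    return [[[v * s[c] for v in row] for row in feature_map[c]] for c in range(C)]

# Toy example: 2 channels of a 2x2 map; identity reduction, zero expansion
# weights give a sigmoid(0) = 0.5 gate for both channels.
fm = [[[1, 1], [1, 1]], [[2, 2], [2, 2]]]
out = se_recalibrate(fm, w1=[[1, 0], [0, 1]], w2=[[0, 0], [0, 0]], hidden=2)
print(out)  # → [[[0.5, 0.5], [0.5, 0.5]], [[1.0, 1.0], [1.0, 1.0]]]
```

In the paper these gates sit inside residual blocks of the diffusion U-Net, letting the generator emphasize informative feature channels when synthesizing intricate ligature strokes.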
