ArXiv TLDR

Efficient Provably Secure Linguistic Steganography via Range Coding

2604.08052

Ruiyi Yan, Yugo Murawaki

cs.CL, cs.CR

TLDR

This paper introduces an efficient, provably secure linguistic steganography method using range coding, achieving high embedding capacity and speed.

Key contributions

  • Proposes an efficient, provably secure linguistic steganography method via range coding with a rotation mechanism.
  • Achieves near-100% entropy utilization (embedding efficiency), outperforming existing baseline methods in embedding capacity.
  • Demonstrates high embedding speeds, up to 1554.66 bits/s on GPT-2, while maintaining provable security.

Why it matters

This paper advances linguistic steganography by addressing the long-standing trade-off between provable security and embedding capacity. Its range coding approach, combined with a rotation mechanism, offers a practical and efficient solution for covert communication, making it relevant to building robust, hard-to-detect information hiding systems.

Original Abstract

Linguistic steganography involves embedding secret messages within seemingly innocuous texts to enable covert communication. Provable security, which is a long-standing goal and key motivation, has been extended to language-model-based steganography. Previous provably secure approaches have achieved perfect imperceptibility, measured by zero Kullback-Leibler (KL) divergence, but at the expense of embedding capacity. In this paper, we attempt to directly use a classic entropy coding method (range coding) to achieve secure steganography, and then propose an efficient and provably secure linguistic steganographic method with a rotation mechanism. Experiments across various language models show that our method achieves around 100% entropy utilization (embedding efficiency) for embedding capacity, outperforming the existing baseline methods. Moreover, it achieves high embedding speeds (up to 1554.66 bits/s on GPT-2). The code is available at github.com/ryehr/RRC_steganography.
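To make the entropy-coding idea concrete, here is a minimal sketch of the classic arithmetic/range-coding embedding scheme the paper builds on: the secret bits are read as a binary fraction in [0, 1), and that point is "decompressed" through the model's next-token intervals to pick stego tokens. This is a toy illustration, not the paper's exact method: the fixed dyadic distribution `DIST`, the function names, and the absence of the rotation mechanism are all assumptions for brevity (a real system would query a language model such as GPT-2 at each step).

```python
from fractions import Fraction

# Toy next-token distribution with dyadic probabilities (an assumption
# for illustration; a real system queries a language model per step).
DIST = [("the", Fraction(1, 2)), ("a", Fraction(1, 4)),
        ("an", Fraction(1, 8)), ("one", Fraction(1, 8))]

def embed(bits: str, n_tokens: int) -> list:
    """Embed secret bits by interpreting them as a binary fraction and
    range-decoding that point into a sequence of tokens."""
    point = sum(Fraction(1, 2 ** (i + 1)) for i, b in enumerate(bits) if b == "1")
    lo, hi = Fraction(0), Fraction(1)
    tokens = []
    for _ in range(n_tokens):
        cum, width = lo, hi - lo
        for tok, p in DIST:
            nxt = cum + width * p
            if cum <= point < nxt:   # point falls in this token's subinterval
                tokens.append(tok)
                lo, hi = cum, nxt
                break
            cum = nxt
    return tokens

def extract(tokens: list) -> str:
    """Re-run the interval narrowing from the stego tokens and emit every
    bit the final interval determines unambiguously."""
    lo, hi = Fraction(0), Fraction(1)
    for tok in tokens:
        cum, width = lo, hi - lo
        for t, p in DIST:
            nxt = cum + width * p
            if t == tok:
                lo, hi = cum, nxt
                break
            cum = nxt
    bits, half = [], Fraction(1, 2)
    while hi <= half or lo >= half:  # next bit is determined by the interval
        if hi <= half:
            bits.append("0"); lo, hi = 2 * lo, 2 * hi
        else:
            bits.append("1"); lo, hi = 2 * lo - 1, 2 * hi - 1
    return "".join(bits)
```

For instance, `extract(embed("1011", 3))` returns a bitstring whose prefix is `"1011"`. The capacity of such a scheme tracks the entropy of the token distribution, which is why near-100% entropy utilization is the relevant efficiency metric; the paper's rotation mechanism additionally handles the precision and security issues that arise when applying range coding to real, non-dyadic model distributions.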
