ArXiv TLDR

Backdoor Channels Hidden in Latent Space: Cryptographic Undetectability in Modern Neural Networks

arXiv: 2605.13214

Marte Eggen, Eirik Reiestad, Kristian Gjøsteen, Inga Strümke

cs.CR cs.LG

TLDR

This paper constructs cryptographically undetectable backdoors in modern neural networks by exploiting latent space geometry; the resulting backdoors resist current post-training defenses.

Key contributions

  • Identifies backdoor channels as learned latent directions in modern NNs (ResNet, ViT).
  • Achieves high backdoor success rates with negligible impact on clean accuracy.
  • Demonstrates strong resistance against a comprehensive suite of post-training defenses.
  • Suggests cryptographic backdoors are inherent latent properties, not just artificial constructions.

Why it matters

This paper demonstrates that cryptographically undetectable backdoors are practical in modern neural networks like ResNet and Vision Transformers, not just in stylized architectures. That reframes the threat: rather than injecting foreign structure, an attacker can exploit the geometry the network already possesses, which makes such backdoors extremely difficult to detect or remove with current defenses.
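To make the "backdoor channel as a latent direction" idea concrete, here is a minimal toy sketch. It is not the paper's actual mechanism: the dimensionality, the threshold, the target class, and the idea of hard-coding the direction at inference time are all illustrative assumptions. The point is only that a backdoor can live as a single direction in feature space, firing when an input's latent representation projects strongly onto it.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 512                  # illustrative latent (feature) dimensionality
TARGET_CLASS = 7         # attacker-chosen output class (illustrative)
THRESHOLD = 4.0          # activation threshold along the secret direction

# The attacker's secret latent direction (unit vector).
secret_dir = rng.normal(size=D)
secret_dir /= np.linalg.norm(secret_dir)

def classify(features: np.ndarray, clean_logits: np.ndarray) -> int:
    """Return the clean prediction unless the latent features
    activate the hidden channel, in which case return TARGET_CLASS."""
    if features @ secret_dir > THRESHOLD:
        return TARGET_CLASS
    return int(np.argmax(clean_logits))

# A clean feature vector: project out the secret component so its
# projection onto secret_dir is (numerically) zero.
clean_feats = rng.normal(size=D)
clean_feats -= (clean_feats @ secret_dir) * secret_dir

# A triggered input is the same vector perturbed along the secret direction.
triggered_feats = clean_feats + 8.0 * secret_dir

logits = rng.normal(size=10)
print(classify(clean_feats, logits))      # clean behaviour preserved
print(classify(triggered_feats, logits))  # hidden channel fires -> 7
```

In this caricature the channel is planted explicitly; the paper's claim is stronger: such directions can be *learned*, so they are statistically indistinguishable from the directions a clean model acquires naturally, and detection reduces to a hypothesis test between two unknown parameter distributions.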

Original Abstract

Recent cryptographic results establish that neural networks can be backdoored such that no efficient algorithm can distinguish them from a clean model. These guarantees, however, have been confined to stylised architectures of limited practical relevance, leaving open whether comparable undetectability extends to modern, end-to-end trained networks. We construct such an attack mechanism for state-of-the-art architectures, closely aligned to the cryptographic notion of undetectability, by identifying backdoor channels as learned latent directions, and show that the question of undetectability reduces to a hypothesis test between two unknown distributions over model parameters, which we conjecture to be intractable in practice. The consequence of this reframing is significant: if exploitable channels within a network's latent space are statistically indistinguishable from naturally learned directions, an attacker need not introduce foreign structure but can instead exploit the geometry the network already possesses. Demonstrating the approach on ResNet and Vision Transformer architectures trained on standard image classification datasets, the attack achieves both consistently high success rates with negligible clean accuracy degradation, and resists a comprehensive suite of post-training defences, none of which neutralise the backdoor without rendering the model unusable. Our results establish that cryptographic backdoors need not be artefacts requiring exotic architectures or artificial constructions, but identifiable as latent properties inherent to the geometry of learned representations.
