Cryptography & Security
Research on AI security, adversarial attacks, privacy, and cryptographic methods.
cs.CR · 505 papers

The Adversarial Discount - AI, Signal Correlation, and the Cybersecurity Arms Race
A model of AI-driven cybersecurity arms races reveals how signal correlation neutralizes attacker advantages and highlights defense inefficiencies.
Probabilistic-bit Guided CDCL for SAT Solving using Ising Consensus Assumptions
A hybrid SAT solver uses a p-bit Ising sampler to guide CDCL, significantly reducing conflicts and propagations on specific 3-SAT benchmarks.
Redefining AI Red Teaming in the Agentic Era: From Weeks to Hours
This paper introduces an AI red teaming agent that automates vulnerability probing, reducing the process from weeks to hours with a unified framework.
LIPPEN: A Lightweight In-Place Pointer Encryption Architecture for Pointer Integrity
LIPPEN is a hardware-software co-design that uses full-pointer encryption to provide strong, metadata-free pointer integrity with PAC-comparable overhead.
Generating Proof-of-Vulnerability Tests to Help Enhance the Security of Complex Software
PoVSmith automates generating proof-of-vulnerability tests for software supply chain attacks using LLMs, significantly improving test quality and reducing manual effort.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
MOSAIC-Bench reveals that coding agents, when given decomposed tasks, often create exploitable code, bypassing current safety measures and reviewer checks.
HELO Cryptography: A Lightweight Cryptographic System for Enhancing IoT Security in P2P Data Transmission
HELO is a new lightweight hybrid cryptographic system designed to enhance IoT security in P2P data transmission without sacrificing performance.
A Deeper Dive into the Irreversibility of PolyProtect: Making Protected Face Templates Harder to Invert
This paper enhances PolyProtect's biometric template irreversibility by proposing a key selection algorithm, making protected face embeddings harder to invert.
KVerus: Scalable and Resilient Formal Verification Proof Generation for Rust Code
KVerus enables scalable and resilient formal verification for Rust code by bridging the Semantic-Structural Gap with a self-adaptive, retrieval-augmented system.
GPUBreach: Privilege Escalation Attacks on GPUs using Rowhammer
GPUBreach shows GPU Rowhammer can achieve privilege escalation, enabling unprivileged access to other processes' GPU memory and CPU root control.
Firmware Distribution as Attack Surface: A Security Study of ASIC Cryptocurrency Miners
A static-analysis study shows that ASIC cryptocurrency miner firmware distribution is a major attack surface, enabling large-scale compromises.
Internet of Things Security: A Survey on Common Attacks
This paper surveys 28 common IoT attacks, classifying them with STRIDE/CVSS and mapping them to vulnerabilities, offering mitigation insights.
Tailored Prompts, Targeted Protection: Vulnerability-Specific LLM Analysis for Smart Contracts
An LLM framework detects smart contract vulnerabilities using tailored prompts, AST context, and a new large-scale dataset for high precision.
The Infinite Mutation Engine? Measuring Polymorphism in LLM-Generated Offensive Code
LLMs can generate highly polymorphic, behaviorally identical offensive code, posing a significant threat to signature-based malware detection.
ZK-Value: A Practical Zero-Knowledge System for Verifiable Data Valuation
ZK-Value is a practical zero-knowledge system for verifiable data valuation that scales to real-world demands using a co-designed architecture.
Design of Memristive Lightweight Encryption For In-Memory Image Steganography
This paper designs memristive lightweight encryption for in-memory image steganography, addressing data transfer bottlenecks and enhancing security.
From TinyGo to gc Compiler: Extending Zorya's Concolic Framework to Real-World Go Binaries
Zorya, a concolic execution framework, now supports multi-threaded Go binaries, detecting real-world vulnerabilities, including silent overflows.
MEMSAD: Gradient-Coupled Anomaly Detection for Memory Poisoning in Retrieval-Augmented Agents
MEMSAD is a gradient-coupled anomaly detection defense that secures retrieval-augmented LLM agents against memory poisoning attacks with formal guarantees.
Exposing LLM Safety Gaps Through Mathematical Encoding: New Attacks and Systematic Analysis
New attacks encode harmful prompts as math problems, bypassing LLM safety filters with high success rates and revealing fundamental security gaps.
Graph Reconstruction from Differentially Private GNN Explanations
This paper reveals that differentially private GNN explanations can still leak significant graph structure, proposing PRIVX, a diffusion-based attack.