Cryptography & Security
Research on AI security, adversarial attacks, privacy, and cryptographic methods.
cs.CR · 505 papers
Sketch-based Access Control: A Multimodal Interface for Translating User Preferences into Intent-Aligned Policies
SBAC is an AI-assisted sketch-based system that simplifies creating and refining access control policies using multimodal LLMs.
Nautilus Compass: Black-box Persona Drift Detection for Production LLM Agents
Nautilus Compass detects persona drift in black-box LLM agents using prompt-text analysis, offering an efficient and accessible memory solution.
GLiGuard: Schema-Conditioned Classification for LLM Safeguard
GLiGuard is a compact 0.3B-parameter model that uses schema-conditioned classification to efficiently safeguard LLMs, outperforming larger models in speed.
Graph Representation Learning Augmented Model Manipulation on Federated Fine-Tuning of LLMs
AugMP is a novel graph-representation-learning-based model-manipulation attack on federated fine-tuning of LLMs, degrading accuracy while evading defenses.
Longitudinal Analyses of SAST Tools: A CodeQL Case Study
This study longitudinally evaluates CodeQL's efficacy, actionability, and stability on thousands of CVEs across OSS, finding detection inconsistencies.
Zero-determinant Strategy for Moving Target Defense: Existence, Performance, and Computation
This paper proposes zero-determinant (ZD) strategies for Moving Target Defense (MTD) to achieve high performance with low computational cost.
CyBiasBench: Benchmarking Bias in LLM Agents for Cyber-Attack Scenarios
CyBiasBench reveals LLM cyber-attack agents exhibit inherent biases, concentrating efforts on specific attack families regardless of prompts.
Can I Check What I Designed? Mapping Security Design DSLs to Code Analyzers
This paper maps security design DSLs to code analyzers to bridge the abstraction gap between design and implementation security.
GRASP -- Graph-Based Anomaly Detection Through Self-Supervised Classification
GRASP is a novel provenance-based intrusion detection system that uses self-supervised graph classification to detect advanced persistent threats without thresholds.
Differentially Private Auditing Under Strategic Response
This paper models differentially private AI audits as a strategic game, proposing an optimal DP budget allocation algorithm to counter developer responses.
Quotient Semivalues for False-Name-Resistant Data Attribution
This paper introduces quotient semivalues to prevent false-name manipulation in ML data attribution, significantly reducing Sybil attack gains.
CCX: Enabling Unmodified Intel SGX Applications on Arm CCA
CCX enables existing Intel SGX applications to run on Arm CCA without modification, offering comparable security and improved performance.
GESR: Graph-Based Edge Semantic Reconstruction for Stealthy Communication Detection with Benign-Only Training
GESR detects stealthy network attacks by reconstructing edge semantics from local graph context using benign-only training.
Resilience of IEC 61850 Sampled Values-Based Protection Systems Under Coordinated False Data Injections
This paper reveals critical vulnerabilities in IEC 61850 digital substations under coordinated False Data Injection Attacks targeting Sampled Values.
An Automated Framework for Cybersecurity Policy Compliance Assessment Against Security Control Standards
PROPARAG automates cybersecurity policy compliance assessment against security controls using LLMs, achieving high F1 scores and identifying policy gaps.
Cross-Modal Backdoors in Multimodal Large Language Models
This paper introduces a novel cross-modal backdoor attack on multimodal LLMs by poisoning lightweight connectors, achieving high success rates and stealth.
Spying Across Chiplets: Side-Channel Attacks in 2.5/3D Integrated Systems
This paper demonstrates side-channel attacks across chiplets in 2.5/3D integrated systems by repurposing communication-oriented chiplets as observation platforms.
Vaporizer: Breaking Watermarking Schemes for Large Language Model Outputs
This paper introduces "Vaporizer," an attack framework demonstrating how to effectively remove watermarks from large language model outputs.
HBEE: Human Behavioral Entropy Engine -- Pre-Registered Multi-Agent LLM Simulation of Peer-Suspicion-Based Detection Inversion
An LLM-driven multi-agent simulation reveals that adaptive insider threats can achieve "detection inversion," receiving less peer suspicion than innocent agents.
Forensic analysis of video data deletion and recovery in Honeywell surveillance file system
This paper analyzes the proprietary Honeywell surveillance file system to understand video deletion mechanisms and demonstrate recovery feasibility.