ArXiv TLDR

Semantic-Aware UAV Command and Control for Efficient IoT Data Collection

arXiv: 2604.08153

Assane Sankara, Daniel Bonilla Licea, Hajar El Hammouti

cs.RO

TLDR

A novel framework integrates semantic communication with UAV C&C for efficient IoT image data collection, using DeepJSCC and DDQN.

Key contributions

  • Proposes a semantic-aware UAV C&C framework for efficient IoT image data collection.
  • Uses Deep Joint Source-Channel Coding (DeepJSCC) for compact semantic image representation, enabling partial transmission reconstruction.
  • Models UAV trajectory optimization as an MDP and solves it with a Double Deep Q-Learning (DDQN) policy.
  • Achieves superior device coverage and semantic reconstruction quality compared to greedy and TSP baselines.
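The trajectory-control contribution hinges on the double Q-learning target: the online network selects the next action while a separate target network evaluates it, which reduces the overestimation bias of vanilla DQN. A minimal numeric sketch of that target computation (all dimensions, the linear Q-function stand-in, and function names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: state = (UAV position, velocity, device statuses),
# actions = discretized acceleration commands issued by the base station.
STATE_DIM, N_ACTIONS, GAMMA = 6, 9, 0.99

def q_values(weights, state):
    """Linear Q-function stand-in for the paper's deep network."""
    return weights @ state

def ddqn_target(online_w, target_w, reward, next_state, done):
    """Double-DQN target: the online net picks the greedy action,
    the target net evaluates it (reduces overestimation bias)."""
    if done:
        return reward
    best_a = int(np.argmax(q_values(online_w, next_state)))
    return reward + GAMMA * q_values(target_w, next_state)[best_a]

online_w = rng.normal(size=(N_ACTIONS, STATE_DIM))
target_w = online_w.copy()          # target net is a lagged copy in practice
s_next = rng.normal(size=STATE_DIM)
y = ddqn_target(online_w, target_w, reward=1.0, next_state=s_next, done=False)
```

In training, `y` would serve as the regression target for the online network's Q-value at the taken action, with `target_w` refreshed periodically from `online_w`.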

Why it matters

UAV-based IoT data collection is constrained by limited onboard resources and the need for real-time decision-making. By integrating semantic communication into UAV C&C, this paper improves both data collection efficiency and reconstruction quality, addressing key limitations of current UAV-IoT systems.


Original Abstract

Unmanned Aerial Vehicles (UAVs) have emerged as a key enabler technology for data collection from Internet of Things (IoT) devices. However, effective data collection is challenged by resource constraints and the need for real-time decision-making. In this work, we propose a novel framework that integrates semantic communication with UAV command-and-control (C&C) to enable efficient image data collection from IoT devices. Each device uses Deep Joint Source-Channel Coding (DeepJSCC) to generate a compact semantic latent representation of its image to enable image reconstruction even under partial transmission. A base station (BS) controls the UAV's trajectory by transmitting acceleration commands. The objective is to maximize the average quality of reconstructed images by maintaining proximity to each device for a sufficient duration within a fixed time horizon. To address the challenging trade-off and account for delayed C&C signals, we model the problem as a Markov Decision Process and propose a Double Deep Q-Learning (DDQN)-based adaptive flight policy. Simulation results show that our approach outperforms baseline methods such as greedy and traveling salesman algorithms, in both device coverage and semantic reconstruction quality.
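The abstract's key property is that the DeepJSCC latent still permits reconstruction under partial transmission. A toy linear analogue illustrates the idea: if latent symbols are ordered by importance, a decoder can reconstruct from whatever prefix arrives, with error shrinking as more symbols are received. This is a sketch of the concept only, not the paper's encoder (the SVD-based "encoder" and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for DeepJSCC (illustrative only): a linear "encoder" whose
# latent dimensions are ordered by importance, so a prefix of the latent
# still yields a usable reconstruction, mimicking partial transmission.
X = rng.normal(size=(64, 16))            # 64 hypothetical "images", 16 pixels
U, S, Vt = np.linalg.svd(X, full_matrices=False)

def encode(x):
    """Full semantic latent: project onto importance-ordered directions."""
    return Vt @ x

def decode_partial(z, k):
    """Reconstruct from only the first k received latent symbols."""
    return Vt[:k].T @ z[:k]

x = X[0]
z = encode(x)
# Reconstruction error for 4, 8, and all 16 received symbols.
errs = [np.linalg.norm(x - decode_partial(z, k)) for k in (4, 8, 16)]
```

Because each partial decode is a projection onto a nested subspace, the error is non-increasing in the number of received symbols, which is the behavior the UAV exploits when it can only hover near a device briefly.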
