EOS-Bench: A Comprehensive Benchmark for Earth Observation Satellite Scheduling
Qian Yin, Jiaxing Li, Jiaqi Cheng, Qizhang Luo, Annalisa Riccardi, and 21 others
TLDR
EOS-Bench is a new open-source benchmark for Earth observation satellite scheduling, providing 13,900 instances for systematic algorithm evaluation.
Key contributions
- Introduces EOS-Bench, a comprehensive framework for reproducible evaluation of satellite scheduling methods.
- Generates 13,900 benchmark instances across 1,390 scenarios, from small- to large-scale problems.
- Proposes a scenario characterisation scheme to quantify structural difficulty based on key factors.
- Introduces a multidimensional evaluation protocol with five metrics for performance assessment.
Why it matters
This paper addresses the need for a unified, open-source benchmark in Earth observation satellite scheduling, an NP-hard combinatorial problem. By enabling systematic, reproducible comparison of algorithms, EOS-Bench accelerates research in the field and yields insight into how scenario structure drives solver performance.
Original Abstract
Earth observation satellite imaging scheduling is a challenging NP-hard combinatorial optimisation problem central to space mission operations. While next-generation agile Earth observation satellites (EOS) increase operational flexibility, they also significantly raise scheduling complexity. The lack of a unified, open-source benchmark makes it difficult to compare algorithms across studies. This paper introduces EOS-Bench, a comprehensive framework for systematic and reproducible evaluation of scheduling methods. By integrating high-fidelity orbital dynamics and platform constraints, EOS-Bench generates 1,390 scenarios and 13,900 benchmark instances, spanning from small-scale validation cases to large coordination problems with up to 1,000 satellites and 10,000 requests. We further propose a scenario characterisation scheme to quantify structural difficulty based on factors such as opportunity density, task flexibility, conflict intensity, and satellite congestion. A multidimensional evaluation protocol is introduced, assessing performance across five metrics: task profit, completion rate, workload balance, timeliness, and runtime. The framework is evaluated using mixed-integer programming, heuristics, meta-heuristics, and deep reinforcement learning across both agile and non-agile settings. Results show that EOS-Bench effectively distinguishes solver performance across scales and conditions, revealing trade-offs between solution quality and computational efficiency, and providing deeper insight into scenario complexity. EOS-Bench offers a unified and extensible open testbed for advancing research in Earth observation satellite scheduling. The code and data are available at https://github.com/Ethan19YQ/EOS-Bench.
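The abstract's five-metric evaluation protocol (task profit, completion rate, workload balance, timeliness, runtime) can be sketched as a toy scoring function. The data model and metric formulas below are illustrative assumptions for intuition only, not the benchmark's actual API or definitions:

```python
from dataclasses import dataclass

# Hypothetical toy model of a schedule; class names, fields, and metric
# normalisations are assumptions, not taken from EOS-Bench itself.
@dataclass
class Task:
    profit: float    # reward earned if the task is scheduled
    deadline: float  # latest useful completion time (hours)

@dataclass
class Assignment:
    task: Task
    satellite: int   # index of the satellite the task is assigned to
    finish: float    # completion time of the observation (hours)

def evaluate(tasks, schedule, n_satellites, runtime_s):
    """Score a schedule on five EOS-Bench-style metrics (toy versions)."""
    # Task profit: fraction of total available profit captured.
    total_profit = sum(t.profit for t in tasks)
    profit = sum(a.task.profit for a in schedule) / total_profit
    # Completion rate: fraction of requested tasks scheduled.
    completion = len(schedule) / len(tasks)
    # Workload balance: 1 minus the normalised load spread across satellites.
    loads = [0] * n_satellites
    for a in schedule:
        loads[a.satellite] += 1
    balance = 1.0 - (max(loads) - min(loads)) / max(1, len(schedule))
    # Timeliness: fraction of scheduled tasks finished before their deadline.
    on_time = sum(1 for a in schedule if a.finish <= a.task.deadline)
    timeliness = on_time / max(1, len(schedule))
    return {
        "profit": profit,
        "completion_rate": completion,
        "workload_balance": balance,
        "timeliness": timeliness,
        "runtime_s": runtime_s,  # solver wall-clock time, measured externally
    }

tasks = [Task(10, 2.0), Task(5, 1.0), Task(8, 3.0)]
schedule = [Assignment(tasks[0], 0, 1.5), Assignment(tasks[2], 1, 2.5)]
scores = evaluate(tasks, schedule, n_satellites=2, runtime_s=0.01)
```

Reporting all five scores together, rather than a single scalar objective, is what lets the benchmark expose trade-offs such as high profit achieved at the cost of unbalanced satellite workloads.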