ExoNet: Multimodal Deep Learning for TESS Exoplanet Candidate Identification via Phase-Folded Light Curves, Stellar Parameters, and Multi-Head Attention Fusion
TLDR
ExoNet uses attention-based multimodal deep learning to identify TESS exoplanet candidates, offering a fast, automated alternative to slow manual vetting.
Key contributions
- Introduces ExoNet, a multimodal deep learning framework for exoplanet candidate identification.
- Combines phase-folded light curves and stellar parameters using CNNs and Multi-Head Attention.
- Achieves strong performance on Kepler data and generalizes effectively to TESS candidates.
- Identified multiple high-confidence TESS candidates, including several in habitable zones.
Why it matters
Manual vetting of TESS exoplanet candidates is slow and labor-intensive, leaving thousands of candidates unconfirmed. ExoNet offers an accurate, automated alternative that can accelerate candidate validation and surface promising habitable-zone candidates, advancing our understanding of planetary systems.
Original Abstract
NASA's Transiting Exoplanet Survey Satellite (TESS) has identified thousands of exoplanet candidates, yet many remain unconfirmed due to the limitations of manual vetting processes. This paper presents ExoNet, a multimodal deep learning framework that integrates phase-folded global and local light curve representations with stellar parameters using a late-fusion architecture combining 1D Convolutional Neural Networks and Multi-Head Attention. Trained on labeled Kepler data, ExoNet achieves strong classification performance and demonstrates effective generalization to TESS data. Applied to 200 unconfirmed TESS planet candidates, the model identifies multiple high-confidence candidates, including several within the habitable zone. The results highlight the effectiveness of multimodal fusion and attention mechanisms in automated exoplanet candidate validation.
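The paper does not publish code, but the late-fusion idea in the abstract can be sketched minimally: encode each modality (global view, local view, stellar parameters) into a fixed-width embedding, treat the three embeddings as tokens, and fuse them with multi-head self-attention before a sigmoid classifier. In the sketch below, the input sizes (a 2001-bin global view and 201-bin local view, per the common AstroNet convention, plus 6 stellar parameters), the fusion width, the head count, and the random linear encoders standing in for the paper's 1D CNN branches are all illustrative assumptions, not ExoNet's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
D_MODEL, N_HEADS = 32, 4          # hypothetical fusion width and head count

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(tokens, Wq, Wk, Wv, Wo, n_heads=N_HEADS):
    """Fuse modality tokens (T, d) with scaled dot-product multi-head attention."""
    T, d = tokens.shape
    dh = d // n_heads
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        att = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # (T, T) per head
        heads.append(att @ V[:, s])
    return np.concatenate(heads, axis=-1) @ Wo

# Hypothetical inputs: phase-folded global/local views and stellar parameters.
x_global = rng.normal(size=2001)
x_local  = rng.normal(size=201)
x_star   = rng.normal(size=6)

# Stand-in encoders: random linear projections in place of the paper's
# trained 1D CNN branches and stellar-parameter network.
enc_g = rng.normal(size=(2001, D_MODEL)) * 0.02
enc_l = rng.normal(size=(201, D_MODEL)) * 0.07
enc_s = rng.normal(size=(6, D_MODEL)) * 0.4

# One token per modality: (3, D_MODEL).
tokens = np.stack([x_global @ enc_g, x_local @ enc_l, x_star @ enc_s])

Wq, Wk, Wv, Wo = (rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
                  for _ in range(4))
fused = multi_head_self_attention(tokens, Wq, Wk, Wv, Wo)  # (3, D_MODEL)

# Pool the fused tokens and score a planet-candidate probability.
w_out = rng.normal(size=D_MODEL) / np.sqrt(D_MODEL)
p = 1.0 / (1.0 + np.exp(-(fused.mean(axis=0) @ w_out)))
print(fused.shape, float(p))
```

With trained encoders in place of the random projections, the attention step lets each modality's token reweight the others, which is the intuition behind fusing light-curve shape with stellar context.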