Does Dimensionality Reduction via Random Projections Preserve Landscape Features?
Iván Olarte Rodríguez, Anja Jankovic, Thomas Bäck, Elena Raponi
TLDR
Random projections often distort landscape features used in Exploratory Landscape Analysis, making them unrepresentative of the original problem.
Key contributions
- Investigates robustness of ELA features under Random Gaussian Embeddings.
- Shows linear random projections often alter geometric and topological landscape structures.
- Finds that most ELA features are highly sensitive to dimensionality reduction, yielding unrepresentative values.
- Warns that apparent robustness may reflect projection artifacts, not intrinsic landscape properties.
Why it matters
This paper matters for practitioners who combine dimensionality reduction with Exploratory Landscape Analysis. It shows that random projections often distort key landscape features, leading to potentially misleading insights, and that even apparently robust features may reflect projection-induced artifacts rather than intrinsic landscape properties.
Original Abstract
Exploratory Landscape Analysis (ELA) provides numerical features for characterizing black-box optimization problems. In high-dimensional settings, however, ELA suffers from sparsity effects, high estimator variance, and the prohibitive cost of computing several feature classes. Dimensionality reduction has therefore been proposed as a way to make ELA applicable in such settings, but it remains unclear whether features computed in reduced spaces still reflect intrinsic properties of the original landscape. In this work, we investigate the robustness of ELA features under dimensionality reduction via Random Gaussian Embeddings (RGEs). Starting from the same sampled points and objective values, we compute ELA features in projected spaces and compare them to those obtained in the original search space across multiple sample budgets and embedding dimensions. Our results show that linear random projections often alter the geometric and topological structure relevant to ELA, yielding feature values that are no longer representative of the original problem. While a small subset of features remains comparatively stable, most are highly sensitive to the embedding. Moreover, robustness under projection does not necessarily imply informativeness, as apparently robust features may still reflect projection-induced artifacts rather than intrinsic landscape characteristics.
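The setup the abstract describes can be sketched in a few lines: draw a random Gaussian embedding matrix, sample points in the low-dimensional space, map them into the original search space, and evaluate the objective once, so that the same objective values can feed ELA feature computation in both spaces. This is a minimal illustrative sketch, not the authors' code; the dimensions, sample budget, and sphere objective are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
D, d, n = 50, 5, 100  # original dim, embedding dim, sample budget (illustrative choices)

# Random Gaussian Embedding: entries of A drawn i.i.d. from N(0, 1).
A = rng.standard_normal((D, d))

# Sample points in the low-dimensional embedded space and map them up.
Y = rng.uniform(-1.0, 1.0, size=(n, d))  # low-dimensional samples
X = Y @ A.T                              # corresponding points in R^D

# Evaluate a toy objective (sphere function, stand-in for a black-box problem)
# in the original space; both spaces share these objective values.
f = np.sum(X**2, axis=1)

# ELA features would then be computed twice from the SAME objective values:
# once on (Y, f) in the embedded space and once on (X, f) in the original
# space, and compared across sample budgets and embedding dimensions.
print(X.shape, f.shape)
```

Comparing the two resulting feature vectors across many random draws of `A` is what reveals which features survive the projection and which are distorted by it.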