ArXiv TLDR

Block-Bench: A Framework for Controllable and Transparent Discrete Optimization Benchmarking

2604.06973

Furong Ye, Frank Neumann, Thomas Bäck, Niki van Stein

cs.NE

TLDR

Block-Bench is a novel framework for discrete optimization benchmarking, offering fine-grained control over problem properties and transparent algorithm analysis.

Key contributions

  • Introduces Block-Bench, a new framework for controllable discrete optimization benchmarking.
  • Builds problems using block functions and dependency graphs for fine-grained property control.
  • Enables transparent analysis of algorithm behavior in both objective and variable representation spaces.

Why it matters

This framework addresses the need for more controllable and transparent benchmarks in discrete optimization. By allowing detailed analysis of algorithm behavior at the variable level, it significantly improves the practical relevance of benchmark studies. It also opens new avenues for research in areas like self-adaptation and dynamic algorithm configuration.

Original Abstract

We present a novel approach for constructing discrete optimization benchmarks that enables fine-grained control over problem properties; such benchmarks can facilitate the analysis of discrete algorithm behaviors. We build benchmark problems based on a set of block functions, where each block function maps a subset of variables to a real value. Problems are instantiated through a set of block functions, weight factors, and an adjacency graph representing the dependencies among the block functions. By analyzing intermediate block values, our framework allows us to analyze algorithm behavior not only in the objective space but also at the level of variable representations in the obtained solutions. This capacity is particularly useful for analyzing discrete heuristics in large-scale multi-modal problems, thereby enhancing the practical relevance of benchmark studies. We demonstrate how the proposed approach can inspire related work in self-adaptation and diversity control in evolutionary algorithms. Moreover, we explain that the proposed benchmark design enables explicit control over problem properties, supporting research in broader domains such as dynamic algorithm configuration and multi-objective optimization.
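To make the construction concrete, here is a minimal sketch in plain Python of the idea the abstract describes: block functions over variable subsets, weight factors, and an objective formed from the weighted block values, with the intermediate block values exposed for analysis. All names (`make_block`, `evaluate`) and the "count of ones" block are hypothetical illustrations, not the Block-Bench API; the dependency structure here is implicit in which blocks share variables.

```python
def make_block(indices):
    """A block function over a subset of variable indices.
    This 'count of ones' block is purely illustrative."""
    def block(x):
        return sum(x[i] for i in indices)
    return block

def evaluate(x, blocks, weights):
    """Objective = weighted sum of block values.
    The intermediate block values are returned as well, so behavior
    can be inspected beyond the scalar objective."""
    values = [b(x) for b in blocks]
    return sum(w * v for w, v in zip(weights, values)), values

# Instantiate a toy problem: 6 binary variables, 3 overlapping blocks
# (blocks sharing variables induce the dependency graph).
blocks = [make_block([0, 1, 2]), make_block([2, 3, 4]), make_block([4, 5])]
weights = [1.0, 2.0, 0.5]
x = [1, 0, 1, 1, 0, 1]
obj, block_values = evaluate(x, blocks, weights)
```

Because `evaluate` returns the per-block values alongside the objective, two solutions with the same objective can still be distinguished by how they satisfy individual blocks, which is the kind of variable-level analysis the framework is designed to support.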
