Geometric Monomial (GEM): a family of rational 2N-differentiable activation functions
TLDR
Introduces GEM, a family of C^2N-smooth, rational activation functions that match or outperform GELU across vision and language benchmarks, improving deep-learning optimization.
Key contributions
- Proposes GEM, a family of C^2N-smooth, rational activation functions based on a log-logistic CDF.
- Introduces two strong-performing variants: E-GEM (an ε-parameterized ReLU approximation) and SE-GEM (a piecewise variant that eliminates dead neurons).
- Achieves state-of-the-art or competitive results, outperforming GELU on CIFAR-10, GPT-2, and BERT-small, and narrowing the GELU gap on CIFAR-100 to 0.62%.
- Reveals that the optimal N and ε parameters vary with architecture (CNNs vs. transformers) and with network depth.
Why it matters
This paper introduces a novel family of smooth activation functions that addresses ReLU's lack of smoothness at zero. By performing strongly across diverse architectures, including CNNs and transformers, GEM offers a promising alternative for more stable and effective deep-learning optimization, and its parameterization allows tuning to specific network types and depths.
Original Abstract
The choice of activation function plays a crucial role in the optimization and performance of deep neural networks. While the Rectified Linear Unit (ReLU) remains the dominant choice due to its simplicity and effectiveness, its lack of smoothness may hinder gradient-based optimization in deep architectures. In this work we propose a family of $C^{2N}$-smooth activation functions whose gate follows a log-logistic CDF, achieving ReLU-like performance with purely rational arithmetic. We introduce three variants: GEM (the base family), E-GEM (an $ε$-parameterized generalization enabling arbitrary $L^p$-approximation of ReLU), and SE-GEM (a piecewise variant eliminating dead neurons with $C^{2N}$ junction smoothness). An $N$-ablation study establishes $N=1$ as optimal for standard-depth networks, reducing the GELU deficit on CIFAR-100 + ResNet-56 from 6.10% to 2.12%. The smoothness parameter $N$ further reveals a CNN-transformer tradeoff: $N=1$ is preferred for deep CNNs, while $N=2$ is preferred for transformers. On MNIST, E-GEM ties the best baseline (99.23%). On CIFAR-10 + ResNet-56, SE-GEM ($ε=10^{-4}$) surpasses GELU (92.51% vs 92.44%) -- the first GEM-family activation to outperform GELU. On CIFAR-100 + ResNet-56, E-GEM reduces the GELU deficit from 6.10% (GEM $N=2$) to just 0.62%. On GPT-2 (124M), GEM achieves the lowest perplexity (72.57 vs 73.76 for GELU), with GEM $N=1$ also beating GELU (73.32). On BERT-small, E-GEM ($ε=10$) achieves the best validation loss (6.656) across all activations. The $ε$-parameterization reveals a scale-dependent optimum: small $ε$ ($10^{-4}$--$10^{-6}$) for deep CNNs and larger transformers, with the special case of small transformers (BERT-small) benefiting from large $ε$ ($ε=10$) due to its limited depth and unconstrained gradients.
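The abstract describes GEM as a gated activation whose gate follows a log-logistic CDF and which uses purely rational arithmetic. The paper's exact formula is not given here, so the sketch below is a plausible reconstruction: it assumes a unit-scale log-logistic gate F(x) = x^(2N) / (1 + x^(2N)) applied for x > 0, with the gate set to 0 for x ≤ 0 so the function behaves like a smooth ReLU. Treat the function name `gem` and the specific gate as illustrative assumptions, not the authors' definition.

```python
def gem(x: float, n: int = 1) -> float:
    """Hypothetical GEM-style activation (assumption, not the paper's
    exact formula): input times a unit-scale log-logistic CDF gate
    with shape parameter 2N, using only rational arithmetic."""
    if x <= 0:
        # Gate is 0 on the negative half-line, giving ReLU-like behavior.
        return 0.0
    p = x ** (2 * n)            # x^(2N), the only nonlinearity needed
    gate = p / (1.0 + p)        # log-logistic CDF, scale 1, shape 2N
    return x * gate             # gated output; -> x for large positive x
```

For N = 1 this is x^3 / (1 + x^2) on the positive side, so the first and second derivatives both vanish at the origin, matching the claimed C^(2N) junction smoothness for that N; larger N flattens the gate further around zero.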