Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
TLDR
DenseNet introduces a convolutional network architecture that connects each layer to every other layer, improving accuracy, efficiency, and feature reuse.
Key contributions
- Proposes DenseNet, in which each layer receives the feature maps of all preceding layers as input, giving an L-layer network L(L+1)/2 direct connections (see the sketch after this list).
- Alleviates the vanishing-gradient problem and strengthens feature propagation and feature reuse through dense connectivity.
- Achieves state-of-the-art results on multiple object recognition benchmarks with fewer parameters and less computation.
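To make the connectivity pattern concrete: in a block of 5 layers, each layer takes the concatenation of all earlier feature maps as input, giving 5·6/2 = 15 direct connections. Below is a minimal PyTorch sketch of a dense block, assuming the BN-ReLU-Conv(3×3) ordering and growth rate k described in the paper; it omits the bottleneck and transition layers of the full architecture, and the class and variable names are illustrative rather than taken from the official code.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN-ReLU-Conv(3x3) layer that emits `growth_rate` new feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(self.relu(self.norm(x)))

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of the block input and all earlier outputs."""
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        # Layer i receives in_channels + i * growth_rate input channels.
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Dense connectivity: concatenate every preceding feature map along channels.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a 4-layer dense block with growth rate k=12 on a 24-channel input.
block = DenseBlock(num_layers=4, in_channels=24, growth_rate=12)
y = block(torch.randn(1, 24, 32, 32))  # -> shape (1, 24 + 4*12, 32, 32) = (1, 72, 32, 32)
```

Because each layer only adds k new feature maps and reuses everything before it, the per-layer width stays small, which is where the parameter savings come from.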
Why it matters
This paper matters because dense connectivity changes how convolutional networks are designed: it improves gradient flow during training, encourages feature reuse, and substantially reduces the number of parameters, yielding better accuracy and efficiency on challenging image recognition benchmarks.
Original Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet .