ArXiv TLDR

SSD: Single Shot MultiBox Detector

1512.02325

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed + 2 more

cs.CV

TLDR

SSD is a fast and accurate single-shot object detection method that eliminates the need for proposal generation by predicting bounding boxes and class scores directly from multiple feature maps.

Key contributions

  • Introduces default boxes at multiple scales and aspect ratios per feature map location to discretize the bounding box output space.
  • Combines predictions from multiple feature maps with different resolutions to effectively detect objects of various sizes.
  • Achieves competitive accuracy with state-of-the-art methods while running significantly faster, enabling real-time detection.
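The first two contributions can be sketched in a few lines: tile each feature map with default boxes of several aspect ratios, and give coarser maps larger boxes. The scale values, aspect ratios, and the square "intermediate-scale" box below follow the general recipe from the paper, but the specific numbers are illustrative, not the exact SSD300 configuration.

```python
import math

def default_boxes(fmap_size, scale, next_scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate default boxes (cx, cy, w, h), all normalized to [0, 1],
    for one square feature map of side `fmap_size`."""
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            # Box centers sit at the middle of each feature-map cell.
            cx = (j + 0.5) / fmap_size
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
            # One extra square box at an intermediate scale, as in the paper.
            s = math.sqrt(scale * next_scale)
            boxes.append((cx, cy, s, s))
    return boxes

# Coarser feature maps (smaller grids) get larger default boxes, so
# different layers naturally handle objects of different sizes.
small_objects = default_boxes(fmap_size=8, scale=0.2, next_scale=0.37)
large_objects = default_boxes(fmap_size=2, scale=0.71, next_scale=0.88)
print(len(small_objects), len(large_objects))  # 8*8*4 = 256, 2*2*4 = 16
```

The network then predicts one set of class scores and four box offsets per default box, so the total number of predictions is just the number of boxes summed over all feature maps.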

Why it matters

This paper matters because it presents a streamlined, end-to-end deep learning approach for object detection that simplifies the pipeline by removing the computationally expensive proposal generation step. SSD's balance of speed and accuracy makes it highly practical for real-world applications requiring fast and reliable object detection, such as autonomous driving, robotics, and video analysis.

Original Abstract

We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For $300\times 300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at https://github.com/weiliu89/caffe/tree/ssd .
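The "adjustments to the box" mentioned in the abstract are typically applied with the offset parameterization SSD shares with earlier detection work: center offsets are relative to the default box size, and width/height are scaled exponentially. This is a minimal sketch assuming the common variance values of 0.1 and 0.2 used in the reference implementation.

```python
import math

def decode_box(default_box, offsets, variances=(0.1, 0.2)):
    """Apply predicted offsets (tx, ty, tw, th) to a default box
    (cx, cy, w, h), returning the adjusted box in the same format."""
    dcx, dcy, dw, dh = default_box
    tx, ty, tw, th = offsets
    cx = dcx + tx * variances[0] * dw   # shift center, scaled by box size
    cy = dcy + ty * variances[0] * dh
    w = dw * math.exp(tw * variances[1])  # resize width/height
    h = dh * math.exp(th * variances[1])
    return cx, cy, w, h

# Zero offsets leave the default box unchanged.
print(decode_box((0.5, 0.5, 0.2, 0.3), (0.0, 0.0, 0.0, 0.0)))
```

At inference time, each decoded box is paired with its class scores, and non-maximum suppression keeps the highest-scoring boxes per class.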
