Background Adaptive Faster R-CNN for semi-supervised convolutional object detection of threats in X-ray images

Published

Conference Paper

© 2020 SPIE. Recently, progress has been made in the supervised training of Convolutional Object Detectors (e.g., Faster R-CNN) for threat recognition in carry-on luggage using X-ray images. This is part of the Transportation Security Administration's (TSA's) mission to ensure safety for air travelers in the United States. Collecting more data reliably improves performance for this class of deep learning algorithm, but producing training data with threats staged in realistic contexts requires time and money. In contrast to these hand-collected data containing threats, data from the real world, known as the Stream-of-Commerce (SOC), can be collected quickly and at minimal cost; while technically unlabeled, in this work we make the practical assumption that these images contain no threat objects. Because of these data constraints, we use both labeled and unlabeled sources of data for the automatic threat recognition problem. In this paper, we present a semi-supervised approach to this problem, which we call Background Adaptive Faster R-CNN. This approach is a training method for two-stage object detectors that uses Domain Adaptation methods from the field of deep learning. The data sources described above are treated as two “domains”: a hand-collected domain of images with threats, and a real-world domain of images assumed to be without threats. Two domain discriminators, one for object proposals and one for image features, are adversarially trained to prevent the encoding of domain-specific information. Penalizing this encoding is important because the Convolutional Neural Network (CNN) can otherwise learn to distinguish images from the two sources by superficial characteristics and minimize a purely supervised loss function without improving its ability to recognize objects. For the hand-collected data, only object proposals and image features lying completely outside the areas corresponding to ground-truth object bounding boxes (the background) are used. The losses for these domain-adaptive discriminators are added to the Faster R-CNN losses for images from both domains. This technique enables threat recognition based on examples from the labeled data and can reduce false alarm rates by matching the statistics of features extracted from the hand-collected backgrounds to those of the real-world data. Performance improvements are demonstrated on two independently collected datasets of labeled threats.
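The training scheme summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' released code: it assumes PyTorch and torchvision, and every name below (GradReverse, ImageDomainDiscriminator, ProposalDomainDiscriminator, background_only, domain_losses) is hypothetical. It shows the standard gradient-reversal trick commonly used for adversarial domain training, the two discriminators (image-level and proposal-level), and the restriction of the hand-collected domain to proposals that do not overlap any ground-truth threat box.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import box_iou


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)


class ImageDomainDiscriminator(nn.Module):
    """Predicts the source domain from backbone feature maps."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(),
            nn.Conv2d(256, 1, kernel_size=1))

    def forward(self, feat):          # feat: [B, C, H, W]
        return self.net(feat)         # per-location domain logits


class ProposalDomainDiscriminator(nn.Module):
    """Predicts the source domain from per-proposal (RoI) feature vectors."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, roi_feat):      # roi_feat: [N, D]
        return self.net(roi_feat)


def background_only(proposals, gt_boxes):
    """Keep proposals that do not overlap any ground-truth threat box (hand-collected domain)."""
    if gt_boxes.numel() == 0:
        return proposals
    iou = box_iou(proposals, gt_boxes)            # [N, M] pairwise IoU
    return proposals[iou.max(dim=1).values == 0]  # zero overlap with every GT box


def domain_losses(img_feat, roi_feat, domain_label, img_disc, prop_disc, lam=0.1):
    """Adversarial domain losses; reversed gradients push the detector toward
    domain-invariant features. Note: the paper also restricts the hand-collected
    domain's image features to background regions; that masking is omitted here."""
    img_logits = img_disc(grad_reverse(img_feat, lam))
    prop_logits = prop_disc(grad_reverse(roi_feat, lam))
    target_img = torch.full_like(img_logits, float(domain_label))
    target_prop = torch.full_like(prop_logits, float(domain_label))
    loss_img = F.binary_cross_entropy_with_logits(img_logits, target_img)
    loss_prop = F.binary_cross_entropy_with_logits(prop_logits, target_prop)
    return loss_img + loss_prop
```

In a training loop under these assumptions, the supervised Faster R-CNN losses would be computed only for the labeled, hand-collected images, while `domain_losses` would be added for images from both domains (for example, `domain_label = 0` for hand-collected and `1` for Stream-of-Commerce), with `background_only` filtering the labeled domain's proposals before RoI feature extraction.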

Cited Authors

  • Sigman, JB; Spell, GP; Liang, KJ; Carin, L

Published Date

  • January 1, 2020

Published In

  • Proceedings of SPIE

Volume / Issue

  • 11404 /

Electronic International Standard Serial Number (EISSN)

  • 1996-756X

International Standard Serial Number (ISSN)

  • 0277-786X

International Standard Book Number 13 (ISBN-13)

  • 9781510635852

Digital Object Identifier (DOI)

  • 10.1117/12.2558542

Citation Source

  • Scopus