Training a single multi-class convolutional segmentation network using multiple datasets with heterogeneous labels: Preliminary results

Published

Conference Paper

© 2019 IEEE. Segmentation convolutional neural networks (CNNs) are now popular for the semantic segmentation (i.e., dense pixel-wise labeling) of remote sensing imagery, such as color or hyperspectral satellite imagery. In recent years a large number of hand-labeled datasets of overhead imagery have emerged, leading to breakthrough performance for CNNs. However, these datasets are typically used in isolation from one another because they are either (i) annotated with heterogeneous object type labels, or (ii) collected over different geographic areas. This imposes a major bottleneck on the value of these datasets. In this work we present what we call a class-asymmetric loss function that makes it possible to train a single multi-class network using multiple datasets that are heterogeneously labeled. We show, for example, that it is possible to train a segmentation algorithm for buildings, roads, and background using two datasets: one annotated only with buildings and one annotated only with roads. The proposed class-asymmetric loss, under certain common conditions, allows one to train models on datasets in which the target class is unlabeled.
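The abstract's core idea, that a pixel annotated as "background" in a dataset missing some class may secretly belong to that class, can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal, hypothetical form of a class-asymmetric cross-entropy in which predicted probability mass on the unlabeled classes is credited to the background hypothesis, so the network is not penalized for predicting an unannotated class at background pixels. All names and class indices below are illustrative assumptions.

```python
import numpy as np

# Hypothetical class indices for a 3-class problem.
BACKGROUND, BUILDING, ROAD = 0, 1, 2

def class_asymmetric_ce(probs, labels, unlabeled, background=BACKGROUND, eps=1e-12):
    """Cross-entropy that does not penalize predicting an unlabeled class
    at pixels annotated as background.

    probs:     (N, C) per-pixel class probabilities (rows sum to 1)
    labels:    (N,)  integer labels from a partially labeled dataset
    unlabeled: class indices that are NOT annotated in this dataset
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    # Probability the model assigns to each pixel's annotated label.
    p_true = probs[np.arange(len(labels)), labels].copy()
    # At background pixels, the unlabeled classes may actually be present,
    # so add their predicted mass to the background probability.
    bg = labels == background
    for c in unlabeled:
        p_true[bg] += probs[bg, c]
    return float(-np.mean(np.log(np.clip(p_true, eps, 1.0))))
```

For example, a pixel labeled background but confidently predicted as road incurs almost no loss when roads are declared unlabeled, while the standard symmetric cross-entropy would penalize it heavily; this is the asymmetry that lets a building-only dataset coexist with a road-only dataset during training.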

Cited Authors

  • Kong, F; Chen, C; Huang, B; Collins, LM; Bradbury, K; Malof, JM

Published Date

  • July 1, 2019

Published In

  • International Geoscience and Remote Sensing Symposium (IGARSS)

Start / End Page

  • 3903 - 3906

International Standard Book Number 13 (ISBN-13)

  • 9781538691540

Digital Object Identifier (DOI)

  • 10.1109/IGARSS.2019.8898617

Citation Source

  • Scopus