Nested learning for multi-level classification

Conference Paper

Deep neural network models are generally designed and trained for a specific type and quality of data. In this work, we address this limitation in the context of nested learning. For many applications, both the input data, at training and testing time, and the predictions can be conceived at multiple nested levels of quality/resolution. We show that by leveraging this multiscale information, we can efficiently address poor generalization and prediction overconfidence, and exploit training data of heterogeneous quality. We evaluate the proposed ideas on six public datasets: MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Plantvillage, and DBPEDIA. We observe that coarsely annotated data can help improve fine-grained predictions and significantly reduce overconfidence. We also show that hierarchical learning produces models that are intrinsically more robust to adversarial attacks and data perturbations.
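The abstract describes training a single model over a hierarchy of labels, where coarsely annotated samples still contribute to the fine-grained task. The following is a minimal PyTorch-style sketch of that coarse-to-fine idea; it is not the authors' implementation, and the backbone, head sizes, label encoding (fine label -1 for coarse-only samples), and loss composition are assumptions made purely for illustration.

```python
# Minimal sketch of nested (coarse-to-fine) classification, assuming a shared
# backbone with one prediction head per level of the label hierarchy.
# NOT the paper's implementation; architecture and loss details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NestedClassifier(nn.Module):
    def __init__(self, num_coarse=10, num_fine=100, feat_dim=128):
        super().__init__()
        # Shared feature extractor (placeholder; any CNN backbone could be used).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Coarse head predicts the top level of the hierarchy.
        self.coarse_head = nn.Linear(feat_dim, num_coarse)
        # Fine head also sees the coarse logits, nesting the two predictions.
        self.fine_head = nn.Linear(feat_dim + num_coarse, num_fine)

    def forward(self, x):
        feat = self.backbone(x)
        coarse_logits = self.coarse_head(feat)
        fine_logits = self.fine_head(torch.cat([feat, coarse_logits], dim=1))
        return coarse_logits, fine_logits

def nested_loss(coarse_logits, fine_logits, coarse_y, fine_y):
    """Joint loss over the hierarchy. Samples carrying only a coarse label
    (encoded here as fine_y == -1) still contribute through the coarse term."""
    loss = F.cross_entropy(coarse_logits, coarse_y)
    has_fine = fine_y >= 0
    if has_fine.any():
        loss = loss + F.cross_entropy(fine_logits[has_fine], fine_y[has_fine])
    return loss
```

In this sketch, a batch containing coarse-only samples still updates the shared backbone through the coarse loss, which is one plausible way coarse annotations can aid the fine-grained predictions described in the abstract.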

Cited Authors

  • Achddou, R; Di Martino, JM; Sapiro, G

Published Date

  • January 1, 2021

Published In

  • ICASSP 2021 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Volume / Issue

  • 2021-June

Start / End Page

  • 2815 - 2819

International Standard Serial Number (ISSN)

  • 1520-6149

Digital Object Identifier (DOI)

  • 10.1109/ICASSP39728.2021.9415076

Citation Source

  • Scopus