Hierarchical dictionary learning for invariant classification

Conference Paper

Sparse representation theory is increasingly used in signal processing and machine learning. Standard sparse models, however, are not invariant to spatial transformations such as image rotations, and the representation is highly sensitive even to small distortions of this kind. Most studies addressing this problem propose algorithms that either include transformed data in the training set or are invariant or robust only to minor transformations. In this paper we propose a framework that extracts sparse features invariant under significant rotations and scalings. The algorithm is based on a hierarchical dictionary-learning architecture for sparse coding in a cortical (log-polar) space. The proposed model is tested in supervised classification applications and shown to be robust to transformed data. ©2010 IEEE.
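The key idea behind the cortical (log-polar) space can be illustrated with a minimal sketch: resampled onto a log-polar grid, an image rotation becomes a circular shift along the angular axis and an isotropic scaling a shift along the log-radius axis, so shift-invariant sparse coding in that space yields rotation/scale-invariant features. The `log_polar` resampler below is an illustrative assumption with nearest-neighbor sampling, not the authors' implementation:

```python
import numpy as np

def log_polar(img, n_r=16, n_theta=64):
    """Resample a square image onto a log-polar (cortical) grid.

    Illustrative sketch only (not the paper's code). In these
    coordinates a rotation of the image becomes a circular shift
    along the angular (theta) axis, and an isotropic scaling a
    shift along the log-radius axis.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Logarithmically spaced radii from 1 to r_max, uniform angles.
    r = np.exp(np.linspace(0.0, np.log(r_max), n_r))[:, None]     # (n_r, 1)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)  # (n_theta,)
    y = cy + r * np.sin(theta)   # (n_r, n_theta) Cartesian sample coordinates
    x = cx + r * np.cos(theta)
    # Nearest-neighbor sampling of the Cartesian image.
    return img[np.round(y).astype(int), np.round(x).astype(int)]
```

For example, with an odd-sized square image, `log_polar(np.rot90(img))` equals `np.roll(log_polar(img), -n_theta // 4, axis=1)`: the 90° rotation has become a pure circular shift of the angular axis, which a shift-invariant (e.g. max-pooled or convolutional) sparse code can absorb.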

Cited Authors

  • Bar, L; Sapiro, G

Published Date

  • November 8, 2010

Start / End Page

  • 3578 - 3581

International Standard Serial Number (ISSN)

  • 1520-6149

Digital Object Identifier (DOI)

  • 10.1109/ICASSP.2010.5495916

Citation Source

  • Scopus