RotDCF: Decomposition of convolutional filters for rotation-equivariant deep networks

Published

Conference Paper

© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved.

Explicit encoding of group actions in deep features makes it possible for convolutional neural networks (CNNs) to handle global deformations of images, which is critical to success in many vision tasks. This paper proposes to decompose convolutional filters over joint steerable bases across space and the group geometry simultaneously, yielding a rotation-equivariant CNN with decomposed convolutional filters (RotDCF). The decomposition facilitates computing the joint convolution, which is proved to be necessary for group equivariance. It significantly reduces model size and computational complexity while preserving performance, and truncating the basis expansion implicitly regularizes the filters. On datasets involving in-plane and out-of-plane object rotations, RotDCF deep features demonstrate greater robustness and interpretability than regular CNNs. The stability of the equivariant representation to input variations is also proved theoretically. The RotDCF framework extends to groups other than rotations, providing a general approach that achieves both group equivariance and representation stability at a reduced model size.
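To illustrate the parameter-sharing structure the abstract describes, the sketch below parameterizes a bank of convolutional filters as linear combinations of K fixed basis filters with trainable coefficients. This is a minimal illustration under stated assumptions: the paper uses analytically defined joint steerable bases (which is what yields rotation equivariance), whereas here a random orthonormal basis stands in for them, so only the model-size reduction from truncating the expansion is demonstrated, not equivariance itself. All function names and shapes are hypothetical.

```python
import numpy as np

def make_basis(k=5, K=6, seed=0):
    # Placeholder basis: K random orthonormal k x k filters.
    # RotDCF itself uses joint steerable bases defined across space
    # and the rotation group; this random basis only illustrates the
    # decomposition structure, not rotation equivariance.
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((k * k, K))
    Q, _ = np.linalg.qr(M)           # orthonormal columns
    return Q.T.reshape(K, k, k)      # (K, k, k) fixed basis filters

def assemble_filters(coeffs, basis):
    # coeffs: (C_out, C_in, K) trainable expansion coefficients
    # basis:  (K, k, k) fixed basis filters
    # returns convolutional filters of shape (C_out, C_in, k, k)
    return np.einsum('oik,kxy->oixy', coeffs, basis)

basis = make_basis(k=5, K=6)
coeffs = np.random.default_rng(1).standard_normal((8, 3, 6))
filters = assemble_filters(coeffs, basis)

# Truncating the expansion at K=6 terms stores 8*3*6 = 144 trainable
# numbers instead of 8*3*5*5 = 600 for unconstrained 5x5 filters.
```

Only the coefficients are learned; the basis stays fixed, which is the source of both the reduced model size and the implicit regularization from truncation.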

Cited Authors

  • Cheng, X; Qiu, Q; Calderbank, R; Sapiro, G

Published Date

  • January 1, 2019

Published In

  • 7th International Conference on Learning Representations, ICLR 2019

Citation Source

  • Scopus