Cross-modal similarity learning via pairs, preferences, and active supervision

Published

Conference Paper

© 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

We present a probabilistic framework for learning pairwise similarities between objects belonging to different modalities, such as drugs and proteins, or text and images. Our framework learns a binary-code representation for objects in each modality and has the following key properties: (i) it can leverage both pairwise and easy-to-obtain relative-preference cross-modal constraints; (ii) the probabilistic formulation naturally allows querying for the most useful/informative constraints, enabling an active learning setting (existing methods for cross-modal similarity learning lack such a mechanism); and (iii) the binary code length is learned from the data. We demonstrate the effectiveness of the proposed approach on two problems that require computing pairwise similarities between cross-modal object pairs: cross-modal link prediction in bipartite graphs, and hashing-based cross-modal similarity search.
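To make the hashing-based search setting concrete, the sketch below shows how binary codes from two modalities can be compared via normalized Hamming similarity. This is a minimal illustration of the general technique, not the paper's probabilistic model; the code matrices, function name, and 4-bit code length are all hypothetical.

```python
import numpy as np

def hamming_similarity(codes_x, codes_y):
    """Pairwise similarity between binary codes from two modalities.

    Similarity is the fraction of agreeing bits (1.0 = identical codes).
    codes_x: (n, L) 0/1 array for modality X; codes_y: (m, L) for modality Y.
    """
    codes_x = np.asarray(codes_x)
    codes_y = np.asarray(codes_y)
    L = codes_x.shape[1]
    # Hamming distance = number of disagreeing bits; normalize by code length.
    dist = (codes_x[:, None, :] != codes_y[None, :, :]).sum(axis=2)
    return 1.0 - dist / L

# Hypothetical 4-bit codes for two text items and three images.
text_codes = np.array([[1, 0, 1, 1],
                       [0, 0, 1, 0]])
image_codes = np.array([[1, 0, 1, 1],
                        [1, 1, 0, 0],
                        [0, 0, 1, 0]])

S = hamming_similarity(text_codes, image_codes)   # shape (2, 3)
nearest = S.argmax(axis=1)                        # nearest cross-modal neighbor
```

Because comparing binary codes reduces to bit counting, retrieval over large cross-modal collections stays cheap once the codes are learned — which is what motivates learning the code length from data rather than fixing it in advance.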

Cited Authors

  • Zhen, Y; Rai, P; Zha, H; Carin, L

Published Date

  • June 1, 2015

Published In

  • Proceedings of the National Conference on Artificial Intelligence

Volume / Issue

  • 4 /

Start / End Page

  • 3203 - 3209

International Standard Book Number 13 (ISBN-13)

  • 9781577357025

Citation Source

  • Scopus