Random forests can hash

Published

Journal Article

© 2015 International Conference on Learning Representations, ICLR. All rights reserved. Hash codes are a very efficient data representation needed to cope with the ever-growing amounts of data. We introduce a random forest semantic hashing scheme with information-theoretic code aggregation, showing for the first time how random forests, a technique that, together with deep learning, has shown spectacular results in classification, can also be extended to large-scale retrieval. Traditional random forests fail to enforce the consistency of hashes generated from each tree for data of the same class, i.e., to preserve the underlying similarity, and they also lack a principled way to aggregate codes across trees. We start with a simple hashing scheme, where independently trained random trees in a forest act as hashing functions. We then propose a subspace model as the splitting function, and show that it enforces hash consistency within a tree for data from the same class. We also introduce an information-theoretic approach for aggregating the codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. Experiments on large-scale public datasets show that the proposed approach significantly outperforms state-of-the-art hashing methods on retrieval tasks.
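The simple starting scheme described in the abstract can be illustrated with a minimal sketch: each independently trained random tree maps a point to a root-to-leaf path, whose branch decisions form a short binary code, and per-tree codes are combined into one hash. This is an illustrative toy only — it uses single-feature threshold splits and plain concatenation, not the paper's subspace splitting functions or information-theoretic aggregation; all function names are hypothetical.

```python
import random

def build_tree(points, depth):
    # Recursively build a random tree: each node splits on a random
    # coordinate and threshold (a stand-in for the paper's subspace splits).
    if depth == 0 or len(points) <= 1:
        return None
    dim = random.randrange(len(points[0]))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    thr = random.uniform(lo, hi)
    return {
        "dim": dim, "thr": thr,
        "left": build_tree([p for p in points if p[dim] <= thr], depth - 1),
        "right": build_tree([p for p in points if p[dim] > thr], depth - 1),
    }

def tree_hash(tree, point, depth):
    # One tree acts as a hashing function: each branch decision on the
    # root-to-leaf path contributes one bit.
    bits, node = [], tree
    for _ in range(depth):
        if node is None:
            bits.append(0)          # pad short paths with 0s
            continue
        go_right = point[node["dim"]] > node["thr"]
        bits.append(int(go_right))
        node = node["right"] if go_right else node["left"]
    return bits

def forest_hash(forest, point, depth):
    # Combine per-tree codes; the paper aggregates these
    # information-theoretically, here we simply concatenate.
    code = []
    for tree in forest:
        code.extend(tree_hash(tree, point, depth))
    return code

random.seed(0)
data = [[random.random() for _ in range(4)] for _ in range(100)]
depth, n_trees = 3, 5
forest = [build_tree(data, depth) for _ in range(n_trees)]
code = forest_hash(forest, data[0], depth)
print(len(code))  # n_trees * depth = 15 bits
```

The sketch makes the consistency problem the abstract raises concrete: because each tree is trained independently and splits arbitrarily, two same-class points can receive very different concatenated codes — motivating the subspace splitting and principled code aggregation the paper introduces.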

Cited Authors

  • Qiu, Q; Sapiro, G; Bronstein, A

Published Date

  • January 1, 2015

Published In

  • 3rd International Conference on Learning Representations, ICLR 2015, Workshop Track Proceedings

Citation Source

  • Scopus