An automatic blocking mechanism for large-scale de-duplication tasks

Publication, Journal Article
Das Sarma, A; Jain, A; Machanavajjhala, A; Bohannon, P
Published in: ACM International Conference Proceeding Series
December 19, 2012

De-duplication - the identification of distinct records referring to the same real-world entity - is a well-known challenge in data integration. Since very large datasets prohibit the comparison of every pair of records, blocking has been adopted as a technique for dividing the dataset into smaller blocks within which pairwise comparisons are performed, thereby trading off recall of identified duplicates for efficiency. Traditional de-duplication tasks, while challenging, typically involved a fixed schema such as Census data or medical records. However, with the presence of large, diverse sets of structured data on the web and the need to organize it effectively on content portals, de-duplication systems need to scale in a new dimension to handle a large number of schemas, tasks and datasets, while handling ever larger problem sizes. In addition, when working in a map-reduce framework it is important that canopy formation be implemented as a hash function, making the canopy design problem more challenging. We present CBLOCK, a system that addresses these challenges. CBLOCK learns hash functions automatically from attribute domains and a labeled dataset consisting of duplicates. Subsequently, CBLOCK expresses blocking functions using a hierarchical tree structure composed of atomic hash functions. The application may guide the automated blocking process based on architectural constraints, such as by specifying a maximum size for each block (based on memory requirements), imposing disjointness of blocks (in a grid environment), or specifying a particular objective function trading off recall for efficiency. As a post-processing step to automatically generated blocks, CBLOCK rolls up smaller blocks to increase recall. We present experimental results on two large-scale de-duplication datasets from a commercial search engine - consisting of over 140K movies and 40K restaurants respectively - and demonstrate the utility of CBLOCK. © 2012 ACM.
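The core idea the abstract describes - assigning each record to a block via a hash function composed of atomic hash functions, then comparing pairs only within a block - can be illustrated with a minimal sketch. This is not CBLOCK's actual implementation or API; the record fields (`title`, `year`) and the atomic hash functions below are hypothetical stand-ins for the learned functions the paper describes.

```python
# Hedged sketch of hash-based blocking (not CBLOCK's actual code):
# each record maps to exactly one block via a composed blocking key,
# so candidate pairs are generated only within blocks.
from collections import defaultdict
from itertools import combinations

# Hypothetical atomic hash functions over a movie-like record.
def first_letter(rec):
    return rec["title"][:1].lower()

def release_decade(rec):
    return rec["year"] - rec["year"] % 10

def blocking_key(rec):
    # Composing atomic hashes yields disjoint blocks, which suits a
    # map-reduce setting: the blocking key doubles as the shuffle key.
    return (first_letter(rec), release_decade(rec))

def build_blocks(records):
    blocks = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec)].append(rec)
    return blocks

def candidate_pairs(blocks):
    # Only records sharing a block are compared: this trades recall
    # (duplicates split across blocks are missed) for efficiency.
    for members in blocks.values():
        yield from combinations(members, 2)

records = [
    {"title": "Heat", "year": 1995},
    {"title": "heat", "year": 1995},
    {"title": "Gravity", "year": 2013},
]
blocks = build_blocks(records)
pairs = list(candidate_pairs(blocks))
# The two "Heat" records land in block ('h', 1990) and form the only
# candidate pair; "Gravity" is never compared against them.
```

The recall/efficiency trade-off is visible here: a duplicate whose title was misspelled with a different first letter would fall into a different block and never be compared, which is exactly the loss the paper's roll-up post-processing and learned hash functions aim to mitigate.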

Published In

ACM International Conference Proceeding Series

DOI

10.1145/2396761.2398403

Publication Date

December 19, 2012

Start / End Page

1055 / 1064

Citation

APA: Das Sarma, A., Jain, A., Machanavajjhala, A., & Bohannon, P. (2012). An automatic blocking mechanism for large-scale de-duplication tasks. ACM International Conference Proceeding Series, 1055–1064. https://doi.org/10.1145/2396761.2398403

Chicago: Das Sarma, A., A. Jain, A. Machanavajjhala, and P. Bohannon. “An automatic blocking mechanism for large-scale de-duplication tasks.” ACM International Conference Proceeding Series, December 19, 2012, 1055–64. https://doi.org/10.1145/2396761.2398403.

ICMJE: Das Sarma A, Jain A, Machanavajjhala A, Bohannon P. An automatic blocking mechanism for large-scale de-duplication tasks. ACM International Conference Proceeding Series. 2012 Dec 19;1055–64.

MLA: Das Sarma, A., et al. “An automatic blocking mechanism for large-scale de-duplication tasks.” ACM International Conference Proceeding Series, Dec. 2012, pp. 1055–64. Scopus, doi:10.1145/2396761.2398403.

NLM: Das Sarma A, Jain A, Machanavajjhala A, Bohannon P. An automatic blocking mechanism for large-scale de-duplication tasks. ACM International Conference Proceeding Series. 2012 Dec 19;1055–1064.
