A Mathematical Foundation of Big Data
Recent research developments in big data have inspired mathematicians, computer scientists, and business professionals alike. However, the lack of a sound mathematical foundation remains a real challenge amidst the swarm of big data marketing activity. This paper proposes a mathematical theory as a possible foundation for big data research. Specifically, we propose treating the adjective “big” as a mathematical operator; moreover, “big” naturally fits the notion of a “linguistic variable” as studied by the fuzzy logic research community for decades. Adopting such a mathematical model yields an abstraction of the technologies, systems, and tools for data management and processing that transform data into big data. In addition, the notion of infinity in big data rests on calculus and set theory, while the relativity of big data, as we show, rests on the operations of fuzzy subset theory. We hope the approach proposed in this paper can facilitate and open up more opportunities for research and development in big data analytics, business analytics, big data intelligence, big data computing, and big data science.
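The idea of “big” as a fuzzy linguistic variable can be illustrated with a small sketch (not taken from the paper; the logistic curve, the 1 TB midpoint, and all function names are illustrative assumptions): membership in the fuzzy set “big data” grows smoothly with dataset size rather than switching on at a fixed threshold, and standard fuzzy-set operations make the notion relative and composable.

```python
import math

def mu_big(size_bytes: float, midpoint: float = 1e12, steepness: float = 5.0) -> float:
    """Degree in [0, 1] to which a dataset of `size_bytes` is 'big'.

    A logistic curve in log-space; `midpoint` (1 TB here, an arbitrary
    illustrative choice) is the size that receives membership 0.5.
    """
    if size_bytes <= 0:
        return 0.0
    # Work in log-space so membership reflects orders of magnitude.
    x = math.log10(size_bytes) - math.log10(midpoint)
    return 1.0 / (1.0 + math.exp(-steepness * x))

# Zadeh's min/max operations on fuzzy membership degrees:
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)
```

For example, `mu_big(1e12)` evaluates to 0.5 at the chosen midpoint, while a petabyte-scale dataset receives membership near 1 and a megabyte-scale one near 0, capturing the graded, relative character of “big”.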
New Mathematics and Natural Computation