Data Smashing Used To Find Hidden Structure in Data

Tuesday, October 14, 2014

Big Data

Computer science researchers have come up with a new principle they call 'data smashing' for estimating the similarities between streams of arbitrary data without human intervention, and without access to the data sources.

Simply feeding raw data into a data analysis algorithm is unlikely to produce meaningful results, say the authors of a new Cornell University study funded by DARPA.

From recognizing speech to identifying unusual stars, new discoveries often begin with comparison of data streams to find connections and spot outliers. But most data comparison algorithms today have one major weakness – somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t.

Data smashing applications. Pairwise distance matrices, identified clusters and 3D projections of Euclidean embeddings for epileptic pathology identification (A), heart murmur identification (B), and classification of variable stars from photometry (C). In these applications, the relevant clusters are found without supervision.
Image Source - Chattopadhyay and Lipson
In the era of Big Data, however, experts aren’t keeping pace with the growing volume and complexity of data.

Now, Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and without access to the data sources. Hod Lipson, associate professor of mechanical engineering and computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson and now at the University of Chicago, have described their method in Royal Society Interface.

Data smashing is based on a new way to compare data streams. The process involves two steps. First, the data streams are algorithmically “smashed” to “annihilate” the information they share. Then, the process measures how much information survives the collision. The more information remains, the less likely it is that the streams originated from the same source.
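The two-step recipe can be sketched in toy form. The fragment below is only loosely inspired by the idea and is not the authors' algorithm (which works with quantized stochastic processes, anti-streams and probabilistic automata): it binarizes two numeric streams, "collides" them with a simple agreement rule under which matching symbols annihilate, and reports the fraction of symbols that survive as a rough dissimilarity score. All names and parameters here are illustrative assumptions.

```python
import random

def quantize(xs):
    """Binarize a numeric stream around its median (a crude stand-in for
    the proper quantization the method assumes)."""
    med = sorted(xs)[len(xs) // 2]
    return [1 if x >= med else 0 for x in xs]

def smash(a, b):
    """Toy 'collision': symbols that agree annihilate; the residue is what
    remains. Returns the surviving fraction in [0, 1] as a rough
    dissimilarity score (0 = streams annihilate completely)."""
    n = min(len(a), len(b))
    residue = [x for x, y in zip(a, b) if x != y]
    return len(residue) / n

random.seed(0)
# Two noisy observations of the same hidden source (a slow square wave)...
src = [1 if (i // 50) % 2 else -1 for i in range(2000)]
s1 = [v + random.gauss(0, 0.3) for v in src]
s2 = [v + random.gauss(0, 0.3) for v in src]
# ...and an unrelated white-noise stream.
s3 = [random.gauss(0, 1.0) for _ in range(2000)]

q1, q2, q3 = quantize(s1), quantize(s2), quantize(s3)
print(smash(q1, q2))  # small: same-source streams mostly annihilate
print(smash(q1, q3))  # near 0.5: little annihilation against noise
```

The point of the toy is the shape of the computation, not the specific rule: similarity is judged by how little survives the collision, without anyone telling the algorithm which features of the streams matter.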

"Just as smashing atoms can reveal their composition, 'colliding' quantitative data streams can reveal their hidden structure," writes Chattopadhyay.

Any time a data mining algorithm searches beyond simple correlations, a human expert must help define a notion of similarity, either by specifying the important distinguishing “features” of the data to compare or by training learning algorithms with copious examples. The data smashing principle removes the reliance on expert-defined features or examples and, in many cases, does so faster and with better accuracy than traditional methods, according to Chattopadhyay.

Data smashing principles may open the door to understanding increasingly complex observations, especially when experts do not know what to look for, according to the researchers.

The authors demonstrated the application of their principle to data from real-world problems, including the disambiguation of electroencephalograph patterns from epileptic seizure patients; detection of anomalous cardiac activity from heart recordings; and classification of astronomical objects from raw photometry.

In all cases, and without access to the original domain knowledge, the method performed on par with specialized algorithms and heuristics devised by experts.

Chattopadhyay has made a demonstration of the data smasher available to try at: http://home.uchicago.edu/~/ishanu/smash.html.


SOURCE  Cornell University

By 33rd Square