MOST people think that the more data you have, the more and better insights you get. And that means the more you gather, the more valuable it is, right?
Unfortunately, too much data can actually decrease its value, according to Mike Potter, senior vice president of global research and development at Radnor, Pennsylvania-based Qlik Technologies.
“The amount of data that’s growing is far exceeding our ability to understand it, and importantly, the value of that is decreasing,” he says in a recent conversation with Digital News Asia (DNA) in Singapore.
This runs contrary to what most companies have been hearing from vendors and analysts alike, who often suggest there is no such thing as too much data.
But there’s only so much a poor human being can take.
“It all has to do with the ability to process – not just the technological ability to process data, but the human ability to process,” says Potter.
“What happens is that there is a higher degree of noise, there is a lot of duplication, and the value gets lost in the granularity of it all,” he adds.
Mining for insights
Even systems that successfully generate value from data often work on derivative data sets rather than the raw data itself, according to Potter.
“Most systems that are successful in generating value are interpreting the data and then using the interpreted data for analytics,” he says.
“It is not just simply traditional forms of aggregation, it is also rules, it is predictive, it is being able to run it through machine learning technologies.
“The idea is that the base data is not what you analyse, but the derivative data,” he adds.
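To make the idea concrete, here is a minimal sketch of what analysing derivative rather than base data can look like. Everything below is illustrative and not from Qlik: the raw event log, the `derive_features` helper, and the field names are all hypothetical stand-ins for whatever a real pipeline would use.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw event log -- in practice this would be millions of
# granular, noisy, partly duplicated records.
raw_events = [
    {"user": "alice", "action": "view", "duration_s": 12},
    {"user": "alice", "action": "view", "duration_s": 30},
    {"user": "bob", "action": "view", "duration_s": 5},
    {"user": "bob", "action": "purchase", "duration_s": 45},
]

def derive_features(events):
    """Interpret raw events into a compact per-user derivative data set."""
    per_user = defaultdict(list)
    for event in events:
        per_user[event["user"]].append(event)
    derived = {}
    for user, user_events in per_user.items():
        derived[user] = {
            "sessions": len(user_events),
            "avg_duration_s": mean(e["duration_s"] for e in user_events),
            "purchased": any(e["action"] == "purchase" for e in user_events),
        }
    return derived

# Analytics -- rules, scoring, ML features -- run on the small derived
# set, not on the noisy raw log.
features = derive_features(raw_events)
engaged = [u for u, f in features.items() if f["avg_duration_s"] > 20]
```

The rule at the end ("engaged users view for more than 20 seconds on average") is the kind of interpretation Potter describes: it operates on the derived summary, where signal is concentrated, rather than on every raw record.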