Big Data Analytics: Seeing The Big Picture Or Finding The Rabbit?

Tom Weiss, Sun 15 November 2015

There are two approaches to managing data from multiple sources that at first sight seem contradictory. One is to get the computers to mix and mash all the data for deeper insights; the second is to keep the data separate and analyse it in isolation for greater reliability and to avoid making erroneous decisions.

From its outset in the early nineties, big data has been hyped as a way of harnessing the power inherent in diverse sources of information about customers, services, networks, products and operational issues. Big promises were made that this would enable insights hitherto undreamed of about everything under the sun.

Pioneered in call centres and enterprise data services, Big Data has more recently invaded the worlds of video and pay TV. In many cases this calls for an integrated approach. To take a simple case, if a single MVPD customer complains of loss of TV signal but voice services are still available, rebooting the set-top box will probably solve the issue. If, however, voice services are also down, the issue is likely to be network related. And if multiple customers have issues, their geographic location may well pinpoint the probable cause. The more data available to the diagnosis tool, the easier it will be to find and fix the problem.
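The triage logic described above can be sketched in a few lines. This is a minimal illustration, not any operator's real diagnostics system: the `Report` fields, region handling and suggested actions are all assumptions made for the example.

```python
# Illustrative sketch of the fault-triage logic in the text above.
# Field names and action strings are assumptions, not a real MVPD API.
from dataclasses import dataclass

@dataclass
class Report:
    customer_id: str
    tv_down: bool
    voice_down: bool
    region: str

def diagnose(reports):
    """Suggest an action for a batch of customer trouble reports."""
    if len(reports) > 1:
        # Multiple customers: look at where the reports cluster.
        regions = {r.region for r in reports}
        if len(regions) == 1:
            return f"suspect regional fault in {regions.pop()}"
        return "suspect wider network fault"
    report = reports[0]
    if report.tv_down and not report.voice_down:
        # TV down but voice still up: likely a local device issue.
        return "reboot set-top box"
    if report.tv_down and report.voice_down:
        # Both services down: probably not the set-top box.
        return "escalate: likely network issue"
    return "no action needed"
```

For instance, `diagnose([Report("c1", tv_down=True, voice_down=False, region="east")])` suggests a set-top box reboot, while the same report with `voice_down=True` escalates to a network investigation.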

Detractors of this approach, though, cite two main objections (and pardon us making two food-based metaphors in a row): (1) apples can’t be compared with oranges, and (2) putting all your eggs in one basket is dangerous. The apples and oranges argument is that if you artificially massage one kind of data to make it compatible with another, you are creating meaning that wasn’t there in the first place. A hardware issue in one part of the network may actually have nothing to do with a TV consumer complaint elsewhere.

The second objection is that if your monitoring system is wrong or is itself the problem, you have no other tools to investigate this. Making simple deductions from simple data is relatively reliable. But as data gets more sophisticated, a system that analyses it in real time can become exponentially more complex.

Keeping systems separate can avoid both pitfalls.

At Dativa we do not really see any conflict here. We have worked on customer systems that combine the best of both approaches while aiming to avoid the pitfalls of each.

So for us the debate between these opposing views over big data is almost academic. It is not so much a question of whether data should be kept separate as of ensuring that the logic enables the right actions to be taken for all viewers of a given operator, depending on all the relevant factors, such as content preferences, time of day and the device being used at the time.
