One trend emerging from GigaOM’s Structure:Data conference today is the collaboration between man and machine to solve big-data problems. In a session Wednesday with Phil Francisco, vice president of product management for big data at IBM, and Emile Werr, head of enterprise data architecture at the New York Stock Exchange, my colleague Barb Darrow explored how people, a company’s IT experts and business experts alike, sometimes need to work in different ways to achieve the same business goals.
Developers need to build systems that can cross lots of data sets from legacy data warehouses as well as Hadoop clusters, and offer options for visualizing trends that might not otherwise be obvious, Francisco said. That’s when business experts come into play, asking questions and deriving insights that could lead to new strategies and campaigns.
How does that work in practice? Facing ever-greater volumes of data, the NYSE has trained business analysts as “data architects” who work with IBM products on capacity planning and on spotting patterns to detect fraud across billions of transactions each day, Werr said. Analysts also need to be able to figure out whether a possible fraud case is a false positive. Those are early-stage use cases for analyzing data in near-real time.
For now, financial deployments tend to play out on premise, though Werr pointed out places where public clouds make sense: developers can test out new data architectures on their data sets, and the cost advantage of running at production scale on Infrastructure as a Service (IaaS) such as Amazon Web Services is appealing, he said. But, at least for now, bandwidth across multiple data centers is an issue.
Check out the rest of our Structure:Data 2013 live coverage here, and a video embed of the session follows below.