Big Data is still in its early stages of life; to get to the next stage, its integration with core enterprise technologies needs to get better. Chief among the enterprise environments with which Big Data must integrate is the developer ecosystem.
There are several reasons for this: in the Big Data era, the task of data transformation falls increasingly to developers; in the Hadoop world there is no database administrator per se, which puts more of the burden on developers; and because Big Data tools themselves have relatively low usability, it falls to developers to embed Big Data functionality in their applications and carry those capabilities the last mile, to the business user.
It’s time for business applications to include Big Data functionality, and it’s time for developers to get on the Big Data train. This webinar will focus on how to make that happen.
In this webinar, our panel will discuss these topics:
- The interplay between Big Data applications and Hadoop adoption
- The difference between MapReduce coding and building Big Data applications
- How enterprise developers can code for clustered server environments
- The similarities and differences between coding for Big Data/analytics and doing so for operational databases
- A workflow for developers and analysts/data scientists to make embedded Big Data analytics successful
Our panelists:
- Andrew Brust, Research Director, Gigaom Research
- Lynn Langit, Founder & Consultant, Lynn Langit
- Chris Kinsman, Chief Architect, PushSpring
- Jon Gray, CEO and Founder, Cask
Register here to join Gigaom Research and our sponsor Cask for “Big Data Application Development: Why it Matters,” a free analyst webinar on Wednesday, February 4, 2015 at 10 a.m. PT.