We don’t need more data scientists — just make big data easier to use


Credit: Sergey Nivens/Shutterstock.com

Virtually any article today about big data inevitably turns to the notion that the country is suffering from a crucial shortage of data scientists. A much-talked-about 2011 McKinsey & Co. survey pointed out that many organizations lack both the skilled personnel needed to mine big data for insights and the structures and incentives required to use big data to make informed decisions and act on them.

What seems to be missing from all of these discussions, though, is a dialogue about how to steer around this bottleneck and make big data directly accessible to business leaders. We have done it before in the software industry, and we can do it again.

To accomplish this goal, it’s helpful to understand the data scientist’s role in big data. Currently, big data is a melting pot of distributed data architectures and tools like Hadoop, NoSQL, Hive and R. In this highly technical environment, data scientists serve as the gatekeepers and mediators between these systems and the people who run the business – the domain experts.

Though it is difficult to generalize, the data scientist serves three main roles: data architecture, machine learning, and analytics. While these roles are important, not every company actually needs a highly specialized data team of the sort you’d find at Google or Facebook. The solution, then, lies in creating fit-to-purpose products and solutions that abstract away as much of the technical complexity as possible, so that the power of big data can be put into the hands of business users.

By way of example, think back to the web content management revolution at the turn of the century. Websites were all the rage, but the domain experts were continually banging their heads against the wall – we had an IT bottleneck. Every new piece of content had to be scheduled and sometimes hard-coded by the IT elite. So how was it resolved? We generalized and abstracted the basic needs into web content management systems and made them easy for non-techies to use. As long as you didn’t need anything too crazy, the problem was solved easily, and the bottleneck averted.

Let’s dig a little deeper into the three main roles of today’s data scientist, using online commerce as a backdrop.

Data Architecture

The key to reducing complexity is to limit scope. Nearly every ecommerce business is interested in capturing user behavior – engagements, purchases, offline transactions and social data – and almost every one of them has a catalog and customer profiles.

Limiting scope to this basic functionality would allow us to create templates for the standard data inputs, making both data capture and connecting the pipes much simpler. We’d also need to find meaningful ways to package the different data architectures and tools, which currently include Hadoop, HBase, Hive, Pig, Cassandra and Mahout. These packages should be fit for purpose. It comes down to the 80/20 rule: 80 percent of big data use cases (which is all most ecommerce businesses need) can be achieved with 20 percent of the effort and technology.
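As a sketch of what such a templated event-capture input might look like, here is a minimal Python version. All names and event types here are illustrative assumptions, not any real product’s schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical templated event schema covering the standard ecommerce
# inputs mentioned above: engagements, purchases, offline transactions
# and social actions.
@dataclass
class Event:
    user_id: str
    event_type: str          # "view", "purchase", "offline_txn", "share", ...
    item_id: str
    timestamp: datetime
    properties: dict = field(default_factory=dict)  # free-form extras

STANDARD_EVENT_TYPES = {"view", "click", "purchase", "offline_txn", "share"}

def validate(event: Event) -> bool:
    """A template only needs to accept the standard 80 percent of inputs."""
    return event.event_type in STANDARD_EVENT_TYPES

e = Event("u42", "purchase", "sku-123", datetime(2013, 3, 1))
print(validate(e))  # True
```

Anything outside the standard set would fall into the 20 percent that still needs custom engineering.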

Machine Learning

Surely we need data scientists in machine learning, right? Well, if you have very customized needs, perhaps. But most of the standard challenges that require big data, like recommendation engines and personalization systems, can be abstracted out. For example, a large part of the job of a data scientist is crafting “features,” which are meaningful combinations of input data that make machine learning effective. As much as we’d like to think that all data scientists have to do is plug data into the machine and hit “go,” the reality is people need to help the machine by giving it useful ways of looking at the world.

On a per domain basis, however, feature creation could be templatized, too. Every commerce site has a notion of buy flow and user segmentation, for example. What if domain experts could directly encode their ideas and representations of their domains into the system, bypassing the data scientists as middleman and translator?
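A minimal sketch of how such feature templates might look, assuming a simple event log; the function names, field names, and data layout are illustrative, not any particular vendor’s API:

```python
from collections import Counter

def purchase_count(events, user_id):
    """How many purchases has this user made? (a 'frequency' feature)"""
    return sum(1 for e in events
               if e["user"] == user_id and e["type"] == "purchase")

def favourite_category(events, user_id):
    """Most-viewed category for a user (a 'segmentation' feature)."""
    cats = Counter(e["category"] for e in events
                   if e["user"] == user_id and e["type"] == "view")
    return cats.most_common(1)[0][0] if cats else None

# A domain expert picks which templates feed the learner,
# without a data scientist translating in between.
FEATURE_TEMPLATES = [purchase_count, favourite_category]

events = [
    {"user": "u1", "type": "view", "category": "shoes"},
    {"user": "u1", "type": "view", "category": "shoes"},
    {"user": "u1", "type": "purchase", "category": "shoes"},
]
print([f(events, "u1") for f in FEATURE_TEMPLATES])  # [1, 'shoes']
```

The point is that the expert chooses and configures the templates; the mechanics of computing them stay hidden.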


Analytics

It’s never easy to automatically surface the most valuable insights from data. There are ways, however, to provide domain-specific lenses that allow business experts to experiment – much like a data scientist. This seems to be the easiest problem to solve, as a variety of domain-specific analytics products are already on the market.

But these products are still more constrained and less accessible to domain experts than they could be. There is definitely room for a friendlier interface. We also need to take into consideration how the machine learns from the results that analytics deliver. This is the critical feedback loop, and business experts want to feed modifications into that loop. This is another opportunity to provide a templatized interface.

As we learned in the CMS space, these solutions won’t solve every problem every time. But applying a technology solution to the broader set of data issues will relieve the data scientist bottleneck. Once domain experts are able to work directly with machine learning systems, we may enter a new age of big data where we learn from each other. Maybe then, big data will actually solve more problems than it creates.

Scott Brave is co-founder and CTO of Baynote, an e-tail and e-commerce advisory business. He is also an editor of the “International Journal of Human-Computer Studies” (Amsterdam: Elsevier) and co-author of “Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship” (Cambridge, MA: MIT Press).



John Santaferraro

Great post! I couldn’t agree more. We have just launched a program to close the gap between the data scientist and the BI analyst, by putting analytics in the hands of the data analyst and BI analyst. We provide a number of analytic functions embedded in our database, easily called by SQL. It opens up the use of text analytics, social media analytics, pattern match, time series analysis, path analysis, fraud analytics, and more to ordinary analysts. It also comes with tools to do things like sessionize data or parse JSON files. This is the future of analytics, not just the data scientist.

Here is the announcement: http://www.paraccel.com/news/press-releases.php?acc=022713
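To give a concrete sense of what a function like “sessionize” does – grouping a user’s events into visits separated by an inactivity gap – here is a rough Python sketch of the idea. It is illustrative only, not ParAccel’s actual implementation:

```python
def sessionize(timestamps, gap=1800):
    """Assign a session id to each timestamp (in seconds).

    A new session starts whenever more than `gap` seconds of
    inactivity separate consecutive events (default: 30 minutes).
    """
    sessions, current, prev = [], 0, None
    for t in sorted(timestamps):
        if prev is not None and t - prev > gap:
            current += 1
        sessions.append(current)
        prev = t
    return sessions

print(sessionize([0, 100, 5000, 5100]))  # [0, 0, 1, 1]
```

Pushing this kind of logic into a SQL-callable function is exactly what puts it within reach of the ordinary analyst.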


Wibidata (www.wibidata.com) is trying to make the job of a data scientist easier (from a model/develop/deploy point of view). They have recently open-sourced the lowest layer in their stack, an entity-centric database that sits on top of HBase (kiji.org).

Robbie Allen

Generally, dashboards and visualizations have been the primary tools for communicating insights from data. But they are one-size-fits-all, and when it comes to data analysis there is a big gap between the “data experts” (those who can understand visualizations) and the “data novices” (those who can’t).

Dashboards are a tool to help someone “do” analysis, but they are not good for communicating the results of the analysis. In many cases you can automate the Data Analyst job completely and let software “do” analysis and communicate the results at the same time.

I recently wrote about how we need to move away from Dashboards as the primary tool for communicating analysis and move to automated analysis:

The ideal scenario is that you provide the right tool for the job. Data analysts and scientists may always want to navigate the data on their own, and dashboards are fine for them. But for the vast majority of users, providing insights in plain English is the better option. The technology now exists to do just that (see http://automatedinsights.com).


I absolutely agree with Scott’s point: we don’t need more data scientists. SaaS in cloud computing already realizes this, and the IoT is developing new intelligent interfaces that can manage and handle most operations; operators just need training to understand the operation, and then they can manage it through the interface. We already have many hands-on cases of using this approach to manage water supply networks, including asset management. IT investment in water supply is now focused on new business development and ROI, so the investment comes not from the traditional information department but from the operations department. This is a big difference from before: the technology is used for operations, not just for information-department tasks like reporting.
With big data’s variety of inputs, more external constituents are included in the system: environmental, meteorological, public safety and consumer data.


Having been part of one of the literally hundreds of web content management software providers in the early 2000s, I really like the analogy between that market and understanding data. There are different challenges to be sure, but companies like Windsor Circle http://windsorcircle.com/ in the ecommerce space, and too many to name in the social media space, have done a good job of making this transformation a reality.

Zhou Ji

I thought data scientists were exactly the people who would make big data easier to use.

Scott Linford

Eval: (Data Scientist == Gatekeeper) ? (sack him/her) : (sack the status quo)

Jefferson Braswell

Both sides of the coin are valid observations, in an (over-simplified) “tools versus skills” comparison. The purpose of having tools is of course to make the people using the tools more productive. At the same time, tools in the hands of people who do not understand them can produce counterproductive results. (Imagine, if you will – for a brief moment – a power saw with a blade meant for wood being used by someone on a steel pipe!)

Similar discussions have taken place in the less rarefied air of simpler things, such as the ‘wizards’ that Microsoft was fond of adding to make its programming tools easier to ‘use’. I have seen cases where business executives argued that the arrival of such programming ‘wizards’ was tantamount to the Yellow Brick Road stretching out before the organization, and it became a central assumption of the organization’s information technology strategy. Putting more control of ‘business logic’ in the hands of business users, and decreasing the need to rely on support (and budget) from the technology and engineering side of the house, was a strategy that, however useful in appropriate doses, often left an organization foundering in the mud when the tide went out – if no one bothered to vet the claims of the wizard makers (and sellers).

Tools of all kinds are useful, and required. But knowledge and skills pertaining to the tasks and challenges that a particular tool has been pressed into service to address will produce far better results in the end than when there is a conceptual disconnect between the user of the tool and the user’s understanding and knowledge of the subject to which the tool is applied.


We don’t need more people who can read – just more books with pictures. Scott, I can’t describe how disgusted and conflicted I was to see such a flippant and destructive headline mixed with such an insightful analysis. On one hand I am relieved to see such an educated man writing in a popular publication on such an important topic, and on the other I am dismayed that a 10-year-old could read and analyse this piece and be completely and utterly disappointed by the content. I wish you well on your next contribution, and I hope you take my headline alteration in good spirit. I think that, more than anything else, it shows that what is really required to improve the field is not dumbing it down but appreciating its complexity and making it accessible in a responsible way. We are looking into an abyss of data and hoping that our best minds can show us meaning. Let us not belittle the next great feat of human endeavour by suggesting that http://mr.data.miner.org can offer insights for 3.95 a month. Please do continue to inspire business on the potential of data through your publications, but we respectfully ask: don’t sell short the great minds of our generation who work on it with cheap headlines. Good luck to us all in the future. The next few years will be fun. I.

Alfred Poor

I am not directly involved in Big Data, but I’ve been closely tied to technology in general for the past 30 years. This discussion reminds me of how Esther Dyson once described artificial intelligence; she said “that’s what we call it until we can do it.”

I believe that Big Data is following the same trajectory as many other important technologies. Fifty years ago, computers were giants that lived in special rooms, and you needed the intervention of a data-processing wizard to submit your stack of punch cards and then tell you whether or not your program ran successfully. Now most of us carry more computing power in our shirt pockets, and we don’t need to know how to write a single line of code.

Sure, at present we still require wizards to wade through our Big Data tasks (at least for the most part), but we do have examples where the technology is maturing and becoming more directly accessible. Netflix does a pretty good job of guessing what movies I might like to watch. Google and YouTube are just two examples of pretty sophisticated site analytics that I can access just by clicking on some menus.

Humans are tool builders and pattern-recognizing machines. That’s what we do best. And if the gems hidden in Big Data are valuable, we will build tools that make it easier and more efficient to find those gems, and that will be more effective at screening out the garbage results. We have always benefited from standing on the shoulders of the giants who went before us, and I expect that the development of Big Data will be no different.


Anonymous Coward

We don’t need more heavy lifters, just make heavy weights easier to lift! Right …

And of course, the fact that the author works for a company interested in providing data mining tools is of no importance, in the context of this article. (Actually he’s one of the founders and CTO.)

Yet another issue is that the author seems to not have a proper understanding of big data. Not only is big data big, but it is also highly non-uniform in its structure. It’s not like a very big relational database, it’s more like a huge Christmas tree on which all sorts of different decorations were hung, plus a huge amount of boxes, toys and whatnot placed beneath the tree, plus all cats from the neighborhood climbing around in the tree. This is why professionals are needed to mine it. A non-professional will derive anything he wants from it, be confident in his findings, and not even be able to think about why his findings may be wrong. Letting a non-professional mine big data without the aid of a professional is like letting a politician with Alzheimer’s use statistics – you can’t tell anything about the result, other than it’s most probably useless, if not plain wrong.


Why stop at big data scientists? Why don’t we abstract away entire businesses, so that our CEO wannabe can just buy some turnkey program that will run the whole shop for him?


Always good to have more data analysis tools for users! I see a problem though with the data validation and reliability estimation.

It’s great to have QuickBooks, but for a company, you still need an accountant.

Michael O'Connell

Scott is missing the fact that a large portion of a day in the life of a data scientist is exploratory data analysis (EDA), feature construction and feature validation. This is the art of data science. The modeling / machine learning is the fun stuff at the end of the cycle. Once features and models have been identified and validated, models can then be actioned inline via real-time systems and opened widely to all kinds of business leaders, analysts and so on; but data science is required up front to identify and validate the features.

Scott Brave


The question in my mind is whether features can be shared cross-instance within the same domain. For example, if certain features (or feature templates) work well for ecommerce site A, might they also be worth a shot for ecommerce site B? In other words, can we leverage the learnings (that data scientists figure out) from one data set to another similar data set within the same domain, or do we have to rediscover from scratch every time?

Michael O'Connell

We try to leverage features cross-instance within the same domain; sometimes it works out, sometimes it doesn’t. In some cases the features evolve even within the same instance – in fraud applications, for example, new features are required to keep up. The point is that manual EDA is needed for feature definition and validation. Once features are in place, a self-service, interactive, collaborative environment can be made available to all kinds of end users, including data scientists who may carry on with modeling, simulation and so on. Congrats on getting such an active response to your article!

Jen m

This type of post worries me. Data scientists are skilled and properly trained in, first, identifying the correct variables and tests and, second, interpreting the findings properly – in addition to cleansing the data set when required (with large data, almost always). The concern I have with making it ‘easier’ to run analyses by making the programmes easier (and by the way, many are pretty user friendly already if you skill up…) is that, just like students, new novice users will start trying out combinations of many variables (data mining) just to find interesting results. This is not a good thing; as mentioned by a previous poster, you can make your data tell you almost anything if you squeeze it enough. Perhaps a better approach than a one-size-fits-all, let’s-all-play-expert approach is to actually train up your staff so they can correctly use what is already on the market and correctly interpret the data. This wouldn’t be too difficult, and I’m sure there would be many champing at the bit to receive additional training…
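The worry about trying out many variables just to find “interesting” results can be made concrete: test enough pure-noise variables against an outcome and some will clear a significance threshold by chance alone. A self-contained Python sketch (the 0.36 cutoff approximates the p < 0.05 critical correlation for a sample of 30; the numbers are illustrative):

```python
import random

random.seed(0)
n, trials = 30, 100
# An "outcome" that is pure noise.
outcome = [random.gauss(0, 1) for _ in range(n)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Mine 100 candidate "variables", all of them also pure noise.
hits = 0
for _ in range(trials):
    variable = [random.gauss(0, 1) for _ in range(n)]
    if abs(corr(variable, outcome)) > 0.36:  # ~ p < 0.05 for n = 30
        hits += 1

# Roughly 5 of 100 noise variables will look "significant" by chance.
print(hits)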


Have a look at SAP’s new HANA applications: these are business apps for analysts, who can use them without involving any data scientist, in scenarios like segmentation and flexible analytical queries. I found them very interesting when presented at SAPPHIRE.
