Applications are driving investment in deep learning startups

Deep learning, the popular approach to machine learning currently driving new capabilities in fields such as computer vision, is beginning to attract some serious investment. Like most things machine learning, though, the big money appears to be in applications rather than attempts to sell the technology wholesale.

In the past year or so, much of the investment activity in the deep learning space has come via M&A. Google’s $400 million acquisition of DeepMind is by far the biggest deal, but there have also been Twitter’s acquisition of Madbits and Yahoo’s acquisition of LookFlow, among others. With the exception of DeepMind, which continues to produce very interesting research that should power novel capabilities in future Google products, most of the deals have revolved around real, working applications of the technology, often in the area of computer vision and image recognition.

The same holds true for the handful of publicly announced venture capital deals involving deep learning. AlchemyAPI, which began with a text-analysis API and is expanding into computer vision and question answering, has announced $2 million in funding to date, and a computer vision startup called Clarifai has raised an undisclosed amount of money from some respectable investors. There are also venture-backed companies such as predictive-keyboard maker SwiftKey, which launched years before deep learning entered the mainstream lexicon but claims to use the algorithms to help power features such as word prediction.

An artificial intelligence startup called Vicarious, which is not using deep learning but, like DeepMind, is trying to make major advances in the same general fields (including computer vision), has raised nearly $70 million.

The Butterfly Network vision.

One new, specific area of promise appears to be in health care. On Monday, a startup called Butterfly Network launched with $100 million in capital, promising a new form of handheld device that will let doctors see inside patients more easily and for far less money than previous imaging and ultrasound technologies. Last week, VentureBeat reported that Enlitic, the startup from former Kaggle chief scientist Jeremy Howard that plans to use deep learning models to analyze medical images, has raised a $2 million seed round.

More-traditional approaches to machine learning have become a hugely popular selling point in the past couple of years (even reaching “the new black” status in elevator pitches), but many of the biggest success stories have been companies applying machine learning algorithms to specific applications (e.g., CRM, sales automation or recommendation engines) rather than selling general-purpose software for building models. It’s entirely possible this will be the case with deep learning, too.

I wrote last week that the architectural details of platforms such as IBM Watson and various deep learning approaches aren’t as important to users as the fact that they work. In an application development world increasingly dominated by cloud services, it’s less important that developers are able to deploy and train their own models, or that any given company have computer vision and natural-language processing experts on staff, and more important that they can connect via API to a service, or download an app, that handles that for them.
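
To make that API-consumption model concrete, here is a minimal sketch of what calling a hosted computer-vision service might look like from a developer’s perspective. The endpoint URL, authentication scheme, parameter names and response fields below are hypothetical placeholders, not any particular vendor’s actual API.

```python
import requests

# Hypothetical image-tagging service: the URL, auth scheme, parameters,
# and response format are illustrative stand-ins, not a real vendor's API.
API_URL = "https://api.example-vision.com/v1/tag"
API_KEY = "your-api-key"


def tag_image(image_url):
    """Send an image URL to a hosted vision service and return its labels."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"image_url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Assume the service replies with JSON like:
    # {"tags": [{"label": "cat", "confidence": 0.97}, ...]}
    return response.json().get("tags", [])


if __name__ == "__main__":
    for tag in tag_image("https://example.com/photo.jpg"):
        print(tag["label"], tag["confidence"])
```

The point is less the specifics than the shape of the transaction: a few lines of glue code against a hosted model, with no training infrastructure or in-house expertise required.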
