The company released the API publicly in February, promising to power apps that listen to speech commands or questions and deliver meaningful content based on relevance and context. Expect Labs’ first product, an “anticipatory computing” app called MindMeld, uses the API to listen to voice conversations and surface relevant web content as users speak — sometimes predicting what they’ll say next.
The MindMeld API is one of several recent, and quite varied, attempts to put artificial intelligence in the hands of mobile developers. Others include IBM's Watson cognitive computing API, AlchemyAPI's natural-language processing and computer vision services, and Jetpac's DeepBelief neural network SDK for object recognition.
Expect Labs co-founder and CEO Tim Tuttle (pictured above) spoke about the future of pervasive search and intelligent devices at our Structure Data conference in March. That video is embedded below.