Natural language user interfaces – think Siri – promise to revolutionize everything from call centers to hospitals, but what about their data sources? How do the companies and organizations deploying them ensure real choice among those sources, rather than just assuming one or two will do the job? These questions came up today at GigaOM’s Structure:Data conference in New York and, according to Nuance Communications CTO Vlad Sejnoha, the answer may lie in the semantic web.
The semantic web, a term coined by Tim Berners-Lee, is an initiative being developed under the auspices of the World Wide Web Consortium (W3C). It aims, through the use of standardized tags and formats, to build a framework where content on the web has machine-understandable meaning, rather than simply being searchable by keyword.
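To make that idea concrete, here is a minimal sketch of what machine-understandable content looks like in JSON-LD, one of the W3C’s semantic web formats. The hospital record, its field values, and the use of the schema.org vocabulary are all hypothetical illustrations, not drawn from the article:

```python
import json

# A hypothetical JSON-LD snippet: the @context maps plain keys to a
# shared vocabulary (here schema.org), so any consumer can interpret
# the fields without site-specific parsing.
doc = """
{
  "@context": "https://schema.org",
  "@type": "Hospital",
  "name": "Example General Hospital",
  "telephone": "+1-555-0100"
}
"""

record = json.loads(doc)

# Because the meaning is standardized rather than keyword-based, a
# voice assistant could answer "what's the hospital's phone number?"
# without being hardwired to this particular site.
if record.get("@type") == "Hospital":
    print(record["name"], record["telephone"])
```

This is the sense in which tagged content gains meaning beyond keyword search: the `@type` and field names carry agreed-upon semantics that software can act on directly.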
Asked by GigaOM Research analyst George Gilbert how best to tap a variety of information repositories without having to “hardwire” the language interface into each one separately, Sejnoha said the answer lies in an open approach:
“The conversation stack… has to interact with content sources, other services, applications and devices. Today, integrating those interfaces into those resources is a one-off job. Some applications on the market make choices on behalf of the user, and this brings important questions about openness. I do think the promise of the semantic web remains very important there.
“I hope we get to the point where people who have important services or content on the web publish them in standard formats that we can connect to and [use] almost automatically… I am hopeful that this will gain greater support in the industry as these folks realize without something like this the interface might become opaque to new entrants.”
However, Gilbert demurred, suggesting that incumbents have “no incentive to make a common interface.”
Check out the rest of our Structure:Data 2013 coverage here; a video of the session is embedded below.