From the Apple Newton (s aapl) to Palm’s Graffiti (s palm) to modern-day Microsoft Windows Tablet PCs (s msft) and Apple’s forthcoming iPad, people have looked at digital inking as another form of input. The keyboard and mouse are far more common today, both for typing text and for navigation, but there may be life beyond handwriting for ink solutions. MIT is looking past the traditional uses for digital ink with its sketch-interpreting software.
[W]hile a drawing can be rich in information, it’s information that’s usually inaccessible to computers. If you draw a diagram on the screen of a tablet computer, like the new Apple iPad, the computer can of course store the drawing as an image. But it can’t tell what the image means.
The video demo shows a practical application — practical if you’re a chemist, that is. Sketching the molecular composition of a compound is easy enough on a display, but getting the computer to recognize and identify the sketch is the challenge. MIT’s software does just that, however, and once the sketch is identified, accessing related data is simple. How does the recognition work? The solution combines what was drawn with how it was drawn — it’s not difficult to determine whether a stroke was made from left to right or from top to bottom, for example. Those two data sets are further broken down into the individual elements of the sketch, and all of that information is compared against a database for recognition.
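To make the idea concrete, here is a minimal, hypothetical sketch of that two-signal approach: classify *how* each stroke was drawn (its direction), then match the sequence of stroke features against a small template database. The function names, thresholds and templates are illustrative assumptions, not MIT’s actual code.

```python
# Illustrative sketch of direction-based stroke recognition.
# Assumption: each stroke is a list of (x, y) points in screen
# coordinates (y grows downward). Templates are invented examples.

from math import atan2, degrees

def stroke_direction(points):
    """Classify a stroke's dominant direction from its endpoints."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    angle = degrees(atan2(y1 - y0, x1 - x0))
    if -45 <= angle < 45:
        return "left-to-right"
    if 45 <= angle < 135:
        return "top-to-bottom"  # y grows downward on screen
    if angle >= 135 or angle < -135:
        return "right-to-left"
    return "bottom-to-top"

# Toy "database": each symbol is described by its expected stroke order.
TEMPLATES = {
    "bond-line": ["left-to-right"],
    "plus-sign": ["left-to-right", "top-to-bottom"],
}

def recognize(strokes):
    """Match the observed stroke directions against the templates."""
    observed = [stroke_direction(s) for s in strokes]
    for name, expected in TEMPLATES.items():
        if observed == expected:
            return name
    return None

# A horizontal stroke followed by a vertical stroke matches "plus-sign".
sketch = [[(0, 5), (10, 5)], [(5, 0), (5, 10)]]
print(recognize(sketch))  # plus-sign
```

A real system would of course use far richer features (curvature, speed, relative position) and a statistical matcher rather than exact template comparison, but the pipeline shape — decompose, featurize, compare against known symbols — is the same.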
As impressive as this feat is, it also highlights the challenge that has plagued slate tablets for years and relegated them mainly to vertical niche markets. Sketches and drawings can be interpreted in a nearly infinite number of ways, while text is simply text. With standard keyboard input, or even handwritten text, the input generally means one thing. A freehand drawing — while liberating — could be an image, a standard figure or an interpretation of one, a molecular compound, a math equation, or an emoticon, to name a few. That output variance is where the challenge for ink lies: each “object” drawn could have many different contexts or meanings. Accounting for all of those permutations and combinations through recognition software is something the personal computing industry simply isn’t equipped for just yet. Instead, there are more and more custom solutions designed to recognize specific object types, which points those solutions at specific vertical markets.
Should you expect to see an iPad application from the folks at MIT? That’s highly doubtful. Might we see some similar, generic solutions on tablets in the future, though? As this type of research blossoms out of educational halls and is adapted by software developers, it should come to market. But it’s going to take a smart set of algorithms and the hardware to power them before you can freely draw anything on a tablet and have the device recognize it for what it is.