I’m all for new search technologies and finding information on the Web. I recently wrote about my initial impressions of ReSearch.ly, which looks to provide context within social searches of Twitter content. Qwiki, now in alpha, takes another approach, offering an “information experience” that “transforms static information into interactive stories.”
So what does that actually mean? The site presents a montage of images, video, animations and other visual resources culled from search results, and overlays it with real-time narration, using text-to-speech technology. The result is information as a watchable experience.
For me, the actual experience of watching the content is unsettling. I’ve never liked the way text-to-speech technology sounds; to my ears it’s almost, but not quite, human in a way that could mean trouble, like HAL in 2001: A Space Odyssey.
My first search was for “Tok,” the rural Alaskan community where I live and work. The text-to-speech voice sounded nearly human, but with mispronunciations and odd inflections that are inherent in the technology. Qwiki pronounced my community as “Tawk” rather than the correct pronunciation, “Toke.” Right away, this interfered with my ability to appreciate the visual montage.
The audio was accompanied by a flowing stream of imagery that seemed at first to correspond with the narration. But on closer examination, many images had nothing to do with what was being said, other than being from Tok. For example, when the narration mentioned Tok School, the images that appeared were of a gift shop and an RV park. Another image of a burned-out old gas station showed up, and I immediately wondered how I could remove such a photo, and replace it with something more representative of the community. Ditto for an image of a coffee shack that isn’t even in Tok, but is over 200 miles away, near the community of Glennallen.
My second search was for “karaoke.” I found the experience of this information less off-putting. However, the images in the presentation, while colorful and interesting, seemed less familiar. Many turned out to be of displays and equipment common in Japan.
My third search was for “social media.” The narration was a bit convoluted and the visual presentation consisted of a single static screenshot of Flickr.
My final test search was for…me. Here’s what I found (login required). And here’s what it looked like: apparently I have one of the dirtiest minds in business, or at least that’s what was displayed during the entire presentation!
Qwiki may give us a new phrase to replace “Googling ourselves.” We can now “get a Qwiki” and “give a Qwiki.”
Currently, Qwiki covers over two million reference terms, which is enough to produce solid results for all but the most obscure queries. You can also search for people and places.
For me, the visual and audio dissonance of Qwiki was initially disturbing, but I tried to look past that and appreciate that I was “experiencing information.” Presentations are short, about 30 seconds, which is just enough time to narrate a paragraph or two from a Wikipedia entry.
Conceptually, Qwiki is a fascinating step forward in the presentation and consumption of search results. Since the site is in its alpha phase, it’s available only by invitation, although you can request one at Qwiki’s website. You’ll probably start itching to fix what isn’t working, but Qwiki doesn’t yet have a Wikipedia-like system for collaborating on editing information. However, the company is very open to input on ways to improve the experience.
Give Qwiki a try, and let me know what you think of it. What implications do you think it will have for the future of search?