Disruption: It moves in mysterious ways


A friend and long-time reader emailed earlier this morning, offering his observation regarding Google Glass. His prognosis: it is a hands-free camera. Laughs aside, it is an easy deduction to make from the new video shared by the Google Glass team. Sure, the video focused on ballerinas, balloon rides and bubbles, but Google was trying to get maximum oohs-and-aahs from as wide a set of people as possible.

That said, I have been intrigued by Google Glass from the very beginning, mostly because, despite being nerdy and at a very early stage, it represents a bit of the old Google. As I previously wrote: “It represents the kind of things the company needs to do in order to leap over its rivals.”

So, if you focus on just the video, I guess a shrug of the shoulders is an appropriate response. However, when I watched that video, I saw three possibilities.

  • A new way to interact with information Google indexes: Google’s original premise was to make sense of all the messy data on the web. The mess has become bigger and finding information has become more difficult. We have to start looking at the information we need in the context of where we are, who we are and to what purpose we need that information. While it is easy to provide the “where” and the “who,” nothing adds “purpose” like what the eye is seeing. So, from that perspective, this is the right evolution of Google’s basic utility.
  • A decent working voice-based user interface: Siri is cute. Siri is helpful … sometimes. But Siri is still not the answer. However, the Google Glass UI seems to have found the answer to the age-old voice-based UI question. And with increased usage, the UI will get smarter and better. (Well, that is what I hope.)
  • And lastly, it gives us the ability to add more contextual information to the real world around us: With Google Glass, everything becomes searchable. I think this is the most underrated part of Google Glass. So far, we have restricted “information lookup” to computers.

That said, this is just a video. And as a cynic I am going to withhold final judgment on the glasses, and on what changes they might unleash, until I can get my hands on the actual devices. Still, the video released today definitely makes me even more keen on trying them on.

Google is working to get developers to sign up and develop apps for this new class of anywhere, anytime computers. And we don’t know just yet what the creative minds might do and what impact their efforts might have on how we live, create and consume.

iPad, the slate for disruption

Three years ago, when everyone saw a bigger iPhone, I first saw the iPad and my initial response was that it was a slate “to reinvent pretty much how we think of media, information and in fact the whole user experience.” I saw a blank slate that was ready, for lack of better words, for creation and disruption. It is just that no one knew how it would disrupt and whom it would disrupt. I was reminded of that today when I saw this news release from Jack Dorsey’s Square, a San Francisco-based payments company.

Business in a Box: all the hardware you need to run Square Register on your counter: Historically, business owners were forced to piece together multiple hardware components from various manufacturers, manage complicated contracts and pricing structures, and pay for expensive software licensing and service plans. Now, they can be up and running with Square Register in minutes.

Three years ago, it wasn’t clear if the iPad was going to clean VeriFone’s clock or give an underclass of merchants a chance to participate in the mobile and electronic economy. It certainly wasn’t clear that it would become the engine for the people-to-people economy I often talk about. The sharp decline in the fortunes of laptops is another disruption.

The fact is that when you combine software with connectivity and use data to create new experiences, you end up disrupting old industries and building new fortunes.

When I look at Google Glass today, I see a big similarity between it and the Tesla (the electric car, not the company). Both are a bit nerdy, both are a bit cool and both are showing us the way to the future.

Elon Musk would like us to believe that he is building the new Toyota — and I for one am glad that he is — but the real impact of his car is on the business of transportation. Tesla for me is the marriage of electronics with data, software and connectivity.

The Big IF

“The big if” for both Tesla and Google Glass is going to be how they think about the interaction of humans and the machine. If they keep using data without an emotional quotient, then they are going to get nowhere fast. If they don’t build systems that constantly learn, evolve and become smarter with usage — much like the human brain — they are not going to go anywhere. They should take a cue from IBM and its Watson effort: that’s where some of the answers lie for them.

And as for disruption, as the title says, it moves in mysterious ways.


Jerry Ballard

Wearables are no doubt the future.
The question is, will the price be $x paid up front for the hardware and services, or will it be complete surrender of your privacy in the service of advertisers?
I know which model I want to see happen.


This is too far out there. You could see the benefits of an iPhone or iPad immediately.

Brendan Des Brisay

Actually, people were very sceptical of the iPhone and especially the iPad: “Why would I want a giant iPod touch?” “There is no market for this and Apple is going to crash and burn.” Look at the tablet market today. HUDs have been extremely useful in the industrial world; this will put that idea into the hands of everyday individuals, with unimaginable possibilities.

Samuel S. Lee

Om, I agree: like 3D printing, Glass at the very least is going to be a gateway to a game-changer.

Nicholas Paredes

Having spent some time in the mobile mapping world, I have been considering presenting a project which fits right into the dashboard or the headboard. The big question is whether people want to wear these products, particularly if they don’t already wear glasses.

We will see, but the opportunity is enormous!


I must admit, I had the ‘glorified camera’ perspective at first, but when I showed the video to a friend of mine who is a first response medical worker, he was immediately excited by the possibilities.

He painted a scenario whereby he could arrive at an emergency scene, and Glass could give him information based on trauma conditions that he called out. It could also look up medical records of unconscious patients via facial recognition. Show him ingress/egress routes into an unfamiliar building. Provide a real-time hookup back to HQ for planning and control of the situation. Research possible treatments via real-time search. The list goes on.

He was truly excited by what this could mean – and now so am I…


Why are we pondering Watson, or the absence of Google in context research? To apply data, one needs to know what context is, how it’s created and how it works. Google Health?[2]

When we think of changes to the health-care system, byzantine legislation comes to mind. But according to a growing number of observers, the next big thing to hit medical care will be new ways of accumulating, processing, and applying data—revolutionizing medical care the same way Billy Beane and his minions turned baseball into “moneyball.”
The idea, they say, is no more fanciful than the notion of self-driving cars, experimental versions of which are already cruising California streets. “A world mostly without doctors (at least average ones) is not only reasonable, but also more likely than not,” wrote Vinod Khosla, a venture capitalist and co-founder of Sun Microsystems, in a 2012 TechCrunch article titled “Do We Need Doctors or Algorithms?” He even put a number on his prediction: someday, he said, computers and robots would replace four out of five physicians in the United States.[1]

1. The Robot Will See You Now

2. http://www.google.com/intl/en_us/health/about/


Two quotes in one piece: hands-free camera and Watson.
I still think the data model is wrong. Like Bill Gates’ WinFS regret, this will be Google’s: not changing the data model. What do I mean by that? Take, for example, the roller coaster ride: why not have some music (a playlist) assigned to it? Calendar and email all have to be broken up to become useful; not the protocols, but the data/UI. All data associated with any data, any space-time possibilities. Programming with voice control: programs are just sequences in space-time.

As an ex-European, my wife sends me to the grocery store. Now the list is on my phone, a shared Google doc, but it says 2 cups of something. The something comes in pints; it turns out I’m bad at judging volumes (just give me ml). Glass could know that and provide the data I need, based on previous conversions. Hey, and convert that doc into a list from which I can delete by voice or gesture, selecting and deleting with one hand su…

Watson uses self-directed batch learning, which is bad, but better than waiting for the guys in the white coats (Google) to provide another program. We did that in the ’70s; then the data model got changed by the spreadsheet. While batch mode works, kinda, for search, it doesn’t for context. Context is [self-]organized data, not a mix of sensor input data associated with concepts by a programmer.
Another thing they should take a look at: G2 (IBM), which is more advanced than Watson in terms of self-organization.
Or build it correctly from the ground up.
