Forget multi-touch, it’s time for the Instinctive Interface

We have seen the User Interface (UI) evolve from the days of the command prompt (C:>) to today's rich graphical environments. The UI matters more to the user experience, and to the user's ability to get things done easily, than the hardware it runs on. A number of technologies have appeared to this end over the past few years, but they have all fallen short of providing the ultimate user environment, at least the way I see it. Many of them have done a good job of making our computing lives easier but have not quite reached the pinnacle. The mouse was a huge improvement over the command prompt, but it can only do so much for us. Speech recognition shows great promise for the UI, but not the way it's implemented today. Touch interfaces have recently expanded our ability to interact with the items on the screen, and multi-touch has the potential to get people excited.

I'm afraid that all of this is not enough, not even multi-touch, even though it's the big buzz phrase these days. I think a lot about how I work with my computers, and I've realized that none of these things are going to give me the rich interaction that will make a real difference in the way I work. I think it's time to be thinking about what I term the Instinctive Interface (II), which I believe is totally realizable with current technology. So what is the II? One of the things I notice about the way I work is that, for the most part, I do the same things the same way. Imagine this scenario:

Most mornings I fire up my PC at 8am and check my email. My computer should be smart enough to learn from my actions, and that is what the Instinctive Interface provides. I resume my computer at 8am and tell my PC "good morning," which flags this as a normal day. It opens up my email client: Outlook if that's what I always use, or the web browser opened to GMail if I use that. It downloads new email from overnight if needed and presents it to me in the application. I can open an email by moving the cursor over it like I do now, or by telling the computer to "open Kevin" since that's who the email is from, or maybe by touching the email on the screen. All of these actions do the same thing, which makes my life much easier because all I want to do is read Kevin's email. The II has readied mousing, speech and touch control to handle whatever I throw at it. It has learned over time what I normally do, it knows I will continue to do so this morning, and it is focused on letting me do it.
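The idea that mouse, speech, and touch all trigger the same action could look something like the sketch below. Everything here is hypothetical illustration (the `InstinctiveDispatcher` class, the `"open:kevin"` action name), not a real API; the point is simply that every input modality routes to one shared handler.

```python
# A minimal sketch of multi-modal input mapping to a single action.
# All names here are hypothetical, invented for illustration.
from typing import Callable


class InstinctiveDispatcher:
    """Routes mouse, speech, and touch events to one action handler."""

    def __init__(self) -> None:
        self._actions: dict[str, Callable[[], str]] = {}

    def register(self, action: str, handler: Callable[[], str]) -> None:
        self._actions[action] = handler

    def on_event(self, modality: str, target: str) -> str:
        # Whether the event came from the mouse, the microphone, or the
        # touchscreen, the same handler fires for the same target.
        handler = self._actions.get(target)
        if handler is None:
            return f"no action learned for '{target}'"
        return handler()


dispatcher = InstinctiveDispatcher()
dispatcher.register("open:kevin", lambda: "opening email from Kevin")

# Three modalities, one result:
print(dispatcher.on_event("mouse", "open:kevin"))   # opening email from Kevin
print(dispatcher.on_event("speech", "open:kevin"))  # opening email from Kevin
print(dispatcher.on_event("touch", "open:kevin"))   # opening email from Kevin
```

The design choice worth noting is that the modality is deliberately ignored at dispatch time: the interface learns *what* you want, and any input channel becomes a synonym for it.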

After I finish my email, or even before I'm done if there are too many emails to get through, I want to go to Google Reader to check all the items from my RSS feeds overnight. I can open up Firefox, or just say "check the feeds" or the equivalent, and the II knows to fire up Firefox with the Google Reader page loaded. The key to the learning capabilities of the II is that just because I use Firefox doesn't mean you do. If it has learned from your actions that you use Opera or Internet Explorer, then that's what it will use for you. No overt training required: the II can learn volumes about your preferences and what you normally do just by paying attention when you do them. After just a short time the II can be working WITH you, not just for you. It will become a very intelligent personal assistant that works the way you do, when you do. It is always watching what you do and WHEN you do it, since most people's work days are very routine when it comes to schedule.
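Learning "what you normally do, and when" with no overt training could be as simple as counting observed actions per hour and suggesting the most frequent one. This is a toy sketch under my own assumptions (the `HabitLearner` class and the action strings are invented here), not a claim about how a real II would be built.

```python
# A toy habit learner: observe (hour, action) pairs, suggest the
# most frequent action for that hour. All names are hypothetical.
from collections import Counter, defaultdict


class HabitLearner:
    """Learns which action the user usually takes at a given hour,
    purely by watching -- no overt training step."""

    def __init__(self) -> None:
        self._by_hour: dict[int, Counter] = defaultdict(Counter)

    def observe(self, hour: int, action: str) -> None:
        self._by_hour[hour][action] += 1

    def suggest(self, hour: int):
        counts = self._by_hour.get(hour)
        if not counts:
            return None  # nothing learned for this hour yet
        action, _ = counts.most_common(1)[0]
        return action


learner = HabitLearner()
# A week of mornings: email at 8, feeds at 9.
for _ in range(5):
    learner.observe(8, "open email client")
    learner.observe(9, "open Google Reader in Firefox")

print(learner.suggest(8))  # open email client
print(learner.suggest(9))  # open Google Reader in Firefox
```

Because the suggestion comes from your own observed counts, the same code suggests Opera or Internet Explorer for someone whose history contains those instead: the mechanism is user-specific by construction.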

The Instinctive Interface can also learn what you normally do at a given location, using the GPS functionality that is integrated into a lot of devices. If you always turn off your wireless connectivity at Client A's office due to security restrictions, the II will do that for you. If you always log into the hotspot when you hit your local Starbucks, the II will do that based on your location. It will also know where your 3 o'clock appointment is and, based on how far you currently are from that destination, make sure the appointment reminder fires in plenty of time for you to drive there. You probably see where I'm going with this: your system will always be learning from your actions and will always be working with you as never before.
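The location piece could be sketched as a small set of learned place-to-action rules, checked against the current GPS fix. The class name, the actions, and the coordinates below are all made up for illustration; the distance check is an ordinary haversine calculation.

```python
# A sketch of location-triggered rules, assuming a GPS fix is available.
# All coordinates and rule names are invented for illustration.
import math


class LocationRules:
    """Maps learned places to actions, fired when GPS says you're near."""

    def __init__(self, radius_km: float = 0.2) -> None:
        self.radius_km = radius_km
        self._rules: list[tuple[float, float, str]] = []

    def learn(self, lat: float, lon: float, action: str) -> None:
        self._rules.append((lat, lon, action))

    def actions_near(self, lat: float, lon: float) -> list[str]:
        return [a for (rlat, rlon, a) in self._rules
                if self._distance_km(lat, lon, rlat, rlon) <= self.radius_km]

    @staticmethod
    def _distance_km(lat1, lon1, lat2, lon2) -> float:
        # Haversine great-circle distance between two lat/lon points.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))


rules = LocationRules()
rules.learn(47.6062, -122.3321, "disable Wi-Fi (Client A security policy)")
rules.learn(47.6097, -122.3331, "log into the Starbucks hotspot")

# Arriving at the first learned location triggers only that rule.
print(rules.actions_near(47.6062, -122.3321))
```

The same distance math would serve the appointment-reminder idea: compare your current fix to the 3 o'clock destination and schedule the reminder from the estimated travel time.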

Think about how effective your gadget would be if it could usually figure out what you needed to do next. Sure, you could use the various UI technologies to make your point, but most of the time I believe the II could accurately determine what you were going to do and help you do it with little or no effort. The beauty of the II is that it could be device independent. Maybe it's helping you on your laptop, but the same technology will reside on your cell phone and work there too. No matter what gadget you use, the II would not only help you get things done but also keep contributing to the learning portion. It would be smart enough to know that you use Opera Mini on your cell phone to check your RSS feeds and use that, while using Firefox on your laptop as I described above. The point is that your technology would be helping you all the time while constantly learning what you do and how you do it. The cloud could help here big time. I am convinced that this could be done today with current technology if the right people, people a whole lot smarter than me, would get to thinking about it. So come on, smart folks, let's get the Instinctive Interface going.
