
Archives for: October 2011

The future of Human Computer Interaction

I am normally the last to applaud Apple for “innovation,” as, to be honest, most things they do these days are just copied (think of the pull-down notifications in iOS 5), but I had to applaud them for their fine implementation of Siri. Though it’s important to note that I’m only applauding the implementation, not the innovation: Apple didn’t make Siri, they purchased it.

Now, disregarding any prejudice I have against Apple, and whether or not they think they invented Siri: it is actually really amazing. Not so much the voice recognition, which, whilst much better than it was just a couple of years ago, isn’t the most amazing part of Siri. The most amazing part is its ability to interpret natural language and work out a task.

This is the future of computing. Not talking to your phone or computer, but your computer knowing what you want to do based on what you type or say (though hopefully type; imagine a whole office of people talking to their computers), and being able to find the information you are looking for, or complete a task for you. I can’t imagine it becoming everything we do on computers, but I can imagine it becoming a huge help in doing what we want to do. For example, it isn’t going to write an essay for you, but it can assist you in writing one. How about working out your end-of-financial-year report for work? It can help pull in, sort through and present all the information you need, though you’ll still have to do a little bit of work (nowhere near as much, though).

What we are moving into is what’s called the Semantic Web. Some people have called this “Web 3.0.” This is the way I understand the Semantic Web: the web is data, not documents, and this data can be contributed to by both humans and machines. You don’t access web pages, you access a machine that processes the data for you. For example, if you want to find someone’s phone number, currently you would go to the White Pages website, search for the person you want and it would return that information… in a web page. The web page is simply a way of accessing that data. Wouldn’t it be much better if the data was available in an easy-to-use format for other machines (such as Siri) to read? You would ask your machine “Find the phone number for [person] from [suburb],” and it would be able to search through the results and present the data. No web page needed. How about the weather? Instead of going to a website that displays the weather, just ask your machine and have it pull up and present all the data.
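To make the idea concrete, here is a minimal sketch of what “data, not documents” looks like in practice. The feed format and field names below are entirely made up for illustration; the point is that an assistant can answer a question directly from structured data, with no web page in between.

```python
import json

# A hypothetical machine-readable weather feed: the kind of structured
# data a Siri-like assistant could consume directly, instead of a
# human-oriented web page. The field names are illustrative only.
feed = """
{
  "location": "Sydney",
  "forecast": [
    {"day": "Monday", "high_c": 22, "conditions": "Sunny"},
    {"day": "Tuesday", "high_c": 19, "conditions": "Showers"}
  ]
}
"""

def answer_weather_query(raw_feed, day):
    """Answer "what's the weather on <day>?" straight from the data."""
    data = json.loads(raw_feed)
    for entry in data["forecast"]:
        if entry["day"].lower() == day.lower():
            return "{}: {}, high of {} C in {}".format(
                entry["day"], entry["conditions"],
                entry["high_c"], data["location"])
    return "No forecast available for {}".format(day)

print(answer_weather_query(feed, "Tuesday"))
```

The same feed could just as easily drive a phone-number lookup or a sports-fixture query; once the data is published in a machine-readable format, the “web page” step disappears entirely.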

Siri is probably the first consumer machine that is as advanced as it is, but it is by no means the last. These machines need some way to access data, and until that data is presented in an easily machine-readable format (which probably isn’t going to happen for a little while), we need companies that can search through and sort that information. This is probably why a combination of Google and IBM are going to be the two biggest driving forces behind the Semantic Web: Google for obvious reasons, as they already do a good job of aggregating lots of information (for example, ask when a game is being released, or for the weather), and IBM has the amazing supercomputer called “Watson.” We have also had Wolfram Alpha for a little while, which is also pretty cool (and something Google should put on their “to buy” list). I think the next step in the push towards a Semantic Web would be the brains of Google, Watson and Wolfram, with the interface of Siri.

Google Developer Day 2011

Looks like heaps of fun! I’ve registered; hopefully I get in (not all registrations do).

Google Developer Day Sydney Website

Edit (13/10/2011): Got in. Found out yesterday, less than 24 hours after submitting my registration.