Google seems to be in great form: first it launched underwater Street View scenery, then a couple of days later Sergey Brin promised that we'll be able to buy a driverless car within five years. Now one of Google's research teams has reported brilliant results in pursuing the Holy Grail of every IT engineer on the planet: artificial intelligence.
The core idea of the research, conducted this summer, was not that special at first sight. The researchers took 1,000 computers and connected them into a single network with 16,000 CPU cores, then fed this computing monster about 10 million snapshots from YouTube videos. The outcome was striking: the network learned... to learn!
More specifically, this humongous neural network learned to recognize human and cat faces. The trick is that it did so merely by viewing the pictures, without any active intervention from its operators. In other words, it did what any human baby does: it figured out by itself that certain combinations of colored spots can be classified as what we call a 'face'. The network went even further, producing an 'ideal stimulus' for a human and a cat face, i.e. essentially a visualization of what an average face looks like to it.
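The underlying idea, learning useful features from unlabeled data by trying to reconstruct the input, can be illustrated with a toy autoencoder. To be clear, this is only a minimal sketch of the general principle, not Google's actual system (which used a far deeper sparse architecture across those 16,000 cores); all names, sizes, and parameters here are illustrative assumptions.

```python
# Toy sketch of unsupervised feature learning with a one-hidden-layer
# autoencoder in plain NumPy. Illustrative only; not Google's system.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for image patches: 200 samples of 64 "pixels".
X = rng.random((200, 64))

n_hidden = 16                              # number of learned features
W1 = rng.normal(0, 0.1, (64, n_hidden))    # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))    # decoder weights
lr = 0.5                                   # learning rate

def step(X, W1, W2):
    H = sigmoid(X @ W1)        # encode: hidden feature activations
    Xhat = H @ W2              # decode: reconstruct the input
    err = Xhat - X
    loss = float(np.mean(err ** 2))
    # Gradient of the mean-squared reconstruction error.
    dW2 = H.T @ err / len(X)
    dH = (err @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    return loss, W1 - lr * dW1, W2 - lr * dW2

loss_first, W1, W2 = step(X, W1, W2)
for _ in range(200):
    loss_last, W1, W2 = step(X, W1, W2)

# No labels were ever provided, yet reconstruction keeps improving:
# the hidden units are forced to discover structure in the data.
print(loss_first, "->", loss_last)
```

The "ideal stimulus" trick mentioned above corresponds, in this sketch, to asking which input most strongly activates a given hidden unit; at the scale of the real experiment, that optimization produced the now-famous ghostly cat-face image.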
The potential that these new abilities hold for computers is enormous. They are already being put to work in one of Google's headline products, Google Maps, to recognize house numbers, reportedly with even better accuracy than human annotators in some areas. Bringing the computing capabilities of such neural networks to individual devices via the cloud could also drastically improve the performance of the latter. For example, imagine your driverless car is powered not only by its onboard OS but also by a vast self-teaching network. It remembers your daily routes without you telling it about them; it remembers where most jams occur and checks traffic monitoring services to find a detour if there is any congestion...
Using such a network in Google Glass would be no bad thing either. You could simply fire it up and it would learn more and more about your world and the objects you interact with, providing you with increasingly accurate information about them.
Unfortunately, these hi-tech dreams are still some way from coming true, as the computing power currently available to such neural networks is far too limited for these highly intelligent services to work smoothly. The most realistic near-term scenario is using Google's smart networks to improve the overall quality and reliability of speech recognition, which is the focus of the recently launched Google Now feature in Android Jelly Bean.
So, it's a bit too early to brace for a war with the machines. Still, you can hope that in a couple of years you'll finally have something to chat with your smartphone about.