By Matthew Higgs, University College London
All eyes turned to London this week as Google announced its latest acquisition in the form of DeepMind, a company that specialises in artificial intelligence technologies. The £400m price tag paid by Google and the reported battle with Facebook to win the company over indicate that this is a firm well worth backing.
Although solid information is thin on the ground, you can get an idea of what the purchase might be leading to if you know where to look.
Clue 1: what does Google already know?
Google has always been active in artificial intelligence and relies on it for many of its projects. Just consider the "driver" behind its driverless cars, the speech recognition system in Google Glass, or the way its search engine predicts what we might be searching for after just a couple of keystrokes. Even the PageRank algorithm that started it all falls under the banner of AI.
Acquiring a company such as DeepMind therefore seems like a natural step. The big question is whether Google is motivated by a desire to help develop technologies we already know about or whether it is moving into the development of new technologies.
Given its track record, I'm betting on the latter. Google has the money and the drive to tackle the biggest questions in science, and developing computers that think like humans has, for a long time, been one of the biggest of them all.
Clue 2: what's in the research?
The headlines this week have described DeepMind as a "secretive start-up", but clues about what it gets up to at its London base can be gleaned from some of the research publications produced by the company's co-founder, Demis Hassabis.
Hassabis' three most recent publications all focus on the brain activity of human participants as they perform particular tasks. He has looked into how we take advantage of our habitat, how we identify and predict the behaviour of other people, and how we remember the past and imagine the future.
As humans, we collect information through sensory input and process it many times over using abstraction. We extract features and categorise objects to focus our attention on the information that is relevant to us. When we enter a room we quickly build up a mental image of the room, interpret the objects in the room, and use this information to assess the situation in front of us.
The people at Google have, until now, generally focused on the lower-level stages of this information processing. They have developed systems that look for features and concepts in online photos and street scenes to serve users relevant content, systems that translate one language into another to help us communicate, and speech recognition systems that make voice control on your phone or device a reality.
The processes Hassabis investigates require these types of information processing as prerequisites. Only once you have identified the relevant features in a scene and categorised the objects in your habitat can you begin to take advantage of that habitat. Only once you have identified the features of someone's face and recognised them as someone you know can you start to predict their behaviour. And only once you have built up vivid images of the past can you extrapolate a future.
Clue 3: what else is on the shopping list?
Other recent moves by Google provide further pieces of the puzzle. It has appointed futurist Ray Kurzweil, who believes in search engines with human intelligence and in uploading our minds onto computers, as its director of engineering. And its purchase of Boston Dynamics, a company developing groundbreaking robotics technology, gives a hint of its ambition.
Google is also getting into smart homes in the hope of weaving its technologies more deeply into our everyday lives. DeepMind could provide the know-how to give such systems a level of intelligence never before seen in computers.
Combining the machinery Google already uses to process sensory input with DeepMind's ideas about how the brain uses that input to complete high-level tasks is an exciting prospect. It could produce the closest thing yet to a computer with human qualities.
Building computers that think like humans has been the goal of AI since the time of Alan Turing. Progress has been slow, and science fiction has often created false hope. But the past two decades have seen remarkable leaps in information processing and in our understanding of the brain. Now that one of the most powerful companies in the world has decided where it wants to go next, we can expect big things. Just as physics had its heyday in the 20th century, this century is shaping up to be the golden age of AI.
Matthew Higgs receives funding from the EPSRC.
This article was originally published at The Conversation. Read the original article.