
Google Developing Artificial Intelligence (AI) – Brave New World

Google’s Larry Page addressed the recent conference of the American Association for the Advancement of Science, and in his presentation he revealed what many of us had suspected or already knew from friends who work within the company: researchers at Google are working on developing Artificial Intelligence (aka “AI”).

During the address, Page stated he thought that human brain algorithms actually weren’t all that complicated and could likely be approximated with sufficient computational power.  He said, “We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale. It’s not as far off as people think.” Well, one of the top scientists in the world disagrees, if he’s talking about approximating a human-like consciousness.

I’ve written previously about how stuff predicted in cyberpunk fiction is becoming reality, and how Google might be planning to develop intelligent ‘search pets’ which would directly integrate with the human brain in some fashion. What might Google use this for and how soon might they show it to the world? Read on…

Writers and futurists have been exploring how artificial intelligence might affect our lives, but they’ve really only scratched the surface. I’d say that most of the fiction based on it has focused on its scariness, or the “Frankenstein Complex” as some call it. HAL, the psychotically murderous man-made intellect in the film 2001: A Space Odyssey, is perhaps the most famous exemplar of this archetype. One of the best authors to tackle the ethical questions of AI was Isaac Asimov. In his short-story collection I, Robot, Asimov outlined some basic behavioral programming that he thought might be necessary for thinking machines, and used it as a primary component of the “positronic brain” that was the basis for his AIs. These are known as the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Many others have been inspired by Asimov’s Three Laws. The Will Smith movie I, Robot was apparently inspired by Asimov’s works, though the only Asimov story it comes even distantly close to resembling is “The Evitable Conflict” from his I, Robot collection. Bishop, the humanoid robot in the Alien films, was apparently programmed with these three laws.

Will Google’s AI be programmed with the Three Laws of Robotics as a core, involuntary restriction for its behavior?

That question is likely putting the cart before the horse. AI has remained elusive to computer scientists for numerous reasons, including complexity, the need for fast computing, and lots of memory. It sounds as if Google likely has the latter two, but the complexity piece of the puzzle is still going to be a major hurdle. People have explored a lot of approaches to AI programming: building up basic component parts and enabling self-programming, learning, and heuristics; linguistic approaches based on the theory that intelligence rests in part on the ability to understand speech and language; and game theory, reward systems, and neural nets. There are numerous theories and approaches, and any solution that would come anywhere close to creating a human-like consciousness would likely be some sort of soup composed of a few of these ingredients.
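To make a couple of those ingredients concrete, here is a minimal Python sketch, invented for this article, of the neural-net-plus-learning approach: a single artificial neuron – a perceptron – that learns the logical AND function by nudging its weights after each mistake. It’s a textbook illustration of the general technique, not anything Google has described.

    # A single perceptron learning the logical AND function.
    # Purely illustrative; the learning rate and epoch count are arbitrary.
    training_data = [
        ((0, 0), 0),
        ((0, 1), 0),
        ((1, 0), 0),
        ((1, 1), 1),
    ]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(inputs):
        """Fire (return 1) if the weighted sum of the inputs crosses zero."""
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if activation > 0 else 0

    # Learning: after each wrong answer, nudge the weights toward the target.
    for epoch in range(20):
        for inputs, target in training_data:
            error = target - predict(inputs)
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error

    for inputs, target in training_data:
        print(inputs, "->", predict(inputs), "expected", target)

After enough passes the weights settle so that only (1, 1) fires – a trivial learned behavior, but it shows the flavor of the learning/heuristics ingredient.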

The biggest problem of all is in trying to pull all the various components together to create a gestalt that forms human or human-like consciousness. Some have suggested that parallel processing on a massive scale, perhaps even making use of quantum computers, could enable the gestalt effect; it sounds like Page’s people are exploring this route. Yet the human mind uses a lot of fuzzy logic for many different applications – a tricky thing to approximate through machinery. For instance, the intuitive leap at the core of much creative innovation defies straightforward programming. The gestalt of the human mind transcends the mere sum of its parts. The best they’ll likely be able to do is either to generate AI which can mimic human-like responses and behavior, perhaps well enough to fool a Turing Test, or else to create artificial intelligence geared towards efficient, complex decision-making and control of systems.
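(The fuzzy logic mentioned above is, by itself, easy to demonstrate – it’s the mind’s pervasive use of it that is hard. A toy Python sketch, with “hot” and “humid” ranges invented for the example: truth becomes a degree between 0 and 1 rather than a binary, and AND becomes the minimum of the degrees.

    # Toy fuzzy logic: membership is a degree in [0, 1], not true/false.
    # The temperature and humidity ramps below are made up for illustration.
    def membership_hot(temp_c):
        """Degree to which a temperature counts as 'hot' (ramp from 20 to 35 C)."""
        return min(max((temp_c - 20) / 15.0, 0.0), 1.0)

    def membership_humid(humidity_pct):
        """Degree to which the air counts as 'humid' (ramp from 40 to 80 percent)."""
        return min(max((humidity_pct - 40) / 40.0, 0.0), 1.0)

    def uncomfortable(temp_c, humidity_pct):
        # Fuzzy AND is conventionally the minimum of the memberships.
        return min(membership_hot(temp_c), membership_humid(humidity_pct))

    print(uncomfortable(30, 70))  # ~0.67: fairly uncomfortable
    print(uncomfortable(22, 45))  # ~0.13: barely

Simple enough for a thermostat; the hard part is the mind’s knack for applying that kind of graded reasoning everywhere at once.)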

What I’m describing is the philosophical debate between Strong AI devotees and proponents of Weak AI. Believers in Strong AI suppose that it might be possible to create a machine capable of consciousness – self-awareness. Weak AI denies this as a reasonable possibility and posits that software can only assist with solving specific problems. The most compelling of the philosophers on the side of Weak AI is the esteemed physicist Sir Roger Penrose, via his book The Emperor’s New Mind. The title pretty well says it all – it implies that proponents of Strong AI are trying to sell “the emperor’s new clothes”.

The Weak AI pursuit – the creation of AI for the purpose of efficient decision-making or control of complex systems – is the most likely immediate application for Google’s AI research work. Heuristics have been used for a while now to identify spammers and to validate performance-based advertisement clicks. I would guess that the application of AI which would most immediately serve Google’s interests would be the identification of click fraud. Aspects of some paths of AI research – namely, heuristics and pattern recognition – apply very readily to the growing need for better click-fraud policing.
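To give a flavor of what rule-based click-fraud heuristics can look like at their simplest, here’s a purely hypothetical Python sketch – the click-log format and the thresholds are invented for this example and have nothing to do with Google’s actual signals:

    from collections import defaultdict

    # Hypothetical click log: (ip_address, ad_id, seconds on landing page).
    click_log = [
        ("10.0.0.1", "ad-42", 35),
        ("10.0.0.2", "ad-42", 1),
        ("10.0.0.2", "ad-42", 0),
        ("10.0.0.2", "ad-42", 1),
        ("10.0.0.2", "ad-42", 0),
        ("10.0.0.3", "ad-17", 60),
    ]

    MAX_CLICKS_PER_IP = 3      # invented threshold: repeat clicks look suspicious
    MIN_AVG_DWELL_SECONDS = 2  # invented threshold: instant bounces look suspicious

    clicks = defaultdict(list)
    for ip, ad_id, dwell in click_log:
        clicks[(ip, ad_id)].append(dwell)

    for (ip, ad_id), dwells in clicks.items():
        avg_dwell = sum(dwells) / len(dwells)
        if len(dwells) > MAX_CLICKS_PER_IP or avg_dwell < MIN_AVG_DWELL_SECONDS:
            print(f"Flag for review: {ip} on {ad_id} "
                  f"({len(dwells)} clicks, avg dwell {avg_dwell:.1f}s)")

Real policing would involve far subtler pattern recognition across far noisier signals – which is exactly where the AI research would pay off.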

Another approach could be the creation of an intelligent evaluation machine which could imitate their ranks of human quality-assurance testers who check the quality of links appearing in their SERPs. If a straightforward algorithm could be adequately trained to provide this function, they wouldn’t need the dozens of humans they currently use for this purpose. But evaluating links against quality criteria really needs a human-like intelligence to some degree. Questions of aesthetics, and the ability to view a two-dimensional page layout with visual-recognition comprehension, would require a machine to have a much more robust system for this purpose than the far simpler algorithms used to compute PageRank.
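For contrast, here’s a sketch of the classic PageRank power iteration from the original Brin/Page formulation, run on a made-up four-page web. The point is that this fits comfortably in a page of Python, while judging aesthetics and layout quality does not:

    # Classic PageRank power iteration on a toy link graph.
    # Every page here has outlinks, so dangling-node handling is omitted.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    damping = 0.85
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):  # iterate until the ranks settle
        new_rank = {}
        for page in pages:
            # Rank flowing in from every page that links here.
            incoming = sum(rank[src] / len(outs)
                           for src, outs in links.items() if page in outs)
            new_rank[page] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")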

It’s obvious, though, that Google really intends more than just an immediately applicable commercial use for Weak AI. They are likely working on AI as a pure research project, with the intention of later exploring how such a thing might best be used. And they’re likely approaching it with a bias towards Strong AI.

In his book, The Search, John Battelle reports that one of Google’s core computer scientists (and their very first employee), Craig Silverstein, has said, “I would like to see the search engines become like computers in Star Trek. You talk to them and they understand what you’re asking.” Silverstein has also said that human-like intelligence is needed in order to know what information people are seeking – a sort of information-retrieval “Holy Grail” that each of the major search engines has been trying to develop in some way.

When asked about the future of Search, Silverstein has also said, “The future of search will involve genetically engineered search pets that will understand human emotions — not just facts, but how people work,” and also, “We’ll still search for facts,” he says, “but in all likelihood the facts will be contained in a brain implant.”

Can this be attained through Weak AI? Perhaps. Yet understanding human motivations well enough to evolve search results toward the closest possible match with each individual’s desires and needs is a fairly high-order level of comprehension. To adequately achieve this might require something bordering upon the gestalt of human consciousness. If they’re going for Strong AI, I think Page’s statements were overly optimistic. Sir Roger Penrose says so.

(If you’re interested in reading speculations on futurism, you might also read my article on the prophetic nature of Philip K. Dick’s science fiction.)
