There are two kinds of AI, and the difference is important


Artificial intelligence can solve many problems, but it won’t replicate the human brain anytime soon.

Today’s artificial intelligence is certainly formidable. It can beat world champions at intricate games like chess and Go, or dominate at Jeopardy!. It can interpret heaps of data for us, guide driverless cars, respond to spoken commands, and track down the answers to your internet search queries.

And as artificial intelligence becomes more sophisticated, there will be fewer and fewer jobs that robots can’t take care of—or so Elon Musk recently speculated. He suggested that we might have to give our own brains a boost to stay competitive in an AI-saturated job market.

But if AI does steal your job, it won’t be because scientists have built a brain better than yours. At least, not across the board. Most of the advances in artificial intelligence have been focused on solving particular kinds of problems. This narrow artificial intelligence is great at specific tasks like recommending songs on Pandora or analyzing how safe your driving habits are. However, the kind of general artificial intelligence that would simulate a person is a long way off.

“At the very beginning of AI there was a lot of discussion about more general approaches to AI, with aspirations to create systems…that would work on many different problems,” says John Laird, a computer scientist at the University of Michigan. “Over the last 50 years the evolution has been towards specialization.”

Still, researchers are honing AI’s skills in complex tasks like understanding language and adapting to changing conditions. “The really exciting thing is that computer algorithms are getting smarter in more general ways,” says David Hanson, founder and CEO of Hanson Robotics in Hong Kong, who builds incredibly lifelike robots.

And there have always been people interested in how these aspects of AI might fit together. They want to know: “How do you create systems that have the capabilities that we normally associate with humans?” Laird says.

So why don’t we have general AI yet?

There isn’t a single, agreed-upon definition for general artificial intelligence. “Philosophers will argue whether General AI needs to have a real consciousness or whether a simulation of it suffices,” Jonathan Matus, founder and CEO of Zendrive, which is based in San Francisco and analyzes driving data collected from smartphone sensors, said in an email.

But, in essence, “General intelligence is what people do,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, Washington. “We don’t have a computer that can function with the capabilities of a six year old, or even a three year old, and so we’re very far from general intelligence.”

Such an AI would be able to accumulate knowledge and use it to solve different kinds of problems. “I think the most powerful concept of general intelligence is that it’s adaptive,” Hanson says. “If you learn, for example, how to tie your shoes, you could apply it to other sorts of knots in other applications. If you have an intelligence that knows how to have a conversation with you, it can also know what it means to go to the store and buy a carton of milk.”

General AI would need to have background knowledge about the world as well as common sense, Laird says. “Pose it a new problem, it’s able to sort of work its way through it, and it also has a memory of what it’s been exposed to.”

Scientists have designed AI that can answer an array of questions with projects like IBM’s Watson, which defeated two former Jeopardy! champions in 2011. “It had to have a lot of general capabilities in order to do that,” Laird says.

Today, there are many different Watsons, each tweaked to perform services such as diagnosing medical problems, helping businesspeople run meetings, and making trailers for movies about super-smart AI. Still, “It’s not fully adaptive in the humanlike way, so it really doesn’t match human capabilities,” Hanson says.

We’re still figuring out the recipe for general intelligence. “One of the problems we have is actually defining what all these capabilities are and then asking, how can you integrate them together seamlessly to produce coherent behavior?” Laird says.

And for now, AI is facing something of a paradox. “Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines,” Etzioni says. “Yet at the same time, the things that are easiest for a person—like making sense of what they see in front of them, speaking in their mother tongue—the machines really struggle with.”

The strategies that help prepare an AI system to play chess or Go are less helpful in the real world, which does not operate within the strict rules of a game. “You’ve got Deep Blue that can play chess really well, you’ve got AlphaGo that can play Go, but you can’t walk up to either of them and say, OK, we’re going to play tic-tac-toe,” Laird says. “There are these kinds of learning that you’re not able to do just with narrow AI.”

What about things like Siri and Alexa?

A huge challenge is designing AI that can figure out what we mean when we speak. “Understanding of natural language is what sometimes is called AI complete, meaning if you can really do that, you can probably solve artificial intelligence,” Etzioni says.

We’re making progress with virtual assistants such as Siri and Alexa. “There’s a long way to go on those systems, but they’re starting to have to deal with more of that generality,” Laird says. Still, he says, “once you ask a question, and then you ask it another question, and another question, it’s not like you’re developing a shared understanding of what you’re talking about.”

In other words, they can’t hold up their end of a conversation. “They don’t really understand what you say, the meaning of it,” Etzioni says. “There’s no dialogue, there’s really no background knowledge and as a result…the system’s misunderstanding of what we say is often downright comical.”
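
To see why dialogue state matters, consider this toy contrast, sketched in Python. It is entirely invented for illustration—this is not how Siri or Alexa actually work—but it shows the difference between answering each question in isolation and carrying a shared context between turns:

```python
# Toy contrast, invented for illustration: a stateless assistant treats every
# question independently, while a stateful one resolves a pronoun ("it")
# against the conversation so far -- the shared understanding Laird describes.
from typing import Optional

FACTS = {"capital of france": "Paris", "population of paris": "about 2.1 million"}

def stateless_answer(question: str) -> str:
    """Every turn starts from scratch: no memory of earlier questions."""
    return FACTS.get(question.lower().rstrip("?"), "I don't understand.")

class StatefulAssistant:
    def __init__(self) -> None:
        self.last_entity: Optional[str] = None  # minimal dialogue state

    def answer(self, question: str) -> str:
        q = question.lower().rstrip("?")
        if "it" in q.split() and self.last_entity:
            q = q.replace("it", self.last_entity)  # crude pronoun resolution
        answer = FACTS.get(q, "I don't understand.")
        if answer != "I don't understand.":
            self.last_entity = answer.lower()      # remember what we discussed
        return answer

bot = StatefulAssistant()
print(stateless_answer("population of it"))  # -> I don't understand.
print(bot.answer("capital of france"))       # -> Paris
print(bot.answer("population of it"))        # -> about 2.1 million
```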

Extracting the full meaning of informal sentences is tremendously difficult for AI. Every word matters, as does word order and the context in which the sentence is spoken. “There are a lot of challenges in how to go from language to an internal representation of the problem that the system can then use to solve a problem,” Laird says.

To help AI systems handle natural language better, Etzioni and his colleagues are putting them through their paces with standardized tests like the SAT. “I really think of it as an IQ test for the machine,” Etzioni says. “And guess what? The machine doesn’t do very well.”

In his view, exam questions are a more revealing measure of machine intelligence than the Turing Test, which chatbots often “pass” by resorting to trickery.

“To engage in a sophisticated dialogue, to do complex question and answering, it’s not enough to just work with the rudiments of language,” Etzioni says. “It ties into your background knowledge, it ties into your ability to draw conclusions.”

Let’s say you’re taking a test and find yourself faced with the question: what happens if you move a plant into a dark room? You’ll need an understanding of language to decipher the question, scientific knowledge to inform you what photosynthesis is, and a bit of common sense—the ability to realize that if light is necessary for photosynthesis, a plant won’t thrive when placed in a shady area.

“It’s not enough to know what photosynthesis is very formally, you have to be able to apply that knowledge to the real world,” Etzioni says.
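
Here is a deliberately simple sketch of that chaining, in Python. Every fact, rule, and function name is invented for illustration—real question-answering systems do not work this way—but it shows how answering the plant question means linking a science fact to a common-sense inference:

```python
# Toy sketch (not any real system): answering "what happens if you move a
# plant into a dark room?" requires chaining a science fact with common sense.
# All facts and names here are invented for illustration.

facts = {
    ("plant", "depends_on"): "photosynthesis",
    ("photosynthesis", "requires"): "light",
    ("dark room", "provides"): "no light",
}

def predict_outcome(entity: str, environment: str) -> str:
    """Chain facts: plant -> photosynthesis -> needs light -> room lacks light."""
    process = facts[(entity, "depends_on")]
    needed = facts[(process, "requires")]
    available = facts[(environment, "provides")]
    if available == f"no {needed}":
        return f"The {entity} will not thrive: {process} requires {needed}."
    return f"The {entity} should be fine."

print(predict_outcome("plant", "dark room"))
# -> The plant will not thrive: photosynthesis requires light.
```

The chain is trivial to hard-code for one question; the hard part, as Etzioni notes, is an intelligence that builds and applies such chains for any question it meets.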

The Allen Institute for Artificial Intelligence is training AI to solve standardized exam questions. (Image: The Allen Institute for Artificial Intelligence)

Will general AI think like us?

Researchers have gained a lot of ground with AI by drawing on what we know about how the human brain works. “Learning a lot about how humans work from psychology and neuroscience is a good way to help direct the research,” Laird says.

One promising approach to AI, called deep learning, is inspired by the architecture of neurons in the human brain. Its deep neural networks ingest huge amounts of data and sniff out patterns, allowing them to make predictions or distinctions, like whether someone uttered a “P” or a “B,” or whether a picture features a cat or a dog.

“These are all things that the machines are exceptionally good at, and [they] probably have developed superhuman pattern recognition abilities,” Etzioni says. “But that’s only a small part of what is general intelligence.”
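
For readers who want the idea in miniature, the sketch below (plain Python and NumPy, with synthetic data standing in for real images or audio) trains a tiny neural network to separate two clusters of points—superhuman pattern-finding on one narrow task, and nothing more:

```python
# A tiny neural network, trained by gradient descent to separate two synthetic
# clusters of 2-D points (stand-ins for "cat" vs. "dog" features). Everything
# here is invented for illustration; real deep learning systems are vastly
# larger and train on real data.
import numpy as np

rng = np.random.default_rng(0)

# 100 points per class, drawn from two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)), rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# One hidden layer of 8 units: minimal "depth," but the principle -- layers of
# weights tuned to expose a pattern -- is the same.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                    # hidden-layer activations
    p = sigmoid(h @ W2 + b2).ravel()            # predicted probability of class 1
    grad_out = (p - y).reshape(-1, 1) / len(y)  # cross-entropy gradient at output
    grad_h = grad_out @ W2.T * (1.0 - h**2)     # backpropagate through tanh
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")
# The network nails this one pattern -- and can do nothing else.
```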

Ultimately, how humans think is grounded in the feelings within our bodies, and influenced by things like our hormones and physical sensations. “It’s going to be a long time before we can create an effective simulation of all of that,” Hanson says.

We might one day build AI that is inspired by how humans think, but does not work the same way. After all, we didn’t need to make airplanes flap their wings. “Instead we built airplanes that fly, but they do that using very different technology,” Etzioni says.

Still, we might want to keep some especially humanoid features—like emotion. “People run the world, so having AI that understands and gets along with people can be very, very useful,” says Hanson, who is trying to design empathetic robots that care about people. He considers emotion to be an integral part of what goes into general intelligence.

Plus, the more humanoid a general AI is designed to be, the easier it will be to tell how well it works. “If we create an alien intelligence that’s really unlike humans, we don’t know exactly what hallmarks for general intelligence to look for,” Hanson says. “There’s a bigger concern for me which is that, if it’s alien are we going to trust it? Is it going to trust us? Are we going to have a good relationship with it?”

When will it get here?

So, how will we use general AI? We already have targeted AI to solve specific problems. But general AI could help us solve them better and faster, and tackle problems that are complex and call for many types of skills. “The systems that we have today are far less sophisticated than we could imagine,” Etzioni says. “If we truly had general AI we would be saving lives left and right.”

The Allen Institute has designed a search engine for scientists called Semantic Scholar. “The kind of search we do, even with the targeted AI we put in, is nowhere near what scientists need,” Etzioni says. “Imagine a scientist helper…that helps our scientists solve humanity’s thorniest problems, whether it’s climate change or cancer or superbugs.”

Or it could give strategic advice to governments, Matus says. “It could also be used to plan and execute super complex projects, like a mission to Mars, a political campaign, or a hostile takeover of a public company.”

People could also benefit from general AI in their everyday lives. It could assist elderly or disabled people, improve customer service, or tutor us. “When it comes to a learning assistant, it could understand your learning weaknesses and find your strengths to help you step up and plan a program for improving your capabilities,” Hanson says. “I see it helping people realize their dreams.”

David Hanson with the lifelike Sophia. (Image: Hanson Robotics)

But all this is a long way off. “We’re so far away from…even six-year-old level of intelligence, let alone full general human intelligence, let alone super-intelligence,” Etzioni says. He surveyed other leaders in the field of AI, and found that most of them believed super-intelligent AI was 25 years or more away. “Most scientists agree that human-level intelligence is beyond the foreseeable horizon,” he says.

General artificial intelligence does raise a few concerns, although machines run amok probably won’t be one of them. “I’m not so worried about super-intelligence and Terminator scenarios, frankly I think those are quite farfetched,” Etzioni says. “But I’m definitely worried about the impact on jobs and unemployment, and this is already happening with the targeted systems.”

And like any tool, general artificial intelligence could be misused. “Such technologies have the potential for tremendous destabilizing effects in the hands of any government, research organization or company,” Matus says. This “simply means that we need to be clever in designing policy and systems that will keep stability and give humans alternative sources of income and occupation.” People are pondering solutions like universal basic income to cope with narrow AI’s potential to displace workers.

Ultimately, researchers want to beef up artificial intelligence with more general skills so it can better serve humans. “We’re not going to see general AI initially to be anything like I, Robot. It’s going to be things like Siri and stuff like that, which will augment and help people,” Laird says. “My hope is that it’s really going to be something that makes you a better person, as opposed to competing with you.”
