
Artificial Intelligence Isn't a Threat - Yet


“The development of full artificial intelligence could spell the end of the human race.” —Stephen Hawking, Dec. 2

Does artificial intelligence threaten our species, as the cosmologist Stephen Hawking recently suggested? Is the development of AI like “summoning the demon,” as tech pioneer Elon Musk told an audience at MIT in October? Will smart machines supersede or even annihilate humankind?

As a cognitive scientist and founder of a new startup that focuses on “machine learning,” I think about these questions nearly every day. But let’s not panic. 

“Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.


Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new video game in an hour; your average computer program can still perform only the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

A more immediate concern is that a machine doesn’t have to be superintelligent to do a lot of damage, if it is sufficiently empowered. Stock market flash crashes are one example: Hundreds of millions of dollars have been lost in minutes as a result of minor bugs that are difficult to eliminate completely.

The clear and present danger, if not the greatest long-term danger, is that mediocre computer programs can cause significant damage if left unchecked. What will happen, for example, when nearly perfect—but still imperfect—software controls not just stock trades but driverless cars? It’s one thing for a software bug to trash your grocery list; it’s another for it to crash your car.

None of this means that we should abandon research in artificial intelligence. Driverless cars probably will cause some fatalities, but they also will avert tens of thousands of deaths. Robotic doctors (perhaps a couple of decades away) may occasionally make bad calls, but they will also bring high-quality medicine to places that would otherwise lack trained doctors. Banning AI could squander a chance to save or radically enhance millions of lives.

Still, the scalability of AI—a single program can be replicated millions of times—means that each new program carries risks if it has access to the outside world. The more autonomy that we give to machines, the more we need to have safeguards in place. A program “sandboxed” on your iPhone, with no real access to the outside world, isn’t of much concern. A program that places stock trades needs more safeguards. A general-purpose robot that lives in your home, with full access to the Internet, would need vastly more.

The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.

Computers have become much better at many things over the last decades, from chess to arithmetic to routing network traffic, but so far they have not shown the slightest interest in us or our possessions. If this remains the case, there is every reason to think that they will continue to be our partners rather than our conquerors. We could be worried about nothing.

But the alarmists have a point, too. The real problem isn’t that world domination automatically follows from sufficiently increased machine intelligence; it is that we have absolutely no way, so far, of predicting or regulating what comes next. Should we demand transparency in programs that control important resources? Fund advances in techniques for “program verification,” which try to make sure that programs do what they are designed to do? Outlaw certain specific, risky applications?

For now, anyone can write virtually any program at any time, and we have scarcely any infrastructure in place to predict or control the results. And that is a real reason for worry.
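The piece mentions “program verification” only in passing. As a rough sketch of the underlying idea, here is a toy check (in Python, with hypothetical names) that compares an implementation against an explicit specification of what it is supposed to do. Real verification tools prove the property for all possible inputs; this sketch only samples a few.

```python
# Illustrative sketch of the idea behind program verification:
# state what a function is *supposed* to do (the specification),
# then check the implementation against it. All names here are
# hypothetical, chosen for illustration.

def sorted_spec(xs, ys):
    """Specification: ys is a sorted permutation of xs."""
    return sorted(xs) == ys and all(a <= b for a, b in zip(ys, ys[1:]))

def my_sort(xs):
    # The implementation under scrutiny (here, a thin wrapper).
    return sorted(xs)

def check(impl, spec, samples):
    """Return True if impl satisfies spec on every sample input."""
    return all(spec(xs, impl(xs)) for xs in samples)

samples = [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
print(check(my_sort, sorted_spec, samples))  # prints True
```

A formal verifier replaces the sampling step with a mathematical proof, which is exactly why the technique is hard — and why the op-ed lists funding it as an open policy question rather than a solved one.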

Dr. Marcus is a professor of psychology and neuroscience at New York University and CEO of Geometric Intelligence. His latest book is “The Future of the Brain.”
