
Artificial Intelligence Isn't a Threat - Yet


“The development of full artificial intelligence could spell the end of the human race.” —Stephen Hawking, Dec. 2

Does artificial intelligence threaten our species, as the cosmologist Stephen Hawking recently suggested? Is the development of AI like “summoning the demon,” as tech pioneer Elon Musk told an audience at MIT in October? Will smart machines supersede or even annihilate humankind?

As a cognitive scientist and founder of a new startup that focuses on “machine learning,” I think about these questions nearly every day. But let’s not panic. 

“Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.


Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new videogame in an hour; the average computer program can still do only the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

A more immediate concern is that a machine doesn’t have to be superintelligent to do a lot of damage, if it is sufficiently empowered. Stock market flash crashes are one example: hundreds of millions of dollars have been lost in minutes as a result of minor bugs that are difficult to eliminate entirely.

The clear and present danger, if not the greatest long-term danger, is that mediocre computer programs can cause significant damage if left unchecked. What will happen, for example, when nearly perfect—but still imperfect—software controls not just stock trades but driverless cars? It’s one thing for a software bug to trash your grocery list; it’s another for it to crash your car.

None of this means that we should abandon research in artificial intelligence. Driverless cars probably will cause some fatalities, but they also will avert tens of thousands of deaths. Robotic doctors (perhaps a couple of decades away) may occasionally make bad calls, but they will also bring high-quality medicine to places that would otherwise lack trained doctors. Banning AI could squander a chance to save or radically enhance millions of lives.

Still, the scalability of AI—a single program can be replicated millions of times—means that each new program carries risks if it has access to the outside world. The more autonomy that we give to machines, the more we need to have safeguards in place. A program “sandboxed” on your iPhone, with no real access to the outside world, isn’t of much concern. A program that places stock trades needs more safeguards. A general-purpose robot that lives in your home, with full access to the Internet, would need vastly more.

The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.

Computers have become much better at many things over the past few decades, from chess to arithmetic to routing network traffic, but so far they have not shown the slightest interest in us or our possessions. If this remains the case, there is every reason to think that they will continue to be our partners rather than our conquerors. We could be worried about nothing.

But the alarmists have a point, too. The real problem isn’t that world domination automatically follows from sufficiently increased machine intelligence; it is that we have absolutely no way, so far, of predicting or regulating what comes next. Should we demand transparency in programs that control important resources? Fund advances in techniques for “program verification,” which tries to make sure that programs do what they are designed to do? Outlaw certain specific, risky applications?

For now, anyone can write virtually any program at any time, and we have scarcely any infrastructure in place to predict or control the results. And that is a real reason for worry.

Dr. Marcus is a professor of psychology and neuroscience at New York University and CEO of Geometric Intelligence. His latest book is “The Future of the Brain.”
