
Artificial Intelligence Isn't a Threat - Yet


“The development of full artificial intelligence could spell the end of the human race.” — Stephen Hawking, Dec. 2

Does artificial intelligence threaten our species, as the cosmologist Stephen Hawking recently suggested? Is the development of AI like “summoning the demon,” as tech pioneer Elon Musk told an audience at MIT in October? Will smart machines supersede or even annihilate humankind?

As a cognitive scientist and founder of a new startup that focuses on “machine learning,” I think about these questions nearly every day. But let’s not panic. 

“Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.


Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new videogame in an hour; the average computer program can still do only the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

A more immediate concern is that a machine doesn’t have to be superintelligent to do a lot of damage, if it is sufficiently empowered. Stock market flash crashes are one example: Hundreds of millions of dollars have been lost in minutes as a result of minor bugs that are difficult to eliminate entirely.

The clear and present danger, if not the greatest long-term danger, is that mediocre computer programs can cause significant damage if left unchecked. What will happen, for example, when nearly perfect—but still imperfect—software controls not just stock trades but driverless cars? It’s one thing for a software bug to trash your grocery list; it’s another for it to crash your car.

None of this means that we should abandon research in artificial intelligence. Driverless cars probably will cause some fatalities, but they also will avert tens of thousands of deaths. Robotic doctors (perhaps a couple of decades away) may occasionally make bad calls, but they will also bring high-quality medicine to places that would otherwise lack trained doctors. Banning AI could squander a chance to save or radically enhance millions of lives.

Still, the scalability of AI—a single program can be replicated millions of times—means that each new program carries risks if it has access to the outside world. The more autonomy that we give to machines, the more we need to have safeguards in place. A program “sandboxed” on your iPhone, with no real access to the outside world, isn’t of much concern. A program that places stock trades needs more safeguards. A general-purpose robot that lives in your home, with full access to the Internet, would need vastly more.

The trouble is, nobody yet knows what that oversight should consist of. Though AI poses no immediate existential threat, nobody in the private sector or government has a long-term solution to its potential dangers. Until we have some mechanism for guaranteeing that machines never try to replace us, or relegate us to zoos, we should take the problem of AI risk seriously.

Computers have become much better at many things over the last decades, from chess to arithmetic to routing network traffic, but so far they have not shown the slightest interest in us or our possessions. If this remains the case, there is every reason to think that they will continue to be our partners rather than our conquerors. We could be worried about nothing.

But the alarmists have a point, too. The real problem isn’t that world domination automatically follows from sufficiently increased machine intelligence; it is that we have absolutely no way, so far, of predicting or regulating what comes next. Should we demand transparency in programs that control important resources? Fund advances in techniques for “program verification,” which tries to make sure that programs do what they are designed to do? Outlaw certain specific, risky applications?

For now, anyone can write virtually any program at any time, and we have scarcely any infrastructure in place to predict or control the results. And that is a real reason for worry.

Dr. Marcus is a professor of psychology and neuroscience at New York University and CEO of Geometric Intelligence. His latest book is “The Future of the Brain.”
