Last week, a robot on an automotive assembly line killed a worker at a factory in Germany.
The man was installing the robot at a Volkswagen assembly line when the robot gripped and pressed him up against a metal plate, crushing his chest.
It’s not the first time a human has been killed by a robot. The first such death occurred 36 years ago, in 1979, when a Ford Motor Company employee was killed at a plant in Flat Rock, Michigan.
And, it will not be the last time a human is killed by a robot.
As robotics continues to proliferate and infiltrate our economy and our households, the incidence of human injury and death at the "hand" of robots will only increase. Statistics can, and will, lie. (See my article "Lies, Damn Lies, and Statistics.")
Robots may be "deadly" and yet still be safer than humans performing the same tasks.
The inflection point for public opinion, along with legal liability, will be the application of artificial intelligence and "autonomy."
The robots that killed workers at automotive plants were “dumb” machines, following pre-coded instructions without any allowance for “choice.” An autonomous "smart" robot performs tasks with a high degree of choice. Modern factory robots have autonomy within predetermined parameters in their constrained environments, and greater autonomy is planned for the near future. "Smart" robots catalyze "smart" questions, answers and challenges...
Robots will make mistakes.
However, they won't make the kind of routine miscalculations and mistakes that humans commonly make. They won’t be drunk, tired or distracted. They won't be angry, depressed or vengeful. The safety performance of robots will exceed that of humans. Yet the "smarter" and more human-like robots become, the less we will tolerate their mistakes.
[Image: Google self-driving car, circa 2015]
Self-driving cars will be safer than human drivers. In the United States, some 30,000 people are killed and more than 2 million injured in crashes every year, and studies show that the vast majority of these crashes are caused by human error. The Associated Press reports that self-driving cars on California’s roads have been involved in four accidents since September.
Children are killed in car accidents every month. The first time a child is killed by a self-driving car, regardless of fault, there will be unprecedented public outrage.
We need rules, regulations and public education.
“The only thing we have to fear is fear itself - nameless, unreasoning, unjustified terror…” FDR
In 1942, Isaac Asimov wrote the short story “Runaround,” in which he established “The Three Laws of Robotics” - still quoted by engineers and ethicists:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
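The Laws form a strict priority ordering: each law yields to the ones above it. A minimal sketch of how such precedence might be encoded - the flag names and functions are my own invention for illustration, not any real robotics API:

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority ordering.
# Each law has a rank; a robot prefers the action whose worst violation
# is least severe (violating no law at all is best).

LAW_RANK = {
    "harms_human": 1,     # First Law: highest priority
    "disobeys_order": 2,  # Second Law
    "risks_self": 3,      # Third Law: lowest priority
}

def severity(flags):
    """Rank of the worst law an action violates; 4 means no violation."""
    violated = [LAW_RANK[law] for law, hit in flags.items() if hit]
    return min(violated) if violated else 4

def choose(actions):
    """Pick the action whose worst violation is the least severe."""
    return max(actions, key=lambda a: severity(a["flags"]))
```

Forced to pick between harming a human and disobeying an order, `choose` disobeys the order, because the First Law outranks the Second.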
No one can stop an idea whose time has come.
In the near future we will see rapid development and deployment of autonomous robots. Beyond self-driving vehicles, consider these currently available robots:
Robots as housekeepers
- Currently available robots from iRobot:
- Roomba - an autonomous vacuum cleaning robot
- Scooba - a floor scrubbing robot
- Braava - a floor mopping robot
- Mirra - a pool cleaning robot
- Looj - a gutter cleaning robot
Note that there is...
"no need to hunt Roomba down when the job is done. Roomba charges itself, returning to its Home Base to dock and recharge between cleanings."
In the ubiquitous "Internet of Things," these robots, along with your Nest Thermostat and Smoke Detector, August Smart Door Locks, and Hue Lighting System... will communicate with one another and make decisions without consulting with you. When triggered, your smoke alarm will turn off your furnace, call the fire department, unlock your doors and light up your house. And, we know, there will be "glitches."
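The smoke-alarm chain described above is, at bottom, an event-driven rule system: one event fans out to many device actions. A minimal sketch, assuming a hypothetical hub class and action names of my own invention (not any vendor's API):

```python
# Minimal sketch of an event-driven smart-home hub (hypothetical API).
# Devices register actions against events; one trigger fans out to all.

class HomeHub:
    def __init__(self):
        self.rules = {}  # event name -> list of callbacks

    def on(self, event, action):
        self.rules.setdefault(event, []).append(action)

    def trigger(self, event):
        # Run every registered action. A real hub would also need
        # failure handling -- the "glitches" noted above.
        return [action() for action in self.rules.get(event, [])]

hub = HomeHub()
hub.on("smoke_detected", lambda: "furnace off")
hub.on("smoke_detected", lambda: "fire department called")
hub.on("smoke_detected", lambda: "doors unlocked")
hub.on("smoke_detected", lambda: "lights on")
```

One `trigger("smoke_detected")` runs all four actions - exactly the cascade described above, and exactly where a single bad rule could misfire across the whole house.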
Robots as childcare providers
- In 2015, The New York Times reported on Apple’s “Siri” providing assistance to an autistic child named Gus:
- Gus: “You’re a really nice computer.”
- Siri: “It’s nice to be appreciated.”
- Gus: “You are always asking if you can help me. Is there anything you want?”
- Siri: “Thank you, but I have very few wants.”
- Gus: “O.K.! Well, good night!”
- Siri: “Ah, it’s 5:06 p.m.”
- Gus: “Oh sorry, I mean, goodbye.”
- Siri: “See you later!”
- This fall, Mattel will introduce “Hello Barbie,” a Wi-Fi-enabled version of the iconic doll, which uses a Siri-like system to analyze a child’s speech and produce relevant, interactive verbal responses.
Robots as caregivers
- In Japan, where robots are considered “iyashi,” or healing, the health ministry has a program designed to meet work-force shortages and help prevent injuries by promoting nursing-care robots that assist with lifting and moving patients.
- A consortium of European companies and research institutions collaborated on "Mobiserv," a project that developed a touch-screen-toting, humanoid-looking “social companion” robot that offers reminders about appointments and medications and encourages social activity, healthy eating and exercise.
- In Sweden, researchers have developed "GiraffPlus", a robot that monitors health metrics like blood pressure and has a screen for virtual doctor and family visits.
- Source: The New York Times
Robots as surgeons
- The first robot to assist in surgery was the Arthrobot, used in an orthopedic procedure in 1983.
- Today, the NYU Langone Robotic Surgery Center offers the following:
- Robotic hysterectomy
- Robotic prostatectomy for prostate cancer
- Robotic kidney surgery for kidney cancer
- Robotic urologic procedures
- Robotic lung surgery
- Robotic heart surgery
- Robotic colorectal surgery
Robots as weapons
- The Samsung SGR-A1 is a military robot designed to replace human sentries. The SGR "Sentry Guard Robot" is a fixed-position autonomous weapon capable of “tracking and engaging human targets with a mounted grenade launcher or machine gun.” It is currently in use in South Korea.
- Autonomous drones are available. Whether they have been deployed and activated in weaponized "auto" mode... is not fully disclosed. We know that human operators have made "kill errors." Military robots will do so, too.
- Last month, in Geneva, the second meeting of experts on Lethal Autonomous Weapons Systems (LAWS) was held; one objective was to discuss whether...
- LAWS should be put in the same category as biological and chemical weapons and comprehensively and pre-emptively banned.
- LAWS should be put in the same category as precision-guided weapons and regulated.
- Should humans be "in," "on," or "out" of the "loop?" Should a human be required to push the kill button, have the option to hit an abort button, or should they be removed from the decision process?
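The three oversight models in that last question can be stated precisely. A sketch of the distinction, with enum and parameter names of my own choosing:

```python
# Hypothetical sketch of the three human-oversight models for an
# autonomous weapon; names are illustrative, not from any standard.
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human must authorize each engagement"
    ON_THE_LOOP = "system acts unless a human aborts"
    OUT_OF_THE_LOOP = "system engages with no human input"

def may_engage(mode, human_authorized=False, human_aborted=False):
    """Whether the system may fire under a given oversight model."""
    if mode is Oversight.IN_THE_LOOP:
        return human_authorized        # human pushes the button
    if mode is Oversight.ON_THE_LOOP:
        return not human_aborted       # human can only veto
    return True                        # fully autonomous
```

The policy debate is, in effect, about which branch of `may_engage` the law should permit.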
I’m fairly certain that the SGR-A1 has not been programmed with Asimov’s Three Laws of Robotics.
Some super-smart people feel that "smart" robots are good... and that "super-smart" robots are scary.
Elon Musk, CEO of Tesla and SpaceX, at a recent MIT symposium, compared the advent of artificial intelligence to “summoning the demon.” Musk was concerned enough to donate $10 million to the Future of Life Institute, with the objective of ensuring that A.I. does not outpace our ability to control and regulate it.
Bill Gates agrees:
"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
As does Stephen Hawking:
"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race… Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”
The Rise of The Machines.
They're coming. You can't stop them. You can educate yourself, influence public policy and insist upon rational regulation.
You can enjoy a life freed from mundane tasks better suited to robots, and appreciate the elevated level of service provided by robots more skilled than humans.
Interlaced with the concern that robots may rob us of our lives is the greater fear that robots may rob us of our livelihood. That, however, is another story.
(c) David J. Katz, New York City