These 23 Guidelines Could Stave Off an AI Apocalypse

How can humanity develop artificial intelligence while ensuring our own safety? Chad Baker/Getty Images

There are plenty of reasons to fear the future of life on Earth, from nuclear terrorism and global warming to the rise of the man bun and the inexplicable fact that someone still pays Adam Sandler to make movies. Some folks fear that advances in technology are moving us precariously close to the days when robots and other automated technology surpass their human creators and take over the world. If that reminds you of how the supercomputer HAL tried to outmaneuver his human colleagues in "2001: A Space Odyssey," you may shudder to recall that the sci-fi tale envisioned a future set some 16 years in our past.

These days, robots are already replacing human workers on factory floors and in restaurants. Artificial intelligence (AI) — technology that gives machines a humanlike ability to perceive and adapt to their surroundings — is improving at such a fast rate that experts are putting together a guidebook they say will help us ensure that robots don't get so smart that they try to make us extinct.

The list, published by the Future of Life Institute, provides 23 principles that researchers and tech luminaries like Stephen Hawking and Elon Musk say should guide the development of AI technology. Artificial intelligence should be pursued for the benefit of humans rather than for intelligence's sake alone (1), the principles say, and should include protections against malfunctions and hacking (6). Designers should also consider the "moral implications" of their creations' use and misuse (9) and aim to make AI machines "compatible with ideals of human dignity, rights, freedoms, and cultural diversity" (11).

Speaking of dignity, maybe there's a machine that can keep Adam Sandler off the big screen. In the meantime, dig into the full list of guidelines here.