While we're probably not headed for a Skynet-like Armageddon, a growing number of scientists question whether adequate measures are being taken to safeguard ourselves from our robotic and digital creations.
One of the main concerns is automation. Will military drones eventually be allowed to decide on their own whether or not to attack a target? If a human is monitoring, will he or she still be able to override the drone's decisions? Will we allow machines to replicate themselves without human direction? Are we going to allow self-driving cars? (Some cars already offer the ability to park themselves or to prevent a driver from drifting into another lane.)
Then there is the issue of robots occupying roles they probably should not. Already, there are prototype medical robots designed to ask patients about their symptoms and to provide counsel, simulating comforting emotions -- a role traditionally occupied by a human doctor. Microsoft has a video-based receptionist A.I. in one of its buildings. A new class of "service robots" can plug themselves into electrical outlets and perform other menial tasks -- not to mention the long-established Roomba, an automated, vacuum-like robot.
We may also be placing too many critical tasks and responsibilities into the "hands" of non-human actors, or we may gradually find ourselves dependent on machines. At a 2009 conference of computer scientists, roboticists and other researchers, the experts in attendance expressed concern about how criminals could take advantage of next-generation technology, like artificial intelligence, to hack information or impersonate real people [source: Markoff]. The bottom line of this conference and similar discussions seems to be that it's important to start tackling these issues early and to outline industry standards now, even if it's not clear what kind of technological advancements the future will bring.