Artificial intelligence: robots with ethics?

Just like a gun, a computer can do good or evil, depending on the user. But consider an artificially intelligent gun or computer that can think for itself. On Thursday, we looked at the dicey ethical issues surrounding autonomous computers, machines and robots.
    
In Isaac Asimov's sci-fi classic "I, Robot," robots were always required to obey three laws: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey human orders except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. But even that didn't work out.
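Read literally, the laws amount to a strict priority ordering. Here is a minimal, purely hypothetical sketch in Python of that ordering; every name in it is invented for illustration and comes from neither Asimov nor any real robotics system:

```python
# A hypothetical sketch of Asimov's Three Laws as a strict priority check.
# Every name here is invented for illustration; no real robot works this way.

def violates_laws(action: dict) -> bool:
    if action["harms_human"]:
        return True   # First Law outranks everything
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return True   # Second Law yields only to the First
    if action["endangers_self"] and not action["self_risk_serves_higher_law"]:
        return True   # Third Law yields to the other two
    return False

# Ordered to do something that would harm a human: the First Law wins,
# so refusing the order is the permitted choice.
refuse = {"harms_human": False, "disobeys_order": True,
          "order_would_harm_human": True, "endangers_self": False,
          "self_risk_serves_higher_law": False}
print(violates_laws(refuse))  # -> False
```

As Asimov's stories show, even a clean ordering like this breaks down once the inputs themselves, such as what counts as "harm," become matters of judgment.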
           
Though they do our bidding now, artificially intelligent machines and robots could, as they learn more and more, eventually become self-aware life forms unto themselves.
            
Some thinkers, such as theoretical physicist Stephen Hawking and tech guru Elon Musk, suggest that this could become humankind's undoing.

"I mean, with artificial intelligence, we are summoning the demon," said Musk. "Conceivably be destroyed by it," said Hawking.
    
Unlike humans, robots are tireless and relentless; they don't call in sick, and they are driven by their programming. Because of that, they can do work that is boring, tedious and tiring to humans – faster, more accurately and without human passion or prejudice.
    
But when they can think for themselves, will they develop biases and perhaps even become anti-human?

Irina Raicu is the director of the Internet Ethics Program at Santa Clara University. "It's the first time that we're really talking about whether the machines will really do the kinds of things that make us human. Will they care for people, think through the implications of their actions, revolt if they are given bad orders or something? They might recreate the biases present in society rather than correct them, or they may just reach really bad decisions," said Raicu.

For the moment, however, Raicu sees artificial intelligence ethics as a problem far down the technological road. 

"The kind of ethical reasoning that you're describing is done by people and will have to continue to be done by people for the foreseeable future," he said.
    
Stanford computer scientist Michael Genesereth agrees. "I'm pretty sure that humans today could not live without their machines. It seems they have become essential. But I am sure, certainly, about one thing: our machines cannot live without humans, at least not for a very long time. We are nowhere near where they will subsist entirely on their own," said Genesereth.

Genesereth cites the case where an autonomous car – a self-driving car – is faced with deciding which way to swerve: one way it would hit six people, the other way one person.
 
"And that kind of reasoning is not programmed and not taught into machines, although there are people who are trying because there are decisions that have to be made that have an ethical component," said Raicu. "Invariably, so far, human beings have a better sense of how to respond. The machines are nowhere near that point.”

But, most observers agree, we will reach a point one day where artificial intelligence will take many of our jobs and, if we're not careful, perhaps our humanity.