OBS SPEAKS OUT: THE RIGHTS OF ROBOTS

By OBS Staffer Staar84

An article on The World of Weird Things website discussed artificial intelligence, mostly in the context of the Matrix, and whether robots in the future should be held accountable for their actions. The author used the example of a robot that killed its owner (in The Animatrix, the animated Matrix anthology) and was tried and convicted. He argues that this is ridiculous, because any such action would be the result of a programming error or a malfunction: "… in the real world, a machine is property. Hailing [sic] one to court would be like putting a defective toaster on trial for electrocuting your friend."

But what about all of the other sci-fi that argues robots should have rights (and have to follow laws)? If a robot is self-aware, then it should be held responsible. After all, when a person snaps and goes crazy, we still hold them accountable for whatever they have done. And isn't that a defect of the human machine? The question that would then need to be answered is "what makes something alive?" Since we don't yet have technology advanced enough to force us to answer that question, I'll focus on the sci-fi that has answered it.

In I, Robot (the movie; I haven't read the book yet), Sonny is accused of murdering his owner (or "father," as he calls him). There is no trial; he is simply a defective product and is scheduled to be shut down. But he is saved at the last minute by Dr. Calvin because he is "unique." Even the older model robots show some signs of humanity: we see the robot models created before Sonny huddling together for companionship in their crates. They are Sonny's predecessors, and he is the advancement: while they are programmed to have human-like qualities, Sonny not only seeks out companionship but has emotions. So isn't he alive?

In Star Trek: The Next Generation ("The Measure of a Man"), a Starfleet scientist arrives aboard the Enterprise because he thinks he can create more androids like Data, but he has to crack Data open to study him. The process could permanently damage Data and delete his memory. While Captain Picard is hesitant, he agrees to the study because Data is Starfleet property. But when Data comes to him and says he does not wish to go through with the procedure, Picard changes his mind, and a trial to determine Data's humanity begins. Picard defends him, pointing out that the fact "… that Data was created by a human … it's not relevant. Children are created by the building blocks of their parents' DNA. Are they property?" Data also keeps personal effects with no logical value, including a picture of a fellow crew member who was killed, because he misses her. Picard then asks the scientist what is required for sentience: "intelligence, self-awareness, consciousness." Data eventually wins personal rights.

Yet another example of artificial intelligence being classified as life comes from Stargate SG-1. In the episode "Urgo" (season 3, episode 16), the team travels to a planet and inadvertently returns with a computer program in their brains that interacts with them in order to learn about their culture. The program (who calls himself "Urgo") tells them he cannot be removed without killing them. They later find out that he was lying, because he can only live through other people. Urgo tells them he lied because his removal means his death, and he "want[s] to live! I want to eat pie!" They decide that he is a living being because he was self-aware enough to defy his programming (which was to be removed after a certain amount of time) and to fear death.

A real-world parallel is animal rights. You could argue (and I am) that most animals are less self-aware than these advanced AIs, but they still have protective rights. Animals do not fear death the way humans do: they do not look for answers beyond surviving daily struggles. Yet you can be arrested for animal cruelty, even though animals can't speak or create the way the robots can. Scientists are only now discovering that a limited number of animals (gorillas, dolphins, elephants, and pigs) display characteristics of self-awareness, and even that is not at a level equal to ours. And Kurt Cobain once sang, "It's OK to eat fish / 'cause they don't have any feelings."

The difference is that humans created robots. But should that really matter? Wouldn't anything that knows what it is have a right to live, and at the same time be held responsible for what it does? A dog that is trained to kill by a human is still put down if it follows its training, even though the dog isn't aware of the moral implications of the kill. So if a robot kills a human, a trial would be necessary to determine whether or not it knew what it was doing. As for the idea that a machine is property: "A comfortable, easy euphemism: property. But that's not the issue at all" (Star Trek). If a being is self-aware, then it's sentient. Shouldn't it have responsibilities and rights?

What do you think?  Join us in the forum and share your thoughts.  We’d love to hear from you!