X`pressions@iiita | A Bi-Monthly e-Magazine of IIIT Allahabad | Volume I, Issue II | January-February 2005
Jest Corner
Human Rights Transcend Computer Rights

It's time to start thinking about how we might grant legal rights to computers.
The Final Day

At a trial held at the Supreme Court, New Delhi, India, Ananya Agarwal argued an especially tough case. The difficulty for Ananya, an attorney-entrepreneur and pioneer in the satellite communications industry, was not that she represented an unsympathetic client. Far from it: Niharika's story of confronting corporate oppressors moved the large audience. The problem was that Niharika was a computer.
AT SOME POINT IN THE NOT-TOO-DISTANT FUTURE, we might actually face a sentient, intelligent machine who demands, or whom many come to believe deserves, some form of legal protection. The plausibility of this occurrence is an extremely touchy subject in the artificial intelligence field, particularly since overoptimism and speculation about the future have often embarrassed the movement in the past. The legal community has been equally reluctant to look into the question, venturing no further than cyber legislation such as India's IT Act, 2000; this territory is yet to be explored.
Granting computers rights requires overcoming not only technological impediments but intellectual ones as well. Many people insist that no matter how advanced a machine's circuits or how vast its computational power, a computer could never have intrinsic moral worth. They would probably argue that, at the moment, there is no artifact of sufficient intelligence, consciousness, or moral agency to lend legislative or judicial urgency to the question of rights for artificial intelligence. But some A.I. researchers believe that moment might not be far off. And as their creations begin to display a growing number of human attributes and capabilities, as computers write poems and serve as caretakers and receptionists, these researchers have begun to explore the ethical and legal status of their creations.

Much
of artificial intelligence research has rested on a computational theory of mental faculties. Intelligence, consciousness, and moral judgment were viewed as emergent properties of "programs" implemented in the brain. Given sufficient advances in neuroscience regarding the architecture of the brain and the learning algorithms that generate human intelligence, the idea goes, these programs could be replicated in software and run on a computer. Raymond Kurzweil, one of strong A.I.'s leading champions and one of the inventors of print-recognition and speech-recognition software, extrapolated from the last few decades' enormous growth in computer processing speed, along with projected advances in chip and transistor technology, to estimate recently that by 2019, a $1,000 personal computer "will match the processing power of the human brain-about 20 million billion calculations per second." Soon after that point, claims Kurzweil, "The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They will embody human qualities and will claim to be human. And we'll believe them."

NO
MATTER HOW FAST THE TECHNOLOGY ADVANCES, the design of intelligent computers is entirely within our control. The same might be said of the rights and protections we extend to them. We will create a robot that society deems worthy of rights only when and if we choose to do so.

Even
if we don't grant rights that match what the hypothetical jury gave Niharika, we might offer some sort of legal protection to A.I. machines because we come to believe that they represent the culmination of human ingenuity and creativity, a reflection and not a rejection of human exceptionality.

Protections
encouraged by that sort of celebration would likely not be framed in the language of rights. Christopher D. Stone, an expert on environmental law and ethics who holds the J. Thomas McCarthy Trustees' Chair in Law at USC, suggests various gradations of what he calls "legal considerateness" that we could grant A.I. in the future. One possibility would be to treat A.I. machines as valuable cultural artifacts, according them landmark status, so to speak, with stipulations about their preservation and disassembly. Or we could take as a model the Endangered Species Act, which protects certain animals not out of respect for their inalienable rights but for their "aesthetic, ecological, historical, recreational, and scientific value to the Nation and its people." We could also employ the argument once made for the protection of slaves: if we don't afford robots that protection, individuals who witness the mistreatment of robots might learn to mistreat humans as well.

There is a great deal more than verbal "turn-taking" that computers and robots have to learn from us in order to become more fully human. But then again, there is much more we might learn from them to become the same.
© 2005 Indian Institute of Information Technology Allahabad
Designed by Graffiti Studios IIITA