As simulacra of humanity approach the likeness of a real, living person, our affinity with them increases, but only to a point. We have no problem with a character like C-3PO: although he behaves like a human, he bears only the most passing resemblance to one. The epitome of the uncanny valley is a corpse, which looks just like a living person yet lacks the requisite humanity. The same holds for robots, androids, and cinema.

We like to try to emulate human emotion and expression in our machines, and although artificial intelligence remains far from perfect, that won't stop scientists from striving for "more human than human" perfection. Current AI research seeks to program computers (and thus robots) with cognitive abilities, so that a computer can visibly read, and understand, the expressions on a user's face. From a flick of the user's finger or a tightening of the lips, a computer may soon infer "angry" and react accordingly. For some people, such advanced technology is groundbreaking and exciting, a new way of approaching our mechanical counterparts; for others, the mere thought of computers with cognitive thought is downright frightening. In this sense the uncanny valley is really an uncanny wall: those who strive to create a human that is "more human than human" face a Zeno's paradox, in which no matter how close an animator or sculptor gets, he will never reach perfection.
This might be explained in part by Kristeva's theory of abjection: humans are repulsed by objects that challenge our sense of boundaries and limits. Things that come from the body but are not alive, that are part of oneself yet simultaneously not part of oneself, meet this criterion; for Kristeva, this explains a human's disgust at spit, dung, and other forms of bodily discharge. Similarly, androids that approach, but do not reach, a human appearance may fall into a gulf that challenges our perception of what it is to be human. Androids force us to acknowledge that we are cyborgs, and that we differ from our machines in degree, not in kind. This terrifying (and liberating) revelation has led to a new late-20th- and 21st-century conception of intelligence and consciousness as detached from any specific state of matter (such as the human body). The posthuman is a construct we use to understand our existence as cyborgs, and to reframe the paradigms through which we understand our existence in relation to other beings, both living and mechanical.
That said, it is also important to recognize that while AI is still imperfect, it is constantly improving, and as it improves it challenges our conception of what it means to be human and blurs the line between human and computer. In our presentation, we used examples like the computer-generated postmodern essay and a Turing test with the Internet chatbot A.L.I.C.E. to show how difficult it can be to distinguish the human from something else: something robotic, something computer-generated, something virtual, something other. So, if we can't tell the difference between a human and this robotic other, what is the difference, and why does it matter? Maybe there is no difference; perhaps it is already irrelevant. As Timothy Morton puts it, "The brilliance of Blade Runner, and of Frankenstein, is not so much to point out that artificial life and intelligence are possible, but that human life already is this artificial intelligence." Perhaps that is the most terrifying thing about the uncanny valley: the closer humans get to successfully replicating what is human, the more our notion of what it means to be human is obliterated.
--Jeffrey, Maria, Amelie, and Holly
Thursday, April 28, 2011