Friday, April 29, 2011

Test for the “Human”

Christine Rosakranse
Response for 369 - Nass

    Any test for humanness is a difficult one because the definition of “human” changes depending on which aspect of the individual you are studying.  Of course, there is the biological component of a human being, and that is well-defined.  We have certain DNA, and the expression of those genes leads to common physical traits.  Mutations do occur and there are outliers in the system, but they lie well within a range of possibilities.
    However, the ontological essence of humanity remains outside the scope of the Turing test.  Turing relies on transmitted communication in order to remove face-to-face interaction, but that may be exactly where humanness lies.  To be honest, I would argue that humanness is a spectrum.  At one end, you have a rock, inanimate and unfeeling.  At the fully human end, you have an entity that hopes, worries, loves, and is capable of engaging with other members of humanity. 
    I always think of Data from Star Trek: The Next Generation when this question arises.  He progresses through the seasons trying to become more “human”: playing chess, painting, and even having conjugal relations with another crew member.  But throughout all of that practice, he submits that he is not human.  In the movie, he gets the emotion chip, one of my favorite deus ex machina devices ever.  They do not explain how it works.  They just pop it in and boom - manic, all-feeling Data is born.  I would say that at this point he became human, and would pass any Turing or Nass test.
    This is not to say that anything with emotions is human.  I know my dog worries a lot about treats, and the scarcity of treats, and all beef-related items in the world.  But it’s how one worries that makes a thing human or not.  On my spectrum, though, a dog would be more human than a rock and less human than me, but not because she can’t use grammar. 
    Perhaps my test would be this: to what extent does the entity have the ability to create an explanatory narrative?  We constantly distinguish “what matters”, using a combination of logic and very nearly arbitrary rules.  I think a computer wouldn’t be able to say which is more important, a rainbow or a cricket.  A human would, coming up with some narrative that may or may not make sense to a computer, but that would make sense to another human being.  
