Why Anthropomorphize

We consider the dangerous but necessary practice of labeling behaviors as if they were our own.

I have a cat that wants to be near me but not too close. I wonder what in her experience motivates that behavior. Then I stop myself. Why would I think my experience of the world has any relation to her life?

At this point I have lived with and gotten to know a variety of cats. I have found each interesting in its own way. I can say this because I remember aspects of each cat through labels I have applied, labels that work for me and need no other validity.

I recently read some scold of artificial intelligence asserting that no subroutine should ever be labeled "understanding". Better a label like "G3712" that carries no baggage for the programmer. But this can't possibly be good advice, because it sabotages any advance in understanding understanding.

As counterpoint I offer Jean-François Cloutier, who channels Minsky as he programs a toy robot in Elixir.

YOUTUBE AKMV8OW9R50 Published on Oct 19, 2017.

Jean-François is actually more interested in abstraction mechanisms, and especially in the pattern matching control paradigm of Elixir, than in claiming any intelligence realized within his robot. Minsky's society metaphor would then be an approach to managing feature interaction. But then, how do we label the parts?
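
To make that paradigm concrete, here is a minimal sketch, not Cloutier's code, of how Elixir lets pattern matching carry control flow: each function clause names the situation it handles, so the label itself does the routing. The module name, percepts, and actions here are invented for illustration.

    # A hypothetical reflex layer for a toy robot. Each clause matches
    # one labeled percept and answers with a labeled action.
    defmodule Reflex do
      def react({:bumped, :front}), do: :back_off
      def react({:bumped, _side}), do: :turn_away
      def react({:clear, distance}) when distance > 20, do: :wander
      def react(_percept), do: :wait
    end

    Reflex.react({:bumped, :front})  # => :back_off

Whether we call the clauses :wander and :back_off or :g3712 and :g3713 changes nothing about execution, only about whether we can still read the program.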

As we as a discipline address increasingly complex computational architectures, we will have no recourse but to name the parts with the metaphors we have applied, with no special insight, to ourselves.