Cognitive scientists have classically assumed that accessing information and making inferences involves the construction of detailed internal representations that can be referenced. For instance, in order to remember a fact (e.g. "the earth is round"), the information must be stored somewhere in your brain, regardless of whether it's stored as a sentence or in some other form that you can query to get information. Right? How can we test the validity of this general framework for thinking about thinking?

Daniel Richardson and Michael Spivey, two researchers at Cornell, tested this using a technique they called "Hollywood Squares." In their study, subjects looked at a screen showing a square divided into four quadrants. In one of the quadrants an image of a face would appear while a true statement was read through headphones (e.g. "The Pyrenees is a mountain range separating Spain and France"). In a later session, the subjects would hear another statement relating to the information they had learned earlier and had to assess whether it was true or false (e.g. "The Pyrenees is a river in France"). Tracking the subjects' eye movements, Richardson and Spivey found that while assessing a statement, subjects looked at the quadrant where the corresponding face had appeared, even though there was no face there anymore. The crucial thing here is that the quadrant was empty: if recall worked purely off an internal representation, there would be no reason for the eyes to go back to a blank patch of screen, because the location itself carries none of the information.

It seems that instead of consulting some sort of internal memory to recall information, the subjects relied on a learned strategy for re-acquiring it. In the literature these strategies are referred to as "deictic pointers." However, since "deictic" is just a fancy word for "pointing," I'll ditch that part and just say "pointers."

Everyone has seen and done this before: in school, how many times did you look down at where your notes would be to find an answer, even though there were no notes there? How many times have you heard a teacher say "the answer's not written on the ceiling"?

The point here is that you don't store the information itself; you store the most efficient way to get it. This is in line with the way people want to think about learning in our current technological environment. In the age of Google, why bother spending time on inefficient, evolutionarily-hacked storage systems when all you need to do is learn how to ask the right questions?
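Here's a minimal sketch of the distinction in Python (the names and the dictionary-as-world setup are my own invention, purely for illustration): the first scheme stores the fact itself, the second stores a routine for fetching it on demand.

```python
# Scheme 1: the fact itself is stored as a static internal value.
facts = {"pyrenees": "a mountain range separating Spain and France"}

# A stand-in for the outside world: your notes, a web search, a screen quadrant.
world = {"Pyrenees": "a mountain range separating Spain and France"}

# Scheme 2: what's stored is a retrieval routine, not the fact.
# Recall means running the routine -- the analogue of glancing back at the
# quadrant where the face used to be.
pointers = {"pyrenees": lambda: world["Pyrenees"]}

print(facts["pyrenees"])       # dereference a stored representation
print(pointers["pyrenees"]())  # execute a learned retrieval strategy
```

Note that the pointer only works as long as the world cooperates: delete the entry from `world` and the pointer fails, which is exactly what happens when you look down at notes that aren't there.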

If I wanted to be all categorical* about it, I would refer to this as "morphism cognition" as opposed to "object cognition." That is, we're concerned about developing a system of functions that will lead us to the right answer when called upon, not necessarily developing an archive of static representations.
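To make that concrete, here's a toy rendering in Python (again, every name is invented for illustration): the fact is never stored as an object; it's produced, when called upon, by composing two retrieval functions.

```python
def recall_location(cue):
    """Morphism: cue -> the place it was associated with during learning."""
    return {"Pyrenees": "upper-left quadrant"}[cue]

def query_location(location):
    """Morphism: place -> whatever a lookup there yields."""
    return {"upper-left quadrant": "a mountain range separating Spain and France"}[location]

def compose(f, g):
    """Composition of morphisms: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

# No archive of static representations, just a composite morphism that
# leads to the right answer when executed.
recall_fact = compose(recall_location, query_location)
print(recall_fact("Pyrenees"))
```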

For me, the idea of deictic pointers throws the whole notion of a purely static mental representation into philosophical confusion. What does it really mean to have an internal static object that you can reference? Is the idea of an internal object just a metaphor borrowed from the relatively static objects we experience in the outside world, a way to more easily handle patterns that have predictable behavior?

------------
* Here I refer to the mathematical notion of category theory, not the notion of categories in cognitive science. Namespace collisions abound.

** Great quote from the paper that I wanted to include: "As Bridgeman (1992, p. 76) remarks, 'the vast majority of behavioral acts are saccadic jumps of the eye, unaccompanied by any other behaviors.'"

Sources:
- The Oxford Handbook of Philosophy of Cognitive Science
- Richardson, D. C., & Spivey, M. J. (2000). Representation, Space, and Hollywood Squares: Looking at Things That Aren't There Anymore. Cognition.