The Unfairness of Working Memory

Several interrelated posts this morning, including “Intelligence and the President of the United States,” “Capturing my Thoughts: How could Demographic Warfare be used with 5GW?,” “Fixing Milwaukee Notes: Milwaukee School District Governance,” and “U.S. college panel calls for less focus on SATs.”

The topics all revolve around Working Memory, the capacity of an adult to keep 7 (ish) things in mind at the same time. Some people have more, some have less. Working memory is heritable and impacts life outcomes. Working memory is not “fair.” It is predicted by your class origin, your socio-economic status, your race, and so on, while its variance is predicted by your sex. (Being male is risky business.)

Many social problems will be alleviated when we can use retroviruses or stem cell therapy to increase the working memory of the underclass. At the same time, any individual with low working memory can more than compensate by building up his long-term memory (his knowledge and experience), his self-efficacy (how he responds to failure), and his behavior.

4 thoughts on “The Unfairness of Working Memory”

  1. Heh, I do use my blog to capture ideas.

    I generally use lists of things to remember stuff.

    Weirdly, I have been making lists since I was a kid (I still have some of those notebooks).

    In school, I made notes that were generally lists and bullet points.

    I have also been using the GTD system to manage tasks, projects, and next actions.

    I also write down my after-work todos on a small post-it note I then put in my shirt pocket.

    I can remember, though, lines from movies and also obscure song lyrics from the ’20s on up, and much history trivia.

    I await the virus to give me a recall of 25±2 items!

  2. Is the concept of Working Memory Turnover relevant, that is, how quickly and deeply one can recall long-term items into working memory? Although I can fathom the advantage of a larger working memory, it seems that the most useful measurement of working memory should be how quickly and thoroughly one can analyze the relationships among what one has in working memory and synthesize aggregates or components, whatever the array size. (I’m only a layman on this topic, but it piques me, so I fancy and query.)

  3. Purpleslog,

    Because working memory is so limited, we need to ‘offload’ from it… One way of doing this is to learn things really well, so we can perform them effortlessly or entirely in long-term memory. Another way is to offload the work to algorithms, programs, methods, technologies, physical devices, or mnemonics. GTD is a way to offload work from working memory to improve performance.
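
    (A toy Python sketch of offloading; the seven-item capacity and all the names here are illustrative assumptions, not a model from the research: a bounded working memory silently drops old items, while an external GTD-style inbox loses nothing.)

        from collections import deque

        # Toy model: working memory holds ~7 chunks and silently loses the
        # oldest when it overflows; an external "inbox" (a notebook, a GTD
        # list) has no such limit. The capacity figure is illustrative.
        WORKING_MEMORY_CAPACITY = 7

        working_memory = deque(maxlen=WORKING_MEMORY_CAPACITY)
        gtd_inbox = []  # external capture: unbounded, reviewable later

        def think_of(item):
            """Hold an item mentally; anything past capacity is lost."""
            working_memory.append(item)

        def capture(item):
            """Offload the item to the external system instead."""
            gtd_inbox.append(item)

        tasks = ["task-%d" % n for n in range(10)]
        for t in tasks:
            think_of(t)   # the first three tasks fall out unnoticed
            capture(t)    # nothing falls out here

        print(len(working_memory))  # 7  (task-0 .. task-2 are gone)
        print(len(gtd_inbox))       # 10 (nothing lost)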

    Moon,

    Working memory can be thought of as the ability to solve novel problems. Visual working memory is often measured with Raven’s progressive matrices [1], questions that ask someone to mentally rotate an object in three-dimensional space, and so on.

    You can think of the mind as a complex blade server. The better part of this server is composed of hundreds of P66 blades, with dynamically-adjusting fiber-optic cabling between them. These blades are fantastic at any task that can be quickly distributed to lots of blades, such as balancing, face reading, exercising, threat assessment, and so on. The blade server has a massively dynamic architecture, where all tasks (including task scheduling, task prioritizing, and so on) are distributed among the blades.

    These blades are also, at tremendous computational cost, virtualizing a Mac OS X Aqua interface. The interface allows a user to control the server. However, the latency is awful (the interface takes 10x as long to use as allowing the blades to work themselves), the sluggish interface can overrule decisions already made with more context by the blades, and the low virtual RAM available to the interface means that complex programs keep crashing before completion, either giving out-of-memory errors or (worse) a result that looks valid but isn’t.

    Long-term memory would be the blades. The sluggish virtualized interface would be the working memory.
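
    (A back-of-the-envelope sketch to make the latency contrast concrete; every figure here, including the 10x penalty from the metaphor above, is an illustrative assumption.)

        # Toy cost model of the metaphor: many "blades" split a task and
        # work in parallel; the virtualized "interface" does the same work
        # serially with a 10x latency penalty. Units are arbitrary.
        TASK_UNITS = 100        # hypothetical units of work
        NUM_BLADES = 100        # parallel, automatic processing
        INTERFACE_PENALTY = 10  # the 10x slowdown from the metaphor

        blade_time = TASK_UNITS / NUM_BLADES             # 1.0 time unit
        interface_time = TASK_UNITS * INTERFACE_PENALTY  # 1000 time units

        print("blades:   ", blade_time)
        print("interface:", interface_time)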

    Another computer metaphor would be to call the chunk capacity of working memory “pointers” to locations in long-term memory. Thus, while the total number of pointers is limited, the complexity of the information that passes through the pointers is a function of what is stored in long-term memory.
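
    (A minimal sketch of the pointer metaphor; the chunk contents and the seven-slot limit are illustrative assumptions. Working memory holds only keys, but dereferencing one key retrieves an arbitrarily rich structure from long-term memory.)

        # Toy model of the "pointers" metaphor: a fixed number of slots,
        # each holding only a reference; the referenced chunk in long-term
        # memory can be arbitrarily rich, so expertise raises the effective
        # bandwidth without raising the slot count.
        long_term_memory = {
            "ruy_lopez": ["e4", "e5", "Nf3", "Nc6", "Bb5"],   # a whole opening as one chunk
            "ooda":      ["observe", "orient", "decide", "act"],
            "phone":     "555-0142",
        }

        MAX_POINTERS = 7
        working_memory = ["ruy_lopez", "ooda", "phone"]  # keys, not the data itself
        assert len(working_memory) <= MAX_POINTERS

        for key in working_memory:  # dereferencing a single slot
            print(key, "->", long_term_memory[key])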

    I discussed working memory and long-term memory in relation to the OODA loop, both online [2] and in a revised form in the new book [3,4].

    Steve,

    Good observation. Google helps us offload from long-term memory in the way that GTD helps us offload from working memory.

    [1] http://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices
    [2] http://www.tdaxp.com/archive/2008/02/04/a-history-of-the-ooda-loop.html
    [3] http://www.tdaxp.com/archive/2008/09/22/debating-science-strategy-and-war.html
    [4] http://www.amazon.com/gp/product/1934840467?ie=UTF8&tag=zenpundit-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1934840467
