Friday 16 December 2011

Memory’s Role in Relation to the Learning “What Ifs”

The differences between working memory and long-term memory have been a focus of neuroscientists for the past 50 years. What we believe today is that working memory deals with the calculable aspects of what we are doing, and is activated when information, in the form of stimuli from both the environment and our long-term memory, coalesces into what we are thinking about. Research has shown that neuron activation (excitation) appears to be at the center of this process. On the other hand, long-term memory appears to be the result of the synaptic connections that are made once a pattern has been detected and stored. So while working memory is ‘visible’ as neurons light or fire, long-term memory is ‘powered’ by the connectivity of repetition. Another way you might look at the differences is that working memory focuses on the functional and the immediate (note that a lit bulb provides functional light), while long-term memory focuses on the structural (how a series of lights is connected) and the established.
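For readers who like to see an idea in miniature, here is a deliberately crude toy sketch (my own illustration, not a model drawn from the neuroscience literature) of the bulb-versus-wiring contrast above: activation that fades from one moment to the next stands in for working memory, while pairwise connection counts that accumulate with repetition stand in for long-term memory. Every name and number in it is invented for the sake of the example.

```python
# Toy sketch only: transient activation vs. connections built by repetition.
import itertools
from collections import defaultdict


class ToyMemory:
    def __init__(self, decay=0.5):
        self.activation = defaultdict(float)    # 'working memory': fades every step
        self.connections = defaultdict(float)   # 'long-term memory': grows with repetition
        self.decay = decay

    def attend(self, stimuli):
        """One moment of thinking: light up current stimuli, let everything else fade."""
        for item in self.activation:
            self.activation[item] *= self.decay        # activation is transient
        for item in stimuli:
            self.activation[item] = 1.0                # 'lit' right now
        # Repetition: items that are active together strengthen their connection.
        for a, b in itertools.combinations(sorted(stimuli), 2):
            self.connections[(a, b)] += 1.0            # structural and cumulative

    def recall_strength(self, a, b):
        """Connection strength survives even after activation has faded."""
        return self.connections[tuple(sorted((a, b)))]


mem = ToyMemory()
for _ in range(20):                              # rehearse a pairing many times
    mem.attend({"7 x 8", "56"})
for distraction in ({"weather"}, {"umbrella"}, {"lunch"}):
    mem.attend(distraction)                      # attention moves on

print(mem.activation["7 x 8"])                   # small: the 'bulb' has dimmed
print(mem.recall_strength("7 x 8", "56"))        # large: the 'wiring' persists
```

Run it and the rehearsed pairing keeps its connection strength long after its activation has faded, which is all the toy is meant to show.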

As mentioned in the last post, I want to speak to the fact that there are ways of building up to a point where the learner actively seeks to determine the “what ifs?” of study, regardless of whether there appears to be a natural aptitude or inclination to do so. Many educators who are deeply committed to the value of inquiry may have smiled when I wrote, “Prove it – What’s proven – Can you improve? has shown again and again to work very well for young learners of all ability levels.” Ironically, the proof holds that inquiry alone does not necessarily build you up to the proper “What ifs?”, and what I’m about to say today will startle those who see finding answers to questions as the default learning portal.

The fact is, the vast majority of inquiry-centric supporters fail to understand that problem solving requires motivation (curiosity) to want to look for something, but just as importantly, an ability to recognize when the thing (hopefully the truth) being sought has been found. Curiosity without a sense of competency will quickly die off. For many educators, this is a problem that must be solved. You can develop all kinds of intriguing problems, but if the knowledge base needed to recognize the answer isn’t there, the problem won’t get solved, and great frustration will be sure to follow. Talk about a quick way to kill curiosity.

Unfortunately, those entrenched in the inquiry camp have adopted the view that being able to remember knowledge is less important than how a person manipulates knowledge. Why both aren’t equally important in their minds, I don’t know. Of course for this to make any sense, a fundamental assumption must be made, which is that the thing to be manipulated in our working memory (in this case knowledge) is actually present and recognizable to both the eye and the mind’s eye of the observer. One must assume that an external representation (that which is observed) is the same as an internal representation (the working mind’s by-product) in order to de-emphasize the need for long-term memory as a primary resource for knowledge. It would then be argued that we could now depend on other resources like the Internet to provide a presence substitute (I can always look up the answer) for dormant synaptic connections or non-frequented memories.

Neuroscientists will tell you this is a practical impossibility because our ability to recognize patterns is based on our memory storage system, which acts as a comparison center under “recognition” circumstances. Our visual receptors are simply not enough. Think of it this way: you can see the final score of a game, or you can see the score and the statistics, or you can watch the game film and review the statistics afterwards in order to prepare for your next game. None of these would substitute for the memory you would have if you played the game. How your brain works as a participant differs from when your brain is in an observer role. The idea that the Internet can substitute for long-term memory (as opposed to support it) is where brain anatomy gets in the way of the inquiry emphasizer’s desire to prematurely get on to the “What ifs” without confirming that the learner will know if/when something is found.

This becomes a real problem for any educator who wants to build generative curriculum. While we try to create the learning situation where noble learning pursuits can unfold, little attention is given over to how pursuing the grail of executive and higher-order thinking can often lead to a fight with brain anatomy and neurocognitive structures. While we structure a lesson with the best pedagogical outcomes in mind, the learner’s mind structure is often ignored. Try as we might to emphasize a focus on balancing working and long-term memory, long-term memory’s pattern recognition function often interferes with, as much as it reconciles with, the environmental signals being manipulated in the learning moment in working memory. Before getting too far along on this, maybe a quick refresher of what is going on in the mind of a thinker is in order.

In a brain study done at the University of Colorado on the roles of the prefrontal cortex and basal ganglia in working memory, the authors focus on the specific processing mechanisms that different regions of the brain perform. Specifically, they looked at tasks that rely on working memory. Aside from proposing what they believe working memory is (temporarily storing and manipulating information needed for executing complex cognitive tasks), there is also an explanation of what is done: activation-based memory (certain neurons firing as a result of the need to maintain and update encoded information), and cognitive control (contextual attention to task through specific actions to achieve task-relevant objectives).

The study holds important information because it speaks to two important aspects of memory control, namely the ability to update memory when a new stimulus is introduced, and the ability to maintain focus on all information, including information from past memories. Juggling new information with old information requires the thinker to block or inhibit interference and irrelevant distractors. Of course this means figuring out what is relevant and what is irrelevant.

The authors describe this in mechanistic terms. The ability to regulate certain stimuli is seen as a selection process that requires information to be allowed in or blocked. Their model suggests that this function is served by the basal ganglia when they “gate”, or act as a gate to, new information being allowed to enter working memory. “In short, one can think of the overall influence of the basal ganglia on the frontal cortex as ‘releasing the brakes’ for motor actions and other functions.” (Frank, Loughry, & O’Reilly, 2001) They go on to speak to how the basal ganglia behave as a movement and memory-storage initiator even though they don’t control the detailed properties of the movements.
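To make the ‘releasing the brakes’ image a little more concrete, here is a very small sketch, loosely inspired by the gating idea just described but in no way a reimplementation of the Frank, Loughry & O’Reilly model. Working memory either maintains what it currently holds or, when a gate policy opens (‘brakes released’), updates to the incoming stimulus. The gate policy and the example stream below are my own hypothetical stand-ins.

```python
# Toy gated working memory: maintain current contents unless the gate opens.

def run_gated_memory(stimuli, gate_open):
    """gate_open(current, incoming) -> True to update, False to maintain."""
    contents = None                      # what working memory currently holds
    trace = []
    for incoming in stimuli:
        if contents is None or gate_open(contents, incoming):
            contents = incoming          # brakes released: new information enters
        # else: gate stays shut and current contents are actively maintained
        trace.append(contents)
    return trace


def familiar_bias_gate(current, incoming):
    """A crude 'stick with the task' policy: only admit task-relevant input."""
    return incoming.startswith("task:")


stream = ["task: read the problem", "noise: hallway chatter",
          "task: compare with a known pattern", "noise: phone buzz"]
print(run_gated_memory(stream, familiar_bias_gate))
# Distractors never displace task-relevant contents; the task updates do.
```

The interesting design question, and the one the rest of this post circles around, is what the gate policy should be: a policy biased toward the familiar protects against distraction, but it also keeps genuinely new information out.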

So what are the implications for a subject curriculum designer determined to get to the “What ifs?” based on brain functionality? First off, given the choice between allowing new information in and sticking with what we know from long-term memory, the bias tends to be to stick with the familiar (Wexler, 2008). This is due to the fact that we can predict with reasonable certainty what the outcome will be, because our pattern recognizers have built connections based on what we have observed. Working memory must reconcile this (what we know from the past) with the current environmental stimulus, and determine if the “brakes” can be eased (new information can be introduced and manipulated) so that updated thinking can occur. Of course for some learners this is a choice fraught with danger, as the unknown holds no patterns, which means mistakes are a given. Additionally, a learner may need to give a substantial amount of time over to interpreting pattern results. So the mistakes could be going on for a while.

As you can probably imagine, building out a curriculum that generates authenticity (having clear meaning and value to the learner) seems to be in direct contradiction with how the mind needs to allow in (take the brakes off to) new information. Where new information can be clear, concise and fact-based, dealing with new information is by its nature unclear and mistake-prone. My most personal experience with this is with skiing in what I describe as “variable” as opposed to consistent snow conditions. To let the brakes off and allow the skis to run can mean all kinds of potentially hazardous outcomes. However, I have also come to learn that going slow virtually guarantees that my feet won’t stay beneath me for long. To have any chance of staying balanced, I either have to keep the brakes fully on (just stand in one place) or I have to take the brakes completely off to allow inertia to take over, with the belief that I can reapply the brakes without tumbling out of control. So the paradox exists that I need to let go, into the world of mistakes, in order to gain control.

The world presents a similar paradox as we admire the professional people who get it right, while effective learners need to get it wrong for a while in order to become potential professionals. So Daniel Willingham would be correct in pointing out that the ‘what ifs’ for adults are different than the ‘what ifs’ for adolescents in one important way. Adults perceive ‘what if I get this wrong?’ in more consequential terms as they are expected to get it right. Adolescents likely want to get it right as well (and as educators we are expected to help them get it right), but they do themselves no favors if they fail to recognize that getting it ‘wrong’ from a mistake-perspective is actually a tool for improvement.

So how do you square creating a culture where mistakes are seen as necessary and valued as a learning tool with an unambiguous message: perpetually getting it wrong isn’t right? You start with challenging long-term memory assumptions about what the learner knows. Until it can be proven that beliefs no longer hold given the circumstances now faced, people will keep going back to that which is familiar, that which is known. I’ve mentioned this before but it should be re-stated: We go through life thinking we are right until we are wrong.

This is a critical point. We must tell students how their memory perpetuates a sense of rightness, when in fact memory can be a best friend or a worst enemy when it comes to recognition of right and wrong. Right or wrong is constantly being determined, and there are many times when what is wrong is believed to be right. Remember, the concept of recognizing proof implies that enough time has been spent studying something that a pattern begins to prevail, which can then be locked into long-term memory. It should be clear now that this is substantially different from unlocking the gate to new information, which will impact, and may even contradict, the sense of pattern development taking place. With the gate open to new information, that which turns out to be evident must also be recognizable as relevant (which takes time and replicability to determine). This is what I mean by memory’s schizophrenic ability to be friend or foe. Of course we want the learner to recognize and correct themselves (if and when necessary), but more importantly, we need to keep the gateway to new information open no matter how tempting it is to reduce the ‘noise’ they should be allowed to face while manipulating information. We often step in with the answer too soon (this is another way of describing the potential for confirmation bias…the teacher’s bias, not the learner’s). If we don’t allow the learner a chance to build a solid knowledge background through testing the “What ifs” (long-term memory is prematurely coerced by outside expert knowledge), the learner will lack the ability to make comparisons that will prove out what is noise and what is necessary, and will become dependent on the teacher for an overly prolonged period of time. There is but one conclusion that can be drawn from all of this:

When building out generative learning one must assume that in the mind of the learner some things (that which is yet to be learned) will be unclear right up to the end, while that which is known must be proven, then re-proven before any attempt can be made at improving.

I realize that any seasoned educator will take one look at that statement and say, “tell me something I didn’t know.” Well, for one thing, if you believe the statement, you and your learners realize that an emphasis on how you practice (proving and proving again) is just as important as your emphasis on hoped-for outcomes. Goal setting will embrace uncertainty about learning limits and certainty around time on task. As well, built in along the way is the need to face anxiety-inducing potential failure unless the learner is willing to practice through the inevitable mistakes.

Getting comfortable with re-proving what you think is already known may seem counter-productive, but cannot be downplayed. By going back over and testing that which you ‘know’, you actually make it easier to reconcile the fact that you may have to give up some conceptual knowledge that you believe is true. The most important implication here is that the learner has to be made aware of how their thinking works, and should be prepared to process information with this awareness in mind. Regardless of what is being taught, what is to be learned MUST be open-ended, while that which becomes known must be forever tested for assumptions.

So there have been a lot of words typed to get to the point of this post, which is that you need to start down the road to the “What ifs?” with the learner’s perception of “what if this doesn’t work out the way I want it to?” at the top of your list. For the learner this may happen before they act (long-term memory based: “I struggled with this problem before”) or it may develop as they gather evidence that their thinking (working memory based: “I guess I’ll try this first?”) isn’t achieving the now hoped-for result. Either way, as learners grasp the ability to make and carry out a plan, they can also see if their plan is achieving the targeted outcome. Success and failure at generating plan/outcome alignment affect functional and structural thinking. What to do, then, when the learner decides, “I’ll just leave the brakes on, thank you very much, because I don’t like not knowing what happens next”? This seems to be when the generative approach starts to lose its grip.

Think for a moment about the potential structure of a belief. We will stay in the past with what we know up until something comes along that demonstrates to us that ‘past thinking’ no longer works (Mezirow & Taylor, 2009). This becomes readily apparent when someone or something blocks us from achieving what we are trying to accomplish. Attention centers on overcoming the obstacle or challenge before us. At this critical juncture, does the thinking move to the idea of mistakes required to overcome, or drift all the way to contemplating failure? Talking about the acceptance of mistakes on the way to success must be part of the learning conversation.

This kind of talk is abstract in nature because we don’t know yet what will ultimately be learned even if we have clearly defined the particulars of success.  However, once the results are in, are they seen as an end point, or a new take off point?  What we believe about the results achieved plays a huge role in how our thinking structure will be prepared for, and participates in, the next learning episode.  As educators we should be comfortable prescribing a minimum amount that we feel can be learned within a certain amount of time.  We must also appreciate that the ability to continuously go forward with new information will be greatly influenced by what is now stored in long-term memory, but just as importantly, how that memory is used.

For adolescents we must break the conventional link between mistakes and failure in effective ways. We want to make sure that it is clear that making mistakes can mean avoiding long-term failure if the mind can tell what kind of mistake is being made. We want to ensure that the mistake of relying on old habits is not confused with a mistake made while testing the unknown. Testing the unknown starts with testing the known to be sure that it is true. If the known holds up under testing, great! If what is known no longer holds to be true, then the mistake comes in not reconciling belief with reality.
  
This is by definition learning how to become adaptable. The alternatives are rather depressing. Placing the learning focus solely on proving external phenomena (through either inquiry or comprehension-based methodologies) avoids the real proving grounds of how your mind approaches a problem. The focus turns to what you did or did not learn (and the standardized test then bears this out). This stands in stark contrast to recognizing that the learner can make thinking improvements because they understand how memory affects the wonders that lie ahead. The path of wonderment starts out in a relatively transient state (“is this choice correct or a mistake?”), which leads to more correct choices (iteration). Further along the hierarchy is a relatively more established state (“as this and this became true, can this be true as well?”). At some point the question becomes: “because this process is ‘failing’ in some way and I don’t like the result being achieved, can my idea bring about a preferred result?” This is learning empowerment in the making. When this type of question is being pondered, both the associative (working memory) and connective (long-term memory) aspects of thinking are engaged. However, the thinking will never evolve to this point unless learners know and can recognize that they have made the right kind of mistakes while getting closer to being able to effortfully change their surroundings.

I’ve never heard someone who uses an inquiry approach (or even the comprehension approach for that matter) speak to the nature of “What ifs?” in this context. They tend to see questions as requiring a connection between learner and topic, which is true. My biggest concern is that this approach has a limited shelf life: curiosity can fade when competency expectancy isn’t reached. Competence develops around understanding thinking in ways that support the learner’s ability to recognize what is known, as well as the good mistakes along the way that made the knowing possible. Making the mistakes-failure link a part of the learning conversation should be at the top of any epistemology if you want to move thinking along. Creating the perfect circumstances where working memory and long-term memory have to co-operate as equals in order for mistakes to be appreciated, and failure to be avoided, is the key. This is where episodic memory and situated learning come in. Where thinking (prove it, proven, improve: defendable recognition) happens very much matters, and that will be the subject of the next installment.

Thursday 8 December 2011

It’s All in the (Analyzing) Approach

Today’s entry begins with a clarification. I may have misled some readers when I talked about “Co-opted Study” without clearly indicating that the analysis of co-opted argumentation is the type of wisdom-inducing exercise that I believe can, and should, be examined in schools. I was not, and never have been, of the opinion that the use of co-opting arguments as a strategy actually works. In fact, I’ve already taken a swipe at educators who combine or mix quantitative and qualitative study results in an earlier post. The International Journal of Science Education article on Co-opting Science that I linked to in the previous post does an excellent job of showing how we can be tricked by argumentation strategies that weave together scientific fact (descriptive) and evaluative statements. The study recognizes that educators must be on their toes when fallacious arguments combining normative and factual statements are being raised by students, and that there is a high likelihood of blurring results due to ‘co-option’ argumentation strategies such as the use of conjunctive argument (the concepts ‘hard snow’ and ‘ball’ are much different than ‘hard snowball’) or the fusion of normative and factual statements (“disease eradication is great for the world” and “gene therapy can eliminate some disease” becomes “gene therapy is great for the world”). Of course the way to avoid this is to study and analyze argumentation through models of argumentation patterns and argumentation theory (normative pragmatics). This study requires the learner to not only argue, but also recognize how the functions of arguments are established. As I pointed out last post, the platform for wisdom development does not have to be this complicated. For a person ‘learning’ badminton, the function of keeping your opponent off balance through shot placement may not be within your skill repertoire yet, but you should practice with that conceptual function front of mind. The alternative is prolonged frustration as getting the bird back over the net, using the now highly refined overhead shot, never seems to be enough to win the rally.
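For anyone who wants the ‘fusion’ example pinned down, here is one way (my own formalization, not the article’s) to write the two statements so the gap in the argument becomes visible: the evaluative premise speaks to eradicating disease, the descriptive premise only to eliminating some disease, and no rule of inference carries you from those premises to the fused conclusion.

```latex
% Illustrative formalization only; predicate names are my own, not the article's.
% Requires amsmath and amssymb.
\begin{align*}
P_1\ \text{(evaluative)}:\quad  & \forall x\,\bigl(\mathrm{EradicatesDisease}(x) \rightarrow \mathrm{GreatForWorld}(x)\bigr)\\
P_2\ \text{(descriptive)}:\quad & \exists d\,\bigl(\mathrm{Disease}(d) \wedge \mathrm{CanEliminate}(\mathrm{geneTherapy},\, d)\bigr)\\
C\ \text{(fused claim)}:\quad   & \mathrm{GreatForWorld}(\mathrm{geneTherapy}) \qquad\qquad P_1, P_2 \nvdash C
\end{align*}
```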

Now that that is cleared up, I’d like to take on an idea posited by Daniel Willingham (who was mentioned in the last post as well) in his book Why Don’t Students Like School? He believes that we should not expect young learners to be able to think like an expert because the young learner lacks the experience through practice (about ten years) required to ‘be the expert’. The underlying argument is that even child prodigies are imitators rather than creators, so a more realistic approach is to focus exclusively on comprehension rather than knowledge creation. I beg to differ, because I believe that young learners are more capable of analysis than we often give them credit for.

To begin with, I would never argue that a learner can skip the hours of practice required to become an expert.  As stated above, studying ‘co-opting argumentation’ is a comprehension exercise critical to avoiding being misinformed about what is happening when people make declarations.   But once you comprehend what is going on, then what?  Should you just say you are done and ignore the ‘what ifs’ that comprehension now raises?

Let’s examine a really big ‘What if?’ question. I would argue that along any learning comprehension journey, ‘experts-in-the-making’ may not create new knowledge in a general sense, but are very comfortable in using both imitation and their own meaning build-out processes as a way to build personal knowledge and personal wisdom. This should not be confused with ‘thinking like an expert’, but should be recognized for what it is: an expert’s approach to building deeper knowledge/understanding/wisdom. Any expert who reflects upon and recognizes what it took to create knowledge appreciates the need to comprehend. What they don’t tend to do is get stuck in an approach involving a recursive cycle of comprehending comprehension (unless they are a philosopher or linguist or cognitive psychologist off on a tangent). Once experts have determined that their comprehension is correct, usually through gathering repeatable evidence (and that’s where ten years of experience comes in handy), they get on to hypothesizing on the ‘What ifs?’ in the world. Those experts-in-the-making that morph into recognized experts also make the critical decision To DO Something with their comprehensions...they hypothesize and they ACT. My sense is that if you asked an expert when the moment was that they started thinking about possibilities, but more importantly, DOING something with possibilities in mind, it wasn’t at 10 years + a day into their comprehension journey. Steve Jobs left us with an important piece of advice, which was to ‘follow your dreams’. I am particularly cognizant of the fact that his advice does not state that dreaming alone will get you what you want. This holds no matter how well you comprehend, repeat, practice or otherwise focus on the dream. On this point I should elaborate, because to me ‘following’ something (in this particular case a dream) must be examined in two ways: deciding whether someone is, or is not, ‘following’; and then, if following is occurring, asking whether there is a need to recognize the impact of the ‘degree of following’ (is there such a thing as a little bit dead, and does that matter).

I begin this elaboration by recognizing that the results of creating meaning for oneself are not the same as the results of creating what would be considered generally accepted “New” knowledge. I never assume that new to me equals new to everyone. With this given in place, the greater question becomes: From the perspective of someone who discovers (the unknown becomes the known via discovery, regardless of degree), are the processing functions for comprehension and new knowledge creation the same, just discoveries at different points along an experience continuum?

Our entire belief in the concept of progression is built on this being true. At some point the act of progressing may lead to discovering something no one knew (instead of just me not knowing), but the way one thinks (“how am I going to figure THIS out?”) is still the same. So, if it is established that something is being followed and discoveries are happening, then how might these processing functions change (if at all) at different places on an experience continuum, and why might this be the case? According to Dr. Willingham, this is due to the differences in the elaborative nature of the functions. Essentially, the structure of the function is more elaborate due to experience and therefore can create knowledge where none existed before. To me this is like saying that because your Ferrari can go 200 mph and my horse-drawn buggy can only go as fast as my horse, I shouldn’t test what my top speed could be because it will never be 200 mph. What this doesn’t speak to is the impact or degree that using the available function has on future events. If the goal is to learn how to manage a vehicle moving 200 mph, maybe my buggy won’t do. But can a race in my buggy be used as a take-off point for other races I might want to entertain in the future? I believe so.


In his book, Dr. Willingham does make a near-irrefutable point when he identifies the real problem, which is when educators don’t recognize the differences between experts and novice learners (that as educators we can somehow ‘shortcut’ the learning cycle and tell buggy racers that they are Ferrari drivers in the making). I would argue that the shortcut approach of a teacher telling a student “Be the expert” (which is difficult if not impossible) needs to be replaced with “Be AN expert”. The implications in differentiating these approaches should not be underappreciated (there is more here than a definite/indefinite article exchange).

If the expectation is to do exactly as the expert does, of course there will be frustration on the part of the learner. If, however, the expectation is to do some things that change what we know in ‘wisdom gap’ narrowing ways (using the available function), the learner is moving nearer to expertise, with all the benefits that accrue. The ‘Be the expert’ approach fails because the learner cannot be in two places at once (both novice and expert), while the ‘Be an expert’ approach focuses on the relationship or distance between levels of knowledge and the ability to understand both similarities and differences. My sense is that Dr. Willingham would be hard pressed to defend the idea that the concept of relationship function, which is familiar to everyone from about 5 years of age on, couldn’t be the first abstraction fully grasped if taught properly. I think the minds behind Facebook would back me up on this.

Of course this puts the concept of who is ‘the expert’ in a different context. The definition of ‘an expert’ now includes a focus on the ability to overcome cognitive capacity issues regardless of what point you are at on the cognitive ability continuum. Dr. Willingham does an excellent job of making clear what the learning novice’s discrete limitation issues are. Unfortunately, in doing so, he concludes that the dynamics (the constant change) occurring over the 10 years of ‘becoming’ an expert are so drastically different from overcoming on a day-to-day basis that one should not be compared with the other. Note I used the word “compared” (past tense). Of course the two abilities (novice ability and expert ability) to overcome are very different (both quantitatively and qualitatively), but the ability to recognize a change in ability (very large to relatively small) is exactly the same, especially when looking into the future (and not just the past).

Regardless of the differences, one thing we can all (novice to expert) appreciate is that the road to overcoming begins with recognizing results. What may be less obvious, but equally graspable, is recognizing the approach to be taken to achieve the results. My sense is that focusing on effective approach is not a lost cause for novices. It begins with the teacher saying “explain to me/show me what you did”, what we might see as Prove it first. It continues with “now how does your approach compare with xyz approach?”, or what we might describe as who else has proven this? It ends with “gather the evidence that indicates how each approach works, then tell me which approach you would use under circumstance A, B, and C.” This is the point where, should the motivation be present, the learner can contemplate making improvements.

An interesting side note to this is that, in my experience using this method, I find that comparing approaches taken amongst novices is far less threatening than comparing approaches taken by experts. I believe the reason for this is that you are comparing learning novices’ belief systems that are still open to, and accepting of, change.

So although I would be hard pressed to debate with Dr. Willingham on how the mind works, I’m also unsure how to square my teaching experiences and learning results with the more modest outcomes that this author believes are “more realistic”. To me, the evidence is that a Prove it – What’s proven – Can you improve? approach has shown again and again to work very well for young learners of all ability levels. To do any of these 3 things requires that the learner distil out functionality within a context. I fully recognize that the learner doesn’t perform these exercises with the sophistication of an expert, but I still see novice learners smiling as all the comprehension effort pays off with the reward of looking at ‘What if?’ possibilities. The next post is going to look at some tangible ways of building up to the ‘What ifs?’ that keep learners motivated. As you can probably guess, the approach you take is very important :)