Attachment: Caregiver-Infant Interactions and the Stages of Attachment

Every time I use a colon in a title I feel as though I’m writing a Harry Potter novel.

We’re onto Attachment!  Halfway through!  Or – now that I’m writing this – more than halfway through!  So, if you don’t know what Attachment is, I made a long post about it in late May, so go ahead and check that out.

In our first topic, we have caregiver-infant interactions and the stages of attachment – and I’m starting with caregiver-infant interactions, because it’s first in the textbook.  I am a simple woman and I am doing a simple thing.

Caregiver-Infant Interactions consist of two main categories: interactional synchrony and reciprocity.  Hey, isn’t it great when psychologists think it’s a super cool thing to do to give simple concepts really complex names?  It feels like we just went through this with the encoding specificity principle.

Let’s explain what they are, pronto, so that nobody is left looking at those titles and having an aneurysm because of the names.  Interactional Synchrony is, essentially, just mimicry.  If a caregiver makes a distinct hand or facial movement, it’s likely that an infant, from as early in its life as 10 minutes old, will mimic the movement.  This was studied by Meltzoff and Moore, who placed a dummy (I believe ‘pacifier’ is the US term) in the infant’s mouth whilst the caregiver made a distinct hand or facial movement, then removed the dummy to see how the infant responded.  They found a correlation between adult behaviour and infant behaviour.  This type of study is called an observational study, as behaviours were placed into categories by an observer watching a film of the interactions.  The fact that imitation appears so early in life suggests it is an innate behaviour.

The use of an observational study was intelligent, because it is difficult to measure infants’ behaviours – their hands and especially their mouths are constantly in motion.  By getting others to observe whether a behaviour fits into a behavioural category, Meltzoff and Moore removed the issue with observing infant behaviour.

On the other hand, Koepke was not able to replicate Meltzoff and Moore’s study, suggesting that there may be an issue with research findings.  Meltzoff and Moore claim that Koepke did not follow the procedure correctly.

Whilst Meltzoff and Moore believed that imitation was intentional, Jean Piaget believed that intentional imitation did not occur before the end of the first year of an infant’s life.  He believed that the infants in Meltzoff and Moore’s study were displaying pseudo-imitation: responding to caregivers because the consequent caregiver behaviour was rewarding, rather than deliberately imitating them.  However, Murray and Trevarthen carried out research in support of Meltzoff and Moore, finding that if the caregiver did not respond to the infant’s imitation, the infant would show acute distress.  This suggests that infants actively try to elicit a response.

Marian attempted to replicate Murray and Trevarthen’s study and found that infants couldn’t distinguish between caregivers interacting with them live and caregivers on video, suggesting infants are not really responding to the adult.  However, Marian did acknowledge that this may have been due to the procedure.

However, a study by Abravanel and DeYong found that inanimate objects which made mouth opening movements and other similar specific movements did not prompt an infant to display imitation.  This suggests that interactional synchrony is a specific social response to other humans.

It’s also interesting to note that strongly-attached infants displayed greater interactional synchrony.  We’ll learn about attachment types later.  It’s also notable that infants who displayed greater interactional synchrony had stronger relationships with their caregivers at three months, though whether this is a cause or an effect (or even simply correlational) is not clear.

Meltzoff and Moore claim that interactional synchrony also helps infants to understand social interactions and empathise with what others are thinking and feeling, based on what they’re feeling as they carry out certain movements.  This is called ‘Theory of Mind’, and I personally think that it needs a lot more research done on it before it should be in an A Level textbook for 16-18 year olds, but I am not on the exam board.

The other type of caregiver-infant interaction is reciprocity.  This one is a bit less complex.  It refers to the conversational rhythm that infants and caregivers adopt when interacting, even though the infant is non-verbal at this point.  This means things like taking turns, so if a caregiver smiles, the infant might smile back, or the infant will wait for the caregiver to repeat a specific action before carrying out a specific action of its own.

Caregiver-Infant Interactions help to form the basis for the different stages of attachment, as developed by Schaffer and Emerson.  According to Schaffer and Emerson, there are four different stages of attachment.  I believe we covered these in the overview of Attachment, but I’m going to go over them again.

Before we start, it should be noted that reports on infants’ behaviours were obtained from the mothers.  Self-report techniques are not always very reliable, as it’s possible that a self-conscious mother could lie – or that a non-self-conscious mother could simply misinterpret her child’s behaviour.

Furthermore, Schaffer and Emerson used a biased sample.  Their research was based in only one city (Glasgow) and on only one class (working-class).  This means that the validity of their research is lessened, as families in different areas and different classes may show different behaviours.  Furthermore, it was carried out in the 1960s, and cultural norms – such as women working – have changed since then, which may influence the way that attachments are formed.

Cultural differences are very important in attachment research.  Schaffer and Emerson researched the United Kingdom, an individualist culture wherein everyone is mostly concerned with their own needs or the needs of their immediate network.  Other countries, such as China, have collectivist cultures, which are concerned with the needs of the group as a whole.  In these cultures, multiple attachments are more common, as evidenced by Sagi’s study of Israeli kibbutzim, communities in which infants are mostly brought up communally.

The first stage is called indiscriminate attachment.  This occurs when an infant is very young – up to about the second month of life – and it refers to the fact that very young infants do not show stranger anxiety or any preference for a specific caregiver.  Most importantly, they do not show a defined preference between animate and inanimate objects.  This sits at interesting odds with Abravanel and DeYong’s research, though it isn’t covered in the spec, so I’ll leave that observation there.

The next stage is called the Beginnings of Attachment (apologies if this one’s a bit sloppy – I just had a 20 minute break, and we know what those are like).  It occurs from about the second to the fourth months of life, and it’s the point at which infants begin to show a preference for human interaction over inanimate objects.  They can also distinguish between familiar and unfamiliar people, though they are unlikely to show any stranger anxiety at this point.

At around four to seven months, an infant will begin to show discriminate attachment.  This is when they become strongly attached to one particular caregiver and show significant separation and stranger anxiety.  The primary caregiver is usually the mother (in 60% of cases), with a joint attachment to mother and father occurring in 30% of cases.  This prevalence of the mother over the father is thought to be because mothers produce oestrogen, which is associated with caregiving, though there is no physiological difference between mothers’ and fathers’ responses to an infant’s distress.  Sociological factors, such as fathers being expected to work, are also thought to play a part – though fathers are also thought to provide a strong basis for the development of active problem-solving in offspring.

The final stage of attachment is multiple attachments.  This is when infants have one or more secondary attachments on top of their attachment to the primary caregiver.  Generally, within six months of developing a primary attachment, 78% of infants have developed secondary attachments to grandparents, aunts, uncles and siblings.

A researcher called Bowlby, who gets plenty of spotlight in this research, asserts that not all multiple attachments are equal.  He believes that the infant has a special bond with the primary caregiver (monotropy) and that other bonds may be weaker or serve different purposes.  Rutter, on the other hand, believes that all attachments are equal, and that they all integrate into an infant’s attachment type.

We do need to be careful not to judge attachments and developments on stage theory alone.  In some cases, multiple attachments might come first, or attachments may simply be more flexible than stage theory suggests.

That’s our first attachment topic finished, so we’re onto Animal Studies next!


Memory: Improving Accuracy of Eyewitness Testimony (Cognitive Interview)

Last topic in memory!  Woo!  Personally, I really like this one – it’s something I actually really want to study in real life, just to see how effective it is.  Once I’ve finished my A Levels, I just might – so keep your eyes peeled.  Actually, now is probably a fairly good time to mention that whilst I’m not going on to study Psychology at university, I am going on to study Anthropology, and the two subjects are obviously intrinsically linked – so I’ll probably make a blog for that in around August, so keep an eye out for that, if you’re interested.

Now, onto the topic at hand!  As the last post demonstrated, eyewitness testimony isn’t always strictly accurate – but it is usually necessary.  That means that we have to figure out ways of improving it – and what better way than the cognitive interview?

(This is where the ‘about’ widget will be relevant in about five years, and someone will step forward and say “actually, there are many problems with the cognitive interview, which is why we use mind-deep meditation and informative hypnosis, combined with alcohol, to make sure things are as accurate as possible.  I learned this in class.”)

The cognitive interview has four main components to it, all of which aid memory in some way.  Let’s go through them all in order, for the sake of ease (for me).

The first one is mental reinstatement of original context.  This is to help a person remember exactly what they were doing and where they were when whatever happened happened.  That’s things like asking the witness to recount exactly where they were, what they had been doing before, what the time was, how they felt, what they could see, what they could hear and what the weather was like.  This links us back to retrieval failure: these pieces of contextual information can act as important cues to other information that might be crucial to the investigation.

The next step, Report Everything, is also based on the idea of ensuring that someone is given the cues that allow them to access important information.  I’m sure that you can sort of guess what it entails from the title, but I’ll explain anyway.  Report Everything is when a witness is asked to give information on every detail of an event, even if it doesn’t seem important.  This could be something like “my bag split” or “the person I was with tripped” or even something as trivial as “I coughed”.  This is because memories are interconnected, and so remembering one trivial detail, or a partial memory of spilling a carton of milk earlier that day (for example), might cue a full picture of the incident as it occurred.  Again, this is to counteract the effects of retrieval failure.

The next step isn’t so much to do with cues, but with removing the effects of someone’s schema of how events should occur.  A schema is a mental framework which we compile from experience, and it helps us to navigate new situations more easily.

The first way of doing this is by using ‘change order’, in which a witness is asked to recount the event backwards, in order to remove any automatic preconceptions about how something should happen.  For example, I did this very quickly a moment ago, and remembered that the last time I went grocery shopping, I queued on the wrong side because a lady had done so before me and I didn’t want to confuse things when it came to who should pay first, but then a lot of people came to the queue behind me and ended up filing halfway down the fruit aisle.  If I had been asked to remember this forwards, I would probably say I had queued from the far left, because this is what my schema dictates I should do.

The final step of the cognitive interview is ‘change perspective’, which works for the same reason as change order.  Change perspective asks a witness to recall what the event might have looked like from, say, across the road.  This means that they don’t use their schema or their subconscious to fill in any gaps, and so only key details like ‘blue shirt’ and ‘male’ come into the equation.

A meta-analysis of 53 studies found that, on average, the cognitive interview produced a 34% increase in accurate information recalled compared with standard interviewing techniques.  However, this seems to come from the combination of techniques rather than any single one: when individual components of the CI were tested alone against a control condition, recall was broadly similar across all five conditions, but when ‘report everything’ and ‘mental reinstatement’ were used together, recall was higher than in the other conditions.  This suggests that the cognitive interview as a whole is an important method of increasing the accuracy of eyewitness testimony.

Leading on from this, it should be noted that the cognitive interview is not really one technique, but a collection of techniques used to enhance eyewitness memory.  As such, its real-world effectiveness is difficult to assess, as many police forces – such as the Thames Valley Police – use only two or three of the steps rather than all four.

It should be noted that the cognitive interview does not guarantee accuracy.  Kohnken found that whilst there was an 81% increase in correct information using the cognitive interview, there was also a 61% increase in incorrect information.  This suggests that the police need to be very scrupulous indeed in their assessment of information gathered through use of the cognitive interview.
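If you like to see the numbers move, here’s a toy calculation of what an 81% rise in correct details alongside a 61% rise in incorrect details might look like.  The baseline figures are completely made up for illustration – they are not Kohnken’s data:

```python
# Hypothetical baseline: a standard interview yields 100 correct
# and 20 incorrect details (illustrative numbers only).
def recall_stats(correct, incorrect):
    """Return (correct, incorrect, proportion of details that are correct)."""
    return correct, incorrect, correct / (correct + incorrect)

standard = recall_stats(100, 20)
# Apply Kohnken's reported increases: +81% correct, +61% incorrect.
cognitive = recall_stats(100 * 1.81, 20 * 1.61)

# The proportion of correct details barely moves...
print(f"standard: {standard[2]:.2f} correct; CI: {cognitive[2]:.2f} correct")
# ...but the absolute amount of wrong information to sift through grows.
print(f"incorrect details: {standard[1]} -> {cognitive[1]:.1f}")
```

In other words, the cognitive interview produces more of everything, so each extra detail still has to be checked.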

Furthermore, the cognitive interview is time-consuming and expensive.  We can’t guarantee that all police forces have the finances to fund training members in carrying out the cognitive interview, and when they need to apprehend a dangerous criminal, they generally prefer to do things as quickly as possible.  As such, the use of the cognitive interview is not widespread.

In spite of this, the Cognitive Interview is often very advantageous when interviewing older witnesses, as these are the witnesses who are most likely to struggle with retrieving information, often due to self-confidence issues.  This was evidenced in a study by Mello and Fisher.

And we’re done with memory.  Next up: attachment.

Memory: Accuracy of Eyewitness Testimony

This should be a fun topic for any (future) lawyers, private investigators, members of the police force, criminologists…  Yeah, I think it’s just a pretty fun topic.

Obviously, when someone commits a crime, we have to find evidence to convict them, or to find the right person to convict.  Sometimes, the physical evidence isn’t present, so we have to use eyewitness testimony, which is sort of what it sounds like.  It’s when people who’ve seen an incident taking place tell an investigator – or a courtroom – what they saw.  It’s the sort of thing you see in every episode of Law and Order, and probably about every fifth episode of Coronation Street, if memory serves.  (My best wishes and thoughts to David Platt, the true symbol of British pride).

However, eyewitness testimony is not always accurate, and this can be for a number of reasons.  The first is misleading information, and the second is anxiety.  Both of these are quite heavily study-based, but it’s not like Social Influence, where you’re given far too much information about a single study at once.  Accuracy of Eyewitness testimony is quite doable, if you ask me.  And, considering you’re reading this blog, I’d wager that you are sort of asking me, a bit.

Let’s start with misleading information.  Within misleading information, you have leading questions and post-event discussion.  A leading question is when someone is asked a question which causes them to alter their perception of the way an event took place, subconsciously or otherwise.  Post-event discussion is when witnesses talk to each other after an event, and their memory of the event becomes contaminated.

Loftus and Palmer carried out a study on leading questions.  This is one of those studies where they write ‘key study’ in the title, so if you’re taking notes, you should probably note this one down, just to be safe.  In their first study, they showed 45 participants film clips of seven different traffic accidents.  There was a critical question: “how fast were the cars going when they hit each other?”.  The verb varied between the five conditions, with the other four using the words smashed, collided, bumped or contacted in place of hit.  These impacted speed estimates, with ‘smashed’ producing the highest mean estimate of 40.8 miles per hour, whilst ‘contacted’ produced the lowest, at 31.8 miles per hour.

Loftus’ research led her to suggest that eyewitness testimony was generally unreliable, and should not be used as evidence in court (she’s quite passionate about this, if you wanted to look up one of her Ted Talks to see her talking about it in action).  Other researchers point out that a lab experiment does not arouse a witness’s emotional state in the same way as actually witnessing an event.  Yuille and Cutshall found that eyewitnesses to a real armed robbery in Canada generally gave very accurate information about the event, further reinforcing the idea that emotional state impacts accuracy.  We’ll cover this in more detail a little further down, in the effects of anxiety on eyewitness testimony – with another armed robbery study, no less.

In fairness to Loftus, there is evidence to suggest that inaccurate information from eyewitnesses is one of the leading reasons behind people being falsely convicted.  DNA exoneration cases have shown that these concerns hold some weight.

In a second experiment, a new group of participants watched a film of a car accident and were later asked if they had seen any broken glass.  In spite of the fact that there was no broken glass at the scene, those in the ‘smashed’ condition answered ‘yes’ 16 out of 50 times, compared to just 7 out of 50 times in the ‘hit’ condition and 6 out of 50 times in the control condition.  This suggests that a leading question can change a participant’s actual memory of an event.

In post-event discussion, there are a couple of reasons why information can become contaminated.  One is the conformity effect and the other is repeat interviewing.  The conformity effect suggests that witnesses can come to a consensus view on the details of an incident.  This was the case in a study by Gabbert, wherein 71% of witnesses who had discussed an event went on to mistakenly report false details when questioned.  Repeat Interviewing considers the fact that comments from an interviewer can become ingrained in a witness’s memory.

Loftus studied a group of college students, all of whom had visited Disneyland as children, and asked them to evaluate advertising material which featured either Bugs Bunny or Ariel – despite the fact that neither character could have been at Disneyland (Bugs Bunny belongs to Warner Bros, and Ariel had not yet been introduced at the time of their visits).  Even so, participants in the ‘Bugs’ or ‘Ariel’ conditions were likely to report having shaken hands with the characters, suggesting that misleading information does have an impact on the accuracy of eyewitness testimony.

The elderly are thought to be more easily misled than younger witnesses, as they tend to struggle with remembering the source of the information (though the memories themselves are not impaired).  This suggests the importance of individual differences in eyewitness testimony.

There’s one more evaluation point, on response bias, but it’s not laid out very clearly, and I’d rather not try to explain something I don’t understand.  Four evaluation points for misleading information is more than enough, though.  You’ll do fine – and, hopefully – so will I, knock on wood.

We’ve done misleading information, so let’s move on to Anxiety.  Here, the question is whether anxiety improves or worsens memory, and the key phenomenon is the Weapon Focus Effect.  I like the Weapon Focus Effect, because the whole thing inspires the memory of a ridiculous newspaper comic, where everyone has a very round head and over-exaggerated facial expressions.

Johnson and Scott carried out a study on the weapon focus effect.  There were two conditions, and in both, participants sat in a quiet waiting room.  In the first condition, a confederate ran through the room holding a pen covered with grease (low anxiety).  In the second condition, the confederate was holding a knife covered in blood (high anxiety).  Participants were then asked to identify the person holding the object.  In the ‘pen’ condition, mean accuracy of identification was 49%.  In the ‘knife’ condition, it was 33%.  Loftus suggests that anxiety pulls witnesses’ focus towards the weapon rather than the person holding it, which was supported when monitoring showed that participants’ gaze tended to settle on the weapon.

This being said, the Weapon Focus Effect might not be caused by anxiety, but by surprise.  To test this, Pickel carried out a study in a hair salon using high- and low-threat items and high- and low-surprise items (most memorably, scissors and a whole raw chicken).  She found that identification of people was least accurate in the high-surprise conditions, supporting the hypothesis that surprise is the cause of the weapon focus effect.

However, anxiety can also produce a positive effect on the accuracy of eyewitness testimony.  Christianson and Hubinette found enhanced recall when they interviewed 58 eyewitnesses to bank robberies in Sweden.  They conducted interviews on bank tellers and bystanders four to fifteen months after the event had taken place.  All showed recall higher than 75%, and bank tellers, who were in the highest anxiety condition, had the best memory of all.  This suggests that anxiety actually has a positive effect on the accuracy of eyewitness testimony.

This is a helpful study because it draws on a real-life event, which improves its ecological validity.  However, Deffenbacher’s meta-analysis of 34 studies suggests that whilst lab experiments show that anxiety reduces the accuracy of eyewitness testimony, real-life events reduce accuracy even more.  This puts Christianson and Hubinette’s research at odds with other existing research.

As such, it should perhaps be noted that there are no simple conclusions that we can draw about the effects of anxiety on the accuracy of eyewitness testimony.  Halford and Milne found that witnesses to violent crimes tended to have more accurate recall than witnesses to non-violent crimes.  This could account for the contradictory research by Christianson and Hubinette.

Yet, the research by Christianson and Hubinette doesn’t invalidate the research by Johnson and Scott – so we have a contradiction to resolve.  Deffenbacher reviewed 21 studies on the effect of anxiety on eyewitness testimony.  Of these, 10 showed anxiety as having a positive effect on eyewitness testimony, and 11 showed it as having a negative effect.  Deffenbacher’s explanation for this was something called the Yerkes-Dodson effect, which suggests that a moderate amount of anxiety strengthens accuracy, whilst an extreme amount lessens it.  It might be helpful to think of this as being like elastic, in that you can only stretch it up to a certain point before it snaps or loses its elasticity (my thoughts go out to all the hairbands I’ve lost like this).
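For anyone who likes a picture in code, the inverted ‘U’ can be sketched as a toy function.  This is purely an illustration of the shape, not a real model of anxiety – the 0–10 arousal scale and the optimum of 5 are assumptions I’ve made up:

```python
def recall_accuracy(arousal):
    """Toy inverted-U: accuracy peaks at moderate arousal on a 0-10 scale."""
    optimum = 5.0  # hypothetical 'moderate' level of anxiety
    return max(0.0, 1.0 - ((arousal - optimum) / optimum) ** 2)

# Moderate anxiety beats both calm and panic in this sketch.
for level, label in [(1, "low"), (5, "moderate"), (9, "extreme")]:
    print(f"{label}: {recall_accuracy(level):.2f}")
```

Plotting this function would give the classic symmetric hump: accuracy rises with arousal up to the optimum, then falls away again.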

A pair of researchers called Fazey and Hardy redeveloped the Yerkes-Dodson model, and their newer model – which they call catastrophe theory – is favoured by Deffenbacher.  They suggest that an increase in cognitive anxiety, not just physiological arousal, can cause a sharp decline in accuracy of recall.  Individuals with higher self-confidence – or a more ‘stable’ (as opposed to ‘neurotic’) personality – are more likely to exhibit the inverted ‘U’ characteristics of the Yerkes-Dodson model.

That’s this all finished up.  We’re onto our last topic in Memory now, and then it’s onto Attachment!  Pray for me, if you’re of the praying inclination, to finish writing these posts before Wednesday, which is when my exam is.  I’m very glad I did those broad overview posts beforehand, or I would be – in crude terms – “up shit creek”.  Or not – because, believe it or not, this isn’t the only psychology revision I’m doing.

Memory: Explanations for Forgetting

I’m assuming everyone reading this has forgotten something, at some point in their life.  If you haven’t, I’m impressed, and would like to ask you to tell me your secret in the comments, because I could really do with that kind of magic in my life.

We’ve got two different explanations for forgetting: interference and retrieval failure.  I’m sorry to tell you that interference is confusing as heck, and that’s why it’s the one we’re starting with.

There are two types of interference: proactive and retroactive.  Proactive interference is sort of like interference going forward: it’s when past learning interferes with current attempts to learn something.  I imagine that this is sort of like the stories I’ve heard about when people start taking a science at A Level, and the first thing their teacher tells them is that everything they learned at GCSE is incorrect.  Underwood found that the more word lists participants had previously memorised, the worse their recall of a newly learned list: participants who had learned only one list had a recall of 70%, whilst those who had learned 10 lists had a recall of only 20%.  Kane and Engle found that participants with a greater working memory span were less affected by proactive interference than others, suggesting a role for individual differences in interference.

Retroactive interference is like interference going backwards.  It’s when current learning interferes with remembering past learning.  That probably best applies to when you have to put down something like your previous two addresses, and you can only remember your current address.  A researcher called Georg Müller was the first person to study retroactive interference.  He gave participants a list of nonsense syllables to remember and gave half of them an intervening task, then tested them all six minutes later.  Those who had been given an intervening task did poorly on the test compared to those who had not.

McGeoch also found that if items being remembered were similar, participants were likely to find them difficult to remember.  For example, he carried out an experiment with three conditions.  In one condition, the participants were given a list of words and a list of their synonyms.  In the second, the second list was nonsense syllables.  In the third, the second list was numbers.  In the first condition, recall was 12%, in the second it was 26%, and in the third it was 37%.  This suggests that interference is stronger if items are similar.

Because similarity of items is required for interference to occur, some researchers have pointed out that interference really doesn’t happen very often.  As such, it isn’t considered to be a very important explanation for forgetting.  We still have to learn about it, though, so it sort of feels like examiners are taunting us a little bit with that one.

Baddeley and Hitch also tested rugby players for examples of real-world effects of interference.  The length of the season was the same for all of them, but some had not played in all games due to illness or injury.  When asked to list teams they had played against, those who had played in the most games had poorer recall than those who had played in fewer games.  This demonstrates the effects of interference in everyday life.  We’ll do another fun Baddeley study when we get onto retrieval failure, too.  That one is up on my wall.

In spite of this real-world study, most research into interference has been quite artificial, and has failed to replicate the way that interference works in real life.  This means that it lacks ecological validity.  Others use the study by Baddeley and Hitch to counteract this, as it demonstrates a real-life effect of interference.  As with anything, there’s no right or wrong to this one – it’s up to you to develop your own opinions on it.  Additionally, Danaher studied the impact of advertising on interference and found that individuals exposed to advertisements for competing brands within a short time found both difficult to remember.  This is a problem for advertising companies, who invest a fair amount of money into adverts, only for people to get confused.

There’s also a point about accessibility versus availability.  I’m not going to talk about that as an evaluation point, because we are literally about to talk about it as a topic by itself.  The textbook I’m working from is really well-written, if you couldn’t already tell.

The topic that covers accessibility and availability is retrieval failure.  I’m sure you can already sort of imagine what this one is.  It’s sort of why you might walk into an important exam and find yourself staring helplessly at the wall.

Tulving and Thomson, because they hate us all, decided to name the main theory behind retrieval failure the ‘Encoding Specificity Principle’.  It’s okay – it’s not actually that complicated.  It just means that we find it easier to remember things if the cues present at learning are present at recall.  Tulving and Pearlstone carried out a study using fruits and word categories, but I actually think that this is best explained by using the Bahrick study we covered earlier.  Do you remember how free recall had a lower recall rate than photo recall?  The same applies in Tulving and Pearlstone’s study, where free recall had a recall rate of 40%, whilst cued recall had a rate of 60%.  Not all cues are related to the learning material – some can be things like environmental stimuli or emotional context.

This theory is a bit dangerous, though, because it’s circular: if someone remembers something, we conclude the right cues must have been present; if someone doesn’t remember something, we conclude they weren’t.  Either way, the theory ‘works’, which makes it impossible to falsify – and so it cannot be fully relied upon as a theory.

Ethel Abernathy (female researcher!  Rejoice!) studied context-dependent forgetting.  She tested a group of students each week and found that those tested in the same room as they were taught in performed better than those who were tested in a different room.  The same went for instructors, too – if students were tested by the same instructor who taught them, they usually performed better than if the instructor was different.  The other study on context-dependent forgetting is by Godden and Baddeley – and I quite like this one.  They got a group of scuba divers, and tested their memory under four combinations of learning and recalling on land and in water.  Those who had learnt on land performed better if they were tested on land than in water, and those who had learnt in water performed better if they were tested in water than on land.  The jury is still out on how they were able to communicate underwater.  I don’t know much about water.

The other type of forgetting is state-dependent forgetting.  It was tested by Goodwin, and it is wild.  Goodwin got a group of male participants and asked them to learn a list of words when they were either drunk or sober.  Those who had learned drunk performed better when tested drunk, and those who had learned sober performed better when tested sober.  One can reasonably assume that those who were sober throughout had the best performances, but if anyone wants to buy me a drink to test that out, I will not complain.

Obviously, there’s a lot of research here, and that’s a really good thing – consistent findings across rooms, instructors, water and alcohol give retrieval failure strong supporting evidence as an explanation of forgetting.

Real-world applications of retrieval cues might help you in your exams.  You might not be able to revise in the examination room, but research by Smith has found that imagining the room is actually just as effective as being in it.  This is called mental reinstatement, and we’ll go over it when we cover the cognitive interview.  That being said, when you’re learning, you’re making a lot of complex associations, and a context-based cue isn’t always going to cut it.  This is the outshining hypothesis: when a better cue is available, it ‘outshines’ the context cues, which then have little effect on remembering.

I’ll point out at the bottom here, to bring both theories together, that cued recall reduces the effects of interference.  This suggests that retrieval failure is a more important theory of forgetting than interference, but be careful with those kinds of statements, because we’ve already discussed the fact that retrieval failure can’t actually be tested because it is circular.

That’s that for explanations for forgetting – the next thing we’ll talk about is the accuracy of eyewitness testimony – but I’d like to drop an email to my psychology teacher about exam technique first.

Memory: Types of Long-Term Memory

I really need to find a way of opening these that isn’t ‘And we’re back’.  I’m working on it.  Be patient with me, I’ve only been 18 for three days – I’m not used to decision-making yet.

There are a few types of Long-Term Memory – three, to be exact, and they’re split into two categories.  That sounds like it confuses things, but once you get the hang of it, it actually makes the topic a lot easier.

Those two categories are explicit memories and implicit memories.  Explicit means that we recall them consciously and deliberately – we could ‘declare’ them out loud – whereas implicit means that we recall them without conscious effort.  The types of explicit memory are episodic and semantic memories, whereas the type of implicit memory is procedural memory.

Episodic Memories are our memories of events that have happened.  When you remember something that’s happened, you usually remember the context, like what you were doing before and after the event.  You might remember the time and place, and things like what the weather was like, and you might remember the emotional context, or how you felt.  For example, I can pinpoint the ambulance ride after I broke my leg.  Before the event, I was cycling in my grandma’s drive.  After the event, I was lying on a stretcher in a big white room whilst people came and looked at the leg.  I remember that it was early August in the morning, and that it had been raining, and I know that I felt terrified, but that I was also just a tiny bit excited.

A semantic memory is knowing something like a fact, or common knowledge.  For example, by making this blog, I am contributing to both your semantic memories and my own semantic memories.  However, a semantic memory can also be something like knowing how to behave in a certain situation, like knowing that you should shake hands when you meet someone important for the first time.

Researchers have shown interest in whether semantic memories are all formed through episodic memories, or whether a semantic memory can form independently of an episodic memory.  Research on patients with Alzheimer’s Disease who are unable to form episodic memories has found that they are able to form semantic memories, which suggests a dissociation between the two types of memory.  However, researchers also look for a second dissociation, as it is otherwise possible that the brain simply struggles with episodic memories because they place greater demands on mental functioning as a whole.  In this case, a second dissociation has been found: patients with the same disorder can have generally intact episodic memories but poor semantic memories.  This suggests that semantic memories may be able to form independently of episodic memories, but that there is an association between the two.

Finally, you have procedural memories.  Procedural memories are knowing how to do something.  These are processes like tying your shoelaces, making tea, or even walking along.  Generally, you don’t think about procedural memories when performing them, and doing so makes them slower.  They are automatic memories learned through repetition and rehearsal, which means that we can focus on other things whilst we do them.

Brain scans have shown that different areas of the brain are active when different types of long-term memory are being used.  For example, episodic memory is associated with the hippocampus and surrounding parts of the temporal lobe, whilst semantic memory is associated with the frontal lobe and procedural memories are associated with the cerebellum.  All types of long-term memory are thought to induce some level of activity in the hippocampus.

HM, who we talked about when we went over the Multi-Store Memory Model, was actually able to form long-term memories – but only procedural ones.  He was able to learn how to draw a star from looking at its reflection (not an easy task – certainly not something that I could do, and I don’t have any brain damage!), but had no memory of actually learning how to do so, which would be episodic or semantic.  However, we have already discussed the issues with using a brain-damaged patient as evidence.  A second problem is that we often cannot know exactly which area of the brain has been damaged until the patient has died, which means the damage may be to a relay station in the brain, for example, rather than to a unitary store.

There may also be a second type of implicit memory, called priming.  This is based on the idea that if you show an individual a picture of a banana and ask them to name a colour, they’re likely to respond with ‘yellow’, because they have been subconsciously primed to think of yellow by the yellow fruit.  This is called the Perceptual-Representation System; it is separate from episodic and semantic memories, and is present even in those with damage to their episodic and semantic memory.

That’s a wrap on the types of long-term memory.  I feel a little bit like I whizzed through that, but I don’t think it’s too unclear.  Any questions, as always, can be posed to me in comments.

Memory: Long- and Short-Term Memory

Welcome to Memory, six days before Paper One.  If you can hear vaguely anxious noises, they’re coming from me.  Although, thankfully, I’m a bit better at Memory than I am at Social Influence, so with any luck, this won’t take too long.

The first things we’re going to talk about are Long- and Short-Term Memory, what they are, and how they work.  I mean, the whole topic is about those things, but this is your starter building block, where we’ll start to implement an understanding of it, and that sort of thing.

The first thing you need to know is that there are three key features of each type of memory: capacity, duration, and coding.  You really need to know these to understand everything else, so try to get the figures to stick in your mind as best you can.

Let’s talk about capacity first.  Capacity refers to how much information the brain can hold at once.  It can be pretty difficult to remember a lot of stuff at once – I’m sure everyone’s been in a situation where you’ve just woken up or just come home and a parent, roommate, sibling or spouse immediately bombards you with information about chores you need to do, and it’s just noise – you can’t take any of it in.  Capacity is the reason for this.  Your Short-Term Memory can only hold five to nine items at a time (seven is the key number here – we even refer to it in Memory as ‘Miller’s Magic Number’).  Any more than that, and the human brain can’t process the information.  We assess the short-term memory using something called a digit-span test, which I’m sure you’ll be able to find with ease on the internet.  This is in contrast to the Long-Term Memory, which most psychologists believe has an unlimited capacity.
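If you’d rather not go hunting on the internet, the logic of a digit-span test is simple enough to sketch in a few lines of Python.  This is just my own illustration, not anything from the research – the function names and the ‘perfect up to seven digits’ pretend participant are made up by me:

```python
import random

def digit_span_test(recall, start=3, max_span=12):
    """Present longer and longer digit strings until recall fails;
    return the longest length recalled correctly (the digit span)."""
    best = 0
    for span in range(start, max_span + 1):
        sequence = ''.join(random.choice('0123456789') for _ in range(span))
        if recall(sequence) != sequence:
            break  # one failure ends the test
        best = span
    return best

# A pretend participant who is perfect up to seven digits,
# mimicking Miller's magic number:
miller = lambda seq: seq if len(seq) <= 7 else ''
print(digit_span_test(miller))  # -> 7
```

In a real test, `recall` would be a person typing back what they just saw, and you’d average over several trials per length rather than stopping at the first slip.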

Cowan, contesting Miller’s magic number, reviewed a number of studies on the short-term memory and concluded that its capacity may be even more limited – his estimate was around four chunks.  A study by Vogel, using visual as opposed to verbal stimuli, found that the number was indeed closer to four chunks.  Simon also found that the size of the chunk matters, as longer chunks, like eight-word phrases, are harder to remember than shorter ones.

I’m going to cease evaluating here, very briefly, to explain what a ‘chunk’ is.  The textbook doesn’t really go into it, but mercifully, I have a very good teacher who recognised this and explained it to us.  A chunk is essentially the same thing as an item; chunking is a technique we use to help us remember things, according to a three-hour lecture on Lynda dot com.  This makes a lot of sense – when you have to remember a phone number, do you try and remember all 11 digits at once, or do you break them down into three or four parts?  Most people would say they break them down.  For example, my chunking method for phone numbers is 5 digits, then 3, then 3.  That is chunking, put simply.
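My 5–3–3 split is mechanical enough to write down as a tiny Python sketch (the phone number below is a made-up placeholder, not a real one):

```python
def chunk(digits, sizes=(5, 3, 3)):
    """Split a string of digits into chunks of the given sizes."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

# Eleven digits become three chunks -- three 'items' instead of eleven:
print(chunk("07700900123"))  # -> ['07700', '900', '123']
```

The point is that the short-term memory then only has to hold three items instead of eleven, which is comfortably inside even Cowan’s four-chunk estimate.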

The other criticism of testing capacity comes from Jacobs, who points out that there are individual differences in the capacity of memory.  Eight-year-olds, for example, had a mean average capacity of 6.6 digits, whereas 19-year-olds had a mean average capacity of 8.6 digits.  There are a couple of theories about why this is: one suggests that people’s brain capacity increases with age, whilst another suggests that as we get older, we develop strategies to help us retain information – like chunking, which we’ve covered just above, there.

An infinite long-term memory is really nifty.  It doesn’t quite account for the fact that most of us don’t have any memories preceding our third year of life, but theories surrounding that are still very much foetal, and we don’t cover them in A Level Psychology.  If you’re interested, though, popular opinion amongst researchers is split between the lack of memory being due to trauma and the lack of memory being due to underdevelopment of the brain.

After capacity, we move onto duration.  Duration means how long things can stay in the brain for, and it has been tested in various ways.  For the short-term memory, a pair of researchers called Peterson and Peterson (married, not siblings) gave participants a consonant syllable (e.g. FRB) to memorise and a three-digit number to count backwards from.  They were asked to recall the consonant syllable after a retention interval of 3, 6, 9, 12, 15 or 18 seconds.  The reason for the counting was to prevent participants from rehearsing the consonant syllable, as rehearsal would allow it to pass into the long-term memory, therefore reducing the internal validity of the study.  The end result was that participants were 90% correct after 3 seconds, 20% correct after 9 seconds, and only 2% correct after 18 seconds.  As a result of this, it is generally believed that the duration of the short-term memory is less than 18 seconds, which – to me, at least – makes a lot of sense.  There are criticisms of this, however, which we’ll talk about shortly.

Peterson and Peterson’s study on the Short-Term Memory’s duration has also been criticised.  One popular criticism is that it’s an artificial way of testing Short-Term Memory, as memorising a random string of letters and numbers isn’t true to the things we have to remember in everyday life.  However, the study has been defended by some, who rightly point out that there are situations in which we do have to recall strings of numbers and letters.  Such examples would be car license plates or phone numbers.  Others still point out that those things have some meaning attributed to them – it’s really up to you, as someone studying the subject, to decide which side of the debate you’re on.

Furthermore, whilst the counting prevented rehearsal of the letters, and therefore a transfer into the long-term memory, its effectiveness in improving internal validity has been questioned.  This is because there’s also a phenomenon in memory called displacement, where a short-term memory is pushed out by something else the individual is trying to remember – such as a string of numbers.  Reitman used auditory tones to test the short-term memory, which don’t displace verbal material, and found that participants could remember up to 96 seconds’ worth of information.  This suggests that the forgetting in Peterson and Peterson’s study may be due to displacement, rather than decay, as originally thought.

The duration of the LTM is supposedly unlimited; however, a study by Bahrick – intentionally or otherwise – calls this into question.  Bahrick tested 392 participants between the ages of 17 and 74 on their memory of their high-school classmates under two different conditions.  One condition was photo recall, in which the participant was shown photos of classmates and asked to name them.  The other condition was free recall, in which the participant was asked to name as many classmates as possible from memory.  Photo recall showed 90% accuracy after 15 years and 70% accuracy after 48 years.  Free recall showed 60% accuracy after 15 years and 30% accuracy after 48 years.  Both conditions suggest some kind of decay occurs in memory over time.  The other explanation is that cues, rather than the memories themselves, decay – which we’ll come onto in a couple of days’ time.

Finally, we have coding.  Coding is how things are remembered – or encoded, if we’re going to be technical – and we probably should be technical, as this part of psychology is quite scientific.  There are two main types of coding: acoustic and semantic.  Acoustic means sound-based – think acoustic guitar, or the acoustics in a concert hall.  Semantic means meaning-based – there’s nothing in particular that we can link this to, but it might help you to remember that there are three main facets of language, and semantics is one of them, because it refers to what the words mean.

Baddeley found that acoustically similar but semantically different words (cat, cap, can, cad, cab, etc.) were easily confused in the short-term memory, but not the long-term memory, suggesting that short-term memories are encoded acoustically.  This makes quite a lot of sense to me, as someone who had to try and hold those words in the short term to transfer them over here (spoiler alert: they’re in a different order from the one in the textbook – this will not matter in an exam).  On the flipside, he found that semantically similar words (great, large, big, wide, tall, etc.) were confused in the long-term memory, but not the short-term memory, suggesting that the long-term memory codes things semantically.

However, Baddeley tested the LTM by waiting only 20 minutes before recall.  I’m sure you’re already wondering whether 20 minutes can really be considered long-term memory – it isn’t short-term memory, but it’s not exactly long-term memory either.  That’s the conclusion most researchers have come to, too.

Although the STM seems to rely on acoustic coding, it is thought that there is also a visual element to coding.  Brandimonte showed participants an image and prevented any verbal rehearsal.  The result was that participants found a way to code the image visually, rather than verbally.

The same applies to the LTM.  Frost found evidence of visual coding taking place in the LTM, whilst Nelson and Rothbart found evidence for acoustic coding occurring in the LTM.  This suggests that coding depends on circumstance as well as the type of memory.


And that’s the different types of memory.  Next, we get to talk about the Multi-Store Memory Model, otherwise known as my favourite.

Memory: The Working Memory Model

Another day, another post.  Actually, I might schedule this post for tomorrow.  I think it makes more sense to have both models on both days.  So, uh… three posts in a day!  Wow!

The Working Memory Model is supposedly a new and improved Multi-Store Memory Model, but the two are completely different.  For one, there are more parts in the Working Memory Model.  Here’s a list of them:  Central Executive, Episodic Buffer, Visuo-Spatial Sketchpad, Phonological Loop, and Long-Term Memory.  The Visuo-Spatial Sketchpad also contains a visual cache and an inner scribe, whilst the phonological loop contains a phonological store and an articulatory process.  Lots of parts, lots of words.  Don’t worry – I’m about to go through them.  All will be fine.

Let’s start at the beginning (a very good place to start).  The Central Executive is kind of like the big boss of the Working Memory Model.  It’s mostly involved with decision-making and critical thinking.  It makes sure that the entire system carries on running smoothly, and takes over when something goes wrong.  Sometimes, it’s analogised as being like the Fat Controller in Thomas the Tank Engine, if that helps you at all with remembering it.

This being said, the definition we’ve been given of the Central Executive system is vague – like the Episodic Buffer below, nobody is quite sure exactly what it does.  Furthermore, critics believe that there must be more than one branch of the Central Executive system – one brain-damaged patient, EVR, had good reasoning skills but poor decision-making skills, which suggests that the Central Executive as a whole could not have been damaged, or both would be affected.  Essentially, the Central Executive system is too vague in its current form.

Next, you have the Episodic Buffer.  The Episodic Buffer was actually added later in response to criticisms, and it’s supposed to act as a point of transmission between the Long-Term Memory and the rest of the working memory system, binding information from the other stores together.  You’ll notice that this explanation is very short; that’s because nobody really knows exactly what the episodic buffer is, or what it does.  That’s one of the main criticisms of the Working Memory Model.

Branching off from the Central Executive, you have the Visuo-Spatial Sketchpad and the Phonological Loop.  I’ll start with the Visuo-Spatial Sketchpad.  The Visuo-Spatial Sketchpad is the part of the memory concerned with visual information: it helps us to remember what things look like – their properties, such as colour – and also where they are in relation to each other.  Those things are actually covered by different areas of the Visuo-Spatial Sketchpad.  The visual cache is what stores the properties of individual objects – that’s things like shapes and colours – the basic information about individual pieces of information.  The inner scribe is what stores information about where different objects are in relation to each other – or, in shorter terms, spatial information about objects.

A patient called LH was better with spatial information than visual properties, which supports the idea that there are two branches of the Visuo-Spatial Sketchpad.

The Phonological Loop helps us deal with sound; that includes isolated sounds as well as verbal information.  It also helps to preserve the order of information, which means that information doesn’t get jumbled up in the brain.  Like the Visuo-Spatial Sketchpad, it’s split into two different parts: the phonological store and the articulatory process.  The phonological store holds information you hear directly – a little bit like an inner ear, but inside the brain.  The articulatory process rehearses information you don’t hear directly, like words you read in your head – more like an inner voice.

A patient called KF is thought to have had damage to the Phonological Loop, as his short-term forgetting of verbal information was much greater than his forgetting of visual information.  However, he could recognise meaningful sounds, like a telephone ringing, but struggled with verbal material.  SC is also thought to have had damage to the phonological loop, as his learning abilities were generally good with the exception of him being unable to learn word pairs.

There are issues with using brain-damaged patients as a support, however, as most brain-damaged patients have also undergone some amount of trauma.  This means that the effects of brain damage are not isolated to physical damage to the brain, but also the psychological damage of undergoing trauma.

The Visuo-Spatial Sketchpad and Phonological Loop don’t have any direct connection to the Long-Term Memory.  Instead, all information passes back to the Central Executive, which filters through it, then transfers the information to the Long-Term Memory itself.

Some of our evidence for the Working Memory Model comes from dual task performance.  Dual task performance is based on the idea that if two tasks use the same component of working memory, they interfere with each other, but if they use different components, both can be done at once.  So, if you’re listening to music and drawing, the drawing is concerned with the Visuo-Spatial Sketchpad and the music is concerned with the Phonological Loop, so you can do both.  This was demonstrated by Baddeley and Hitch, who gave participants a task involving the Central Executive alongside a second task that, in one condition, also involved the Central Executive.  Both tasks were performed more quickly when the second task did not also involve the Central Executive.

That’s the Working Memory Model!  Next up, we’ll be going through the types of long-term memory.  I like that one – it means I get to crack out the analogies and anecdotes!