Showing posts with label Perception. Show all posts

Tuesday, December 13, 2011

Do urban environments trigger a mindset that's focused on the bigger picture?

To focus on details or the whole? This is one of the major ways that people differ in their style of mental processing. Past research has shown that people on the autism spectrum tend to focus more on details. Other studies reveal cross-cultural differences. People from collectivist cultures like Japan show a bias for focusing more on the bigger picture, known as "global processing", whilst citizens in individualist cultures like Britain show a comparatively greater bias for detail or "local processing". Now a study of a remote African society, led by Serge Caparos at Goldsmiths, makes the case that this cultural difference is driven not so much by degrees of collectivism or individualism as by exposure to varying levels of urbanisation.

Caparos and his team used two kinds of stimuli presented on-screen to measure processing bias. The first is known as the Ebbinghaus illusion, in which the perceived size of a central circle is affected by the relative size of the circles surrounding it. A circle surrounded by bigger circles will generally be perceived as smaller, especially by people with a bias towards more global processing.

The second kind of stimulus involved large letters composed of little letters or shapes. Participants had to make a similarity judgement - for example, they were presented with a large X made up of little x's and had to say whether it was more similar to a large circle made up of little x's or a large X made up of little squares. People with a bias towards global processing would be expected to say the two large X's are more similar.

To gauge the effect of urbanisation, the researchers tested dozens of people from the remote Himba society of Namibia, as well as dozens of undergrads from Japan and Britain. Crucially, some of the Himba lived traditionally in village huts and homesteads whereas others had moved to, and lived for several years in, Opuwo, the Himba's only permanent, urban settlement. Also, some of the traditional Himba had visited Opuwo, either once, twice or three times.

The Japanese were more sensitive to the Ebbinghaus illusion than the Brits (indicative of a greater global processing bias, consistent with past research); the Brits, in turn, were more sensitive to it than the traditional Himba. Critically, though, the urban Himba were just as sensitive to the illusion as the British. Visits to the town Opuwo made no difference to the performance of the traditional Himba on this task.

On the similarity judgement task, the Japanese and Brits showed the most global choices, more than both groups of Himba. However, the urban Himba made more global choices than the traditional Himba and, moreover, global choices were made more often by traditional Himba who'd visited the town than those who hadn't. Indeed, just two visits to Opuwo increased global choices by ten per cent.

Age and levels of schooling made no difference to any of these results and past research has confirmed that the Himba are unfazed by testing with a computer monitor.

The more established theory for cross-cultural differences in local/global processing bias would predict that the Himba should show even more of a global processing bias than the Japanese, given the highly collectivist nature of their society. Also, this social orientation account would predict that experience of more individualistic urban living should lead to more local processing bias, not the greater global processing that was observed.

"Our proposal," the researchers said, "is that exposure to the urban environment investigated here introduced visual clutter with consequent changes in global/local processing." Their claim tallies with past research showing the opposite effect - that exposing townies to natural environments increases their bias for details.

"Further research will need to determine the processes by which cluttered visual input and/or other aspects of the urban environment come to change perceptual foci of interest in the dramatic way observed here," the researchers concluded.
_________________________________

ResearchBlogging.org
Caparos, S., Ahmed, L., Bremner, A., de Fockert, J., Linnell, K., & Davidoff, J. (2012). Exposure to an urban environment alters the local bias of a remote culture. Cognition, 122 (1), 80-85. DOI: 10.1016/j.cognition.2011.08.013


Post written by Christian Jarrett for the BPS Research Digest.

Tuesday, November 29, 2011

How to make the ceiling of your room seem higher

If you've ever witnessed would-be buyers looking around a house, you'll have noticed their observations about each room are usually limited to: "hmm, it's a good size" or "hmm, it's rather small". Little wonder then that home-improvers are so often fixated on making their rooms appear as spacious as possible. Design lore will tell them that to do so, they should paint their ceilings as light as possible, and in particular make the ceiling lighter than the walls. This contrast between ceilings and walls, so the advice goes, will increase the perceived room height. Does it really?

The answer, until recently, would have remained elusive. Interior design and architecture are strangely disconnected from psychology research. But a new study by Daniel Oberfeld and his team has defied this tradition. Across two experiments they had 32 participants don 3-D glasses and use a sliding scale to judge the ceiling height of dozens of virtual rooms. The rooms were empty and the colours were in shades of grey so that only lightness was varied. In particular, the ceiling, walls and floor were varied to be either low, medium or high in lightness. The depth (6m) and width (4.5m) of the rooms were fixed, whilst the actual ceiling height varied between 2.9 and 3.1m.

Increasing the lightness of the ceiling did increase its perceived height, so that aspect of design lore was supported. However, contrary to the traditional advice, the rooms also appeared higher when the walls were lighter. Moreover, the effect of ceiling lightness and wall lightness was additive. So the contrast effect endorsed by traditional design lore was refuted. Floor lightness made no difference to estimates of ceiling height, so it can't be overall room lightness that's crucial, but only the combination of wall and ceiling lightness.

Oberfeld and his colleagues said that practical guidelines for increasing perceived room height should be modified in light of their findings. "A rule of thumb consistent with our data," they wrote, "would be: 'If you intend to make the room appear higher, paint both the ceiling and the walls in a light colour. You are free to choose the colour of the floor because it has no effect on the perceived height.'"

From a theoretical perspective the new results are somewhat puzzling. Traditional research in psychophysics has shown that brighter objects usually appear closer. If people judge the height of a room by estimating the distance between their eyes and the ceiling, you'd think a lighter ceiling would appear lower. The present results suggest people must use some other means to judge ceiling height. Another possibility is that people look at the angles in the corner of the room, where the walls meet the ceiling. Perhaps increased lightness alters the angles via a geometric illusion to make the room seem taller. No, that isn't it either: Oberfeld's team said ceiling and wall lightness should have opposite effects on those crucial angles, which is inconsistent with the finding that both led to an increase in perceived height.

So, thanks to this research, we now know how to make our rooms seem higher, but we don't yet know why the technique works!
_________________________________

Oberfeld, D., Hecht, H., & Gamer, M. (2011). Surface lightness influences perceived room height. The Quarterly Journal of Experimental Psychology, 63 (10), 1999-2011. DOI: 10.1080/17470211003646161

Post written by Christian Jarrett for the BPS Research Digest.

Tuesday, November 22, 2011

The "multiple reflection error" - yet another way that we misunderstand mirrors

[Image: Janine the mannequin, on her trolley]
Considering the ubiquity of mirrors in everyday life, it's amazing how confused we are about them. For example, many of us are oblivious to the small size of our heads as they appear reflected in the mirror. A new study by Rebecca Lawson has provided a compelling demonstration of the "multiple reflection error" - yet another striking way that we misunderstand mirrors.

Imagine you're at the entrance to a narrow corridor and further down, several feet away, hanging on the right-hand wall, there are three rectangular mirrors (30cm x 45cm) at head height. At what point, as you proceed down the corridor, do you think you'll be able to see your face in the mirrors?

The correct answer is that your face will only be visible in each mirror when you are passing directly opposite. At no point will your face be visible in more than one mirror.
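The geometry behind this answer can be captured in a few lines of code: for a flat wall mirror, your virtual image sits directly behind the wall at your own position, so the sight line from your eye to your own face meets the wall exactly opposite where you stand. A minimal sketch (the mirror positions below are illustrative, not taken from the study):

```python
def can_see_own_face(observer_x, mirror_left, mirror_right):
    """For a flat mirror mounted flush on the wall you are walking past,
    your virtual image lies directly behind the wall at your own position.
    The sight line from your eye to that image meets the wall at
    x = observer_x, so your face is visible only when you stand within
    the mirror's horizontal extent."""
    return mirror_left <= observer_x <= mirror_right

# Three 0.3m-wide mirrors spaced along the corridor wall (hypothetical positions)
mirrors = [(1.0, 1.3), (2.0, 2.3), (3.0, 3.3)]

# At any position along the corridor, at most one mirror can show your face,
# because the mirrors do not overlap.
for x in [0.5, 1.15, 1.7, 2.15, 3.1]:
    visible = [m for m in mirrors if can_see_own_face(x, *m)]
    assert len(visible) <= 1
```

This is why the "multiple reflection error" below is an error: no position exists from which two non-overlapping flat mirrors both show you your own face.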

Lawson first tested people's understanding of this idea by having them stand at one end of a corridor and say in which of four positions in the corridor a mannequin "Janine" (moved about on a trolley) would be able to see herself in each of the three mirrors. Only two of the four positions in question were actually directly opposite one of the mirrors. So, of the 12 possible position/mirror combinations, the answer "yes, Janine can see her face" should only have been given twice. In fact, the 18 Liverpool University students answered yes an average of 6.1 times, grossly overestimating how often the mannequin could see herself in the mirrors. The errors weren't randomly distributed: they tended to be made when the mannequin was located near to the mirrors, but not directly opposite them.

The same errors were made when a single, larger mirror was divided up into three using duct tape (to ensure that participants realised the mirror surfaces were all flat against the wall and not angled like a dressing-table mirror), and also when the original three mirrors were arranged on the wall vertically, rather than horizontally.

Perhaps, Lawson reasoned, the participants performed so poorly because it was confusing to assume the perspective of a mannequin. Another possibility was that, because they were asked about each position and each mirror one at a time, they didn't realise the full implications of what they were saying: that the mannequin could see her face in multiple mirrors from a single position, and in the same mirror from different positions (an optical impossibility in the situation as described).

To avoid these issues, Lawson created another set-up in which more participants (prospective students and their parents) were shown a photograph of a person sat facing five mirrors arranged on the wall in the shape of a cross, with the central mirror at head height. The participants were given a piece of paper with five rectangles on it arranged in a cross shape, and they had to draw crudely what the person in the photo would be able to see of themselves in each mirror. Once again, there was a striking overestimation of where the person would be able to see reflections of their own head and face (participants should have indicated that the person's head/face would only be visible to them in the central mirror).

Finally, more participants actually sat in front of this set up of five mirrors in a cross shape. Half of them had just the central mirror uncovered then re-covered before they used the pencil and paper to indicate what they'd see in all the mirrors (the remaining mirrors were covered throughout). The other half of the participants had all the mirrors uncovered first, then re-covered before they gave their answers with the paper and pencil. In the first case, 58 per cent of the participants made multiple reflection errors - again, overestimating where they'd be able to see themselves in the mirrors. In the latter case, with the chance to experience the entire mirror set up, 24 per cent made such errors.

"This multiple reflection error is particularly surprising," Lawson said, "because it directly contradicts our everyday experience that mirrors reflect a single coherent scene."

So why do people misunderstand mirrors in this way? Lawson said there are probably multiple reasons. One participant described her naive belief that whenever you turn your eyes towards a mirror, wherever it is, you will see yourself reflected in it - "mirrors look back at you," she said. No doubt this belief was held implicitly by many of the other participants.

"Almost nobody will have a clear, thought-through and self-consistent theory of optics which they use to guide their predictions," Lawson said. "Most people probably use a set of underspecified beliefs and heuristics, some of which are incompatible, leading them to make unsophisticated, noisy and inaccurate predictions. People rarely think explicitly about optics and what determines what they can see in a mirror or a window - or indeed, what they can see directly."
_________________________________

Lawson, R. (2012). Mirrors, mirrors on the wall… the ubiquitous multiple reflection error. Cognition, 122 (1), 1-11. DOI: 10.1016/j.cognition.2011.07.001

Further reading:
Link to Mirrors and the mind (Psychologist magazine article).
Link to New York Times article on the psychology of mirrors.

Post written by Christian Jarrett for the BPS Research Digest. Thanks to Rebecca Lawson for providing images of her experiment.

Friday, November 11, 2011

"Change deafness" - the scant attention we pay to the voice on the end of the phone

Our perception of the world is so restricted by the brain's finite attentional resources that large changes to the visual scene can occur without us noticing. Psychologists have studied this extensively and they call it "change blindness". But what about our limited vigilance to the world of sound? In a new study, Kimberly Fenn and her team have tested whether people notice when, mid phone-conversation, the person they're talking to changes. They found that unless there was a change of gender, most people didn't notice they were talking to someone else - a phenomenon the researchers call "change deafness".

Across five experiments, Fenn's team followed a similar procedure. Participants were interviewed on the telephone, ostensibly as part of a study into memories of smells. A young female interviewer greeted them, explained that there'd be twelve questions, then proceeded to fire away. After the third question, a different interviewer, usually another female, took over the questioning without warning or announcement (the four women who played the role of interviewer across the different experiments had voice frequencies of 200Hz, 202Hz, 218Hz, and 239Hz). After the twelfth question, participants were told the phone would be passed to a "supervisor". The supervisor took the phone, introduced herself, and asked progressively more specific questions to find out if participants had noticed the earlier voice change, ranging from "Did anything unusual happen during the interview?" to, "Did the experimenter's voice change at all during the interview?"

In the first two experiments, just 1 person out of 16 (6 per cent) and 1 out of 24 (4 per cent), respectively, noticed the voice change, even after they were asked about this directly. Moreover, none of the participants remarked on the change of voice during the interview itself.

After the initial interview, but before the supervisor questions about a voice change, the participants were played recordings of the two interviewer voices and asked by the supervisor to say which was the voice of the interviewer (remember, at this point nearly all of them thought there was just one interviewer). Participants picked out the first interviewer voice just as often as the second voice - so it's not that one was particularly more memorable or dominant. However, presented with either one of the interviewer voices and a strange, unfamiliar voice, most participants (74 per cent) correctly picked out the interviewer voice. This means that in spite of the "change deafness" some aspects of the interviewer voices must have been encoded.

In another experiment, participants were warned in advance that the voice of the interviewer might change at some point during the interview. In this case, 75 per cent correctly reported afterwards that the voice of the interviewer had changed, and six of these nine participants knew the precise moment that the switch occurred. This suggests "change deafness" doesn't occur because we're incapable of detecting a change, but because in usual circumstances we don't bother paying enough attention to voices to detect such a change. This makes strategic sense, leaving more processing resources available for focusing on what's actually being said, rather than who's saying it.

"If language use evolved in service of face-to-face conversation ... There is no reason for language processing to develop an alarm mechanism that would continuously monitor the talker's identity and automatically signal a talker change," the researchers said. "Given the assumption of interlocutor stability, listeners are free to focus attention on the linguistic message."

"Change deafness" has its limits. In yet another experiment, the interviewer voice changed without warning from a woman's to a man's, and in this case eleven out of twelve participants noticed the change. "When talkers differ in vocal tract sufficiently, such as when talkers differ in gender, these bottom-up acoustic differences may grab attention even in the absence of top-down expectations," the researchers said.
_________________________________

Fenn, K., Shintel, H., Atkins, A., Skipper, J., Bond, V., & Nusbaum, H. (2011). When less is heard than meets the ear: Change deafness in a telephone conversation. The Quarterly Journal of Experimental Psychology, 64 (7), 1442-1456. DOI: 10.1080/17470218.2011.570353

Previously on the Digest: Phonagnosia - the inability to recognise people by their voice alone.

Post written by Christian Jarrett for the BPS Research Digest.

Monday, October 24, 2011

Wine tastes like the music you're listening to

We often think of our sensory modalities as like separate channels. In fact, there's a lot of cross-talk and interference between them. Consider how the prick of a needle is more painful if you watch it go in. Under-researched in this respect is the way that sound can affect our taste of food and drink. We know that such interactions occur. For instance, crisps taste fresher when they make a louder crunching noise. In a new study, Adrian North has shown that when people drink wine to the accompaniment of music, they perceive the wine to have taste characteristics that reflect the nature of that concurrent music. If you want your Merlot to taste earthy and full-bodied, try savouring it to the tune of Tom Jones. To add a little zing to your Pinot, perhaps try some Gaga?

North tested out the taste perceptions of 250 university students as they drank either Montes Alpha 2006 Cabernet Sauvignon (red wine) or Chardonnay (white wine) - both are Chilean. Crucially, some of the participants sampled their glass to the tune of music previously identified by a separate group of people as powerful and heavy (Carmina Burana by Orff); others drank their wine to music rated earlier as subtle and refined (Waltz of the Flowers from Tchaikovsky's 'Nutcracker'); others to the tune of zingy and refreshing music (Just Can't Get Enough by Nouvelle Vague); and lastly, the remaining participants drank their wine with mellow and soft music in the background (Slow Breakdown by Michael Brook). There was also a control group who drank the wine with no music.

After they'd savoured their wine for five minutes, the participants were asked to rate how much they felt the wine was powerful and heavy; subtle and refined; mellow and soft; and zingy and refreshing. The results showed that the music had a consistent effect on the participants' perception of the wine. They tended to think their wine had the qualities of the music they were listening to. So, for example, both the red and white wines were given the highest ratings for being powerful and heavy by those participants who drank them to the tune of Carmina Burana.

It remains for future research to establish whether these effects would hold among participants who had a greater knowledge of wine (a factor not assessed in the current study). Also, it's not clear how much it's the cultural connotations of the music that influences the perception of the wine, or how much it's the physical properties of that music. Finally, it perhaps would have been better if the music had stopped whilst the wines were rated.

This research builds on some earlier, related findings. People buy more French wine when French music is playing (and ditto for German music and wine). Past research has also shown that people eat and drink their way to a higher dinner bill when the restaurant plays classical music as opposed to pop, presumably because of the "upmarket" connotations of the classical accompaniment.
_________________________________

North, A. (2011). The effect of background music on the taste of wine. British Journal of Psychology. DOI: 10.1111/j.2044-8295.2011.02072.x

Post written by Christian Jarrett for the BPS Research Digest.

Wednesday, July 6, 2011

Why is a touch on the arm so persuasive?

A gentle touch on the arm can be surprisingly persuasive. Consider these research findings. Library users who are touched while registering, rate the library and its personnel more favourably than the non-touched; diners are more satisfied and give larger tips when waiting staff touch them casually; people touched by a stranger are more willing to perform a mundane favour; and women touched by a man on the arm are more willing to share their phone number or agree to a dance. Why should this be? Up until now research in this area has been exclusively behavioural: these effects have been observed, but we don't really know why. Now a study has made a start at understanding the neuroscience of how touch exerts its psychological effects.

Annett Schirmer and her colleagues used EEG to record the surface electrical activity of the brains of dozens of female participants who were tasked with looking at neutral or negative pictures (e.g. a basket or a gun to the head). Before each picture appeared, the participants were sometimes touched on the arm by a female friend; touched by a mechanical device (a pressure cuff); or they received no touch. The idea was to see whether and how being touched changed the way the brain responded to emotional and neutral pictures.

A further detail is that the mechanical touch was described as either under the friend's control, with the friend located elsewhere, or under computer control. This was to see if physical proximity matters and whether it matters who does the touching. For comparison, a final experiment also tested the effect of an auditory tone, which preceded some pictures but not others.

The most important finding is that a touch on the arm enhanced the brain's response to emotional pictures, as revealed by the size of what's known as the late positive component (LPC) of electrical brain activity. The LPC is thought to be associated with evaluative mental processes and a touch led to a greater LPC for emotional pictures compared with neutral ones.

Touch had this effect regardless of how it was administered and who did the administering (friend or machine). This suggests the reported effects of touch are largely "bottom up" - that is, based mainly on the incoming stimulation - rather than "top down", to do with beliefs about the meaning of the touch. Unlike touch, the auditory tone didn't increase the brain's sensitivity to emotional pictures.

"Emotional information presented concurrently with touch may be more motivating such that more processing resources are allocated to them than to emotional information presented without touch," the researchers said.

One consequence of this, Schirmer's team speculated, could be that the touched person is primed to be more altruistic, consistent with previous behavioural results. "Based on the present findings," they explained, "we propose that such behaviour occurs because the tactile signal alerts its recipient and enhances the processing of concurrent events, particularly if they are emotional. Such enhanced processing may then, among others, boost empathy and increase the likelihood that the touch recipient acts in favour of the toucher."
_________________________________

Schirmer, A., Teh, K., Wang, S., Vijayakumar, R., Ching, A., Nithianantham, D., Escoffier, N., & Cheok, A. (2011). Squeeze me, but don't tease me: Human and mechanical touch enhance visual attention and emotion discrimination. Social Neuroscience, 6 (3), 219-230. DOI: 10.1080/17470919.2010.507958

This post was written by Christian Jarrett for the BPS Research Digest.

Monday, June 13, 2011

What colour is your breast-stroke? Or why synaesthesia is more about ideas than crossed-senses

People with synaesthesia experience odd sensations that make it seem as though their neural wires are crossed. A certain word might always come served with the same particular taste, or a letter or numeral might reliably evoke the same particular colour. But an emerging view among experts is that synaesthesia is grounded in concepts, not crossed senses. By this account, it's certain ideas, regardless of which sense perceives them, that trigger a particular concurrent experience. The latest evidence for this comes from Danko Nikolic and his colleagues at the Max-Planck Institute for Brain Research. They've documented two synaesthetes, HT and UJ, who experience different swimming strokes, whether performing them, watching them or merely thinking about them, as always being a certain colour.

HT and UJ, both now aged 24, began swimming competitively at an early age and the sport continues to be an important part of their lives. The first test that Nikolic's team performed was to present the pair with four black and white close-up photos of different swimming strokes and have them say which colour the strokes triggered using a book of 5500 colour shades. This was repeated four weeks later for HT and three weeks later for UJ. Three non-synaesthete control participants, all swimmers, were recruited for comparison. They similarly reported which colours the photos made them think of and they repeated the exercise after just a two-week gap.

The clear finding was that the difference from the first test to the second test in the precise colours chosen for each stroke by the synaesthetes was eight times smaller than the test-retest difference shown by the controls, thus supporting the synaesthetes' claim that different strokes always provoke the same colours.

Next the researchers administered a version of the Stroop test: the synaesthetes and controls were presented with the same swimming stroke photos as before, but this time the photos were tinted in different colours, for example blue or yellow. The participants' task was to name the colour. If certain swimming strokes really do evoke particular colours for the synaesthetes, then their colour naming ought to have been affected by the precise stroke/colour pairing on any given trial: you'd expect them to be quicker when the photo's tint matched the colour evoked by the stroke shown in the image. That's exactly what was found - UJ, for example, was 101ms slower when naming incongruent colours versus congruent ones. No such effect was observed for two control participants.

According to the classic view of synaesthesia as cross-wiring between senses, you'd think that swimming-style synaesthesia would require the act of swimming (via proprioception) to evoke a concurrent experience, but this study suggested it was enough to merely activate the concept of the different swim strokes by looking at pictures. This is consonant with past research showing, for example, that letter/number-colour synaesthesia can be triggered merely by imagining the necessary letter or number. Other research has documented synaesthetic experiences devoid of any particular sensory element, including so-called time-unit-space synaesthesia, in which units of time are experienced as existing in particular locations relative to the body.

"Hence, the original name of the presently investigated phenomenon syn + aesthesia (Greek for union of senses) may turn out to be misleading in respect of its true nature," the researchers said. "The term ideaesthesia (Greek for sensing concepts) may describe the phenomenon much more accurately." For more detailed discussion of how, when and why synaesthetic triggers and their concurrent experiences are acquired, it's worth checking out the full-text of the article.
_________________________________

Nikolić, D., Jürgens, U., Rothen, N., Meier, B., & Mroczko, A. (2011). Swimming-style synesthesia. Cortex, 47 (7), 874-879. DOI: 10.1016/j.cortex.2011.02.008

This post was written by Christian Jarrett for the BPS Research Digest.

Tuesday, June 7, 2011

The neuroscience of Batman, or how the human brain performs echolocation

Over the last few years it’s become apparent that humans, like bats, can make effective use of echolocation by emitting click sounds with the tongue and listening for the echoes that result. Now a team led by Lore Thaler at the University of Western Ontario has conducted the first ever investigation into the neural correlates of this skill.

Thaler and her colleagues first had to overcome the practical challenge of studying echolocation in the noisy environment of a brain scanner, in which participants are required to keep their heads still. The researchers established that two blind, experienced echo-locators, EB and LB, were able to interpret with high accuracy the recordings of tongue clicks and echoes they’d made earlier, and so this form of passive echolocation was studied in the scanner.

Among several remarkable new insights generated by the research, the most important is that EB and LB exhibited increased activity in their visual cortices, but not their auditory cortices, when they listened to clicks and echo recordings taken outside, compared with when they listened to the exact same recordings with the subtle echoes omitted. No such differential activity was detected among two age-matched, male sighted controls.

The finding suggests that it is the visual cortex of the blind participants that processes echoes, not their auditory cortex. This visual cortex activity was stronger in EB, who was blind from an earlier age than LB and is more experienced at echolocation. The echolocation skill of both blind participants is nonetheless remarkable: both are able to cycle, identify objects and detect movement. EB, but not LB, showed evidence of a contra-lateral pattern in his echo-processing brain activity, just as sighted people do with the processing of light. That is, activity was greater in the brain hemisphere opposite to the source of stimulation.

Just how the visual cortex extracts meaningful information from subtle echo sounds must await future research. The researchers' best guess is that the relevant neurons perform 'some sort of spatial computation that uses input from the processing of echolocation sounds that was carried out elsewhere, most likely in brain areas devoted to auditory processing.' The functional role of the cerebellar processing that was also differentially activated by echo sounds in the echo-locators likewise remains to be established.

‘… [O]ur data clearly show that EB and LB use echolocation in a way that seems uncannily similar to vision,’ the researchers concluded. ‘In this way, our study shows that echolocation can provide blind people with a high degree of independence and self-reliance in their daily life. This has broad practical implications in that echolocation is a trainable skill that can potentially offer powerful and liberating opportunities for blind and vision-impaired people.’

If this research has piqued your interest in echolocation, a previous research paper on the topic by Antonio Martinez and his co-workers explained that anyone, blind or sighted, is able to learn the skill. In fact, they said that after two hours' practice a day for two weeks you should be able to recognise, blindfolded, whether or not there is an object in front of you.
_________________________________

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS ONE. DOI: 10.1371/journal.pone.0020162

This post was written by Christian Jarrett for the BPS Research Digest.

Friday, June 3, 2011

How our visual system is guided by gossip radar

The kind of negative tittle-tattle that appears daily in the tabloids seems to have little merit. But experts believe that, historically, paying attention to such gossip played an important role in our survival, such that today negative hearsay continues to bias our visual system.

Eric Anderson at Northeastern University in Boston and his colleagues have shown this in a new study that paired photos of neutral faces with lines of positive, negative or neutral gossip, and presented these to 61 participants on-screen. Typical lines of gossip were ‘threw a chair at his classmate’, ‘helped an elderly woman with her groceries’ and ‘passed a man on the street’. Each face was paired four times with its designated nugget of social information.

These faces were then presented in a binocular rivalry paradigm with pictures of houses. This means that, using a piece of equipment called a stereoscope, a face was presented exclusively to one eye and a house exclusively to the other, which would have led the two images to compete for access to the participant’s conscious awareness. For the participants, a fluctuating perceptual experience would then have ensued: first one image seen, then the other, and back again until the trial finished after ten seconds.

Participants were asked to press a keyboard key to indicate which image they could see at any given time. Anderson’s key finding was that faces previously paired with negative gossip tended to dominate, remaining visible for more than half a second longer than faces previously paired with positive or neutral gossip, or than entirely new faces.
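The dominance measure in a paradigm like this comes down to summing, per ten-second trial, how long each percept was being reported. Here is a minimal sketch, assuming a hypothetical key-log format and a function name of my own (this is not the authors' analysis code):

```python
# Illustrative sketch: summing how long the face dominated awareness in a
# 10-second rivalry trial from timestamped key events.
# 'f' = participant reports seeing the face, 'h' = seeing the house.
def face_dominance(events, trial_len=10.0):
    """events: time-ordered list of (time_s, key) pairs from trial onset.
    Before the first event, neither percept is counted as dominant."""
    total, last_t, seeing_face = 0.0, 0.0, False
    for t, key in events:
        if seeing_face:
            total += t - last_t    # close the preceding face-dominant span
        last_t, seeing_face = t, (key == 'f')
    if seeing_face:                # face still dominant at trial end
        total += trial_len - last_t
    return total

# Hypothetical trial: face reported from 0.4-2.1s, 3.0-6.5s and 8.2s to end.
trial = [(0.4, 'f'), (2.1, 'h'), (3.0, 'f'), (6.5, 'h'), (8.2, 'f')]
print(round(face_dominance(trial), 2))  # 7.0 seconds of face dominance
```

Averaging such totals separately for negative-, positive- and neutral-gossip faces gives the comparison reported above.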

To rule out the possibility that negative gossip was simply learned more effectively than the other gossip types, a second study controlled for how well participants learned the initial face-gossip associations, and the main finding was replicated. This follow-up study also showed that neutral faces paired with negative gossip dominated in consciousness for longer than neutral faces paired with non-social negative information, such as ‘had a root canal performed.’

Anderson’s team said it was easy to see the survival value in the brain prioritising the visual perception of people tagged with negative gossip, thereby allowing them to be seen for longer and for more information about them to be garnered. ‘Our results … [show] that top-down affective information acquired through gossip influences vision,’ the researchers said, ‘so that what we know about someone influences not only how we feel and think about them, but also whether or not we see them in the first place.’ The finding lends scientific credence to the established PR wisdom that for entertainers vying for the spotlight, there's no such thing as bad press.
_________________________________

Anderson, E., Siegel, E., Bliss-Moreau, E., & Feldman Barrett, L. (2011). The visual impact of gossip. Science. DOI: 10.1126/science.1201574

This post was written by Christian Jarrett for the BPS Research Digest.

Wednesday, April 13, 2011

Your brain unscrambles words in the mirror but then switches them back again

We humans can recognise things from different angles and orientations. As Jon Duñabeitia and his colleagues observe in their new paper, a tiger is still a tiger whether you see it facing rightwards or leftwards. When it comes to words, though, this skill largely vanishes - mirror-reversed words are especially tricky to read. It makes sense that the brain becomes sensitive to orientation in this way because, unlike the tiger, a 'd' isn't a 'd' when it faces the other way: 'b' (and the same is true for other letters).

The question that Duñabeitia set out to answer is what happens, in the case of letters, to the brain's usual ability to recognise things regardless of their orientation? Is the automatic reversal process somehow unlearned for letters, or is it merely suppressed at a later stage of processing? Given how recently in our evolutionary history we started reading and writing, the latter seems more likely.

However, a recent brain imaging study using fMRI, led by Stanislas Dehaene, suggested that the automatic reversal process was completely blocked when dealing with letters. Dehaene's team found that mirror-reversed words failed to produce a priming effect, either in terms of brain activity or behavioural performance. That is, the subliminal flash of a mirror-reversed word didn't speed up participants' recognition of that same word when it subsequently re-appeared the right way around. This suggests the mirror-reversed words weren't switched around and processed normally by the brain.

But what if the temporal resolution of fMRI is too poor to detect early mirror reversal processes? Duñabeitia's team performed an experiment in which normal and mirror-reversed words were flashed up subliminally prior to repeated presentations of those same words, but they used electroencephalography (EEG) to measure their participants' brain activity. Unlike fMRI, EEG can measure changes in brain activity over sub-second periods (although its spatial resolution is much poorer).

In contrast with Dehaene, Duñabeitia did observe a priming effect for mirror-reversed words. At 150ms after a prime, brain activity differed between mirror-reversed and normally oriented prime words, but by 250ms the brain's response to these two kinds of prime was the same. In other words, the brain detects the mirror-reversed orientation but by 250ms has switched it around the right way. By 400ms (still less than half a second) after the prime, the pattern had changed again, such that the mirror-reversed prime and the normally oriented prime provoked different patterns of activity (located towards the back of the brain). This could be the postulated suppression process in action.

The intriguing implication of this research is that when reading mirror-reversed words your brain automatically flips them the right way around - for an imperceptible instant you have a mirror-reading ability - but then it suppresses that effect, putting the mirror reversal back in place, hence the words appear awkward to read. This interpretation is consistent with the finding that many young children are capable of spontaneous mirror-writing and mirror-reading, perhaps because they have yet to develop the suppression of the automatic reversal process. There are also reports of brain injury prompting the onset of mirror-reading.

This new research is more than just a curiosity: it could help further our understanding of dyslexia, which in some cases is associated with the unwelcome automatic rotation of letters and words. 'Now we know that rotating letters is not a problem that is exclusive to some dyslexics, since everybody does this in a natural and unconscious way,' said Duñabeitia. 'But what we need to understand is why people who can read normally can inhibit this, while others with difficulties in reading and writing cannot.'
_________________________________

Duñabeitia, J., Molinaro, N., & Carreiras, M. (2011). Through the looking-glass: Mirror reading. NeuroImage, 54(4), 3004-3009. DOI: 10.1016/j.neuroimage.2010.10.079 [Article pdf via author website].

Thursday, March 31, 2011

People who are more aware of their own heart-beat have superior time perception skills

What underlies our sense of time? A popular account claims an internal pacemaker emits regular pulses, which are detected by an accumulator. The amount of accumulated pulses represents the amount of time that's passed.

Trouble is, this is all very theoretical and no-one really knows how or where in the brain these functions are enacted. One suggestion is that the pulses are based on bodily feedback, and in particular on the heart-beat. Consistent with this is a recent brain imaging study which showed that activity in the insula (a brain region associated with representing internal bodily states) rose linearly as people paid attention to time intervals (pdf). Now a behavioural study by Karin Meissner and Marc Wittmann has built on these findings by showing that people who are more sensitive to their own heart-beat are also better at judging time intervals.

Thirty-one participants listened to auditory tones of either 8, 14, or 20 seconds duration. After each one, they heard a second tone and had to press a button when they thought its duration matched the first. Counting was forbidden during the task and a secondary, number-based memory task helped enforce this rule. Heart-beat perception accuracy was measured separately and simply involved participants counting silently their own heart-beats over periods of 25, 35, 45 and 60 seconds.

The take-away message is that the participants who were more in tune with their heart-beats also tended to perform better at the time estimation task. A further detail is that physiological measures taken during the encoding part of the task showed that, as time went on, the participants' heart-rate slowed progressively and their skin conductance (i.e. the amount of sweat on the skin) reduced. Moreover, the rate of change in a participant's heart-rate (but not skin conductance) was linked with the accuracy of their subsequent time estimates.
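Neither of the two measures needs elaborate scoring. A minimal sketch of how they are commonly computed, with made-up data (the counting-accuracy formula follows the standard Schandry-style score and may differ in detail from the authors' own analysis):

```python
# Illustrative sketch, not the study's analysis code. All data are hypothetical.

def heartbeat_accuracy(recorded, counted):
    """Mean counting accuracy across intervals: 1 - |recorded - counted| / recorded."""
    return sum(1 - abs(r - c) / r for r, c in zip(recorded, counted)) / len(recorded)

def reproduction_error(target_s, reproduced_s):
    """Absolute proportional error of a reproduced time interval."""
    return abs(reproduced_s - target_s) / target_s

# Hypothetical participant: heartbeats actually recorded vs silently counted
# over the 25, 35, 45 and 60 second intervals used in the study.
recorded = [28, 39, 50, 66]
counted  = [26, 36, 47, 60]
print(round(heartbeat_accuracy(recorded, counted), 3))  # 0.925

# Hypothetical reproductions of the 8, 14 and 20 second tones.
errors = [reproduction_error(t, r) for t, r in [(8, 8.9), (14, 12.6), (20, 18.1)]]
print(round(sum(errors) / len(errors), 2))
```

The reported correlation is then simply between these two per-participant scores: higher counting accuracy going with lower reproduction error.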

'These results suggest that the processing of interoceptive signals [i.e. of internal bodily states] in the brain might contribute to our sense of time,' Meissner and Wittmann concluded.

The new findings add to past research showing that patients with cardiac arrhythmia are poorer than controls at time estimation tasks, and that drug-induced speeding or slowing of the autonomic nervous system (including heart-rate) affects people's under- or over-estimation of time intervals.
_________________________________

Meissner, K., & Wittmann, M. (2011). Body signals, cardiac awareness, and the perception of time. Biological Psychology, 86(3), 289-297. DOI: 10.1016/j.biopsycho.2011.01.001

Wednesday, November 24, 2010

The 'smell' of other people's anxiety makes us take more risks

When people are anxious they release a chemical signal that's detectable on a subconscious level by those close to them. That's the implication of a new study that collected sweat from people as they completed a high-rope obstacle course, and then tested the effect of that sweat on study participants as they played a gambling game.

Katrin Haegler's team placed the sweat samples inside odourless tea bags which were attached with an elastic band to the underside of the gambling participants' noses. For comparison, the participants were also exposed to sweat collected from non-anxious riders of an exercise bike.

When exposed to the anxious sweat, the participants took longer to decide over, but were more likely to bet on, the highest risk scenarios - wagering that the next playing card in a pair would be higher than a 9 (where 10 was as high as the cards went) or lower than a 2 (where 1 was the lowest). In other words, the detection of another person's anxiety made them more willing to take risks. Quite why this should be remains unclear. However, the idea that humans can detect the anxiety of others via chemical signals is not new. For example, a 2009 study showed that sweat collected from an anxious person, compared with from an exerciser, triggered extra activity in a range of emotion-related brain areas.

The participants in the present study rated the anxiety-laced sweat and anxiety-free sweat as equally unpleasant and intense, suggesting, consistent with past research, that they couldn't consciously tell the difference between the two. So the effect of anxiety-laced sweat on risk-taking seems to have been a non-conscious influence.

'Although it is not fully understood if perception of emotional chemical signals in humans may have the ability to alert conspecifics about possible danger [as happens with some animals],' the researchers said, 'our findings suggest that anxiety in humans can be communicated through chemical senses.'
_________________________________

Haegler, K., Zernecke, R., Kleemann, A. M., Albrecht, J., Pollatos, O., Brückmann, H., & Wiesmann, M. (2010). No fear no risk! Human risk behavior is affected by chemosensory anxiety signals. Neuropsychologia, 48(13), 3901-3908. PMID: 20875438

Tuesday, September 21, 2010

What do I want? Don't ask me: Choice blindness at the market stall

Imagine you sampled two jams, chose your favourite, and were then offered another taste of it before being asked to explain your preference. Would you notice that you'd been offered the wrong one, that you were actually tasting the jam you'd turned down? A new study conducted at a market stall by Lars Hall and colleagues found that even for tastes as dramatically different as spicy Cinnamon-Apple and bitter Grapefruit, fewer than 20 per cent of participants realised that they'd just tasted the jam they'd moments earlier turned down. Even after being told the truth, fewer than half said they'd suspected they'd been offered the wrong jam.

This striking lack of insight has been dubbed choice blindness. Before now, it had only been demonstrated for visual preferences, in relation to women's faces, in a lab environment. This new study finds the effect in the real world, and in the context of taste and smell (as well as choosing between pairs of jams, participants also used smell to choose between pairs of specialist teas including Pernod vs. Mango).

To test the choice blindness effect, researchers used sleight of hand and double-ended jam jars or tea jars with a divide in the middle. Each jar contained a different jam/tea option at each end. Participants were presented with a pair of jars and tasted/smelt a sample from each. Then, by surreptitiously inverting the jars, the researchers were able to offer participants a second taste/smell from what appeared to be the same jar they'd just selected as their favourite, but actually now contained the jam/tea choice that they'd turned down.

Remarkably, on trials in which the tea or jam had been swapped, participants were just as confident about their choice as they were on control trials. However, as you'd expect, participants more often detected that the jams/teas had been swapped when choosing between pairs that pilot work had established were more different from each other. Another twist was that some participants were told they could actually take away their favoured jam or tea as a reward. However, this made no difference to the rates at which they detected their choice had been swapped, thus undermining the idea that the choice blindness effect may have to do with a lack of motivation.

People's apparent lack of awareness about choices they themselves have just made not only raises awkward questions about the limits of conscious awareness, but surely also has real-world implications. The researchers put it this way: 'The fact that participants often fail to notice mismatches between a taste of Cinnamon-Apple and Grapefruit, or a smell of Mango and Pernod is a result that might cause more than a hiccup in the food industry, which is critically dependent on product discrimination and preference studies to further the trade.'
_________________________________

Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea. Cognition, 117(1), 54-61. PMID: 20637455

Wednesday, July 28, 2010

Football fouls more likely to be given when play heads left

A simple perceptual bias could influence football referees' judgements about whether a foul occurred or not. That's according to Alexander Kranjec and colleagues, who had 12 football players at the University of Pennsylvania look for half a second each at 268 static images of one player tackling another and decide whether a foul had been committed. Unbeknown to the participants, 134 of the pictures were simply mirror opposites of the other 134.

The key finding was that more fouls (66.5 vs. 63.3 - a statistically significant difference) were judged to have occurred when assessing the images in which movement was captured in a leftward direction than when assessing the same images mirror-reversed and therefore featuring implied rightward motion. The researchers think this anomaly may have to do with our bias (at least in cultures that read from left to right) for rightward motion. Motion from right to left is perceived as less natural and this may be responsible for influencing judgements about fouls during play in that direction. Apparently film directors exploit this same bias by having villains arrive on-screen from the right.
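The study's exact statistics aren't reproduced here, but a within-subject comparison of this kind (each player judging both the original and mirror-reversed versions) is typically tested against zero with a paired t-test. A minimal sketch with entirely made-up per-player foul counts:

```python
# Illustrative sketch, not the study's analysis. Data are hypothetical.
import math

def paired_t(a, b):
    """Paired-samples t statistic for two equal-length lists of scores."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)                 # t = mean(d) / SE(d)

# Hypothetical foul counts for 12 players: leftward vs mirror-reversed (rightward).
left  = [68, 65, 70, 63, 67, 66, 64, 69, 66, 67, 65, 68]
right = [64, 63, 66, 61, 65, 62, 63, 65, 63, 64, 62, 64]
print(round(paired_t(left, right), 2))  # t ≈ 9.95 for these invented data
```

With 11 degrees of freedom, a t value of that size would comfortably clear conventional significance thresholds; the real data produced a smaller but still significant difference.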

Kranjec's team said their finding has implications for refereeing. The most popular system, known as the 'left diagonal refereeing system' (see picture), in which the referee runs a diagonal axis between the two left-hand corners of the pitch, results in the referee witnessing tackles in both goal areas primarily from a right-to-left perspective, thus making judgments of fouls in these areas more likely - an advantage to attackers. This is okay because it applies to both teams. What's important, Kranjec and colleagues warn, is that the referee doesn't switch to a 'right diagonal system' half-way through a match, potentially penalising a losing side that needs to attack yet no longer enjoys the benefits of this perceptual bias when playing in offensive areas.

‘These results ... suggest that the effects of a low-level perceptual mechanism could alter a decision, change the result of a game and perhaps, the fortunes of nations,’ the researchers said.
_________________________________

Kranjec, A., Lehet, M., Bromberger, B., & Chatterjee, A. (2010). A sinister bias for calling fouls in soccer. PLoS ONE, 5(7). PMID: 20628648

Tuesday, March 23, 2010

A social version of a basic cognitive mechanism

We're slower to direct our attention to the same location twice in succession, a well-established phenomenon that cognitive psychologists call 'inhibition of return' (IoR). It's thought the mechanism may act to make our search of the visual scene more efficient by deterring us from looking at the same spot twice. Now Paul Skarratt and his colleagues have documented a new 'social' form of inhibition of return, in which people are slower to attend to a location that social cues, such as gaze direction, suggest another person has already attended to.

Twelve participants sat at a table with an animated character projected opposite. Each participant and their animated partner had two lights and two buttons in front of them, near the middle of the table (see figure above). One light/button pair was to the left, the other pair was to the right. The basic task was to press the corresponding button as fast as possible when its light came on. Participants were slower to respond to a light when the animated partner had just responded to the adjacent light on their side of the table - this is what you might call a weak version of social inhibition of return. However, when two large vertical barriers were put up with a gap in the middle, so that the participants could only see their partner's eyes and initial reaching action, and not their actual button presses, this social IoR disappeared.

In a second experiment, the animated partner was replaced with a human. This time, the social IoR effect occurred even when the barriers were erected and only the partner's eye gaze and initial hand movement could be seen. In other words, inferences about where the partner was going to attend, based on their eyes or early hand movement, seemed to be enough to inhibit a participant's own attention to the same location. For some reason, this strong version of social IoR only occurred with a real, human partner, not the animated, computer-controlled partner of the first experiment.

The final experiment added yet another visual barrier, which left only the partner's eyes or only their early hand movement visible. This was to try to establish which cue was the more important for provoking social IoR. The answer was that both cues were equally effective.

It's only supposition at this stage, but Skarratt and his team think social IoR could be supported by the postulated mirror neuron system. Monkey research has shown, for example, that there are mirror neurons in the premotor cortex that fire whether a monkey sees another individual grasp an object or sees only the initial part of that grasping movement.

'Although the critical mechanisms underlying social IoR remain to be discovered,' the researchers said, 'the current study indicates that it can be generated independently of direct sensory stimulation normally associated with IoR, and can occur instead on the basis of an inference of another person's behaviour.'
_________________________________

Skarratt, P., Cole, G., & Kingstone, A. (2010). Social inhibition of return. Acta Psychologica, 134(1), 48-54. DOI: 10.1016/j.actpsy.2009.12.003

Figure courtesy of Paul Skarratt.

Sunday, March 7, 2010

We're slower at processing touch-related words than words related to the other senses

People are slower at responding to tactile stimuli than to input from the other senses. It's not immediately obvious why this should be. It's unlikely to be for mechanical reasons: the retina in the eye is slower at converting input into a neural signal than the skin is. Psychologists think the answer may have to do with attention. Perhaps we're not so good at keeping our attention focused on the tactile modality compared with the others. Now Louise Connell and Dermot Lynott have added to the picture by showing that the tactile disadvantage extends to the conceptual domain. That is, we seem to be slower at recognising when a word is tactile in nature than at recognising when words are visual, or related to taste, sound or smell.

The researchers had dozens of participants look at words on a screen, presented one at a time, and press a button to say if they were related to the tactile modality (e.g. 'itchy') or not. Some words were tactile-related whilst others were fillers and related to the other senses.

The same task was then repeated but with participants judging whether the words were visual-related, auditory and so on, with each sense dealt with by a new block of trials. The key finding is that participants were much slower at this task in the tactile condition than for the other senses. This was the case even when words were presented for just 17ms, which is too fast for conscious detection but long enough for accurate responding.

To make sure the slower performance in the tactile condition wasn't to do with the response requiring a button press (which inevitably causes tactile stimulation), the researchers repeated the experiment with vocal responding via a microphone. The results were pretty much the same.

Ensuring they left no stone unturned, Connell and Lynott also conducted a final experiment to check that there isn't something about tactile words, besides their touchiness, that makes them slower to process. To do this they used words that have both visual and tactile qualities - examples include shaggy and spiky - and they mixed these in among filler words that related to the other senses. The same words were used in the tactile condition (in which participants had to say whether each word was tactile-related or not) and a visual condition. Once again, participants were significantly slower in the tactile condition.

Connell and Lynott say their findings provide further evidence for the tactile sense having a processing disadvantage relative to the other senses. They think this is because there's little evolutionary advantage to sustaining attention to the tactile modality whereas there are obvious survival advantages with the other senses, for example: '...in hunting, where efficacious looking, listening and even smelling for traces of prey could afford an advantage.' You may think of pain and damage detection as reasons for paying sustained attention to the tactile domain, but remember these are served by spinal reflexes. 'We do not wait for the burning or stinging sensation to register with the attentional system before responding,' the researchers said.
_________________________________

Connell, L., & Lynott, D. (2010). Look but don't touch: Tactile disadvantage in processing modality-specific words. Cognition, 115(1), 1-9. PMID: 19903564

Wednesday, January 27, 2010

Time flew by ... I must have been enjoying myself

Have you ever been in the cinema and felt the time drag? It's happened to me. A glance at my watch and then the thought that I can't be enjoying the film all that much or else the time would surely have flown. My experience matches the findings from a series of studies by Aaron Sackett and colleagues. The folk psychology belief 'time flies when you're having fun' is so powerful and ubiquitous, the researchers say, that whenever we feel an event has passed more quickly than we expected, we infer that we must have been enjoying ourselves, and vice versa for events that drag.

The researchers first had dozens of undergrads look through passages of text and underline any words with adjacent repeats of a particular letter. Crucially, the researchers told the participants that the task would last ten minutes, but in reality it lasted either five minutes or twenty minutes, thus creating the illusion of time flying or dragging, respectively. A sneaky switch of stop-watches helped create the illusion. Afterwards, the participants who'd experienced the sense of the time flying rated the task as far more enjoyable than did the participants who'd experienced the sense of time dragging.

Further experiments showed that provoking the feeling of time flying led participants to be more tolerant of an irritating noise, and led them to enjoy their favourite song more than usual. This last finding was important because there was a possibility that it would feel unpleasant for a pleasurable activity to end earlier than expected.

If people really do use the 'time flies when you're having fun' adage to evaluate their own enjoyment, then challenging or encouraging the truth of the adage ought to affect the kind of findings described above. That's exactly what Sackett's team found. When participants read a scientific article challenging the 'time flies' adage, speeding up their subjective sense of time no longer increased their enjoyment of a word-based task.

It was a similar story when participants were given an alternative explanation for why time might have raced by. Participants were given ear plugs, which they were told could speed people's time perception. Again, the illusion of time flying didn't lead these participants to enjoy a task more, presumably because they attributed the sense of time flying to the ear plugs rather than to their enjoyment.

'Taken together, these findings have important implications for understanding and changing hedonic experience,' the researchers said. The Digest got in touch with lead author Aaron Sackett, Marketing Professor at the University of St. Thomas, to ask him how this might apply in the real world. He said the first thing to do is minimise people's access to accurate time cues. Next, alter their subjective time perception. There are numerous ways to do this. For example, physiological arousal speeds time perception so a free coffee at the start of a long queue could work (as long as no clocks were in sight). Even music that's incongruent with the context (e.g. Chinese music in an English restaurant) has been found to speed time. Finally, you need the surprise moment, when people are alerted to the true passage of time. That provokes in people the sensation of time having flown, followed by the gratifying inference that they must therefore have been enjoying themselves.
_________________________________

Sackett, A. M., Nelson, L. D., Meyvis, T., Converse, B. A., & Sackett, A. L. (2010). You're having fun when time flies: The hedonic consequences of subjective time progression. Psychological Science. DOI: 10.1177/0956797609354832

Wednesday, January 20, 2010

Scared face processed more quickly when seen out of the corner of the eye

The brain processes fearful faces more quickly when seen out of the corner of the eye than when viewed straight on. Dimitri Bayle and colleagues, who made their finding using magnetoencephalography (MEG) brain scanning, believe this bias has probably evolved because threats are more likely to come from side-on.

Eleven participants had their brains scanned while they judged whether faces on a computer screen were happy or not. Unbeknown to the participants, each of these visible faces was actually preceded by a subliminally presented fearful face, either straight ahead or in the periphery.

The striking finding was that a peripherally presented fearful face led to much quicker activation of brain regions known to be involved in emotion processing. Specifically, a peripherally presented fearful face was followed by increased activation in the right anterior fronto-medial region - including the famous amygdala - within just 130ms. By contrast, a fearful face presented straight on triggered activity in these emotional-processing centres only after 210ms.

Bayle's team think that fearful stimuli seen out of the corner of the eye are processed more quickly because of the preponderance of so-called 'magnocellular' cells in the eye's periphery. These feed into the magnocellular visual pathway, known for its fast and dirty processing, which routes subcortically via the superior colliculus. By contrast, stimuli viewed straight ahead, in our full attentional glare, are preferentially processed by the so-called parvocellular pathway, which is more thorough and travels at a rather more leisurely pace via the visual cortex at the back of the brain.

The researchers concluded: 'An adaptive advantage is conferred by the fast automatic detection of potential threat outside the focus of attention, as danger in the external world mostly appears in the peripheral vision, requiring a rapid behavioural reaction before conscious control.'
_________________________________

Bayle, D. J., Henaff, M. A., & Krolak-Salmon, P. (2009). Unconsciously perceived fear in peripheral vision alerts the limbic system: A MEG study. PLoS ONE, 4(12). PMID: 20011048