
Monday, December 5, 2011

The brain basis of "unrealistic optimism"

Life is a little like going for a walk in the rain. Sooner or later you're going to get wet - be that in the form of bad health, unrequited love or job redundancy. It's remarkable that we ever venture out. We do so sheltered under the umbrella of "unrealistic optimism". Depressed people aside, the rest of us underestimate the likelihood that bad things will happen to us and overestimate the likelihood of good outcomes. Asked to imagine positive scenarios, we do so with greater vividness and more immediacy than when asked to picture negative occurrences - our images of those are hazy and distant.

Now Tali Sharot (author of the forthcoming book The Optimism Bias) and her colleagues have investigated the brain mechanisms underlying this rosy outlook. Sharot had participants estimate their likelihood of experiencing 80 adverse life events, from developing Alzheimer's to being robbed. After they gave each estimate, the participants were given the correct average probability for a person in their socio-economic circumstances. In a subsequent testing session, participants had a second chance to forecast their risk of experiencing the same 80 misfortunes. Throughout this process, the participants' brain activity was scanned with fMRI.

One key finding is that the participants showed a bias in the way they updated their estimates: they were much more likely to revise an original estimate that was overly pessimistic than one that was unduly optimistic (79 per cent of participants showed this pattern). The researchers checked, and this difference wasn't due to the positive feedback being remembered better; it was purely a matter of positive feedback being given more weight than negative feedback.
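This asymmetric updating can be illustrated with a toy simulation. The sketch below is not the study's model - the learning rates, risk level and noise are illustrative assumptions - but it shows how weighting good news more heavily than bad news pulls a whole population's risk estimates below the true risk, even when everyone starts out accurate:

```python
import random

def update(estimate, feedback, lr_good=0.7, lr_bad=0.2):
    """One round of belief updating with asymmetric learning rates.

    estimate, feedback: probabilities of an adverse event (0-1).
    lr_good applies when the news is good (the reported risk is below
    the current estimate); lr_bad when the news is bad. The values are
    illustrative, not taken from the paper.
    """
    error = feedback - estimate
    lr = lr_bad if error > 0 else lr_good
    return estimate + lr * error

random.seed(1)
true_risk = 0.30

# Start 1,000 simulated estimators at the true risk, give each 20
# rounds of noisy but unbiased feedback, and see where beliefs settle.
estimates = [true_risk] * 1000
for _ in range(20):
    estimates = [
        update(e, min(1.0, max(0.0, true_risk + random.gauss(0, 0.1))))
        for e in estimates
    ]

mean_belief = sum(estimates) / len(estimates)
print(round(mean_belief, 2))  # settles below the true risk of 0.30
```

Because the feedback is unbiased, any drift comes entirely from the asymmetry in how it is absorbed - a minimal analogue of the pattern Sharot's team reported.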

There were some intriguing neural insights. Discovering that an initial estimate was unduly pessimistic was associated with increased activity across the frontal lobes, in left inferior frontal gyrus, left and right medial frontal cortex/superior frontal gyrus, and also in the right cerebellum - and this increased activity correlated with the participants' subsequent updating of their estimate in the second round of predictions. By contrast, discovering that they'd been overly optimistic was associated with reduced activity in the inferior frontal gyrus extending into precentral gyrus and insula, and again this activity change was related to the likelihood that the participants would revise their estimate in the second round of predictions.

The researchers also compared the brain activity between the most and least optimistic participants. High scorers in trait optimism showed less of the activity drop in inferior frontal gyrus when they discovered they'd been overly optimistic. That is, their brains seemed to ignore information educating them about the depressing reality of their chances of experiencing adversity later in life. In contrast, the brains of the high and low optimists responded to desirable feedback (in which they learned they'd been unduly pessimistic) in exactly the same way.

"Our findings offer a mechanistic account of how unrealistic optimism persists in the face of challenging information," said Sharot and her team. "We found that optimism was related to diminished coding of undesirable information about the future in a region of the frontal cortex (right inferior frontal gyrus) that has been identified as being sensitive to negative estimation errors."

The researchers also reflected on the wider implications of their research. They said that unrealistic optimism likely evolved to enhance exploratory behaviour and has the benefit of reducing stress and anxiety. However, they said that this rosy view comes at a cost. "For example," they said, "unrealistic assessment of financial risk is widely seen as a contributing factor in the 2008 global economic collapse."
_________________________________

Sharot, T., Korn, C., and Dolan, R. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14 (11), 1475-1479. DOI: 10.1038/nn.2949

Post written by Christian Jarrett for the BPS Research Digest.

Tuesday, August 16, 2011

Take vitamin pill, eat cake. How supplements can encourage unhealthy behaviour

Have you ever had that feeling, after an energetic gym session, or perhaps a long walk, that you've earned the right to a mountainous slice of cake, or to lounge lazily in front of the telly? Psychologists call these licensing effects and a new study has documented a similar phenomenon following the simple act of taking a vitamin pill. The researchers say the finding could help explain why the explosive rise in the consumption of dietary supplements (approximately half the US population take them, according to recent data) has not led to a commensurate improvement in public health.

Wen-Bin Chiou and his colleagues gave an inert pill to 82 participants recruited via posters in the Taiwanese city of Kaohsiung. Half the participants were told it was a placebo; the other half were told it was a vitamin pill. They were instructed to suspend their usual intake of supplements, if any, for the duration of the study.

Afterwards, compared with the placebo participants, those who thought they'd taken a vitamin pill rated indulgent but harmful activities like casual sex and excessive drinking as more desirable; rated healthy activities like yoga as less desirable; and were more likely to choose a free coupon for a buffet meal over a free coupon for a healthy organic meal. These associations held even after controlling for participants' usual intake of vitamin pills, and participants said at the end that they hadn't guessed the purpose of the study.

The vitamin-takers also felt more invulnerable than the placebo participants, as revealed by their agreement with statements like "Nothing can harm me". Further analysis suggested that it was these feelings of invulnerability that mediated the association between taking a supposed vitamin pill and the unhealthy attitudes and decisions.

A second study with student recruits was similar to the first, but this time, participants who'd taken what they thought was a vitamin pill opted to walk a shorter distance to return a pedometer to a researcher located elsewhere on campus (even though they'd just been reminded of the health benefits of walking). Again, this association, between the vitamin pill and behaviour, was mediated by feelings of invulnerability.
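The mediation logic in both studies - pill label raises felt invulnerability, which in turn drives unhealthy choices - can be sketched with a toy analysis on synthetic data. Everything below (sample size, effect sizes, the regression-on-covariances approach) is my own illustrative assumption, not the paper's method; the point is just that the effect of the pill label shrinks towards zero once the mediator is controlled for:

```python
import random

def cov(u, v):
    """Sample covariance of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

random.seed(0)
n = 5000
# X: 1 if told "vitamin pill", 0 if told "placebo"
X = [random.randint(0, 1) for _ in range(n)]
# M: felt invulnerability, driven by the pill label plus noise
M = [0.8 * x + random.gauss(0, 1) for x in X]
# Y: indulgent choices; in this toy model X acts only through M
Y = [0.6 * m + random.gauss(0, 1) for m in M]

# Total effect of X on Y: simple regression slope
total = cov(X, Y) / cov(X, X)

# Direct effect of X on Y controlling for M: partial regression slope
direct = ((cov(X, Y) * cov(M, M) - cov(M, Y) * cov(X, M))
          / (cov(X, X) * cov(M, M) - cov(X, M) ** 2))

print(round(total, 2), round(direct, 2))
```

In this construction the total effect comes out near 0.8 x 0.6 = 0.48, while the direct effect controlling for invulnerability hovers near zero - the signature of full mediation that the researchers' analyses pointed to.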

"People who rely on dietary supplements for health protection may pay a hidden price: the curse of licensed self-indulgence," the researchers said. "Policy interventions that remind individuals to monitor the licensing effect may help translate the increased use of dietary supplements into improved public health."
_________________________________

Chiou, W., Yang, C., and Wan, C. (2011). Ironic effects of dietary supplementation: Illusory invulnerability created by taking dietary supplements licenses health-risk behaviors. Psychological Science. DOI: 10.1177/0956797611416253

This post was written by Christian Jarrett for the BPS Research Digest.

Tuesday, March 8, 2011

How anger can make us more rational

Anger can de-bias our thinking
Imagine you're in a room with four people: one is lip-snarlingly angry, the others are calm. Who among them would you consider the most likely to think rationally? A surprising new study suggests that, in at least one important respect, it's actually the angry individual who will be the more rational decision maker. How come? Because they'll be less prone to the confirmation bias - our tendency to seek out information that supports our existing views.

Maia Young and her colleagues had 97 undergrads take part in what they thought were two separate experiments. The first involved them either recalling and writing about a time they'd been exceptionally angry (this was designed to make them angry), or a time they'd been sad, or about mundane events.

Next, all the participants read an introduction to the debate about whether hands-free kits make speaking on a mobile phone while driving any safer. All the participants had been chosen because, before the study, they'd said they believed that hands-free kits do make driving safer. The most important part came next, as the participants were presented with one-sentence summaries of eight articles, either in favour of, or against, the idea that hands-free kits make driving safer. The participants had to choose five of these articles to read in full.

Which participants tended to choose to read more articles critical of hands-free kits and therefore contrary to their own position? It was the participants who'd earlier been made to feel angry. What's more, when the participants' attitudes were re-tested at the study end, it was the angry participants who'd shifted more from their original position on the debate.

These findings were supported in a follow-up involving 89 adults, with the controversial issue pertaining to who should be the next US president, in what was then the upcoming 2008 election. Once again, participants provoked into feeling angry tended to choose to read articles that ran counter to their original position (be that favouring Obama or McCain). Another detail was that this effect of anger was entirely explained by what the researchers called a 'moving against' tendency, measured by participants' agreement, after the anger induction, with statements like 'I wanted to assault something or someone'.

Young and her team said their results provided an example of anger leading to a cognitive pattern characterised by less bias. 'Although the hypothesis disconfirming behaviour that anger produces may well be an aggressive act, meant to move or fight against the opposition's opinion,' they said, 'its result is to provide those who feel angry with better information.'

What are the real-life implications of this result? The researchers conceded that it's unrealistic to make people angry as a way to improve their decision making. However, they said that in a work meeting, if someone is angry, they might be the one best placed to play the role of devil's advocate on behalf of the group. 'By encouraging angry group members to select information necessary for group discussion,' the researchers explained, 'the group as a whole may get the benefit of being exposed to diverse views and, as a result, achieve a more balanced perspective.'
_________________________________

Young, M., Tiedens, L., Jung, H., and Tsai, M. (2011). Mad enough to see the other side: Anger and the search for disconfirming information. Cognition and Emotion, 25 (1), 10-21. DOI: 10.1080/02699930903534105

Tuesday, March 1, 2011

How thinking for others can boost your creativity

Distancing ourselves from a problem can help us reach the solution
The next time you're struggling to solve a creative problem, try solving it for someone else. According to Evan Polman and Kyle Emich, we're more capable of mental novelty when thinking on behalf of strangers than for ourselves. This is just the latest extension of research into construal level theory, an intriguing concept that suggests various aspects of psychological distance can affect our thinking style.

It's been shown, for example, that greater physical and temporal distance lead us to think more abstractly, such that you're more likely to solve a problem if you imagine being confronted by it in a far-off place and/or at a future time (read Jonah Lehrer's take on what this says about the importance of holidays). Now Polman and Emich have shown that social distance can have the same psychological benefit.

Across four studies involving hundreds of undergrads, Polman and Emich found that participants drew more original aliens for a story to be written by someone else than for a story they were to write themselves; that participants thought of more original gift ideas for an unknown student completely unrelated to themselves, as opposed to one who they were told shared their same birth month; and that participants were more likely to solve an escape-from-tower problem if they imagined someone else trapped in the tower, rather than themselves (a 66 vs. 48 per cent success rate). Briefly, the tower problem requires you to explain how a prisoner escaped the tower by cutting a rope that was only half as long as the tower was high. The solution is that he divided the rope lengthwise into two thinner strips and then tied them together.

The researchers were careful to consider a range of possible confounding factors, including confidence in our knowledge of ourselves versus others, emotional involvement and feelings of closeness. None of these made much difference to the main result. On the other hand, among participants who tackled the tower problem, it was those who said afterwards that they felt the tower was further away, who tended to have found the solution. This reinforces the researchers' claim that solving a problem for a stranger is easier because of the feeling of psychological distance that it creates.

The study has some limitations - the participants didn't know who they were solving a problem for, other than that they were another student. When it comes to applying the lessons of this research to real life, it will surely make a difference who we think we're solving a problem for - be they a stranger, a relative or a manager. Future research could look at this.

'The practical implications of our findings are striking in the extent of their reach,' the researchers concluded with gusto. 'That decisions for others are more creative than decisions for the self is not only valuable information for researchers in social psychology, decision making, marketing, and management but also should prove of considerable interest to negotiators, managers, product designers, marketers, and advertisers, among many others.'
_________________________________

Polman, E., and Emich, K.J. (2011). Decisions for others are more creative than decisions for the self. Personality and Social Psychology Bulletin. PMID: 21317316

Monday, January 31, 2011

Closing our eyes affects our moral judgements

We experience emotion more intensely with our eyes closed
The simple act of closing our eyes has a significant effect on our moral judgement and behaviour. Eugene Caruso and Francesca Gino, who made the observation, think the effect has to do with mental simulation, whereby having our eyes closed causes us to simulate scenarios more vividly. In turn this triggers more intense emotion.

Throughout the study, Caruso and Gino concealed the true aim of the research from participants by telling them that part of the investigation was about judging the quality of headphones. Participants were asked to listen to the rest of the study instructions through a pair of headphones with a view to rating the sound quality. Crucially, half the participants were asked to listen to the different instructions and scenarios with their eyes closed - ostensibly to help their judgement of the sound quality - whilst the remainder listened with their eyes open.

Across the first three studies, the following effects were observed: participants with their eyes closed who heard a hypothetical scenario in which they deliberately over-estimated hours worked (so as to charge more) judged the act as more unethical than participants who heard the same scenario with their eyes open. Participants who heard the instructions for a simple financial game with their eyes closed subsequently shared money more fairly than participants who heard the instructions with their eyes open. And participants who listened to a hypothetical scenario with their eyes closed, in which nepotism and self-interest had biased a recruitment decision they'd made, judged that act as more unethical than did participants who heard the same scenario with their eyes open. Follow-up questions showed that the eyes-closed participants had visualised the scenario more vividly.

A fourth study was similar to the last except that some of the participants were given an explicit instruction to visualise the nepotism scenario as vividly as they could. This instruction led the eyes-open participants to judge the nepotistic act more harshly, similar to the eyes-closed participants. Overall, there was no evidence that the eyes-closed participants had simply paid more attention to the scenario than the eyes-open participants, but they did experience more negative, guilt-based emotion and it's this effect that probably underlies the study's central finding.

'Although scholars from different fields have provided important insights in understanding why people commonly cross ethical boundaries, little research has examined potential solutions that are easily implementable,' the researchers said. 'Here we identified a simple strategy: closing one's eyes, people are likely to simulate the decision they are facing more extensively and experience its emotional components more vividly. As a result ... people may be more sensitive to the ethical nature of their own and others' decisions, and perhaps behave more honestly as a result.'
_________________________________

Caruso, E., and Gino, F. (2011). Blind ethics: Closing one's eyes polarizes moral judgments and discourages dishonest behavior. Cognition, 118 (2), 280-285. DOI: 10.1016/j.cognition.2010.11.008

Monday, January 17, 2011

Coffee helps women cope with stressful meetings but has the opposite effect on men

For men working together, stress plus coffee could be toxic
If a meeting becomes stressful, does it help, or make things worse, if team members drink lots of coffee? A study by Lindsay St. Claire and colleagues that set out to answer this question has uncovered an unexpected sex difference. For two men collaborating or negotiating under stressful circumstances, caffeine consumption was bad news, undermining their performance and confidence. By contrast, for pairs of women, drinking caffeine often had a beneficial effect on these same factors. The researchers can't be sure, but they think the differential effect of caffeine on men and women may have to do with the fact that women tend to respond to stress in a collaborative, mutually protective style (known as 'tend and befriend') whereas men usually exhibit a fight or flight response.

The study involved 64 male and female participants (coffee drinkers at the University of Bristol with an average age of 22) completing various construction puzzles, negotiation and collaborative memory tasks in same-sex pairs. They did this after drinking decaffeinated coffee, which either had or hadn't been spiked covertly with caffeine (the equivalent of about three cups' worth of coffee). Stress was elevated for some of the pairs by telling them they would shortly have to give a public presentation, and by warning them that their participation fee would be performance dependent.

How large were the caffeine effects? The men's memory performance under stressful conditions with caffeine was described by the researchers as 'greatly impaired' whereas caffeine didn't affect women in the same situation. For the construction puzzles, caffeine under high stress conditions led men to take an average of twenty seconds longer (compared with no caffeine) whereas it led women to solve the puzzles 100 seconds faster.

One shortcoming, acknowledged by the researchers, was that there were overall few effects of stress on the participants' performance, no doubt in part because they'd been told they could bail out at any time they liked (although none of them did). Further research is clearly needed to replicate the findings and explore the possible underlying mechanisms. Such work is urgent, the researchers concluded, 'because many ... meetings, including those at which military and other decisions of great import are made, are likely to be male-dominated. Our research suggests that men's effectiveness is particularly likely to be compromised. Because caffeine is the most widely consumed drug in the world, it follows that the global implications are potentially staggering.'
_________________________________

St. Claire, L., Hayward, R., and Rogers, P. (2010). Interactive Effects of Caffeine Consumption and Stressful Circumstances on Components of Stress: Caffeine Makes Men Less, But Women More Effective as Partners Under Stress. Journal of Applied Social Psychology, 40 (12), 3106-3129 DOI: 10.1111/j.1559-1816.2010.00693.x

Wednesday, November 10, 2010

If-then plans help protect us from the 'to hell with it' effect

You're probably familiar with what could be called the 'to hell with it' effect. It's when (as demonstrated by lots of research) a bad mood causes us to take risky decisions or engage in risky behaviour. Like when you're feeling down and you drive home dangerously fast or go out and get drunk. Now a team led by Thomas Webb at the University of Sheffield says that we can protect ourselves from this effect by forming 'if-then' implementation decisions in advance. These are self-made plans which state that if a certain situation occurs, then I will respond in a pre-specified way.

A first study used a trick anagram task to put some students in a bad mood. They were told the task was easy and should only take them five minutes when in fact three of the anagrams were insoluble (pilot work had shown that this puts students in a grump). Other students were told the truth, so the task wasn't expected to put them in a bad mood. Next, all the students said how they would behave in three imaginary scenarios - whether to drive an old car with brake problems, whether to disclose a secret to a room-mate, and whether to return deliberately damaged shoes to a shop for a refund.

Would being provoked into a bad mood encourage riskier behaviour? It depended on whether the students had formed if-then plans in advance. During the previous week, ostensibly as part of a separate study, the students had been asked to keep a mood diary and to try to stay in as positive a mood as possible. Half the students (the control group) followed the simple instruction 'I will try to stay in a positive mood', which they were asked to repeat to themselves three times during the week. The others followed the if-then plan: 'If I am in a negative mood, then I will ... breathe deeply / think only positive thoughts / think how I've dealt successfully with previous situations' (they could choose which ending to use). Again they had to repeat this three times during the week. The key finding was that being provoked into a bad mood by the impossible anagrams led the control students to make riskier decisions (the 'to hell with it' effect in action) but not the students who'd made the if-then implementation plans during the prior week. They seemed to have been inoculated.

This pattern of results was replicated in a second study, in the context of arousal and a gambling task. Like being in a bad mood, being more aroused is also associated with taking more risks. In this case, Bach's Brandenburg Concerto No. 3 was used to increase arousal in half the participants (the others listened to Beethoven's Moonlight Sonata). The gambling task involved betting points on whether a token was hidden inside blue or red boxes on a computer screen. After the arousal induction but before the task, half the students formed the if-then plan 'If I am asked to make a bet, then I will pay close attention to the number of red versus blue boxes'. The control students were simply told to end the game with as many points as they could. Consonant with the first study, increased arousal led the control students to play more riskily, but not the students who'd formed a protective if-then plan.

'Taken together,' the researchers said, 'the findings of the two experiments suggest that people can strategically avoid the detrimental effect of unpleasant mood and arousal on risk taking by forming implementation intentions directed at controlling either the experience of mood or risky behaviour.'

How do if-then plans exert their protective effects? Webb and his colleagues can't be sure, but they think they help form strong links between specific circumstances (e.g. when in a bad mood) and responses (e.g. breathe deeply) thereby making those responses easier to enact. 'Future studies will need to confirm that these processes ... explain how implementation intentions shield behaviour from the deleterious effects of mood,' they said.
_________________________________

Webb, T.L., Sheeran, P., Totterdell, P., Miles, E., Mansell, W., and Baker, S. (2010). Using implementation intentions to overcome the effect of mood on risky behaviour. British Journal of Social Psychology. PMID: 21050527

Monday, November 1, 2010

Higher intelligence associated with "thinking like an economist"

As the world economy dusts itself down and edges towards recovery, a provocative new paper claims that people with higher intelligence are more likely to think like economists. That is, they're more likely to be optimistic about the economy; to recognise the economic advantages of markets free from government interference, and the advantages of foreign trade and foreign workers; and to appreciate the economic benefits of achieving greater productivity with less man-power. The lead author is Bryan Caplan, an economics professor at George Mason University. Past essays by him include 'The 4 Boneheaded Biases of Stupid Voters (And we're all stupid voters.)'

Prior research has established that the more time a person spends in education, the more likely their broad economic views are to match those of the typical economist. Caplan and his colleague Stephen Miller point out that these studies failed to take into account the influence of intelligence. After all, it's known that people with higher IQs tend to spend longer in education, and intelligence itself may also directly influence economic beliefs.

To overcome this problem, Caplan and Miller have focused on answers to the General Social Survey, a massive US poll of national opinions performed every two years. Crucially, it includes questions about the economy and a small test of verbal IQ.

Caplan and Miller's finding is that the link between educational background and 'thinking like an economist' is weakened when IQ is taken into account because IQ is the more important factor associated with economic beliefs. It's a complicated picture because IQ and education may be mutually influential. However, if one assumes that education is unable to raise IQ, but that IQ affects time spent in education, then the researchers said 'the net effect on economic beliefs of intelligence is more than double the net effect of education.' Even if one assumes that education can also affect IQ, 'intelligence still has a larger estimated effect [on economic beliefs],' they said.

Does the link between higher intelligence and 'thinking like an economist' mean that economists are generally right and the public wrong? In answer to this question, Caplan and Miller cite Shane Frederick, a decision-making scholar at Yale's School of Management, who's previously argued that it depends on the type of question. For financial issues, he argued, it pays to emulate those 'with higher cognitive abilities'. However, Frederick noted that 'if one were deciding between an apple or an orange, Einstein's preference for apples seems irrelevant.'

Caplan and Miller say they agree with Frederick about this, before concluding boldly: 'The fact that the beliefs of economists and intelligent non-economists dovetail is another reason to accept the "economists are right, the public is wrong" interpretation of lay-expert belief gaps.'
_________________________________

Caplan, B., and Miller, S. (2010). Intelligence makes people think like economists: Evidence from the General Social Survey. Intelligence, 38 (6), 636-647. DOI: 10.1016/j.intell.2010.09.005

Wednesday, October 27, 2010

Five minutes with the discoverer of the Scientific Impotence Excuse, Geoffrey Munro

When attempting to change people’s behaviour – for example, encouraging them to eat more healthily or recycle more – a common tactic is to present scientific findings that justify the behaviour change. A problem with this approach, according to recent research by Geoffrey Munro at Towson University in America, is that when people are faced with scientific research that clashes with their personal view, they invoke a range of strategies to discount the findings.

Perhaps the most common of these is to challenge the methodological soundness of the research. However, with newspaper reports and other brief summaries of science findings, that’s often not possible because of lack of detail. In this case, Munro's research suggests that people will often judge that the topic at hand is not amenable to scientific enquiry. What’s more, he’s found that, having come to this conclusion about the specific topic at hand, the sceptic will then generalise their belief about scientific impotence to other topics as well (further detail). Munro says that by embracing the general idea that some topics are beyond the reach of science, such people are able to maintain belief in their own intellectual credibility, rather than feeling that they’ve selectively dismissed unpalatable findings.

The Digest caught up with Professor Munro to ask him, first of all, whether he thinks there are any ways to combat the scientific impotence excuse or reduce the likelihood of it being deployed.
"One of the most difficult things to do is to admit that you are wrong. In cases where a person is exposed to scientific conclusions that contradict her or his existing beliefs, one option would be to accept the scientific conclusions and change one’s beliefs. It sounds simple enough, and, for many topics, it is that simple. However, some of our beliefs are much more resistant to change. These are the ones that are important to us. They may be linked to other important aspects of our identity or self-concept (e.g., “I’m an environmentalist ”) or relevant to values that are central to who we are (e.g., “I believe in the sanctity of human life”) or meaningful to the social groups to which we align ourselves (e.g., “I’m a union man like my father and grandfather before him”) or associated with deeply-held emotions (e.g., “Homosexuality disgusts me”). When scientific conclusions challenge these kinds of beliefs, it’s much harder to admit that we were wrong because it might require a rethinking of our sense of who we are, what values are important to us, who we align ourselves with, and what our gut feelings tell us. Thus, a cognitively easier solution might be to not admit our beliefs have been defeated but to question the validity of the scientific conclusions. We might question the methodological quality of the scientific evidence, the researcher’s impartiality, or even the ability of scientific methods to provide us with useful information about this topic (and other topics as well). This final resistance technique is what I called “scientific impotence”.

So, how can strongly-held beliefs be changed? How can scientific evidence break through the defensive tenacity of these beliefs? Well, I hope the paragraph above illustrates how scientific evidence can be threatening when it challenges an important belief. It makes you feel anxious, upset, and/or embarrassed. It makes you question your own intelligence, moral standing, and group alliances. Therefore, the most effective ways to break the resistance to belief-challenging scientific conclusions is to present such conclusions in non-threatening ways. For example, Cohen and his colleagues have shown that affirming a person’s values prior to presenting belief-challenging scientific conclusions breaks down the usual resistance. In other words, the science is not so threatening when you’ve had a chance to bolster your value system. Relatedly, framing scientific conclusions in a way that is consistent with the values of the audience is more effective than challenging those values. Research from my own laboratory shows that reducing the negative emotional reactions people feel in response to belief-challenging scientific evidence can make people more accepting of the evidence. We achieved this by giving participants another source (something other than the scientific conclusions they read) to which they could attribute their negative emotional reactions. While this might be difficult to implement outside of the laboratory, we believe that other factors can affect the degree to which negative emotional reactions occur. For example, a source who speaks with humility is less upsetting than a sarcastic and arrogant pundit. Similarly, the use of discovery-type scientific words and phrases (e.g., “we learned that…” or “the studies revealed that…”) might be less emotionally provocative than debate-type scientific words and phrases (e.g., “we argue that…” or “we disagree with so-and-so and contend that…”). 
In fact, anything that draws the ingroup-outgroup line in the sand is likely to lead to defensive resistance if it appears that the science or its source is the outgroup. So, avoiding culture war symbols is crucial. Finally, as a college professor, I believe that frequent exposure to critical thinking skills, practice with critical thinking situations, and quality feedback about critical thinking allow people to understand how their own biases can affect their analysis of information and result in open-minded thinkers who are skeptical yet not defensive."
Next, the Digest asked Prof Munro whether he thinks psychology findings are particularly prone to provoke scientific discounting cognitions - and, if so, whether we as a discipline should make extra effort to combat this.
"Yes, I believe psychological research (and probably social science research in general) is prone to provoke scientific discounting. The term “soft science” illustrates how social sciences are perceived differently than the “hard sciences”. There are a number of reasons why this might be true. First, much psychological research is conducted without the use of technologically-sophisticated laboratories containing the fancy equipment that comes to many people’s minds when the word science is used. In other words, psychological research doesn’t always resemble the science prototype. Supporting this position, psychological research that is conducted in high-tech labs (e.g., neuroscience imaging studies) is, in my opinion, perceived with less skepticism by the general public. Second, psychological research often investigates topics about which people already have subjective opinions or, at least, can easily call to mind experiences from their own lives that serve as a comparison to the research conclusions. In other words, people often believe that they already have knowledge and expertise about human thought and behavior. When their opinions run counter to psychological research conclusions, then scientific discounting is likely. For example, there is a common belief that cathartic behaviors (e.g., punching a punching bag) can reduce the frustrations that sometimes lead to aggression. Psychological research, however, has contradicted the catharsis hypothesis, yet the belief remains entrenched, possibly because it has such a strong intuitive appeal. In contrast, people will quickly reveal their lack of expertise on topics in physics or chemistry and have a harder time calling to mind examples from their own lives. 
Third, there is likely some belief that people’s thoughts and behaviors are less predictable, more mysterious, and affected by more variables than are inanimate objects like chemical molecules, planets in motion, or even the functioning of some parts of the human body (e.g., the kidneys). Furthermore, psychological conclusions are based on probability (e.g., the presence of a particular variable makes a behavior more likely to happen), and probability introduces the kind of ambiguity that makes the conclusions easy to discount. Fourth, some psychological research is perceived to be derived from and possibly biased by a sociopolitical ideology. That is, there is the belief that some psychologists conduct their research with the goal of providing support for some political viewpoint. This is somewhat less common among the “hard sciences” although the controversy over climate change and the researchers who investigate it suggest that if the topic is one that elicits the ingroup-outgroup nature of the cultural divide, then the “hard sciences” are also not immune to the problem of scientific discounting.

I think that the discipline of psychology has already made vast improvements in managing its public impression and is probably held in higher esteem than it was 50 or even 20 years ago. However, continued vigilance is essential against those (both within and outside of the discipline) who contribute to the perception of psychology as something less than science. The field of psychology has much to offer – it can generate important knowledge that can inform public policy and increase people’s health and happiness, but it cannot do so if its scientific conclusions fall on deaf ears."
_________________________________

Munro, G. (2010). The Scientific Impotence Excuse: Discounting Belief-Threatening Scientific Abstracts. Journal of Applied Social Psychology, 40 (3), 579-600. DOI: 10.1111/j.1559-1816.2010.00588.x

Monday, October 25, 2010

'Don't do it!' - how your inner voice really does aid self-control

As you stretch for yet another delicious cupcake, the abstemious little voice in your head pleads 'Don't do it!'. Does this self-talk really have any effect on your impulse control or is it merely providing a private commentary on your mental life? A new study using a laboratory test of self-control suggests that the inner voice really does help.

Alexa Tullett and Michael Inzlicht had 37 undergrads perform the Go/No Go task. Briefly, one on-screen symbol indicated that a button should be pressed as quickly as possible (the Go command) whilst another indicated that the button press should be withheld (No Go). Because the Go symbol was far more common, participants tended to find it difficult to suppress making a button press on the rare occasions when a No Go command was given. People with more self-control would be expected to make fewer errors of this kind.

Crucially, Tullett and Inzlicht also had the participants perform a secondary task at the same time - either repeating the word 'computer' with their inner voice, or drawing circles with their free hand. The central finding was that participants made significantly more errors on the Go/No Go task (i.e. pressing the button at the wrong times) when they also had to repeat the word 'computer' to themselves, compared with when they had the additional task of drawing circles. This difference was exacerbated during a more difficult version of the Go/No Go task in which the command symbols were periodically switched (so that the Go command became the No Go command and vice versa). It seems that the participants' self-control was particularly compromised when their inner voice was kept busy saying 'computer' so that it couldn't be used to aid self-control.

'By examining performance on a classic self-control task, this study provides evidence that when we tell ourselves to "keep going" on the treadmill, or when we count to ten during an argument, we may be helping ourselves to successfully overcome our impulses in favour of goals like keeping fit, and preserving a relationship,' the researchers said.
_________________________________

Tullett, A.M., & Inzlicht, M. (2010). The voice of self-control: Blocking the inner voice increases impulsive responding. Acta Psychologica, 135 (2), 252-256. PMID: 20692639

Wednesday, October 6, 2010

How to form a habit

This has nothing to do with nuns' clothing. Habits are those behaviours that have become automatic, triggered by a cue in the environment rather than by conscious will. Health psychologists are interested for obvious reasons - they want to assist people in breaking unhealthy habits, while helping them adopt healthy ones. Remarkably, although there are plenty of habit-formation theories, before now, no-one had actually studied habits systematically as they are formed.

Phillippa Lally and her team recruited 96 undergrads (mean age 27) and asked them to adopt a new health-related behaviour, to be repeated once a day for the next 84 days. The new behaviour had to be linked to a daily cue. Examples chosen by the participants included going for a 15 minute run before dinner; eating a piece of fruit with lunch; and doing 50 sit-ups after morning coffee. The participants also logged onto a website each day, to report whether they'd performed the behaviour on the previous day, and to fill out a self-report measure of the behaviour's automaticity. Example items included 'I do it automatically', 'I do it without thinking' and 'I'd find it hard not to do'.

Of the 82 participants who saw the study through to the end, the most common pattern of habit formation was for early repetitions of the chosen behaviour to produce the largest increases in its automaticity. Over time, further increases in automaticity dwindled until a plateau was reached beyond which extra repetitions made no difference to the automaticity achieved.

The average time to reach maximum automaticity was 66 days, although this varied greatly between participants, from 18 days to a predicted 254 days (assuming the still-rising rate of change in automaticity at the study end were to be continued beyond the study's 84 days). This is much longer than most previous estimates of the time taken to acquire a new habit - for example, a 1988 book claimed a behaviour is habitual once it's been performed at least twice a month, at least ten times. In fact, even after 84 days, about half of the current study participants had failed to achieve a high enough automaticity score for their new behaviour to be considered a habit.
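The plateau pattern can be sketched with a simple asymptotic curve, in which early repetitions produce the biggest gains and later ones add less and less. This is only an illustration of the shape, not the model the researchers fitted; the ceiling of 100 and the rate constant (chosen so that 95 per cent of the ceiling is reached at the study's average of 66 days) are made-up values.

```python
import math

def automaticity(t, ceiling, k):
    """Asymptotic growth: automaticity approaches a ceiling,
    with early repetitions producing the largest gains."""
    return ceiling * (1 - math.exp(-k * t))

def days_to_fraction(frac, k):
    """Days of daily repetition needed to reach a given fraction of the ceiling."""
    return -math.log(1 - frac) / k

# Hypothetical rate constant, chosen so 95% of the ceiling is reached
# at the study's average of 66 days (not a parameter from the paper)
k = math.log(20) / 66  # ~0.045 per day

early_gain = automaticity(10, 100, k) - automaticity(0, 100, k)
late_gain = automaticity(66, 100, k) - automaticity(56, 100, k)
print(round(early_gain, 1), round(late_gain, 1))  # big early gains, tiny late ones
```

On these assumed numbers, the first ten days deliver more than ten times the automaticity gain of days 56 to 66, which is the diminishing-returns pattern the participants showed.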

Unsurprisingly perhaps, more complex behaviours were found to take longer to become habits. Participants who'd chosen an exercise behaviour took about one and a half times as long to reach their automaticity plateau compared with the participants who adopted new eating or drinking behaviours.

What about the effect of having a day off from the behaviour? Writing in 1890, William James said that a behaviour must be repeated without omission for it to become a habit. The new results found that a single missed day had little impact on later automaticity gains, either early in the study or later on, suggesting James may have overestimated the effect of a missed repetition. However, there was some evidence that too many missed repeats of the behaviour, even if spread out over time, had a cumulative effect, reducing the maximum automaticity level that was ultimately reached.

It seems the message of this research for those seeking to establish a new habit is to repeat the behaviour every day if you can, but don't worry excessively if you miss a day or two. Also be prepared for the long haul - remember the average time to reach peak automaticity was 66 days.

This research has a serious shortcoming, acknowledged by the researchers, which is that it depended entirely on participants' ability to report the automaticity of their own behaviour. Also, the amount of data made it hard to form clear conclusions about the need for consistency in building a habit. However, the study provides an exciting new approach for exploring habit formation and future research could easily remedy these shortcomings.
_________________________________

Lally, P., van Jaarsveld, C., Potts, H., & Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology. DOI: 10.1002/ejsp.674

Tuesday, September 21, 2010

What do I want? Don't ask me: Choice blindness at the market stall

Imagine you sampled two jams, chose your favourite, and were then offered another taste of it before being asked to explain your preference. Would you notice that you'd been offered the wrong one, that you were actually tasting the jam you'd turned down? A new study conducted at a market stall by Lars Hall and colleagues found that even for tastes as dramatically different as spicy Cinnamon-Apple and bitter Grapefruit, fewer than 20 per cent of participants realised that they'd just tasted the jam they'd moments earlier turned down. Even after being told the truth, fewer than half said they'd suspected they'd been offered the wrong jam.

This striking lack of insight has been dubbed choice blindness. Before now, it had only been demonstrated for visual preferences, in relation to women's faces, in a lab environment. This new study finds the effect in the real world, and in the context of taste and smell (as well as choosing between pairs of jams, participants also used smell to choose between pairs of specialist teas including Pernod vs. Mango).

To test the choice blindness effect, researchers used sleight of hand and double-ended jam jars or tea jars with a divide in the middle. Each jar contained a different jam/tea option at each end. Participants were presented with a pair of jars and tasted/smelt a sample from each. Then, by surreptitiously inverting the jars, the researchers were able to offer participants a second taste/smell from what appeared to be the same jar they'd just selected as their favourite, but which now contained the jam/tea they'd turned down.

Remarkably, on trials in which the tea or jam had been swapped, participants were just as confident about their choice as they were on control trials. However, as you'd expect, participants more often detected that the jams/teas had been swapped when choosing between pairs that pilot work had established were more different from each other. Another twist was that some participants were told they could actually take away their favoured jam or tea as a reward. However, this made no difference to the rates at which they detected their choice had been swapped, thus undermining the idea that the choice blindness effect may have to do with a lack of motivation.

People's apparent lack of awareness about choices they themselves have just made not only raises awkward questions about the limits of conscious awareness, but surely also has real-world implications. The researchers put it this way: 'The fact that participants often fail to notice mismatches between a taste of Cinnamon-Apple and Grapefruit, or a smell of Mango and Pernod is a result that might cause more than a hiccup in the food industry, which is critically dependent on product discrimination and preference studies to further the trade.'
_________________________________

Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea. Cognition, 117 (1), 54-61. PMID: 20637455

Wednesday, July 21, 2010

We're happier when busy but our instinct is for idleness

Being forced to wait fifteen minutes at the airport luggage carousel leaves many of us miserable and irritated. Yet if we'd spent the same waiting time walking to the carousel we'd be far happier. That's according to Christopher Hsee and colleagues, who say we're happier when busy but that unfortunately our instinct is for idleness. Unless we have a reason for being active we choose to do nothing - an evolutionary vestige that ensures we conserve energy.

Consider Hsee's first study. His team offered 98 students a choice between delivering a completed questionnaire to a location that was a 15-minute round-trip walk away, or delivering it just outside the room and then waiting 15 minutes. A twist was that either the same or different types of chocolate snack bar were offered as a reward at the two locations.

If the same snack bar was offered at both locations then the majority (68 per cent) of students chose the lazy option, delivering the questionnaire just outside the room. By contrast, if a different (black vs. white) bar was offered at each location then the majority (59 per cent) chose the far away 'busy' option. This was the case even though earlier research showed both snack bar options were equally appealing, and even though the location of the two snack bar types was counterbalanced across participants. In other words, Hsee said, the students' instinct was for idleness, but when they were given a specious excuse for walking further, most of them took the busy option. Crucially, when asked afterwards, the students who'd taken the walk reported feeling significantly happier than the idle students, consistent with Hsee's theory that we're happier when busy (a repeat of the study in which students were allocated without choice to the idle or busy condition led to the same outcome - the busier students felt happier).

In a variant of this first study, students asked to evaluate a bracelet had the option of either spending fifteen minutes waiting time sitting idle or spending the same time disassembling the bracelet and rebuilding it. Those given the option of rebuilding it into its original configuration largely chose to sit idle - consistent with our having an instinct for idleness. By contrast, those told they could re-assemble the bracelet into a second, equally attractive and useful design tended to take up the challenge - again, an excuse, however superficial, for activity seems to be all it takes to spur us on. As before, those who spent the fifteen minutes busy subsequently reported feeling happier than those who sat idle.

Given that being busy makes us happier but that our instinct is for idleness, Hsee's team say there is a case for encouraging what they call 'futile busyness,' that is: 'busyness serving no purpose other than to prevent idleness. Such activity is more realistic than constructive busyness and less evil than destructive busyness.'

The researchers proceed to argue that, unfortunately, most people will not be tempted by futile busyness, so there's a paternalistic case for governments and organisations tricking us into more activity: 'homeowners may increase the happiness of their idle housekeepers by letting in some mice and prompting the housekeepers to clean up. Governments may increase the happiness of idle citizens by having them build bridges that are actually useless.' In fact, according to Hsee's team, such interventions already exist, with some airports having deliberately increased the walk to the luggage carousel so as to reduce the time passengers spend waiting idly for luggage to arrive.
_________________________________

Hsee, C.K., Yang, A.X., & Wang, L. (2010). Idleness aversion and the need for justifiable busyness. Psychological Science, 21 (7), 926-930. PMID: 20548057

Monday, June 28, 2010

How hunger affects our financial risk taking

The hungrier an animal becomes, the more risks it's prepared to take in the search for food. Now, for the first time, Mkael Symmonds and colleagues have shown that our animal instinct to maintain a balanced metabolic state influences our decision-making in other contexts, including finance.

Nineteen male participants performed the same gambling task on three occasions, a week apart: either after a fourteen-hour fast; immediately after eating a standard two-thousand-calorie meal; or one hour after eating a two-thousand-calorie meal. The task simply required participants to choose repeatedly between pairs of gambles, one of which was always riskier but more lucrative than the other.

The immediate effect of the meal was to neutralise risk aversion. For the men with more adipose tissue and higher baseline levels of leptin (a hormone that suppresses appetite), who are generally more risk averse, this meant they became less risk averse when performing the task right after eating. By contrast, men with less adipose tissue and lower leptin levels, who are generally less risk averse, became more risk averse immediately after eating, just as you'd expect based on the behaviour of hungry animals.

An hour after eating gives time for hormonal effects to kick in. As expected, men who reported feeling less hungry an hour after eating, and whose levels of acyl-ghrelin (a hormone that increases appetite) in the blood stream had fallen, played the gambling game in more cautious fashion. 'This parallels findings in foraging animals,' Symmonds told the Digest, 'where changes in metabolic state promote changes in behaviour to maintain or reach a metabolic benchmark (to take more risk if intake rate is relatively low, and less risk if intake is relatively high), but here we see the effect in the economic domain.'

The researchers said their findings have implications for understanding the behaviour of dieters, the obese and people with eating disorders. 'Prandial ghrelin suppression is reduced in obesity,' Symmonds and his co-authors wrote. 'Thus we predict greater risk-seeking in obese individuals following feeding, augmented by larger immediate post-prandial effects on risk taking due to higher baseline adiposity. This mechanism may underpin a component of the aberrant decision-making seen in obese individuals, including impulsivity and reward-seeking behaviour. We also predict profound effects on decision-making for individuals operating at very low baseline energy reserves [i.e. dieters and people with eating disorders].'
_________________________________

Symmonds, M., Emmanuel, J., Drew, M., Batterham, R., & Dolan, R. (2010). Metabolic State Alters Economic Decision Making under Risk in Humans. PLoS ONE, 5 (6). DOI: 10.1371/journal.pone.0011090

Monday, June 21, 2010

Does greater competition improve performance or increase cheating?

What happens when you recruit dozens of students to perform a maze-based computer task and then you ratchet up the competitive pressure? Does their performance improve or do they just cheat more?

Christiane Schwieren and Doris Weichselbaumer found out by having 33 men and 32 women at the Universitat Pompeu Fabra in Barcelona spend 30 minutes completing on-screen mazes. Crucially, half the students were paid according to how many mazes they completed whereas the half in the 'highly competitive' condition were only paid per maze if they were the top performer in their group of six students.

The students in the highly competitive condition narrowed their eyes, rolled up their sleeves, focused their minds and cheated. That's right, the students playing under the more competitive prize rules didn't complete any more mazes than students in the control group, they just cheated more.

To be more specific, the female students in the highly competitive condition cheated more. That is, although across both conditions there was no overall difference between men and women in the amount they cheated, only women responded to the competition intensity by cheating more. Schwieren and Weichselbaumer dug deeper into their results and actually this wasn't a gender issue. Competition increased cheating specifically among poorer performers and it just happened that the poorer performers tended to be female.

How did the researchers measure cheating? After a brief practice, the students were told to continue completing mazes on level 2 difficulty, but they could choose to break the rules by switching to an easier level. The game also gave the option of clicking a button to be guided through the maze solutions. Finally, the students could lie at the end on a score sheet about how many mazes they'd completed. Earlier the researchers had loaded a spy programme on the computers. This took a screen shot on each mouse click, thus revealing the students' true actions.

'It turns out that individuals who are less able to fulfill the assigned task do not only have a higher probability to cheat, they also cheat in more different ways,' the researchers said. 'It appears that poor performers either feel entitled to cheat in a system that does not give them any legitimate opportunities to succeed, or they engage in "face saving" activity to avoid embarrassment for their poor performance.'
_________________________________

Schwieren, C., & Weichselbaumer, D. (2010). Does competition enhance performance or cheating? A laboratory experiment. Journal of Economic Psychology, 31 (3), 241-253. DOI: 10.1016/j.joep.2009.02.005

Monday, June 14, 2010

Lucky number plates go up in value when times are bad

The basis for many superstitious beliefs may be little more than fantasy but their economic effects are all too real. According to Travis Ng and colleagues at the Chinese University of Hong Kong, casual estimates suggest that between $800 and $900 million is wiped off the value of US businesses every Friday the Thirteenth! Now Ng's team has explored the economic cost of superstition by comparing the value of Hong Kong car number plates purchased through auction from 1997 to 2009.

The new research focuses particularly on the presence of 4s and 8s in Hong Kong plates. There's a consensus in Hong Kong that '8', which rhymes in Cantonese with 'prosper' or 'prosperity', is a lucky number, whereas '4', which rhymes with 'die' or 'death', is an unlucky number.

Controlling for visual factors that affect price (for example, plates with fewer digits are more sought-after), Ng's team found that an ordinary 4-digit plate with one extra lucky '8' sold for 63.5 per cent more on average. An extra unlucky '4', by contrast, diminished the average 4-digit plate value by 11 per cent. These effects aren't trivial. Replacing the '7' in a standard 4-digit plate with an '8' would boost its value by roughly $400.
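As a back-of-envelope illustration of how those percentages combine, here is a hypothetical multiplicative model. The paper reports average auction effects, not a pricing formula, and the $630 base price is an assumption chosen so the roughly $400 boost comes out:

```python
def plate_value(base, extra_eights=0, extra_fours=0):
    """Hypothetical model of the reported averages: each extra lucky
    '8' adds 63.5% to an ordinary 4-digit plate's auction price and
    each unlucky '4' knocks 11% off. Multiplicative stacking is an
    assumption, not something the paper specifies."""
    return base * (1.635 ** extra_eights) * (0.89 ** extra_fours)

base = 630  # assumed price (USD) of an ordinary 4-digit plate
boost = plate_value(base, extra_eights=1) - base
print(round(boost))  # roughly the $400 boost mentioned above
```

On these assumed numbers, swapping a neutral digit for an '8' adds about $400, while adding a '4' to a $630 plate would cost about $69.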

As well as charting the monetary value of superstitious beliefs, Ng's study was also able to record how the economic influence of superstition varies according to ongoing macroeconomic circumstances. For instance, the presence of a '4' in a plate always drops its value, but during bad economic times, the diminution in value is greater. On a day that the stock market had dropped by 1 per cent, the 'cost' of having a '4' in a standard 4-digit plate was increased by 19.9 per cent. 'A "4" is bad,' the researchers wrote, 'but it is even worse in bad times.'

Curiously, the effect of ongoing market conditions on the impact of 4s and 8s wasn't equal. Changes to the stock market index exaggerated the 'cost' associated with an extra '4' on both 3-digit and 4-digit plates, but it only affected the premium associated with having an extra lucky '8' on 3-digit plates. 'We are not able to come up with a good explanation for the asymmetric effects,' the researchers said.

'We have shown that the value of superstitions can be economically significant,' the researchers concluded. 'We have also shown that some results are consistent with the view that people tend to be more superstitious in bad times.'
_________________________________

Ng, T., Chong, T., & Du, X. (2010). The value of superstitions. Journal of Economic Psychology, 31 (3), 293-309. DOI: 10.1016/j.joep.2009.12.002

Monday, May 10, 2010

Why it's time for the media to help our politicians believe they can succeed

A psychology study fresh off the presses shows the importance of positive expectations for the successful resolution of awkward negotiations. The results couldn't be more timely as our senior politicians negotiate over terms for a new coalition British government - the first since the 1970s. The finding suggests that the media has a vital role to play. By fostering optimism in the likely success of the negotiations, the media could help increase the likelihood of a successful resolution.

In an initial experiment, Varda Liberman and colleagues had undergrads negotiate with a postgrad (actually a confederate working for the researchers) over the division of university funds between the undergrad and post-grad student populations. Crucially, half the 34 participants were told that every single previous negotiating pair had managed to reach an agreement (the 'Positive Expectations' condition), whereas the other half of the participants acted as a control and were merely told to try their best to reach a mutually acceptable agreement. The negotiations followed a format whereby the confederate made an offer, the participant responded with a counter-offer, and the confederate replied with a final offer that the participant could either accept or reject. The confederate's offers were the same across the two conditions.

The key finding was that all 17 participants in the Positive Expectations condition accepted the final offer compared with just 5 out of 17 participants in the control condition. The Positive Expectations students also rated the offers of their negotiating partner, the confederate, as fairer and they felt more satisfied with the negotiation outcome.

The tension was raised in a second experiment which involved Jewish Israeli Business School students negotiating with an Arab Israeli woman over the division of funds between Israel and Palestine. Again, the positive expectations of half the students were manipulated by telling them that virtually all previous negotiations of this kind had ended in agreement. This time, 31 out of 38 students in the Positive Expectations condition accepted their negotiating partner's final offer compared with just 13 out of 38 students in the neutral, control condition. Moreover, the Positive Expectations students were far happier about the negotiation outcome than the control students.

What was going on? Why should the knowledge that comparable prior negotiations ended in success change the way that people negotiate? One reason seemed to be that raised expectations of success led the students to make a more generous counter-offer which meant the gap between their counter-offer and the final offer was smaller. Positive expectations also seemed to change the way that the other party's offers were interpreted. The researchers said that under more pessimistic conditions the other party's offers are interpreted as likely to be in that party's own self interest. By contrast, positive expectations about the negotiation outcome foster a sense that the other party's offers are being made in a more constructive spirit, because they know 'that we need to reach an agreement'.

A problem when it comes to translating the lessons from this research to real life and particularly to the current negotiations among Britain's senior politicians, is that there isn't always a history of success available to inspire optimism. In fact Conservative Prime Minister Edward Heath's attempt to form a coalition with the Liberals in 1974 ended in failure. However, Liberman's team said this needn't be a fatal stumbling block - there are other means of encouraging a sense of optimism and positive expectations for a successful outcome, including 'mutual expressions of goodwill and commitment' or reference to successes in 'previous negotiations between the parties on other, more limited issues.' So far, that's exactly the spirit in which the negotiations seem to be taking place, with politicians on all sides making encouraging comments.

The researchers concluded: 'If our present research gives some basis for optimism about the possibility of bringing theory and research to bear in overcoming barriers to dispute resolution in a strife-worn world, we hope that such optimism will indeed prove to be self-fulfilling, and that practitioners and theorists will be able to find common ground in their efforts to resolve disputes peacefully.'
_________________________________

Liberman, V., Anderson, N., & Ross, L. (2010). Achieving difficult agreements: Effects of Positive Expectations on negotiation processes and outcomes. Journal of Experimental Social Psychology, 46 (3), 494-504. DOI: 10.1016/j.jesp.2009.12.010

Images courtesy of Wikipedia.

Wednesday, April 21, 2010

Don't start group discussions by sharing initial preferences

When groups of people get together to make decisions, they often struggle to fulfil their potential. Part of the reason is that they tend to spend more time talking about information that everyone shares rather than learning fresh insights from each other. In a forthcoming paper, Andreas Mojzisch and Stefan Schulz-Hardt have uncovered a new reason groups so often make sub-optimal decisions. The researchers show that when a group of people begin a discussion by sharing their initial preferences, they subsequently devote less attention to the information brought to the table by each member, thus leading the group to fail to reach the optimal decision. The practical implications are clear - if you can, avoid beginning group decision-making sessions with the exchange of members' initial preferences.

Mojzisch and Schulz-Hardt began their investigation with a carefully controlled simulation of a real group discussion. Rather than exchanging ideas face-to-face, dozens of participants were presented with some selective written information about various job candidates and either told or not told about the initial preferences of other group members who'd received different information. Each participant then received the information that had been given to all the other group members.

Participants needed to consider the information available to the entire group if they were to identify the optimum candidate. Crucially, participants who began the session by hearing about other group members' initial candidate preferences were subsequently less successful at using the group's shared information to pick the optimum candidate. A memory test suggested this was because they'd paid less attention to the relevant information than had the participants who'd been kept in the dark about other members' initial candidate preferences.

A final study tested these effects in a real, face-to-face group decision-making situation. One hundred and eighty students participated in sixty three-person groups tasked with selecting the best among three job candidates. Each group member started off with a unique set of information about the three candidates and the optimum candidate selection could only be reached if group members shared with each other their unique information. Once again, groups were far less successful at sharing the necessary information, and therefore at reaching an optimal decision, if they began their session by sharing their initial candidate preferences. As before, the reason was that sharing initial preferences led group members to pay less attention to the relevant information during group discussion.

'The take-home message of our study is simple,' Mojzisch told the Digest. 'Ninety per cent of group discussions start with the members exchanging their pre-discussion preferences. Our research shows that learning the other group members' preferences at the beginning of a group discussion has a negative effect on the quality of group decision-making.'
_________________________________

Mojzisch, A., & Schulz-Hardt, S. (2010). Knowing others' preferences degrades the quality of group decisions. Journal of Personality and Social Psychology.

PS. This study is due to be published in the Journal of Personality and Social Psychology in May. I will add a link to the abstract as soon as it's available.

PPS. The authors of the current study tipped off the Digest editor about their research findings. If you have some exciting peer-reviewed research in press, you too could tip off the Digest editor, for the chance to have your findings popularised on one of the world's leading psychology blogs. Email: christianjarrett[@]gmail.com Thanks!

Monday, February 8, 2010

How framing affects our thought processes

A take-away restaurant near my house offers customers free home delivery or a ten per cent discount if you pick up. It sounds much better than saying you get no discount for picking up and suffer a ten per cent fee for delivery – this is the power of ‘framing’. Now David Hardisty and colleagues have dug a little deeper into framing, to show first, that these kinds of effects can interact with people's political persuasion, and second, that they can act by altering the order of people's thoughts.

Hundreds of online participants chose between various flights, computers and so on. In each case they could plump for a cheaper option or a more expensive, greener option, the latter including either a 'tax' to help reduce carbon emissions, or an 'offset' to do the same – depending on how the choice was framed. Whether the expensive option was framed as a tax or offset made no difference to Democrat (left-wing) participants. By contrast, Republicans (right-wing) and Independents were much less likely to choose the more expensive option when it was labelled as a tax.

In a second study the researchers added a technique known as 'concurrent thought listing', which involved the participants sharing their thoughts as they made their product choices.

This process revealed that when the expensive option was labelled as a tax, the Republicans and Independents, but not the Democrats, showed a consistent tendency to weigh up the advantages of the cheaper option before considering the benefits of the greener choice. This is significant because past research shows that when we appraise options in sequence, the first item we consider tends to be favoured. Consistent with this, the tax frame led Republican participants not only to consider the cheaper option first but also to generate more supporting evidence for it. By contrast, when the expensive, greener option was labelled as an offset, political affiliation was no longer associated with the order in which the options were considered, nor with the weight of evidence generated for each.

A final study tested whether the order in which we consider options really does have a causal role in our decision making. Participants of all political persuasions were instructed to consider the benefits of the greener, more expensive option first, whether it was labelled as a tax or offset. Despite this instruction, 54 per cent of Republicans failed to comply (showing just how averse they were to the 'tax' label). However, among those participants who did comply, this instruction had the effect of eliminating the interaction between framing and political affiliation – that is, the Republicans were no longer repelled by the greener, expensive option even when it was labelled as a tax.

‘Policy makers would be wise to note the differential impact that policy labels may have on different groups,’ the researchers concluded. ‘What might seem like a trivial semantic difference to one person can have a large impact on someone else.’
_________________________________

Hardisty, D., Johnson, E., & Weber, E. (2010). A dirty word or a dirty world? Attribute framing, political affiliation, and query theory. Psychological Science, 21 (1), 86-92. DOI: 10.1177/0956797609355572