
Thursday, December 1, 2011

Questionable research practices are rife in psychology, survey suggests

Update (15 Dec 2011): the uncorrected proofs of this article have now been released online (pdf).

Questionable research practices, including testing increasing numbers of participants until a result is found, are the "steroids of scientific competition, artificially enhancing performance". That's according to Leslie John and her colleagues, who've found evidence that such practices are worryingly widespread among US psychologists. The results are currently in press at the journal Psychological Science and they arrive at a time when the psychological community is still reeling from the fraud of a leading social psychologist in the Netherlands. Psychology is not alone. Previous studies have raised similar concerns about the integrity of medical research.

John's team quizzed 6,000 academic psychologists in the USA via an anonymous electronic survey about their use of 10 questionable research practices including: failing to report all dependent measures; collecting more data after checking if the results are significant; selectively reporting studies that "worked"; and falsifying data.

As well as declaring their own use of questionable research practices and their defensibility, the participants were also asked to estimate the proportion of other psychologists engaged in those practices, and the proportion of those psychologists who would likely admit to this in a survey.

For the first time in this context, the survey also incorporated an incentive for truth-telling. Some survey respondents were told, truthfully, that a larger charity donation would be made by the researchers if they answered honestly (based on a comparison of a participant's self-confessed research practices, the average rate of confession, and averaged estimates of such practices by others). Just over two thousand psychologists completed the survey, and those who received the truth incentive admitted to more questionable practices than those who didn't.

Averaging across the psychologists' reports of their own and others' behaviour, the alarming results suggest that one in ten psychologists has falsified research data, while the majority has: selectively reported studies that "worked" (67 per cent), not reported all dependent measures (74 per cent), continued collecting data to reach a significant result (71 per cent), reported unexpected findings as expected (54 per cent), and excluded data post-hoc (58 per cent). Participants who admitted to more questionable practices tended to claim that they were more defensible. Thirty-five per cent of respondents said they had doubts about the integrity of their own research. Breaking the results down by sub-discipline, relatively higher rates of questionable practice were found among cognitive, neuroscience and social psychologists, with fewer transgressions among clinical psychologists.
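
For readers curious about the arithmetic, here is one plausible way answers like these can be combined into a prevalence estimate: if only a fraction of those who engage in a practice would admit to it, the raw admission rate understates the true prevalence. The numbers below are invented for illustration; this is a sketch of the logic, not the authors' actual estimation procedure.

```python
# Hypothetical figures, purely for illustration.
admission_rate = 0.40                  # proportion of respondents admitting a practice
estimated_admission_likelihood = 0.60  # respondents' estimate of the proportion of
                                       # engagers who would admit it in a survey

# Scale the raw admission rate up by the assumed under-reporting.
implied_prevalence = admission_rate / estimated_admission_likelihood
print(f"Implied prevalence: {implied_prevalence:.0%}")  # 67%
```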

John and her colleagues said that many of the iffy methods they'd investigated were in a "grey-zone" of acceptable practice. "The inherent ambiguity in the defensibility of research practices may lead researchers to, however inadvertently, use this ambiguity to delude themselves that their own dubious research practices are 'defensible'." It's revealing that a follow-up survey that asked psychologists about the defensibility of the questionable practices, but without asking about their own engagement in those practices, led to far lower defensibility ratings.

John's team think the findings of their survey could help explain the "decline effect" in psychology and other sciences - that is, the tendency for effect sizes to decline with replications of previous results. Perhaps this is because the original, large effect size was obtained via questionable practices.

The current study also complements a recent paper published in Psychological Science by Joseph Simmons and colleagues that used simulations and a real experiment to show how toying with dependent variables, sample sizes and other factors (the kind of practices explored in the current study) can massively increase the risk of a false-positive finding - that is, claiming a positive effect where there is none.
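
To get a feel for why such flexibility is so dangerous, here is a minimal simulation in Python of one of the practices above - collecting more data after checking whether the result is significant - applied to two groups drawn from the same population, so any "effect" detected is by definition a false positive. This is an illustrative sketch, not the Simmons code; it assumes only numpy and scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_study(start_n=10, max_n=50, step=5):
    """One 'study' with no true effect: peek at the p-value after every
    `step` extra participants per group and stop as soon as p < .05."""
    a = list(rng.normal(0, 1, start_n))
    b = list(rng.normal(0, 1, start_n))
    while len(a) <= max_n:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True   # declared "significant" despite no real effect
        a.extend(rng.normal(0, 1, step))
        b.extend(rng.normal(0, 1, step))
    return False

runs = 2000
false_positives = sum(optional_stopping_study() for _ in range(runs))
print(f"False-positive rate: {false_positives / runs:.1%}")  # well above the nominal 5%
```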

"[Questionable research practices] ... threaten research integrity and produce unrealistically elegant results that may be difficult to match without engaging in such practices oneself," John and her colleagues concluded. "This can lead to a 'race to the bottom', with questionable research begetting even more questionable research."
_________________________________

Leslie John, George Loewenstein, and Drazen Prelec (In Press). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science


Pulled from the comments: Psychfiledrawer is a repository for non-replications of published results.


Post written by Christian Jarrett for the BPS Research Digest.

Wednesday, October 12, 2011

Steve Jobs' gift to cognitive science

The ubiquity of iPhones, iPads and other miniature computers promises to revolutionise research in cognitive science, helping to overcome the discipline's over-dependence on testing Western, educated participants in lab settings.

That's according to an international team of psychologists who say the devices allow for experimentation on an unprecedented scale. "The use of smartphones allows us to dramatically increase the amount of data collected without sacrificing precision," say Stephane Dufau and his colleagues, "and thus has the potential to uncover laws of mind that have previously been hidden in the noise of small-scale experiments." In contrast, they argue that conducting cognitive psychology experiments over the internet has not been a great success because of problems obtaining the necessary precision of timing.

To illustrate their point, the researchers developed an iPhone/iPad App that replicates the classic "lexical decision task" used by psychologists to study the sub-second mental processes involved in reading. Participants are presented with a series of letter strings and simply have to indicate as quickly as possible whether each one is a real word or not. The App was launched as a seven-language international effort in December 2010 and after just four months data had been collected from over four thousand participants. By way of comparison, it took more than three years to collect a similar amount of data via conventional means. It will be easy to add further languages to the App, including languages written in non-Roman scripts, such as Chinese.

The free Science XL App presents the task to users as a test of word power and offers a choice of task lengths from two to six minutes. Once enrolled, participants use Yes/No buttons on the touch-screen display to indicate whether the letter strings that appear are real words or not. Each participant's performance stats are presented at the end and they are given the option of forwarding their results to the researchers via email. Extreme negative outliers were excluded from further analysis. There is the obvious issue of participants choosing to only send in favourable performance data. However, this doesn't spoil the ability to examine the effect of different factors on performance. For example, the data collected via the App matched many known features of lexical decision time data: reaction times were quicker for more common words and mean reaction times correlated with data collected in psychology labs.
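
To give a flavour of that kind of validation check, here is a minimal sketch of testing whether response times fall as word frequency rises. The numbers are made up for illustration and are not the Science XL data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-word data: log frequency (higher = more common word)
# and mean lexical decision time in milliseconds.
log_frequency = np.array([0.8, 1.2, 1.9, 2.2, 2.5, 3.1, 3.6, 4.0])
mean_rt_ms = np.array([755, 720, 690, 660, 640, 605, 580, 560])

# The classic lexical decision signature: common words are recognised
# faster, so the correlation should be reliably negative.
r, p = stats.pearsonr(log_frequency, mean_rt_ms)
print(f"r = {r:.2f}, p = {p:.4f}")
```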

Using smartphones "has wide multidisciplinary applications in areas as diverse as economics, social and affective neuroscience, linguistics, and experimental philosophy," say Dufau and his collaborators. "Finally it becomes possible to reliably collect culturally diverse data on a vast scale, permitting direct tests of the universality of cognitive theories."

This isn't the first time that psychology researchers have aired their excitement about the potential of mobile technologies to revolutionise their methods. A 2009 study used mobile phones to monitor participants' social movements and phone calls.
_________________________________

Dufau, S., Duñabeitia, J., Moret-Tatay, C., McGonigal, A., Peeters, D., Alario, F., Balota, D., Brysbaert, M., Carreiras, M., Ferrand, L., Ktori, M., Perea, M., Rastle, K., Sasburg, O., Yap, M., Ziegler, J., and Grainger, J. (2011). Smart Phone, Smart Science: How the Use of Smartphones Can Revolutionize Research in Cognitive Science. PLoS ONE, 6 (9) DOI: 10.1371/journal.pone.0024974

-Thanks to Marc Brysbaert for the tip-off.

Post written by Christian Jarrett for the BPS Research Digest.

Wednesday, September 14, 2011

How not to spot personality test fakers

Personality tests are an effective recruitment tool: higher scorers on conscientiousness and lower scorers on neuroticism tend to perform better in the job. But a major weakness of such tests is people's tendency to answer dishonestly. A study now shows that a popular approach to spotting cheaters is likely to be ineffective.

This approach, which has gained momentum in the research literature, is to focus on applicants' response times. Honest test-takers show an inverted U-shaped response profile, being fast when they strongly agree or disagree with test items (these come in the form of statements about the self, such as "I pay attention to details"), and slower when they answer more equivocally. This is thought to reflect a process whereby test-takers refer to their self-schema and find it easier to answer when statements clearly conform to, or contradict, this schema.

At least two theories predict that fakers won't show this inverted U-shape, and that response times therefore offer a way to expose those who are cheating. One theory has it that fakers refer to their self-schema and then exaggerate the truth on key statements. This has the effect of extending answer times for unequivocal answers, flattening out the inverted U-shape response time profile shown by honest answerers. Another theory says that fakers don't refer to a self-schema at all - they simply assess the social desirability of each item and exaggerate answers where necessary. This is a cognitively simpler task than referral to a self-schema, and again the inverted U-shaped response profile is predicted to flatten.

To test these predictions, Mindy Shoss and Michael Strube had 60 undergrads (38 women) complete a personality test (the Revised NEO Personality Inventory) three times: once honestly, once to create a general good impression, and lastly, either to create a good impression specifically for a public relations role, or specifically for an accountant role.

The key finding is that participants showed the inverted U-shaped response time profile regardless of whether they were answering honestly or not. Response times were faster overall for the fakery conditions, and the inverted U-shape was actually accentuated in the specific public relations fakery condition. Shoss and Strube said these results are consistent with the idea that fakers form, and refer to, an idealised personality schema in their mind when completing a personality test, and so their answers show a similar response time profile to an honest test-taker. The accentuated inverted U-shape for the PR-role condition comes from the fact that the schema for such a role is like a caricature, making unequivocal answers for certain items even easier to provide than usual.

Digging deeper, the researchers found that when striving to make a good impression, participants scored higher on extraversion, agreeableness, openness and conscientiousness and lower on neuroticism.  The inverted U-shape in response times was greater for agreeableness and conscientiousness in the fake conditions than when answering honestly.

"This study casts doubt on the validity of response times for detecting faking in general," the researchers said. "... it seems that researchers and practitioners interested in detecting and reducing faking would do well to focus on other strategies."

An alternative approach to reducing test fakery is to force applicants to choose between pairs of equally appealing statements about themselves, as reported previously on the Digest. Other recent research has shown that many recruitment measures might actually be testing applicants' ability to discern what's required of them, rather than anything more specific, as reported recently by the BPS Occupational Digest.
_________________________________

Shoss, M., and Strube, M. (2011). How do you fake a personality test? An investigation of cognitive models of impression-managed responding. Organizational Behavior and Human Decision Processes, 116 (1), 163-171 DOI: 10.1016/j.obhdp.2011.05.003

Post written by Christian Jarrett for the BPS Research Digest.

Tuesday, July 26, 2011

Brain scans could influence jurors more than other forms of evidence

It's surely just a matter of time until functional MRI brain scans are admitted in US and UK courts. Companies like No Lie MRI have appeared, and there have been at least two recent attempts by lawyers in the USA to submit fMRI-based brain imaging scans as trial evidence.

Functional MRI gauges fluctuating activity levels across the brain, with experts divided on the merits of using the technology as a high-tech lie detection measure (see earlier). The late David McCabe, who died earlier this year, and his colleagues put that debate to one side. They asked: if fMRI evidence were to be allowed in courts, would it have a particularly influential effect on jurors' decisions? There's good reason to think it might. For example, a 2008 study by Deena Weisberg found that lay people and neuroscience students (but not neuroscience experts) were more satisfied by bad scientific explanations when they contained gratuitous mentions of neuroscience.

For the new study, 330 undergrads at Colorado State University read a vignette about a criminal trial in which a defendant was accused of killing his estranged wife and lover. Various points of evidence were mentioned and summaries of testimony and cross-examination were provided (the vignette amounted to two pages).

Crucially, a sub-set of the participants read a version in which fMRI evidence was cited: "... there was increased activation of frontal brain areas when Givens [the defendant] denied killing his wife and neighbour, as compared to when he truthfully answered questions." For comparison, other participants read a version that either included incriminating evidence from polygraph, from thermal imaging technology (which measures changes in facial skin temperature), or that contained no lie-detection technology.

The key finding was that participants who read the brain-imaging version were far more likely (76 per cent) to say they considered the defendant guilty, compared with participants who read the other versions (47 to 53 per cent). Moreover, the lie-detection evidence was more likely to be cited by participants in the fMRI condition as key to their decision, as compared with participants who read versions that didn't mention fMRI.

The participants were not entirely seduced by fMRI. Some of them were given a slightly different version of the fMRI vignette, in which the expert witness warned about the technology's unreliability. These participants came to a similar proportion of guilty verdicts as the participants who read the vignette versions that lacked fMRI evidence. So it seems the persuasive influence of fMRI evidence can be tempered easily enough if people are reminded of its limitations.

The researchers acknowledged the obvious weaknesses of their study: the use of students as mock jurors, the use of vignettes rather than a real trial, and so on. These caveats aside, they said their data show that fMRI evidence could be more influential than other types of evidence. "... [T]hough determining whether that indicates the evidence would lead to unfair prejudice, confusion of the issues, misleading the jury, or needless presentation of cumulative evidence is a complex issue," they said. "At the very least, it appears that juries should be informed of the limitations of fMRI evidence."
_________________________________

McCabe, D., Castel, A., and Rhodes, M. (2011). The Influence of fMRI Lie Detection Evidence on Juror Decision-Making. Behavioral Sciences & the Law DOI: 10.1002/bsl.993

Further reading: The brain on the stand, by Jeffrey Rosen, New York Times magazine.

This post was written by Christian Jarrett for the BPS Research Digest.

Monday, June 6, 2011

Beware the "super well" - why the controls in psychology research are often too healthy

Many studies in clinical psychology and psychiatry are making the mistake of using healthy controls who are too healthy. That's according to a thought-provoking opinion piece by Sharon Schwartz and Ezra Susser - experts in the epidemiology of mental health.

Schwartz and Susser invite readers to consider a hypothetical study that samples participants from a wider group made up of people exposed to a virus prenatally and people not exposed to that virus. Imagine that a psychiatric registry is used to identify all the participants from this wider group who are diagnosed with schizophrenia, and they are compared with a slice of healthy participants recruited from the same source. The aim is to see what proportion of the participants with schizophrenia were exposed to the virus and what proportion of the healthy controls were exposed to the virus. If the history of exposure is higher among the schizophrenia participants, then this would suggest there may be an association between the virus and the later development of schizophrenia. In Schwartz and Susser's hypothetical scenario, there is no difference between patients and controls in rates of virus exposure and so the virus seems unassociated with schizophrenia. So far, so good - this is a classic case-control study.

The problem identified by Schwartz and Susser is that many such studies apply an exclusion criterion or criteria to the healthy controls that they don't also apply to the patient group. For example, they might rule out healthy controls with an alcohol problem, or depression, or even a physical disorder. The motivation for this is often the fear that these other disorders will obscure the potential link between the cause of interest and the condition of interest (virus exposure and schizophrenia in our ongoing example).

But applying such exclusion criteria in a one-sided fashion (to the controls but not the patients) creates a serious confound. In our example, imagine that depressed "healthy" controls are excluded and imagine too that there is an underlying association between the virus exposure and depression. Excluding healthy controls with depression in this scenario would distort the results such that the virus appeared, wrongly, to be associated with schizophrenia (check out the full paper for the data behind this).
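
The mechanism is easy to demonstrate with a toy simulation. In the sketch below, the virus raises the risk of depression but, by construction, has nothing to do with schizophrenia; all the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Invented population: the virus raises the risk of depression,
# but is genuinely unrelated to schizophrenia.
virus = rng.random(N) < 0.20
schizophrenia = rng.random(N) < 0.01                  # independent of the virus
depression = rng.random(N) < np.where(virus, 0.30, 0.10)

cases = schizophrenia
controls = ~schizophrenia
super_well = controls & ~depression   # one-sided exclusion of depressed controls

for label, group in [("cases", cases), ("all controls", controls),
                     ("'super well' controls", super_well)]:
    print(f"{label:>22}: {virus[group].mean():.3f} exposed")

# Cases and unfiltered controls show the same exposure rate (~0.20), as they
# should. The 'super well' controls show a lower rate (~0.16), so comparing
# cases against them wrongly suggests the virus is linked to schizophrenia.
```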

"With all the potential sources of bias in a biologic case-control study, why do we focus on the use of well controls?" the researchers asked. "We do so because the use of well controls is a common, and often recommended, method to select controls. Yet it is time-consuming and expensive, can cause considerable bias and does not improve study results."

If researchers include patient participants with other co-morbid diagnoses in their case-control studies, Schwartz and Susser went on to explain, then they must also include "healthy" controls who happen to have these other conditions. On the other hand, if researchers want to exclude other conditions, so as to clean up their investigation, then they must exclude both patient participants and controls with these other diagnoses.
_________________________________

Schwartz, S., and Susser, E. (2011). The use of well controls: an unhealthy practice in psychiatric research. Psychological Medicine, 41 (6), 1127-1131 DOI: 10.1017/S0033291710001595

This post was written by Christian Jarrett for the BPS Research Digest.

Monday, April 18, 2011

Psychologists like to cite themselves

In a striking case of the experts falling foul of a phenomenon studied by themselves and their colleagues - the self-serving bias - it turns out that psychologists have a tendency to over-cite their own research papers.

Marc Brysbaert and Sinead Smyth analysed one recent issue of Psychological Science and the Journal of Experimental Psychology: Learning, Memory, and Cognition and two recent issues of the Quarterly Journal of Experimental Psychology and the European Journal of Cognitive Psychology.

For each of the articles in these journals, Brysbaert and Smyth used the 'find related records' function on the ISI Web of Science to find the article out there in the wider literature with the greatest overlap in the references it cited, but which was written by a different set of authors. This way the researchers ended up with a list of original target articles, each one paired with a second comparison paper by a different research group, presumably on the same or a highly similar topic (hence the overlap in the reference lists).

To check for a self-citation bias, Brysbaert and Smyth simply looked to see how many times the authors of a target article cited themselves compared with how many times they cited the authors of the comparison paper (and vice versa). For target articles, the average number of self-citations was 4.1 (11 per cent) compared with 2.3 citations of the comparison paper's authors. For the comparison papers, the average number of self-citations was 9 (10 per cent), compared with 1.8 citations of the authors of the target article.
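
The counting itself is simple; here is a rough sketch in which the reference lists, the overlap measure and the first-author matching are all simplifying assumptions standing in for the Web of Science machinery the researchers actually used.

```python
# Hypothetical reference lists, each entry a (first_author, year) pair.
target_refs = {("Smith", 2008), ("Jones", 2009), ("Lee", 2010), ("Smith", 2005)}
candidate_refs = {("Smith", 2008), ("Jones", 2009), ("Lee", 2010), ("Kim", 2007)}

def overlap(refs_a, refs_b):
    """Jaccard overlap between two reference lists - a crude stand-in for
    Web of Science's 'find related records' ranking."""
    return len(refs_a & refs_b) / len(refs_a | refs_b)

def self_citation_count(refs, paper_authors):
    """How many of a paper's references were first-authored by the
    paper's own authors."""
    return sum(1 for first_author, _ in refs if first_author in paper_authors)

print(f"overlap: {overlap(target_refs, candidate_refs):.2f}")             # 0.60
print(f"self-citations: {self_citation_count(target_refs, {'Smith'})}")   # 2
```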

The researchers summed up: 'A typical psychology article contains 3 to 9 self-citations, depending on the length of the reference list ... In contrast, cited colleagues in general receive 1 to 3 citations. This is what we call the self-citation bias: the preference researchers have to refer to their own work when they [supposedly] guide readers to the relevant literature.' The finding adds to past research that's shown academics are biased towards citing other researchers from their own country, and towards citing the work of the editor of the journal their research is published in.

Brysbaert and Smyth believe that psychology researchers indulge in biased self-citation practices not because their own past papers are always necessarily useful to the reader, but because it's 'good for the researchers' esteem, by means of self-enhancement and self-promotion.'

If that's the case, does it work? The evidence for this is mixed. A 2006 study in the field of economics found that papers with more self-citations were no more likely to end up being cited by other research groups. However, another study published in 2007 (pdf), which involved the analysis of over 64,000 Norwegian journal articles, found that authors who self-cited more also tended to receive more citations from others. 'So, although self-citations may not increase the likelihood that a particular article is cited, they do increase the chances that a particular author is cited,' Brysbaert and Smyth explained.

So, what to do about this self-citation bias? One option proposed by Brysbaert and Smyth is for journal editors to impose a cap on self-citations, particularly for journals, like Psychological Science, that have a cap on the total number of references allowed per paper - articles in this journal tended to have the highest proportion of self-citations. What do you think?
_________________________________

Marc Brysbaert, and Sinead Smyth (2011). Self-enhancement in scientific research: The self-citation bias. Psychologica Belgica. In Press. [pdf via author website]

Thursday, April 14, 2011

Out of the lab and into the waiting room - research on where we look gets real

You know how, when you're in an elevator or an underground train, everybody seems to try their darnedest not to look anyone else in the eye? This everyday experience completely contradicts hundreds of psychology studies conducted in the lab, which show how rapidly our attention is drawn to other people's faces and especially their eyes.

Why the contradiction? Because psychologists have used pared down, highly controlled situations to study where people look, often involving faces and social scenes presented on a computer screen. And crucially, when participants look at a monitor, they generally know that the other person can't look back. In real life, things get more complicated - we might not want to engage in eye contact for all the social messages that can send.

Now psychologists are realising it's time to step out of the lab to see how social attention operates in the real world. One step at a time though - they've still kept things fairly basic. Kaitlin Laidlaw and her colleagues rigged 26 student participants up with a mobile eye-tracking head-set and had them sit in a waiting room for a short time.

There was some minor trickery. The participants thought they were waiting for a navigation task, in which their eye movements would be recorded as they went from room to room. That really did happen, but first, for two minutes, whilst an experimenter went to fetch an instruction sheet, the participants' eye movements were recorded for the purposes of the current study.

For half the participants, another student (female, aged 24, and actually an assistant working for the researchers) was sat nearby, 50 inches to the left and 40 inches in front. She was filling in a questionnaire quietly and looked directly at them, with a neutral expression, three times during the two-minute wait. For the other participants, no other person was physically present, but there was a TV monitor located a similar distance away to the right, on which was shown a student filling in a questionnaire (this was the same person as in the other condition, behaving in exactly the same way). The question - how would the participants' head and eye movements differ between the groups?

The participants in the video condition looked at the other student (shown on the monitor) far more often than they looked at a blank computer monitor located elsewhere in the room, and far more often than the participants in the physical presence condition looked at the student sat near them. In fact, the participants in the latter condition didn't look at the physically present student any more than they looked at a blank computer monitor in the room. 'Through the simple act of introducing the potential for social interaction, visual behaviour changed dramatically,' the researchers said.

A further detail was that participants who scored lower on a self-report measure of social skills tended to look more at the other student in the physically present condition. The researchers said this association could be because of their reduced awareness of social etiquette, and could help explain why studies of people diagnosed with autistic spectrum disorders have identified anomalies in social attention in real world scenarios, but have often failed to find them in the lab (looking behaviour was unrelated to self-reported social skills in the video monitor condition).

This study is just the start - all sorts of questions remain unanswered, from the effect of wearing sunglasses, so your gaze can't be seen, to cross-cultural comparisons. 'It is important to note that our results do not imply that humans do not possess a bias in real life to attend to other people, as the video-taped confederate condition clearly demonstrates that we do,' the researchers said. 'However, our live-confederate condition provides strong evidence that this behaviour is malleable, and can be influenced by the opportunity for an interaction with the other individual.'
_________________________________

Laidlaw, K., Foulsham, T., Kuhn, G., and Kingstone, A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences, 108 (14), 5548-5553 DOI: 10.1073/pnas.1017022108 [Hat tip: Sarcastic_f]

Monday, December 13, 2010

When and how psychological data is collected affects the kind of students who volunteer

Psychology has a serious problem. You may have heard about its over-dependence on WEIRD participants - that is, those from Western, Educated, Industrialised, Rich and Democratic societies. More specifically, as regular readers will be aware, countless psychology studies involve undergraduate students, particularly psych undergrads. Apart from the obvious fact that this limits the generalisability of the findings, Edward Witt and his colleagues provide evidence in a new paper for two further problems, this time involving self-selection biases.

Just over 500 Michigan State University undergrads (75 per cent were female) had the option, at a time of their choosing during the Spring 2010 semester, to volunteer either for an on-line personality study or a face-to-face version. The data collection was always arranged for Wednesdays at 12.30pm to control for time-of-day/week effects. Also, the same personality survey was administered by computer in the same way in both experiment types; it's just that in the face-to-face version it was made clear that the students had to attend the research lab, and an experimenter would be present.

Just 30 per cent of the sample opted for the face-to-face version. Predictably enough, these folk tended to score more highly on extraversion. The effect size was small (d = -.26) but statistically significant. As regards more specific personality traits, the students who chose the face-to-face version were also more altruistic and less cautious.

What about choice of semester week? As you might expect, it was the more conscientious students who opted for dates earlier in the semester (r = -.20). What's more, men were far more likely to volunteer later in the semester, even after controlling for average personality differences between the sexes. For example, 18 per cent of week-one participants were male, compared with 52 per cent in the final, 13th week.

In other words, the kind of people who volunteer for research will likely vary according to the time of semester and the mode of data collection. Imagine you used false negative feedback on a cognitive task to explore effects on confidence and performance. Participants tested at the start of semester, who are typically more conscientious and motivated, are likely to be affected in a different way than participants who volunteer later in the semester.

This isn't the first time that self-selection biases have been reported in psychology. A 2007 study, for example, suggested that people who volunteer for a 'prison study' are likely to score higher than average on aggressiveness and social dominance, thus challenging the generalisability of Zimbardo's seminal work. However, despite the occasional study highlighting these effects, there seems to be little enthusiasm in the social psychological community to do much about it.

So what to do? The specific issues raised in the current study could be addressed by sampling throughout a semester and replicating effects using different data collection methods. 'Many papers based on college students make reference to the real world implications of their findings for phenomena like aggression, basic cognitive processes, prejudice, and mental health,' the researchers said. 'Nonetheless, the use of convenience samples place limitations on the kinds of inferences drawn from research. In the end, we strongly endorse the idea that psychological science will be improved as researchers pay increased attention to the attributes of the participants in their studies.'
_________________________________

Witt, E., Donnellan, M., and Orlando, M. (2011). Timing and selection effects within a psychology subject pool: Personality and sex matter. Personality and Individual Differences, 50 (3), 355-359 DOI: 10.1016/j.paid.2010.10.019

Previously on the Digest: Just how non-clinical are so-called non-clinical community samples?
Just how representative are the people who volunteer for psychology experiments?

Friday, October 22, 2010

Asch's conformity study without the confederates

With the help of five to eight 'confederates' (research assistants posing as naive participants), Solomon Asch in the 1950s found that when it came to making public judgments about the relative lengths of lines, some people were willing to agree with a majority view that was clearly wrong.

Asch's finding was hugely influential, but a key criticism has been his use of confederates who pretended to believe unanimously that a line was a different length than it really was. They might well have behaved in a stilted, unnatural manner. And attempts to replicate the study could be confounded by the fact that some confederates will be more convincing than others. To solve these problems, Kazuo Mori and Miho Arai adapted the MORI technique (Manipulation of Overlapping Rivalrous Images by polarizing filters; pdf), used previously in eye-witness research. By donning filter glasses similar to those used for watching 3-D movies, participants can view the same display and yet see different things.

Mori and Arai replicated Asch's line comparison task with 104 participants tested in groups of four at a time (on successive trials, participants said aloud which of three comparison lines matched a single target line). In each group, three participants wore identical glasses, with one participant wearing a different set, so that the minority participant saw a different comparison line as matching the target. As in Asch's studies, the participants stated their answers publicly, with the minority participant always going third.

Whereas Asch used male participants only, the new study involved both men and women. For women only, the new findings closely matched the seminal research, with the minority participant being swayed by the majority on an average of 4.41 times out of 12 key trials (compared with 3.44 times in the original). However, the male participants in the new study were not swayed by the majority view.

There are many possible reasons why men in the new study were not swayed by the majority as they were in Asch's studies, including cultural differences (the current study was conducted in Japan) and generational changes. Mori and Arai highlighted another reason - the fact that the minority and majority participants in their study knew each other, whereas participants in Asch's study did not know the confederates. The researchers argue that this is a strength of their new approach: 'Conforming behaviour among acquaintances is more important as a psychological research topic than conforming among strangers,' they said. 'Conformity generally takes place among acquainted persons, such as family members, friends or colleagues, and in daily life we seldom experience a situation like the Asch experiment in which we make decisions among total strangers.'

Looking ahead, Mori and Arai believe their approach will provide a powerful means of re-examining Asch's classic work, including in situations - for example, with young children - in which the use of confederates would not be practical.
_________________________________

Mori, K., and Arai, M. (2010). No need to fake it: Reproduction of the Asch experiment without confederates. International Journal of Psychology, 45 (5), 390-397 DOI: 10.1080/00207591003774485

Tuesday, September 14, 2010

What are participants really up to when they complete an online questionnaire?

Internet surveys are an increasingly popular method for collecting data in psychology, for obvious reasons, but they have some serious shortcomings. How do you know if a participant read the instructions properly? What if they clicked through randomly, completed it drunk or maybe their cat walked across the keyboard? Now a possible solution has arrived in the form of a tool, called the UserActionTracer (UAT), developed by Stefan Stieger and Ulf-Dietrich Reips.

The UAT is a piece of code that tells the participant's web browser to store information, including timings, on all mouse clicks (single and double), choices in drop-down menus, radio buttons, all inserted text, key presses and the position of the mouse pointer. Stieger and Reips tested this out with a survey of 1046 participants on the subject of instant messaging. The new tool revealed that 31 participants changed their reported age; 5.9 per cent made suspicious changes to opinions they'd given; 46 per cent clicked through at least some parts of the questionnaire at a suspiciously fast rate (mainly for so-called 'semantic differential' items in which the participant must choose a position between two contrasting adjectives); 3.6 per cent of participants left the questionnaire inactive for long periods; 6.3 per cent displayed excessive clicking; and 11 per cent showed excessive mouse movements (it's that cat again).
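
Once such paradata are collected, flagging suspect respondents is largely a matter of thresholds. Here is a minimal sketch; the log format and cut-offs are invented, not those used by Stieger and Reips.

```python
# Invented paradata: per-participant response times (in seconds) for each item.
paradata = {
    "p001": [4.2, 3.8, 5.1, 4.6, 3.9],
    "p002": [0.4, 0.3, 0.5, 0.4, 0.3],   # suspiciously fast click-through
    "p003": [3.1, 2.7, 95.0, 3.3, 2.9],  # long inactivity on one item
}

TOO_FAST = 1.0    # arbitrary: answering an item in under a second is suspect
MAX_IDLE = 60.0   # arbitrary: more than a minute on one item counts as inactivity

for pid, times in paradata.items():
    fast_share = sum(t < TOO_FAST for t in times) / len(times)
    went_idle = any(t > MAX_IDLE for t in times)
    if fast_share > 0.5 or went_idle:
        print(f"{pid}: flagged (fast answers: {fast_share:.0%}, idle: {went_idle})")
```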

As a way of checking the usefulness of this extra behavioural data, the researchers concentrated on the fraction of participants for whom they had access to a secondary source of information that could be used to verify the questionnaire answers. This showed that participants who'd displayed more suspicious behaviour while filling out the questionnaire also tended to provide answers that didn't match up with the other information source.

'Our study shows that the UAT was successful in collecting highly detailed information about individual answering processes in online questionnaires,' Stieger and Reips said. Another application of the tool is in pre-testing of online questionnaires. Researchers could use the tool to test which items tend to prompt corrections or inappropriate click-throughs before rolling out a questionnaire to a larger sample.
_________________________________

Stieger, S., & Reips, U. (2010). What are participants doing while filling in an online questionnaire: A paradata collection tool and an empirical study. Computers in Human Behavior, 26 (6), 1488-1495 DOI: 10.1016/j.chb.2010.05.013

Wednesday, June 2, 2010

The homeless man and his audio cave

We're defined in part by where we are, the places we go and what we do there. We adorn our homes with paraphernalia caught in the net of life - the photos, the books and pictures. But what happens when you're homeless? How do you define your space and identity when your home is a public place? To find out, Darrin Hodgetts and colleagues have conducted an unusual 'ethnographic' case study with 'Brett', a 44-year-old homeless man in Auckland.

The researchers gave Brett a camera, asked him to take photos representative of his life and then they conducted two in-depth interviews with him, using the photos as spring-boards for discussion.

The clearest finding to emerge was the way that Brett used a portable radio to insulate himself from the outside world - what the researchers called an 'audio cave'. 'I've got a sound bubble around me,' Brett said, 'and I can wander through the streets without paying attention to what's going on around me.' At the same time, by consistently listening to his favourite station George FM, Brett was able to develop a sense of belonging with the station's other listeners. This provided Brett with a 'fleeting sense of companionship and "we-ness",' the researchers said.

Brett is a self-confessed loner who avoids contact with other people where possible and who tries to conceal his homeless status. He told the researchers about the places he went that enabled him to do this, including a former gun emplacement with stunning views of the sea; Judges Bay, where there are free showers and gas barbecues; and in the city centre, the church, bookshops and libraries. These places allow Brett to experience 'life as a "normal" person who has interest in books and reading, or simply escaping the city to sit and reflect,' the researchers said. By contrast, returning to photograph the public toilets on Pitt Street was an ordeal for Brett, reminding him of his time as a drug addict.

Brett referred to how other homeless people spend a lot of time sitting round talking and how it [homelessness] psychologically unhinges them. By contrast, the researchers said Brett had never 'lost himself' to the streets. '...[H]is memories, imagination, and daily practices, including his use of space, provide anchorage to an adaptive sense of self and belonging.'
_________________________________

Hodgetts, D., Stolte, O., Chamberlain, K., Radley, A., Groot, S., & Nikora, L.W. (2010). The mobile hermit and the city: Considering links between places, objects, and identities in social psychological research on homelessness. British Journal of Social Psychology. PMID: 19531282

-The image, courtesy of Darrin Hodgetts, shows Brett's sun-glasses, portable radio and book, which help him create a personal space in public.
-For more on the psychology of homelessness, see this recent 'Helping the Homeless' feature article in The Psychologist magazine.

Friday, April 23, 2010

Face-to-face in a brain scanner

Many neuro-imaging studies claim to have investigated what happens in the brain when people interact socially. To overcome the awkward fact that participants have to lie entombed in the bore of a large magnet, these studies have used various means to simulate a social interaction. This includes: having participants watch videos of social interactions; interact with an animated character; or play a game with a human opponent (usually computer controlled) supposedly located in another room. Such methods score marks for improvisation but arguably none of them fully captures the dynamic cut and thrust of a real face-to-face social interaction between two people. That's why Elizabeth Redcay and her colleagues have devised the first ever experimental set-up that allows for live face-to-face (via video link) interaction whilst participants are prostrate inside a brain-imaging magnet.

Participants in this study watched a live video feed of the experimenter. The experimenter in turn had a display showing them a live feed of where the participant was looking. Experimenter and participant then engaged in a series of 'games' that required social interaction. For example, in one, the experimenter picked up various toys and the participant had to look in the direction of the appropriately coloured bucket to which the toy belonged. Compared with watching a recording of this same interaction, the live interaction itself triggered increased activation in a swathe of social-cognitive, attention-related and reward processing brain regions.

The second experiment involved the participant identifying which screen quadrant a mouse was hidden in. In the live 'joint attention' condition, the experimenter's gaze direction cued the mouse's location and only when both experimenter and participant looked at the correct quadrant did the mouse appear. Compared with a solo condition in which a house symbol cued the mouse location, the interactive joint attention condition triggered increased activation in the right superior temporal sulcus and right temporal parietal junction. The former brain region has previously been associated with processing socially relevant stimuli such as eye gaze and reaching, whereas the latter temporal-parietal region is associated with thinking about other people's thoughts.

Past research using simulations of social interaction has identified the dorso-medial prefrontal cortex as a key area involved in social engagement. The quietness of this region in the current study suggests it may have been the competitive or social judgement elements of previous paradigms, rather than social interaction per se, that led to its activation.

'Social interaction in the presence of a live person (compared to a visually identical recording) resulted in activation of multiple neural systems which may be critical to real-world social interactions but are missed in more constrained, offline experiments,' the researchers said.

Redcay's group said their new set-up would be ideal for studying the social difficulties associated with autistic spectrum disorders (ASD). Attempts to identify the neural bases of these difficulties have previously met with mixed success. 'A neuroimaging task that includes the complexity of dynamic, multi-modal social interactions may provide a more sensitive measure of the neural basis of social and communicative impairments in ASD,' the researchers said.
_________________________________

Redcay E, Dodell-Feder D, Pearrow MJ, Mavros PL, Kleiner M, Gabrieli JD, & Saxe R (2010). Live face-to-face interaction during fMRI: a new tool for social cognitive neuroscience. NeuroImage, 50 (4), 1639-47 PMID: 20096792

Image courtesy of Elizabeth Redcay.

Friday, January 15, 2010

Psychology researchers aren't paying enough attention to debriefing their participants

Deception was a fundamental part of some of the most famous experiments in psychology - just think of Milgram's obedience studies, in which participants thought they were administering an electric shock, or Asch's conformity research, during which participants were tricked into believing everyone else in the room thought a line was a different length than it was. Although ethical standards have been tightened, deception is still used widely in psychology. It's not uncommon for even the most sedate studies to involve giving participants false test feedback or misleading them about the true aims of the research. A vital element of psychological science, therefore, is to debrief participants after experimenting on them - telling them the truth about what happened and why, and listening to their feedback.

Even studies that don't deploy trickery have the potential to leave a lasting impression - consider all the tests of new interventions aimed at outcomes from improving memory to ameliorating depression. We know from past research that simply asking someone about a behaviour, such as drug taking, increases their likelihood of indulging in that behaviour. Of course, telling participants too much up front can be detrimental to the results, and fully informed consent is therefore far rarer than most researchers would care to admit. That's why it's so important to debrief them fully afterwards. And yet, having said all this, an alarming new survey of researchers by Donald Sharpe and Cathy Faye suggests that debriefing is a neglected practice in contemporary psychology. Ironically for a science that's supposed to be about people and behaviour, there's also scant research on what kinds of debriefing are even effective - for example is it enough to tell participants they were given false feedback or should they have the chance to complete a real test?

Sharpe and Faye surveyed over two hundred researchers who'd published during a twelve-month period from 2006 to 2007, either in the American Psychological Association's flagship social psychology journal, the Journal of Personality and Social Psychology, or in the Journal of Traumatic Stress. Just one third of articles in the social psychology journal had mentioned debriefing and fewer than one in ten of the trauma journal articles had done so. The mentions that were found were usually cursory, such as 'Participants in this and all following experiments were debriefed prior to dismissal.' If the purpose of a particular study was obvious, the survey suggested most researchers considered debriefing to be unnecessary, with nearly all their focus placed instead on informed consent prior to the study.

Set against this worrying picture, Sharpe and Faye make a strong case for just how vital debriefing ought to be to good quality research. Taking their lead from a provocative article published on this topic thirty years ago by Frederick Tesch, the pair say that effective debriefing is vital not only for the ethical reasons outlined above, but for educational and methodological functions too.

Explaining to participants why and how a study was performed ought to be given far higher priority, they argue, especially when one considers how many studies are performed on psychology students. Even with non-psychology students, the exercise of carefully explaining the rationale, methodology, and perhaps even results, of a study, could help to promote the scientific cause. 'Participants would learn about doing research, the joys and frustrations, and the excitement of discovery,' Sharpe and Faye said.

Regarding the methodological benefits of debriefing, the authors said that the process ought to be two-way, and that information garnered from participants can illuminate study findings and help improve future procedures. 'Researchers would learn about how participants view the experimental task, what makes sense and what does not, and what the participants think it was all about,' Sharpe and Faye said.

Their paper ends with seven recommendations for how to improve the situation, including greater discussion of debriefing in the research literature; more thorough reporting of debriefing practices in journals' methods sections; use of online overflow pages for discussing debriefing; and formalising the debriefing procedure. 'Progress will be made when researchers recognise the importance of debriefing or when some unfortunate circumstance forces such recognition,' the authors said.
_________________________________

Sharpe, D., & Faye, C. (2009). A Second Look at Debriefing Practices: Madness in Our Method? Ethics & Behavior, 19 (5), 432-447 DOI: 10.1080/10508420903035455