A 2014 study exploring men's perspectives on rape produced an alarming statistic, though it wasn't the point of the study. Researchers at the University of North Dakota wanted to know if the phrasing of survey questions affects men's self-reporting of rape intentions, and what that might say about the psychology of sexual assault.
So they asked men if, in the absence of consequences, they would ever "rape a woman"; 13.6 percent of 73 subjects responded that they would. When "rape a woman" was changed to "force a woman to sexual intercourse," the yeses rose to 31.7 percent. A definite effect. Researchers then cross-referenced ...
Nearly one third of men would commit sexual assault if they knew they would get away with it?
It's a jarring piece of data. It also may be inaccurate. The issue is the makeup of the subjects in the study: All were college students who received extra credit for their participation.
The Problem With Subject Pools
The 2014 research is not unique in its subject selection. In the United States, most subjects in psychology research are college students, often those taking introductory psychology classes.
Many universities maintain subject pools of these students, who either elect to participate for extra credit or have to participate as a course requirement.
It's not strictly a U.S. phenomenon, according to Dr. Laura Walker, a professor in the School of Family Life at Brigham Young University who has studied the effects of subject pools on research findings. She says in an email that European researchers use college students, too, but it's less common than in the United States.
Ideally, researchers using human subjects randomly select those subjects from the population being studied — the target population — with the goal of establishing a representative sample of that population. Random selection reduces the chances of overrecruiting subjects who share particular traits, allowing researchers to generalize their sample-based findings to the target population.
When subjects are not sufficiently representative, sample bias occurs. In sample bias, some segments of the studied population are overrepresented, and others are underrepresented, leading to results that don't necessarily apply to the studied population as a whole. These results can't be accurately generalized.
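The effect is easy to see in a small simulation. Here's a minimal sketch in Python, with all numbers invented purely for illustration (not drawn from any study): a population in which a minority subgroup holds some attitude at a higher rate, sampled once at random and once through a "volunteer" process that overrecruits that subgroup.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people in two groups that hold
# some attitude at different rates. All rates here are made up.
population = (
    [("high_achiever", random.random() < 0.40) for _ in range(20_000)] +
    [("other", random.random() < 0.10) for _ in range(80_000)]
)

true_rate = sum(t for _, t in population) / len(population)

# Representative sample: every member is equally likely to be chosen.
representative = random.sample(population, 500)
rep_rate = sum(t for _, t in representative) / len(representative)

# Biased sample: high achievers are far more likely to volunteer,
# so the volunteer pool overrepresents them.
volunteers = [p for p in population
              if random.random() < (0.30 if p[0] == "high_achiever" else 0.02)]
biased = random.sample(volunteers, 500)
biased_rate = sum(t for _, t in biased) / len(biased)

print(f"true rate in population: {true_rate:.3f}")
print(f"random sample estimate:  {rep_rate:.3f}")
print(f"volunteer sample estimate: {biased_rate:.3f}")
```

The random sample lands close to the population's true rate, while the volunteer sample overstates it substantially, because the subgroup that volunteers most also holds the attitude most. Generalizing from the volunteer sample would misrepresent the population.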
Since social-sciences research, and particularly psychology research, often attempts to draw conclusions about human nature, the studied population is often "humanity." In this context, university subject pools become extremely problematic sources of data.
According to the Pell Institute, U.S. college students tend to be from relatively wealthy families and between the ages of 18 and 24. They tend to be white, the National Center for Education Statistics reports. They also tend to be WEIRD: Members of this subject pool overwhelmingly hail from Western, Educated, Industrialized, Rich and Democratic societies. As University of British Columbia psychology professor Dr. Joseph Henrich and colleagues note, this last attribute may make them among the least representative samples of humanity imaginable.
And that's college students in general.
Shifting Toward Extra Credit
Because some in the field have come to view the course-requirement approach to populating subject pools as coercive, at odds with what should be the voluntary nature of research participation, the tide has started to turn toward the extra-credit model. This provides a large, readily available, cost-effective subset of the human population prepared to participate in any given study, but voluntarily.
If college students aren't representative of humanity as a whole, college students who volunteer for extra credit are even less so. They're not even representative of college students as a whole.
According to Walker, these volunteers are likely to be female and have higher grades. They also tend to be more self-motivated than their non-volunteering counterparts, which may have broad implications. Dr. Luc Pelletier, psychology professor at the University of Ottawa, notes that differences in "motivational orientation" have been linked to differences in personality traits like resilience, intensity, curiosity and overall well-being.
So when subject-recruitment techniques bring motivation into play, sample bias is always a concern — unless the target population shares the motivational orientation attracted by the recruitment process.
In a study published in Teaching of Psychology in 2005, Walker and colleagues tested whether the extra-credit incentive leads to representative samples of the overall undergraduate population. Introductory-psychology students at a Midwestern university were offered extra credit in exchange for research participation. For each hour spent participating in the research, a student received an additional two points added to his or her grade at the end of the semester, up to 10 points. (Grade inflation is an additional concern in the extra-credit model.)
Of the 193 students in the class, 72 volunteered to participate in exchange for extra credit. Of those volunteers, 70 percent were earning good or excellent grades, 28 percent were earning average grades and 3 percent were earning below-average grades.
Between volunteers and non-volunteers, the researchers noted differences "in all measures of class performance and academic motivation." The students who were most motivated to achieve in class were also most motivated to earn extra credit. As a result, high achievers were overrepresented in the sample of the target population.
While many researchers believe subject pools can increase the chances of sample bias, not everyone agrees that extra credit is the problem. Pelletier conducted a study on the effects of offering rewards for participation in psychology research and found that the absence of external incentives produced greater sample bias.
"When no rewards or incentives are offered, participants that are more motivated for a specific task or the study itself may be more inclined to participate in a study; participants that are less motivated for the study or research in general may be less likely to participate," says Pelletier in an email.
"Therefore, when no incentives are offered samples may be less representative of the global population," he states.
Pelletier's reasoning is this: People who are self-motivated to participate will do so regardless, but "offering an extra-credit provides an external source of motivation for the people that may not have been interested to participate otherwise," possibly leading to a more motivationally balanced sample.
Whether the sampling procedure led to skewed data in the University of North Dakota study, whose target population was "men," is unclear. Would a group of men of widely varying ages, backgrounds, socioeconomic statuses, cultural upbringings and education levels have answered the rape-intentions survey questions any differently from a group of particularly self-motivated, high-achieving college students? One can hope. Another study might offer some clarity — except that study would likely be based on a similar sample, at least if it's conducted in the United States. The subject-pool model is deeply ingrained.
Walker sees some possible improvements, though, within the current system.
"Researchers could use quota sampling within their study so that only so many females or European Americans could sign up. Professors or studies could also specifically recruit males or those from less represented groups for their study," she suggests.
"Subject pools certainly aren't the ideal sample," Walker states, "but if a few of these steps were taken, the situation would be slightly improved."