Abstract

The task of rating (or "coding") text data as a component of qualitative content analysis remains a time-consuming process. Advances in crowdsourcing platforms have presented novel opportunities to outsource this work away from the desks of academics, graduate students, and other "traditional" subject matter experts (SMEs). Recent studies suggest that the crowd could produce reliable ratings of qualitative constructs that are more exclusive to the social and organizational sciences, which could dramatically alter the way content analyses are conducted, largely by reducing the time they require. This is particularly true for what might be called "fuzzy" constructs: constructs without logical boundaries that are traditionally considered less accessible to non-specialist raters. This study makes use of extant fuzzy construct data from an archival study (n = 177), ratings of SMEs from the same study (n = 6), and newly collected crowd ratings (n = 96) of the same data rated by the SMEs. Comparisons of the crowd's ratings relative to the graduate-level researchers' (i.e., "traditional") ratings revealed a high level of similarity. Crowd-based groups of as few as six randomly selected raters were as reliable as groups of three traditional experts, and neither significantly more nor less accurate. When specific selection criteria were used instead of random selection, as few as three crowd-based raters achieved comparable reliability and accuracy. These and other data on hand support a set of recommendations for qualitative researchers, as well as for further empirical investigation into the use of crowd-based raters for traditional research and content-rating tasks.
