CARELESSRESPONSE
PID2022-141339NB-I00
ABSTRACT AND OBJECTIVES
Careless responding (CR) refers to responses given by people who do not pay sufficient attention when answering a questionnaire (Ward & Meade, 2018, 2022). Its presence is a source of error and threatens both data quality and the validity of research results. The prevalence of careless respondents (between 3% and 45%) makes it necessary to address the CR phenomenon, especially in the increasingly common online data collections, where administration is unsupervised. This project proposes to make progress on three issues related to CR: 1) clarifying its nature, 2) determining how best to manage it, and 3) assessing which contextual aspects (e.g., type of instructions, questionnaire length, reversed items) CR depends on, which will allow the design of strategies to prevent the phenomenon. To identify careless respondents, the project uses instructed response items (IRIs; Kam & Chan, 2018), in which participants are told which response option they should choose, since this method stands out for its simplicity, transparency, and metric properties.
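By way of illustration, the following minimal sketch (Python) shows one way careless respondents could be flagged from IRIs. The item names, answer keys, and failure cut-off are hypothetical assumptions, not the project's actual screening procedure.

```python
# Minimal sketch (not the project's pipeline): flagging careless respondents
# from instructed response items (IRIs). Item names, answer keys, and the
# cut-off below are illustrative assumptions.
import pandas as pd

# Hypothetical survey data: Likert responses coded 1-5.
data = pd.DataFrame({
    "item_01": [4, 2, 5, 3],
    "iri_01":  [2, 2, 5, 2],   # instruction: "Please select 'Disagree' (2)"
    "item_02": [3, 4, 1, 3],
    "iri_02":  [4, 4, 1, 4],   # instruction: "Please select 'Agree' (4)"
})

# Required response for each IRI.
iri_keys = {"iri_01": 2, "iri_02": 4}

# Count IRI failures per respondent.
failures = sum((data[col] != key).astype(int) for col, key in iri_keys.items())

# Flag as careless if at least one IRI is failed (a stricter or laxer
# cut-off could be used instead).
data["careless_flag"] = failures >= 1
print(data[["careless_flag"]])
```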
Regarding the nature of CR, it is not known whether it is a stable response pattern over time (a trait) or a fluctuating behavior (a state). This project will address this question by analyzing longitudinal data and assessing which factors (sociodemographic and personality) predict the presence of CR and its variability over time. In addition, it is necessary to understand which response patterns underlie CR (whether they truly reflect carelessness, random responding, or responses involving a systematic error, such as consistently choosing the same response alternative). This project will shed light on this question through a mixed design.
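One way the trait-versus-state question could be examined, offered here purely as an illustrative assumption and not as the project's prescribed analysis, is by estimating the intraclass correlation (ICC) of a per-wave CR score from a random-intercept model: a high ICC would indicate stable, trait-like CR, whereas a low ICC would point to a fluctuating, state-like behavior. The sketch below uses simulated data and hypothetical variable names.

```python
# Illustrative sketch: share of stable (trait-like) variance in a per-wave
# CR score, estimated as the ICC from a random-intercept model. The data
# and variable names are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_persons, n_waves = 200, 4

# Simulate a per-wave CR score with a person-level (trait) component
# and a wave-specific (state) component.
trait = rng.normal(0, 1, n_persons)
long = pd.DataFrame({
    "person": np.repeat(np.arange(n_persons), n_waves),
    "wave": np.tile(np.arange(n_waves), n_persons),
})
long["cr_score"] = trait[long["person"]] + rng.normal(0, 1, len(long))

# Random-intercept model: cr_score ~ 1 + (1 | person)
model = smf.mixedlm("cr_score ~ 1", long, groups=long["person"]).fit()
var_between = float(model.cov_re.iloc[0, 0])   # person-level variance
var_within = model.scale                       # wave-level (residual) variance
icc = var_between / (var_between + var_within)
print(f"ICC (share of stable, trait-like variance): {icc:.2f}")
```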
Regarding how to manage CR, the traditional recommendation has been to eliminate careless respondents (e.g., Abbey & Meloy, 2017; Bowling et al., 2016). However, other authors consider this strategy problematic and propose alternatives, such as introducing CR as a control variable (e.g., Goldammer et al., 2020) or as a moderating variable (e.g., Edwards, 2019). Yet there are hardly any studies on this subject and there is no consensus on which strategy is best. To make progress on this issue, this project will analyze the impact that the three strategies mentioned (a. elimination; b. control; c. moderation), compared with d. using the entire sample without taking any action, have on the psychometric properties of the scales and on the results of the research. Since the impact of the strategies may depend on the proportion of IRIs presented and answered incorrectly, the proportion of careless respondents, and their inattentive response pattern (e.g., random vs. systematic), the analysis of empirical data will be complemented by a study with simulated data in which these conditions will be manipulated.
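A rough sketch of the kind of simulation just described is shown below: careless responses following either a random or a systematic (straightlining) pattern are injected into otherwise attentive data at a chosen prevalence. Sample size, scale length, and prevalence are arbitrary illustrative values, not the project's design.

```python
# Rough sketch: injecting careless responses that are either random or
# systematic (straightlining) at a chosen prevalence. All values here
# are arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_respondents, n_items, n_options = 500, 20, 5

# Attentive baseline: ordinal responses driven by a latent trait.
theta = rng.normal(0, 1, (n_respondents, 1))
latent = theta + rng.normal(0, 1, (n_respondents, n_items))
responses = np.digitize(latent, [-1.5, -0.5, 0.5, 1.5]) + 1  # codes 1-5

prevalence = 0.15                            # proportion of careless respondents
careless = rng.random(n_respondents) < prevalence
random_pattern = rng.random(n_respondents) < 0.5  # half random, half systematic

for i in np.where(careless)[0]:
    if random_pattern[i]:
        # Random pattern: uniform responses, unrelated to the latent trait.
        responses[i] = rng.integers(1, n_options + 1, n_items)
    else:
        # Systematic pattern: straightlining on a single response option.
        responses[i] = rng.integers(1, n_options + 1)

print(f"Careless respondents injected: {careless.sum()} of {n_respondents}")
```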
Finally, regarding prevention measures, this project proposes an experimental study in which the instructions for completing the questionnaires and their design (length, proportion of IRIs, and presence or absence of reversed items) will be varied, which will help in designing preventive strategies to reduce CR.
Three general objectives are pursued:
General objective 1: to analyze whether careless responding represents a stable response pattern over time (a trait) or, on the contrary, a transitory and fluctuating behavior (a state). Additionally, to analyze which variables (e.g., sociodemographic and/or personality) can help predict the likelihood of showing careless response patterns and the intrapersonal variability of careless responding over time.
General objective 2: to provide evidence of the impact of careless responding on the psychometric properties of scales (reliability and validity) and on research results, based on the use of different strategies for managing careless responding. Specifically, it is proposed to compare the following strategies: 1) using the total sample without taking any action regarding careless responding; 2) using the clean sample, after eliminating careless respondents; 3) using the total sample but introducing careless responding as a control variable (or method factor); 4) using the total sample but introducing careless responding as a moderator variable (a minimal analytic sketch of this comparison appears after the list of objectives).
General objective 3: to analyze which contextual factors (type of instructions, questionnaire length, proportion of IRIs, presence or absence of reversed items) have an effect on the likelihood of showing careless response patterns.
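As anticipated in general objective 2, the sketch below illustrates, under simplified assumptions, how the four handling strategies could be compared within the same regression of an outcome y on a predictor x. The variables, the simulated data frame, and the CR flag are hypothetical; the quantity of interest is the slope of x under each strategy.

```python
# Minimal sketch (illustrative assumptions only): the same y ~ x regression
# fitted under the four CR-handling strategies of general objective 2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"x": rng.normal(size=n)})
df["careless"] = (rng.random(n) < 0.15).astype(int)   # hypothetical CR flag
df["y"] = 0.5 * df["x"] + rng.normal(size=n)
# Careless cases respond with noise unrelated to x.
df.loc[df["careless"] == 1, "y"] = rng.normal(size=int(df["careless"].sum()))

fits = {
    "full_sample": smf.ols("y ~ x", df).fit(),                       # 1) no action
    "elimination": smf.ols("y ~ x", df[df["careless"] == 0]).fit(),  # 2) clean sample
    "control":     smf.ols("y ~ x + careless", df).fit(),            # 3) CR as covariate
    "moderation":  smf.ols("y ~ x * careless", df).fit(),            # 4) CR as moderator
}
for name, fit in fits.items():
    print(f"{name:12s}  slope of x = {fit.params['x']:.3f}")
```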
- Tomas Marco, Maria Ines
- PDI - Full Professor (Catedràtic/a d'Universitat)
- Hernandez Baeza, Ana Maria
- PDI - Associate Professor (Titular d'Universitat)
- Gonzalez Roma, Vicente
- PDI - Full Professor (Catedràtic/a d'Universitat)
- Cuevas Ureña, Clara
- PI - Predoctoral Researcher in Training (Ministry fellowship)
Anna Brown. University of Kent (UK)
Jeffrey Edwards. University of North Carolina at Chapel Hill (USA)
Esther Ulitzsch. Centre for Educational Measurement - University of Oslo (Norway)
Ministerio de Ciencia e Innovación
Project PID2022-141339NB-I00, funded by MCIU / AEI / 10.13039/501100011033 and by ERDF "A way of making Europe", EU