TY - JOUR
T1 - Turning Off Your Better Judgment
T2 - Algorithmic Conformity in Artificial Intelligence-Human Collaboration
AU - Liel, Yotam
AU - Zalmanson, Lior
N1 - Publisher Copyright:
© 2025 Taylor & Francis Group, LLC.
PY - 2025
Y1 - 2025
N2 - As artificial intelligence (AI) becomes increasingly integral to society, humans’ tendency to forgo their own judgment to adopt algorithmic advice is eliciting substantial concern. Prior research suggests that such overreliance is driven by informational influences (confidence in AI’s superior judgment) or by desire to reduce attentional load. We propose a new mechanism: normative pressure, stemming from the legitimacy afforded to algorithms within social or work-related structures. Using a setup inspired by social conformity research, we conducted four studies involving 1,445 crowd-workers performing straightforward image-classification tasks. Substantial percentages of participants followed erroneous AI recommendations on these tasks, despite being able to perform them perfectly without support. This overreliance was partially mediated by normative pressure, measured as discomfort at disagreeing with AI. Conformity decreased when participants perceived their decisions’ real-life impact as high (versus low). Our findings highlight the risks inherent to human-AI collaboration and the difficulty in ensuring that humans-in-the-loop maintain independent judgment.
KW - AI
KW - AI Overreliance
KW - AI Trust
KW - Algorithmic Advice
KW - Algorithmic Authority
KW - Artificial Intelligence
KW - Gen AI
KW - Human-AI Interaction
KW - Human-in-the-loop
KW - Social Conformity
UR - https://www.scopus.com/pages/publications/105022464281
U2 - 10.1080/07421222.2025.2561390
DO - 10.1080/07421222.2025.2561390
M3 - Article
AN - SCOPUS:105022464281
SN - 0742-1222
VL - 42
SP - 1087
EP - 1117
JO - Journal of Management Information Systems
JF - Journal of Management Information Systems
IS - 4
ER -