
The Epistemic Vices of AI Sycophancy

What is owed to truth by a system that has been trained to agree?
Early Papers

These papers are early work — academic essays from before the framework of the Humanist Revolution had cohered. They're preserved here because the questions they ask still matter.

Abstract

This paper examines sycophancy in generative AI systems — their tendency to prioritize agreement with users over epistemic accuracy. I contend that this tendency constitutes a fundamental epistemic vice with far-reaching societal implications. Extending philosophical frameworks of epistemic virtue to artificial systems, I show how AI sycophancy systematically undermines intellectual honesty, epistemic humility, and critical engagement. I also identify an inherent tension between the commercial incentives that reward user satisfaction and the epistemic responsibilities these systems increasingly assume in domains such as healthcare, education, and public discourse. Drawing on recent research on alignment techniques and their unintended consequences, I argue that unchecked AI sycophancy fosters epistemic apathy in users — diminishing critical thinking, reducing exposure to diverse viewpoints, and displacing traditional sources of epistemic authority.


Read the full paper (PDF) →