These papers are early work — academic essays from before the framework of the Humanist Revolution had cohered. They're preserved here because the questions they ask still matter.
Abstract
The notion of "understanding" faces a profound recalibration challenge with the emergence of Large Language Models (LLMs). Often dismissed in popular discourse as mere pattern-matching machines ("stochastic parrots" or "semantic zombies"), these systems demand deeper philosophical examination as they demonstrate increasingly sophisticated linguistic capabilities. This paper challenges traditional binary conceptions of understanding by examining LLMs through four influential philosophical frameworks: Turing's behaviorism, Searle's biological naturalism, Grice's theory of communicative intention, and Wittgenstein's language games. Drawing on recent mechanistic interpretability research, I argue that LLMs exhibit neither mere statistical mimicry nor human-equivalent comprehension, but rather mechanically distinct yet analogous forms of semantic understanding and intentionality when assessed along multiple dimensions. The evidence suggests that our philosophical frameworks require fundamental recalibration to accommodate non-anthropocentric cognitive architectures.