Research · Philosophy of Mind & AI

Mechanical Outputs or Imaginative Acts

Claremont McKenna College
2025
Philosophy / AI / Cognitive Science

What would it mean for an AI system to exhibit something like imagination, not just pattern completion?

This project began as a philosophical problem: can the rich tradition of thought surrounding human imagination translate into anything tractable about machine behavior? Rather than defaulting to either pure anthropomorphism or blanket skepticism, the work demanded conceptual precision first.

Working under philosopher Amy Kind, I built a full dialectical map of the philosophical literature on imagination, tracing positions, rebuttals, and where the fault lines in the arguments sit. I then examined how major philosophical frameworks from Descartes, Berkeley, Searle, and Chalmers define imagination, and mapped where those definitions align or conflict with what current AI systems actually do. Key distinctions emerged early: imagination is not the same as memory recall, recombination, or statistical prediction. Yet these three are routinely conflated, and the conflation holds only until put under pressure.

The core finding is that imagination, as a genuine mental process, presupposes two things current AI systems lack: semantic understanding and phenomenal consciousness. Creative output, however impressive, is evidence of neither. A pufferfish sculpts geometric sand formations of stunning regularity; a simulation of a few simple rules reproduces them exactly. Appearance and process are not the same thing.
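The pufferfish point can be made concrete with a toy sketch (a hypothetical illustration, not part of the original study): a few lines of deterministic rules yield a regular, radially symmetric pattern, with no understanding anywhere in the loop. The function name and parameters below are invented for the example.

```python
import math

def ridge_pattern(size=21, wavelength=4.0):
    """Toy rule-based pattern: mark each cell as ridge ('#') or
    trough ('.') purely by its distance from the grid center.
    The rule is blind arithmetic, yet the output is strikingly regular."""
    center = (size - 1) / 2
    rows = []
    for y in range(size):
        row = ""
        for x in range(size):
            r = math.hypot(x - center, y - center)
            # Rule: alternate ridge/trough bands with radial distance.
            row += "#" if math.sin(2 * math.pi * r / wavelength) > 0 else "."
        rows.append(row)
    return rows

if __name__ == "__main__":
    for line in ridge_pattern():
        print(line)
```

The output exhibits fourfold symmetry and concentric banding, which is exactly the dialectical trap: regularity in the product tells us nothing about imagination in the process.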

The practical conclusion: if we want AI systems capable of something genuinely imagination-like, larger models trained on more data will not get there. It will require new architectures (embodied, recurrent, self-reflective) that engage with the world rather than model its surface. If, however, all we want is something that mirrors creativity at the surface level, current systems have arguably already delivered it.