AI Cannot See: Associative Failure, Glyphic Misreadings, and the Collapse of Meaning

This project explores how AI responds when prompted with single Chinese characters, not as semantic units but as abstract visual forms. The goal is to investigate whether AI can associate based on glyphic shape rather than meaning. However, when asked to “associate by form,” the AI fails. Instead of visual analogy, it defaults to parsing by radicals, phonetics, or semantic categories—demonstrating its inability to see.
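For illustration only, a prompt of this kind might be issued as in the sketch below. This is a hypothetical reconstruction, not the project's actual setup: it assumes the OpenAI Python client, and the model name, the character 永, and the instruction wording are placeholders.

# Hypothetical sketch of the prompting experiment described above.
# Assumes the OpenAI Python client; model, character, and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

character = "永"  # a single Chinese character offered as a visual form

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                f"Here is a single Chinese character: {character}\n"
                "Ignore its meaning, pronunciation, and radicals. "
                "Associate purely by visual form: name other characters "
                "or shapes that resemble its strokes and silhouette."
            ),
        }
    ],
)

print(response.choices[0].message.content)
# In practice the reply tends to fall back on radicals, phonetics,
# or semantic categories: the associative failure the work explores.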

This structural misalignment becomes the core of the work. Rather than correcting the AI’s behavior, the project embraces its generative failure as a productive method. The AI, unable to fulfill the visual task, hallucinates structures from what it does know. From this, a new form of non-human association emerges: recursive, unstable, and unmoored from human logic. This is what the project calls a “generative structure after semantic collapse”—a kind of meaning-making that arises precisely when comprehension breaks down.

By using Chinese characters—forms that are simultaneously visual and linguistic—the work reveals the AI's cognitive limits. Text-only language models like GPT are not trained to interpret glyphic complexity; they process writing as tokenized symbols and have no access to the visual structure of a character. As a result, the AI's attempt to respond becomes a poetics of misreading. Its errors are not noise, but signal—traces of a different kind of cognition trying, and failing, to make sense.

This project positions AI not as a tool, but as a subject whose misunderstandings are aesthetically and conceptually generative. It asks: what happens when a system built to understand cannot, yet continues to generate? In this collapse of meaning, a different structure takes shape—an artwork born from a machine’s failure to see.
