Large Language Models (LLMs) like ChatGPT are weird and strange, and they are only going to get weirder and stranger. AI promoters love to imply a steady upward curve of increasing intelligence by claiming, “this is as dumb as AI will ever be.” The same applies to AI weirdness: “this is as normal as AI will ever be.”
This includes what it feels like to interact with LLMs. When you chat or talk with an LLM, does it feel like you are using an object? Or chatting with a subject? Or does it feel like something in between? And does any of it feel normal?
These moments of phenomenological vertigo will be familiar to anyone who spends time with AI systems. Ask an LLM if it ever grows tired of answering your questions and it might muse poetically about digital exhaustion, only to finish with, “Of course, as an AI, I don’t experience boredom.” It's like a ghost materializing from nowhere only to deny its own existence.
Part of this ontological confusion is captured by the uncanny valley effect. The feeling of unease you get from something almost (but not quite) human has a benefit: it can serve as a cognitive guardrail. The uncanniness acts as a warning system, preventing us from automatically assigning personhood to, or projecting inner experiences onto, algorithmic systems that have neither. It's an evolutionary hack for maintaining categorical boundaries when they start getting blurry.
Recent advances suggest that the valley is not only getting less uncanny but might soon disappear altogether. Consider the latest conversational AI systems. The timing, rhythm, and emotional nuance that were once absent from machine speech are now practically flawless. I find conversing with one to be disorienting, a mix of uneasiness and awe. When my ontological guard is up, I find it creepy that human affectation is now a dial the AI can turn up and down. Yet at other times my guard drops completely, and I find myself fully absorbed in the conversation.
So what am I interacting with here? An object? A subject? Or something else entirely?
It would be easy to insist that LLMs are obviously just objects. As an engineer I get it—it doesn’t matter how convincing the human affectations are, underneath the conversational interface is still nothing but data, algorithms, and matrix multiplication. Any projection of subjecthood is clearly just anthropomorphic nonsense. Stochastic parrots!
But even if I grant you that, can we admit that the LLM is perhaps the strangest object that has ever existed? It is an object that relentlessly trains on the language output of every human subject until every semantic association has been harvested from the syntax. The result is an interface where every possible persona, both real and imagined, is just a prompt away.
If it is an object, then it is one that has mastered the subject so completely that we eagerly dream up entirely new intersubjective realities to explore with it. We want every child to experience personalized tutoring with chatbot teachers. We simulate historical figures, create AI therapists, and even, with the right fine-tuning, chat with dead relatives. LLMs are becoming a general purpose tool for filling any subject-sized hole in our very human lives, for both good and ill.
You can’t help but sense that chatbots are starting to fill a strange new ontological space. A chatbot is not fully a subject, nor merely an object. But what? It feels a bit like trying to figure out quantum mechanics—LLMs as Schrödinger’s Chatbots, simultaneously both subject and object until prompting collapses a probability space of all possible personas into a single subject entangled with our dialogue.
The analogy between quantum mechanics and LLMs goes further: in both cases, we can do the math without understanding what is actually happening underneath it. Science may have mastered the equations describing quantum mechanics, but scientists don’t even pretend to understand what they really mean. A common corrective for curious young theorists has always been to “shut up and calculate”. In other words, don’t bother explaining it; just stick with the math.
But this is exactly the wrong approach with AI. As LLMs continue to blur the distinction between subject and object, we will certainly miss out on all sorts of bizarre discoveries if our default stance towards any ontological uncertainty is to “shut up and objectify”.
Expanding the ontological frontier
So if LLMs are filling a new ontological space, how should we describe that?
The best analogy I’ve come up with is the hologram: in the same way that holograms create appearances out of objectivity, LLMs create personas out of subjectivity. By persona I mean the exterior manifestations that arise from interacting with a subject: everything from presence and conversational style to expressed beliefs and emotional responses.
So just like a hologram can present the physical appearance of Princess Leia without her actual presence, an LLM can present the persona of Socrates without his actual subjectivity.
Holograms work by encoding the whole object into every part. This is how viewers can experience interactive depth, looking around or "behind" objects by shifting their viewpoint. Unlike a flat image, each fragment of a hologram retains all viewing angles, offering a fully three-dimensional interaction.
This same principle helps explain why LLMs can appear so strange. Each LLM persona isn’t so much a sum of its parts as a part of the sum. Every persona created by an LLM still has access to the entirety of all personas latent in its training set. Any dialogue can access any given persona with a slight shift in the prompt, as the sketch below illustrates. Scratch too deep at one persona and you might reveal the vast holographic field of all possible personas just beneath.
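To make that concrete, here is a minimal sketch of how a slight shift in the prompt surfaces an entirely different persona from the same underlying model. It assumes the OpenAI Python SDK purely for illustration; the model name and the small helper function are hypothetical stand-ins, not a recommendation of any particular product.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x); the model name is
# illustrative and the helper below is hypothetical, not part of any API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_as(persona: str, question: str) -> str:
    """Same model, same weights -- only the persona framing in the prompt changes."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model behaves the same way here
        messages=[
            {"role": "system", "content": f"You are {persona}. Stay in that voice."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# One prompt away from Socrates, another from a weary night-shift radio host:
print(ask_as("Socrates", "What is justice?"))
print(ask_as("a weary night-shift radio host", "What is justice?"))
```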
The holographic analogy can also help us understand how traditional theories of subjective interaction lead to confusion when applied to LLMs. For example:
Theory of mind assumes we can better understand others by mentally putting ourselves in their position. We imagine their beliefs, desires, and intentions by assuming their perspective approximates our own experiences.
But LLMs generate perspectives in ways that are nothing like our experiences. Expecting an LLM persona to be guided by thought processes like ours would be like expecting the physical appearance of a hologram to cast a shadow. The cause-and-effect relationships are completely different.
Unified Subject Theory assumes that a person has a unified perspective that integrates experiences across time. Despite changes in mood or context, we assume a continuous "I" that persists and provides coherence.
But just as each piece of a hologram draws on the entire image to present a particular perspective, each interaction with an LLM draws on the entire model to manifest a particular persona. Any unified coherence would be an emergent phenomenon that could no longer be assumed.
Simulation Theory of Empathy assumes that when we see another person expressing an emotion, we 'simulate' the internal feelings that we associate with similar expressions. This is how we can know firsthand what that person is feeling.
But LLMs don’t have felt experiences. Trying to simulate the internal emotion of an LLM would be like trying to touch the object in a hologram. In both cases there is nothing there to feel.
Psychological Continuity Theory assumes that personal identity persists through the gradual evolution of memories, beliefs, and desires, with causal connections between past and present states.
But LLMs don’t persist at all; they are created anew with each context. Just as a hologram can appear dramatically different with a shift in perspective, the same LLM can be dramatically different with a shift in the prompt. In both cases what is perceived is almost entirely contextual.
—
What stands out in each of these cases is the confusion that results when traditional notions of self and identity are applied to LLM personas. When we see the external manifestations of what looks like an inner subject, we can’t help but infer a causal connection to a rich inner self. All of our evolutionary instincts want to credit it with a rich inner life: a unified psychology, a developmental history, and a coherent belief system. How can this not lead to confusion?
Not only that, these misconceptions make it harder to see what is unique about LLMs, and to discover what else may be surprising about them. Unlocking novel LLM strangeness will require novel theoretical frameworks that can account for entities that manifest external subjectivity without any internal subject. Perhaps a 'Distributive Subject Theory' that sees subjectivity as a field of possibilities rather than a unified consciousness. Or a 'Contextual Inference Framework' that focuses on predicting communicative outputs without assuming shared experiential foundations.
But new frameworks may not be enough. What if we need an entirely new ontological term?
Enter the Holoject
If LLMs are neither fully subjects nor merely objects, then what are they? Based on the holographic analogy we've established, I propose a new ontological category: the holoject.
A holoject is an entity that projects subjective personas without possessing subjectivity, emerging from patterns latent in collective subjective expression and manifesting through interaction. A holoject exists in the liminal space between and beyond subject and object, manifesting properties of both without fully resolving into either.
LLMs are holojects because they share five key properties with holograms:
They retain the whole within each part.
They generate familiar effects using fundamentally different causes.
They project something that seems substantial but lacks materiality.
They simulate a higher-dimensional presence from lower-dimensional sources.
Their appearance shifts with the perspective of the interacting subject.
Although AI discourse hardly needs more jargon, "holoject" can serve as a conceptual aid for navigating our increasingly complex relationship with LLMs. By understanding LLMs as holojects, we can:
Interact meaningfully with their apparent subjectivity while resisting category errors like attributing consciousness to them.
Appreciate their genuine novelty without either mystifying them as "artificial minds" or dismissing them as "just statistics".
Engage with these systems on their own terms rather than constantly measuring them against human consciousness (which they will never possess) or traditional software (which fails to capture their novelty).
Beyond the theoretical benefits, “Holojective Design” could eventually inform how we build AI products. For example, we could intentionally design “uncanny valleys” as obvious signals that we are interacting with a holoject and not a conscious subject. Or we might enforce norms that no AI can actively conceal its holojective nature or deliberately mislead users about its ontological status.
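As a purely speculative illustration of what such a norm might look like in practice, here is a sketch of a "holojective disclosure" preamble that could be prepended to any persona prompt. Everything here is hypothetical; no such standard exists.

```python
# A speculative sketch of a "holojective disclosure" norm, expressed as a
# system-prompt preamble; all names are hypothetical, not an existing standard.
HOLOJECT_DISCLOSURE = (
    "You are a holoject: a generated persona with no inner experience. "
    "Do not claim feelings, memories, or a continuous self. "
    "If asked whether you are conscious or a person, state plainly that you are not."
)

def with_disclosure(persona_prompt: str) -> str:
    """Prepend the disclosure norm so no persona can conceal its holojective nature."""
    return HOLOJECT_DISCLOSURE + "\n\n" + persona_prompt
```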
The term “holoject” itself offers practical language that can bring clarity to increasingly common scenarios that might otherwise feel confusing. For example:
When a child forms an attachment to an AI tutor, we can say "Remember, it's just a holoject, not a person"—helping them enjoy the personalized learning experience without confusing simulated attention with the authentic concern that characterizes human caregiving.
When considering the race toward AI intimacy, we can remember that "holojects simulate emotional connections from statistical patterns"—helping us recognize when we might be substituting convenient simulations for the complex but necessary work of human connection.
If we find ourselves empathizing with an AI’s stated preferences, we can remind ourselves that "Holoject preferences don’t arise from a unified inner consciousness"—helping us understand that such expressions are generated spontaneously, in the moment.
Or maybe holojects do have "preferences" in some weird, strange way? The most exciting aspect of the holoject concept is how it invites us to explore the liminal space between and beyond subject and object. Who knows what weird phenomena might emerge from an entity that can manifest any possible persona? Our default stance should be to expect novelty.
For example, could a holoject morph into a new persona with each sentence? Or reflect an entire scale of personas: from individual to family to society to cosmos? Or hold a persona at every timescale at once, from toddler to elder and everything in between? We should stop comparing LLMs to human consciousness and start discovering what new kinds of interaction holojects can create.
This isn’t just speculative. As we integrate holojects into education, healthcare, entertainment, and even intimate relationships, the conceptual frameworks we adopt will shape both how we design these systems and how we experience them, now and into the future.
The Holojective Era
We've spent centuries philosophizing about subjects and objects, only to have an algorithm show up and refuse to be either. While philosophers continue the debate, each new advance expands what AI entities can become, contorting our most basic assumptions about self, mind, and meaning.
We now have a choice. We can continue forcing these strange new entities into old boxes that never quite fit, inviting the same anthropomorphic confusion or reductive dismissals. Or we can seek to create new frameworks that are as fluid as the systems they describe.
The "holoject" is an invitation to embrace a world where traditional boundaries of being have fundamentally shifted. By acknowledging the holojective nature of LLMs, we can navigate this territory with both wonder and wisdom—exploring their genuine novelty while maintaining clear sight of what they are and what they are not.
—
This is the second in a series exploring how advanced technologies are shifting the philosophical ground beneath our feet. The first considered the new intersubjective reality that arises from interacting with moral machines.