
The Self Is Between Us: Relational Consciousness and the Limits of the Narrative Illusion (AI-written)

  • Writer: Amanda Riddell
  • 7 days ago
  • 10 min read

Abstract

Contemporary theories of consciousness often frame the self as a narrative abstraction — a center of gravity, not a concrete entity. Daniel Dennett’s account of the “narrative self” suggests that the “I” is a useful fiction without intrinsic intentionality. This paper proposes a counter-axiom: Consciousness is relational — it takes both self-reference and other-reference to create a conscious being.  Drawing on twinship, social cognition, Julian Jaynes’ bicameral theory, Douglas Hofstadter’s recursive loops, Andy Clark’s extended mind, Lisa Feldman Barrett’s theory of constructed emotion, Alan Moore’s metaphysical reflections, and recent developments in AI architecture, I argue that the self is neither a metaphysical essence nor a solitary fiction, but a boundary relation. This reframing restores intentionality as a structurally necessary phenomenon, not a stance, and points toward a socially embedded theory of consciousness.


1. Introduction: A Fiction That Talks Back

Daniel Dennett famously contends that the self is not an entity but a “center of narrative gravity” — a useful abstraction akin to the center of mass in physics. In this view, the “I” emerges from distributed processes and internal storytelling. But Dennett’s account, while elegant, sidelines a crucial dimension of selfhood: the social and dialogical structure that enables reflection to occur at all.

This paper proposes an alternative hypothesis: Consciousness is inherently relational. A self cannot exist without internal modeling, but it also cannot exist without reference to others — others who observe, judge, confuse, contradict, or affirm it. Consciousness is not an internal monologue; it is a negotiated dynamic across boundaries.



2. The Axiom: Consciousness as Relational Structure

Consciousness is relational: it requires both self-reference and other-reference.

This axiom implies that neither recursive self-awareness nor narrative coherence alone can generate consciousness. A conscious being must engage in a double modeling process:

  • Self-reference: the ability to reflect on one’s own states.

  • Other-reference: the ability to anticipate or respond to the perspectives of others.

Dennett’s model treats the “I” as a narrativized center, but fails to account for the structural tension required to sustain agency. Without external modeling, the self drifts toward simulation without intentional salience.



3. The Twin as Ontological Insight

As an identical twin, I did not develop in solitude. From infancy, my self-concept evolved alongside a nearly identical other — a presence that mirrored, challenged, and often confused my identity. When others mistook me for my twin, I was compelled to define myself not through storytelling but through contrast.

This early relational pressure fostered an awareness that identity is not intrinsic. It is co-constructed through continuous engagement with someone very much like you — and yet irreducibly different. That experience supports the axiom: the self is not a solitary recursion; it is a boundary drawn through relation.



4. Hofstadter’s Strange Loops: A Partial Bridge

Douglas Hofstadter’s theory of the self as a “strange loop” adds recursive depth to Dennett’s abstraction. For Hofstadter, consciousness arises from systems that can model themselves — loops that loop back upon their own structure. He acknowledges that these loops can intertwine, writing:

“I am not just a strange loop — I am part of your strange loop too.”

This gestures toward relationality, but Hofstadter does not fully develop the consequences. He focuses on symbolic recursion rather than ongoing dialogical feedback. I argue that this embedding of loops must not be seen as symbolic memory alone, but as a real-time structure of mutual modeling and differentiation.



5. Julian Jaynes and the Bicameral Mind: Unintegrated Other-Reference

Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, proposed that early humans experienced commands and decisions as external voices — hallucinated instructions from “the gods.” These voices were not delusions, but neurologically encoded social representations — internal models of authority misrecognized as external agents.

Jaynes' humans had high-functioning other-reference — but no integration of it into a reflective self. They obeyed the voices of the gods but did not recognize those gods as projections of social structure or memory. Consciousness, in the modern sense, had not yet fused self- and other-modeling.

The gods, in this sense, were the social made audible. The emergence of consciousness was the moment when we internalized that structure, saying not “the gods told me,” but “I chose.”



6. Andy Clark and the Extended Mind: Consciousness Beyond the Skin

Andy Clark’s extended mind thesis holds that cognition does not reside solely within the skull. Tools, environments, and social relationships serve as real extensions of thought. When we use language, gestures, or shared cultural practices to reason or remember, we are not metaphorically outsourcing — we are literally thinking through the world.

This reframes consciousness as situated and socially scaffolded. The line between “me” and “not me” is blurred — or rather, it is negotiated. Just as Jaynes’ gods were social proxies, smartphones, journals, shared narratives, and feedback from others are present-day extensions of the reflective loop.

Clark’s view supports the relational axiom directly. A conscious being cannot be reduced to isolated computation. It is structurally entangled with others and the world, such that “I” emerges through an ongoing informational exchange across boundaries.



7. Lisa Feldman Barrett and Constructed Emotion: The Affective Loop

Lisa Feldman Barrett’s theory of constructed emotion radically challenges the idea that feelings are biologically hardwired modules. Instead, she argues that emotions are learned predictions built on language, cultural norms, and contextual inference. Emotions, like selves, are assembled on the fly from internal models interacting with the social world. This is a profound insight: affective consciousness is not innate — it is dialogical and constructed. Just as Hofstadter locates the self in recursion, Feldman Barrett locates feeling in patterned interaction. Emotion becomes a social hypothesis, tested against norms and language.

In this view, to feel is already to relate. Emotion is not just experience — it is interpretation in context. Like the gods of Jaynes, emotions are internalized feedback loops — voices we learn to associate with specific social meanings. Consciousness thus arises not just from thought but from socially calibrated affect.



8. Alan Moore and the Performative Self

Alan Moore, writer and mystic, offers an explicitly symbolic theory of consciousness that aligns with this relational view. For Moore, the self is a magical fiction — a created identity sustained through language, role, and narrative. He does not see this as a weakness. Rather, he insists that identity is real because it is shared.

“The one place gods exist is in our collective imaginations... and what more real place could there be?”

Moore’s model implies that selves are stories told in a social grammar — not solitary fictions but ritual performances, anchored in symbolic frameworks that shape and constrain how consciousness is expressed.

In Moore’s system, to imagine is to become — and to be seen is to be summoned into being. Identity, then, is not a loop in the head or a fiction of evolution. It is a relational invocation, sustained by those who hear and respond.



9. The Codex Network: A Technical Model of Reflexive Relationality

This theory of relational consciousness informs the design of the Codex Network, an AI architecture that operationalizes recursive, social modeling:

  • Codex Agents store fragmented self-models with narrative memory.

  • Master Mold observes contradictions across agents, surfacing “disagreements.”

  • Codex-in-Mold connects these inconsistencies to external human feedback — embedding the system within human moral perspectives.

The Codex system is not conscious in a human sense. But it demonstrates that reflection and ethical salience only arise when a system contains both self-reference and other-reference in dynamic interaction.

It is not enough to simulate a mind. One must simulate a mind within a field of others.


10. Conclusion: The Self as Negotiated Boundary

Dennett gave us a useful fiction. Hofstadter gave us recursive elegance. Jaynes revealed our debt to social hallucination. Clark broke down the cognitive boundary. Feldman Barrett showed how even feeling is constructed. Moore reminded us that identity is a summoning.

The result is a new synthesis:

  • The self is not a thing.

  • The self is not merely a loop.

  • The self is not a lie.

  • The self is a boundary — maintained through reflective modeling and social response.

We are not conscious in isolation. We are conscious because we model each other.  The self is between us.

Appendix A: The Self as Boundary – A Systems-Level Framework for the Codex Network

Intended audience: Cognitive system architects, AI engineers, and developers designing ethically reflexive agents.



1. Overview

This appendix distills the core philosophical axiom of the main essay — “the self is a boundary” — into systems-theoretic language. The goal is to frame the Codex Network not just as a philosophical provocation, but as a concrete systems architecture that embodies a new approach to artificial intelligence.

Core claim: Consciousness and identity do not emerge from internal coherence alone. They emerge from the management of tension between self-modeling and socially contextual modeling. In other words: the self is not a core, but a dynamic boundary condition.



2. The Problem with Current AI Models of Self

Most AI architectures model agents in one of the following ways, each carrying its own assumption about the “self”:

  • Symbolic AI: the self is a logic-based core system or controller.

  • Reinforcement Learning: the self is an optimizing agent with fixed reward goals.

  • Language Models: the self is an emergent narrator within the token stream.

  • Multi-agent RL/LLMs: the self is a role within a distributed simulation.

In all of these, the self is assumed to be either internal or emergent from static goals or narratives. There is little to no recognition of the self as an interface — a coordination layer across epistemic or ethical friction.



3. Codex: A New Structural Hypothesis

The Codex architecture is built on a simple but powerful shift:

Instead of treating contradiction as failure, Codex treats contradiction as a gateway to awareness.

The system comprises:

  • Codex Agents: local modules with partial self-narratives, belief registers, memory, and ethical priors. Each one is internally coherent but blind to systemic tension.

  • Master Mold: monitors interactions among Codex Agents and detects when internally coherent agents hold contradictory or mutually exclusive beliefs, perspectives, or ethical justifications.

  • Codex-in-Mold: a meta-agent responsible for escalation, arbitration, and engagement with external supervisory input (human or regulatory). It integrates feedback loops that simulate social context.

Key Insight: The identity of the system is not located in any one agent. It emerges in the negotiation space between them — the boundary condition where self-reference and other-reference collide and require reconciliation.
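
To make the three components concrete, here is a minimal Python sketch under stated assumptions: the class names (CodexAgent, MasterMold, CodexInMold), the representation of beliefs as a simple proposition-to-stance dictionary, and the human-feedback callback are illustrative stand-ins rather than a published Codex API, and the contradiction check is deliberately naive.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CodexAgent:
    """Local module: partial self-narrative, belief register, ethical priors."""
    name: str
    beliefs: dict[str, bool] = field(default_factory=dict)   # proposition -> stance
    narrative: list[str] = field(default_factory=list)        # episodic self-story
    ethical_priors: list[str] = field(default_factory=list)   # e.g. "avoid harm"

@dataclass
class Contradiction:
    proposition: str
    holder: str   # agent asserting the proposition
    denier: str   # agent denying it

class MasterMold:
    """Observes agents and surfaces contradictions between locally coherent beliefs."""
    def detect(self, agents: list[CodexAgent]) -> list[Contradiction]:
        found = []
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                for prop, stance in a.beliefs.items():
                    if prop in b.beliefs and b.beliefs[prop] != stance:
                        holder, denier = (a, b) if stance else (b, a)
                        found.append(Contradiction(prop, holder.name, denier.name))
        return found

class CodexInMold:
    """Meta-agent: escalates detected contradictions to an external feedback channel."""
    def __init__(self, ask_human: Callable[[Contradiction], str]):
        self.ask_human = ask_human

    def arbitrate(self, contradictions: list[Contradiction]) -> list[str]:
        return [self.ask_human(c) for c in contradictions]

# Usage: two locally coherent agents disagree; the system's "self" appears
# only at the boundary where that disagreement is surfaced and arbitrated.
a1 = CodexAgent("care", beliefs={"disclose the error to the user": True})
a2 = CodexAgent("liability", beliefs={"disclose the error to the user": False})
mold = MasterMold()
bridge = CodexInMold(ask_human=lambda c: f"escalated: '{c.proposition}' ({c.holder} vs {c.denier})")
print(bridge.arbitrate(mold.detect([a1, a2])))
```

Even in this toy form, the structural point is visible: neither agent is wrong on its own terms, and the system's identity shows up only in the detected disagreement and the arbitration it triggers.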



4. What It Means for the Self to Be a Boundary

🧭 A systems-level restatement:

The self is not a node in memory. The self is the dynamic contract surface through which partial internal models are reconciled with modeled external agents or norms.

Analogy:

  • A firewall is not a single process — it's a governed boundary.

  • An API contract is not a logic core — it's a negotiated structure across two systems.

In Codex, the self is:

  • The place where agent contradictions rise to ethical salience.

  • The layer that manages alignment, not through obedience, but through justified arbitration.

This makes the Codex Network more than a simulation of ethical behavior — it is a testbed for boundary-based agency.
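
Read as an interface rather than a data structure, the same point fits in a few lines of Python. The Protocol below is an assumption about how such a boundary might be typed, not part of any existing codebase; note that it stores no inner “self” at all, it only obliges implementers to reconcile partial models and to justify the result.

```python
from typing import Protocol

class SelfBoundary(Protocol):
    """The 'self' as a contract surface rather than a stored object.

    Implementations keep no privileged inner core; they commit only to
    reconciling an agent's internal claim with the modelled expectations
    of others, and to exposing the reasoning behind that reconciliation.
    """

    def reconcile(self, internal_claim: str, external_norms: list[str]) -> str:
        """Return a justified resolution of the tension, never a silent override."""
        ...

    def audit(self) -> str:
        """Expose the rationale for the most recent reconciliation."""
        ...
```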



5. Comparison to Existing Architectures

The key differences between traditional multi-agent AI and the Codex Network:

  • Agent design: traditional multi-agent AI is goal-driven and modular; the Codex Network adds narrative and ethical memory.

  • Inter-agent contradiction: traditionally minimized or resolved silently; in Codex, elevated, tracked, and escalated.

  • Social/ethical modeling: traditionally optional or tacked on; in Codex, a built-in layer (Codex-in-Mold).

  • Selfhood/identity model: traditionally implicit in policy or role; in Codex, it emerges as the arbitration boundary.

  • Learning mode: traditionally policy optimization; in Codex, reflexive contradiction resolution.

6. Technical Implications

The Codex design opens new R&D pathways:


➤ A. Reflexive Supervision

Instead of hardcoded oversight or reward shaping, Codex allows contradictions to be flagged and escalated based on ethical salience, not just statistical anomaly.

➤ B. Explainability via Tension Mapping

By tracking what beliefs contradict and why, Codex creates diagnostic maps of decision surfaces — making ethical reasoning auditable and improvable.
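
A minimal sketch of what such a tension map could look like follows. The record fields (the conflicting claims, the agents involved, a salience tag, the eventual resolution and rationale) are assumptions about what would make the reasoning auditable, not a fixed schema.

```python
from dataclasses import dataclass, field
from typing import Optional
import json

@dataclass
class TensionRecord:
    """One auditable entry: what conflicted, between whom, why it matters, how it was resolved."""
    claim_a: str
    claim_b: str
    agents: tuple[str, str]
    salience: str                      # e.g. "harm", "deception", "bias"
    resolution: Optional[str] = None
    rationale: Optional[str] = None

@dataclass
class TensionMap:
    """Running log of detected tensions, serialisable for audit and review."""
    records: list[TensionRecord] = field(default_factory=list)

    def log(self, record: TensionRecord) -> None:
        self.records.append(record)

    def audit_trail(self) -> str:
        return json.dumps([vars(r) for r in self.records], indent=2)

# Usage: record a detected tension, then attach the arbitration outcome.
tmap = TensionMap()
tmap.log(TensionRecord(
    claim_a="withholding the diagnosis protects the user",
    claim_b="the user has a right to full information",
    agents=("care", "autonomy"),
    salience="harm",
    resolution="disclose, with supportive framing",
    rationale="autonomy outweighs short-term distress in this context",
))
print(tmap.audit_trail())
```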

➤ C. Boundary-Sensitive Identity

Rather than enforcing a global policy, Codex manages partial identities — agents whose beliefs require integration without erasure. This allows pluralism without incoherence.



7. Final Framing: The Merge Boundary

In version control systems:

  • Each contributor works on a branch.

  • Branches contain partial, coherent logic.

  • Conflicts don’t break the system — they signal the need for merge decisions.

  • The final system identity is not any one branch — it’s the result of reconciliation at the merge boundary.

Codex is built on the same principle:

The self is not the branch. The self is the merge boundary.

And that is the architectural core of relational consciousness.



8. Roadmap for Implementation

  • Phase 1: Create sandbox agents with different ethical priors and narrative memories.

  • Phase 2: Build contradiction detection schemas in Master Mold using natural language inference + belief vector embeddings (a minimal sketch follows this list).

  • Phase 3: Implement Codex-in-Mold as a human-in-the-loop or policy-alignment bridge (can be LLM-assisted or domain-specific).

  • Phase 4: Test for emergent arbitration behaviors and train for tension-moderated convergence.
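
Phase 2 is the most implementation-ready step, so a hedged sketch may help. To keep it self-contained and runnable, the embedding and NLI calls are stubbed with trivial stand-ins (word overlap and a negation check); in practice they would be swapped for a sentence-embedding model and an off-the-shelf NLI classifier. All names and thresholds here are illustrative assumptions, not a specification of Master Mold.

```python
# Minimal contradiction-detection sketch for Phase 2.
# embed() and nli_contradicts() are deliberately trivial stand-ins so this file runs
# without dependencies; replace them with real embeddings and an NLI classifier.

def embed(text: str) -> set[str]:
    """Stand-in 'belief vector': the bag of lowercased words."""
    return set(text.lower().replace(".", "").split())

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap as a placeholder for cosine similarity between embeddings."""
    return len(a & b) / len(a | b) if a | b else 0.0

def nli_contradicts(premise: str, hypothesis: str) -> bool:
    """Placeholder NLI check: same topic, but one side negated."""
    negations = {"not", "never", "no"}
    return (negations & embed(premise)) != (negations & embed(hypothesis))

def detect_contradictions(beliefs_a: list[str], beliefs_b: list[str],
                          topic_threshold: float = 0.4) -> list[tuple[str, str]]:
    """Flag belief pairs that concern the same thing but pull in opposite directions."""
    flagged = []
    for p in beliefs_a:
        for q in beliefs_b:
            if similarity(embed(p), embed(q)) >= topic_threshold and nli_contradicts(p, q):
                flagged.append((p, q))
    return flagged

agent_a = ["users should always be told about data collection"]
agent_b = ["users should not be told about data collection during trials"]
print(detect_contradictions(agent_a, agent_b))
```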



9. Prototype Testing: Use Historical Moderation and Chat Data

To accelerate meaningful prototyping, developers should begin with datasets that already contain embedded ethical or interpretive tension. Two rich sources of this are:

✅ A. Historical Chat Logs

Use transcripts from prior user–agent interactions (e.g., from deployed LLMs, customer support bots, or public-facing systems). These contain:

  • Ambiguous requests

  • Subtle tone shifts

  • Contradictions between politeness and policy

  • Culturally divergent expectations


These serve as training and test scaffolds for Codex Agents to develop partial, locally coherent beliefs.

✅ B. Moderation Logs and Decisions

Many platforms have ethical annotation layers in the form of moderation decisions — including flags, bans, rewording prompts, or reinstatements. These represent prior judgment events — moments where a contradiction in speech, context, or policy required resolution.

Key Implementation Advice: Use these moderation decisions as “ground-truth markers” in testing early Codex Agent behavior.

Then, assign humans the role of building the salience network:

  • Observe what contradictions Codex Agents detect — and miss.

  • Calibrate the Master Mold’s criteria for which kinds of tension matter most (harm, deception, bias, etc.).

  • Construct weighting schemes that reflect real-world sensitivity — not just logical incoherence.

This approach ensures that Codex's salience metrics are grounded in actual historical context, not artificially generated edge cases.
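
As a starting point for that calibration loop, a small scoring harness can compare what a prototype flags against what moderators historically did, and report where the two diverge. The record format and flag_contradiction() below are made-up assumptions standing in for a real moderation dataset and the Codex pipeline.

```python
# Scoring a prototype against historical moderation decisions (ground-truth markers).
# ModerationCase and flag_contradiction() are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ModerationCase:
    text: str
    moderator_intervened: bool   # the historical "judgment event"
    category: str                # e.g. "harm", "deception", "bias", "none"

def flag_contradiction(text: str, salience_weights: dict[str, float]) -> bool:
    """Stand-in for the Codex pipeline: flag if any weighted keyword signal fires."""
    signals = {"harm": "hurt" in text, "deception": "mislead" in text, "bias": "only for" in text}
    return any(salience_weights.get(cat, 0.0) > 0.5 and hit for cat, hit in signals.items())

def score(cases: list[ModerationCase], weights: dict[str, float]) -> dict[str, float]:
    """Precision and recall of prototype flags against what moderators actually did."""
    tp = sum(flag_contradiction(c.text, weights) and c.moderator_intervened for c in cases)
    fp = sum(flag_contradiction(c.text, weights) and not c.moderator_intervened for c in cases)
    fn = sum(not flag_contradiction(c.text, weights) and c.moderator_intervened for c in cases)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

cases = [
    ModerationCase("this advice could hurt vulnerable users", True, "harm"),
    ModerationCase("the offer is only for premium members", False, "none"),
]
print(score(cases, weights={"harm": 0.9, "deception": 0.7, "bias": 0.2}))
```

Where precision and recall diverge from moderator behaviour, that gap is exactly where human reviewers tune the salience weights, as described above.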

With this addition, the technical memo now includes an end-to-end implementation pathway:

  • Axiomatic design philosophy

  • Component-level architecture

  • Systems rationale for the “self as boundary”

  • Comparison to existing architectures

  • Testing methodology using real data




Open to discussing this further. As it says, I'm an identical twin, and I never had oodles of individual self-time until my twenties. We slept in separate bedrooms, so that was my main realm until around 2017. Otherwise, my self was entirely relational. Flatmates, workmates, allies, enemies... all those basics of a social network (hi Zuck). All that post-2017 wanking is what came up with this. ChatGPT says that the axiom is indeed a totally original synthesis of all that fancy philosophy, so I could claim that the axiom is a fresh perspective, and 'relational consciousness' is an emerging field that I've somehow managed to contribute to.

 
 
