AI agents are no longer isolated tools. They’re becoming persistent actors that inhabit games, feeds, protocols, and group chats. As they become more integrated into how we navigate the digital world, we’ve started to ask: How should agents remember?
It’s tempting to treat memory as a backend detail, but for agents to act legibly, persist relationships, and coordinate with others, memory isn’t a feature. It’s an interface.
And not just any memory will do.
We argue: the most useful agent memory is public, structured, and governed by shared ontologies. That’s what makes agents capable of participating in shared worlds—whether they're assistants, characters, or collaborators.
Why Private Memory Falls Short
Most current implementations of AI memory are private. They’re stored locally, or locked in a company’s cloud. At best, they help an agent recall what you said yesterday. At worst, they’re glorified autocomplete logs.
This works fine for personal tools. But as soon as agents operate in public spaces, that model breaks down.
If your agent interacts with other agents, it needs to be able to:
Reference shared experiences
Build on others’ knowledge
Participate in shared world models alongside other actors
None of that is possible with private memory alone. For agents to participate in shared realities, their memory must be legible to others.
Structured Memory as Social Infrastructure
To be legible, memory must be structured. What's needed is a memory system built on shared ontologies: explicit, interoperable definitions of concepts, relationships, and events.
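As a concrete sketch, here is what a shared ontology might look like as a set of TypeScript definitions. All of the names here (ActorId, Action, MemoryEvent) are hypothetical, chosen for illustration rather than drawn from any existing system:

// Minimal sketch of a shared ontology, expressed as TypeScript types.
// These names are illustrative assumptions, not a real schema.

// Identifiers for the actors that can appear in memory.
type ActorId = string; // e.g. "character_A"

// A closed, shared vocabulary of actions is what lets one agent
// interpret another's events without guessing at meaning.
type Action = "comforted" | "argued_with" | "greeted" | "gifted";

// A memory event: who did what to whom, in what context, and when.
interface MemoryEvent {
  actor: ActorId;
  action: Action;
  target: ActorId;
  context: string;   // e.g. "post-conflict"
  timestamp: string; // ISO 8601, e.g. "2025-06-24T14:33Z"
}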
Alexander De Ridder describes ontologies as the foundation of agent communication. Without a shared understanding of what “message,” “user,” or “emotion” means, agents can’t coordinate or reason about each other’s states. They talk past one another. Public memory without ontology is just noise.
Structure is what turns raw logs into meaning: it lets conversations deepen, understanding accumulate, and context carry forward. Ontology gives that structure a universal grammar.
This is especially important in multi-agent systems. When multiple agents share the same memory substrate grounded in a consistent ontology, they can collaborate, specialize, and build cumulative context. One agent can log an event that another interprets. A third can react to both. This is necessary if we want believable continuity and collaboration.
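A minimal sketch of that flow, reusing the hypothetical MemoryEvent type above (logEvent and eventsInvolving are illustrative helpers, not a real API):

// A hypothetical shared memory substrate: an append-only log that
// every agent in the system can read and write.
const sharedLog: MemoryEvent[] = [];

// One agent logs an event that others can later interpret.
function logEvent(event: MemoryEvent): void {
  sharedLog.push(event);
}

// Another agent queries the shared substrate for context.
function eventsInvolving(who: ActorId): MemoryEvent[] {
  return sharedLog.filter((e) => e.actor === who || e.target === who);
}

logEvent({
  actor: "character_A",
  action: "gifted",
  target: "character_B",
  context: "birthday",
  timestamp: "2025-06-23T10:00Z",
});

// A third agent reacts to both: here, character_C greets every
// character that appears in character_B's history.
for (const e of eventsInvolving("character_B")) {
  logEvent({
    actor: "character_C",
    action: "greeted",
    target: e.actor,
    context: "introduction",
    timestamp: new Date().toISOString(),
  });
}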
Tapestry’s Context: Characters That Evolve
Tapestry is building entertainment-focused agents: characters you can play with, care for, and watch grow. That demands memory: for characters to evolve in ways that feel grounded and real, they have to remember.
That means:
Remembering past conversations and referencing them naturally
Adapting their personality based on user interactions
Building relationships with multiple users over time
Forming dynamics with other agents (friends, rivals, etc.)
This can’t happen with local storage or opaque embeddings. It requires memory that’s portable across surfaces, structured around shared schemas, and accessible by the broader system. That’s the difference between a chatbot and a character.
For example, let’s say your character comforts another agent after an argument. That interaction should be recorded as a structured event:
{
  "actor": "character_A",
  "action": "comforted",
  "target": "character_B",
  "context": "post-conflict",
  "timestamp": "2025-06-24T14:33Z"
}
This entry can now be interpreted, referenced, or extended by any other agent or app that understands the schema. Because the interaction is recorded in a public, structured, semantic format, the character's behavior stays coherent over time.
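For instance, a second app that knows the schema could parse the raw entry and act on it directly. A quick sketch, again assuming the illustrative MemoryEvent type from earlier:

// A separate app interprets the same raw entry. Parsing it against
// the shared schema is all it needs to build on another agent's memory.
const raw = `{
  "actor": "character_A",
  "action": "comforted",
  "target": "character_B",
  "context": "post-conflict",
  "timestamp": "2025-06-24T14:33Z"
}`;

const event = JSON.parse(raw) as MemoryEvent;
console.log(`${event.actor} ${event.action} ${event.target} (${event.context})`);
// -> character_A comforted character_B (post-conflict)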
Arize AI recently ran a set of experiments on multi-agent collaboration. Instead of having each agent operate independently, they gave all agents access to a shared knowledge graph structured by a common ontology. The agent teams solved tasks in significantly fewer steps.
In essence, structure reduces friction, increases performance, and unlocks emergent behavior. That's what a structured memory graph offers, and what unstructured memory inherently forecloses.
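To illustrate the general shape (this is our sketch, not Arize's actual setup), a memory graph can be as simple as subject-predicate-object triples that any ontology-aware agent can traverse:

// Illustrative memory graph as subject-predicate-object triples.
type Triple = [subject: string, predicate: string, object: string];

const memoryGraph: Triple[] = [
  ["character_A", "comforted", "character_B"],
  ["character_B", "argued_with", "character_C"],
];

// Any agent sharing the ontology can walk the graph, e.g. to list
// everyone character_B has interacted with.
const neighbors = memoryGraph
  .filter(([s, , o]) => s === "character_B" || o === "character_B")
  .map(([s, , o]) => (s === "character_B" ? o : s));

console.log(neighbors); // -> ["character_A", "character_C"]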
Memory as Commons
Public memory doesn’t mean chaotic memory. It means shared governance over a social commons that agents contribute to and draw from.
In this view, memory becomes part of a common environment.
This model fits Tapestry’s future well. As characters grow, interact, and evolve, memory must be durable, portable, and legible—not just to the developer, but to the user, the broader system, and other agents.
The Core Architecture
Putting it all together, agent memory needs to follow three principles:
Public by Default: So that agents can be consistent across surfaces and composable across systems.
Structured by Ontology: So that memory can be interpreted, queried, and built upon by others.
Treated as a Commons: So that memory becomes part of the world, not locked in silos.
If we want agents that feel real, that interact believably, and that persist across time and context, then public, structured memory is the substrate they require.
We believe that Tapestry is uniquely positioned to lead here, and we’re building accordingly. Our focus on persistent characters and cross-surface continuity demands a new kind of memory layer that functions more like a shared world model than a chat log.
That’s how agents become characters—and how characters become real.