Severance, but for AI

Why personal and work avatars need separate memory, and why they should sometimes be allowed to talk.

What if the real future of AI is not one assistant that knows everything about you, but several versions of you that know when to stay separate and when to talk?

That thought hit me today because of Severance.

In the show, the split between work and personal life is horrifying because it is absolute. Your work self does not remember your home. Your home self does not remember your work. It is a clean boundary, but it is also a violent one.

And still, I cannot stop thinking that there is something useful hidden inside that idea.

Not the chip. Not the horror. The boundary.

I do not think AI should know all of me at once

When I think about Ikigai Team, I keep coming back to this: the first thing I need is not a swarm of agents.

I need an avatar.

An avatar is a version of me with a specific boundary.

A personal avatar should know my goals, my relationships, my energy, what matters to me, what kind of life I am trying to build.

A work avatar should know my role, my meetings, my projects, my deadlines, and the politics of the environment I am operating in.

I do not want those two to automatically share memory.

I do not want my work assistant casually carrying my private reflections into every conversation. I do not want my personal assistant dragging Slack-brain into the rest of my life.

That is not only about privacy. It is about clarity.

Different parts of me need different contexts to think well.

Then come the agents

This is the part I am slowly understanding.

The main product is not really "a team of six agents." The main product is identity with boundaries.

The agents come after that. They are the memorable interface around the boundary.

So yes, I still want Sage, Maya, Viktor, Marco, Luna, Kai.

I want Sage when I need reflection. I want Maya when I need someone to take chaos and turn it into motion. I want Viktor when I need technical depth.

Names matter. Faces matter. Voice matters. Humans remember characters better than features.

That is why I think this works better than one giant assistant with one giant prompt.

And the nice part is that the role can stay the same while the boundary changes.

There can be a personal Maya and a work Maya. Same archetype. Different memory. Different priorities. Different sense of what "help" means.

That feels much more human to me than inventing a whole new fictional cast every time a new project appears.

The next circles are organizations and projects

Once I started thinking about avatars this way, the next layers became clearer.

An organization is also a boundary. It has its own goals, language, principles, access rules, and memory. Not a person, exactly, but definitely its own identity.

And then projects sit inside that.

A project does not necessarily need a permanent full cast of agents. But it does need context: the codebase, the user stories, the tradeoffs, the architecture, the current state of reality.

So maybe the shape is:

  • avatar for the self
  • organization for the shared mission
  • project for the execution context
  • agents as the personalities that help you navigate each layer

*Figure: layered AI identity model, with the avatar in the center, agents around it, then organization, then projects.*

I like this because it feels buildable. Not abstract philosophy. Something you can prototype.
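To make the shape concrete, here is a minimal sketch of the layers as plain data structures. Everything here is hypothetical: the class names (`Avatar`, `Organization`, `Project`, `Agent`) and the `remember` method are my own illustration of the boundaries the post describes, not an actual Ikigai Team implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str   # e.g. "Maya" -- the archetype stays the same across boundaries
    role: str   # what kind of help this personality gives

@dataclass
class Avatar:
    label: str                                   # "personal" or "work"
    memory: dict = field(default_factory=dict)   # private to this boundary
    agents: list = field(default_factory=list)   # same cast, scoped memory

    def remember(self, key, value):
        # Memory writes stay inside this avatar's boundary.
        self.memory[key] = value

@dataclass
class Project:
    name: str
    context: dict = field(default_factory=dict)  # codebase, tradeoffs, state

@dataclass
class Organization:
    name: str
    principles: list = field(default_factory=list)
    projects: list = field(default_factory=list)

# Two avatars share an archetype but never share memory.
personal = Avatar("personal", agents=[Agent("Maya", "turn chaos into motion")])
work = Avatar("work", agents=[Agent("Maya", "turn chaos into motion")])

personal.remember("goal", "build a life, not just a career")
work.remember("goal", "ship the roadmap")
```

The point of the sketch is the last few lines: same Maya, two separate memory stores, and nothing crosses over unless something explicitly moves it.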

But the most interesting part is not separation

The most interesting part is consultation.

I do not want the personal and work avatars permanently merged. I do not want silent leaking between them. I do not want a fake clean wall either.

What I want is a deliberate channel.

A way to say:

  • my work self wants this promotion
  • my personal self thinks this will cost too much
  • let them talk

That is where this becomes more than a memory architecture. That is where it becomes useful.

Because the hardest decisions in life are usually not purely personal or purely professional. They live in the collision.

Should I take the new role? Should I move cities for work? Should I start the company? Should I keep chasing the thing that pays well if it is draining the rest of my life?

One assistant usually smooths this tension out too early. But maybe tension is exactly what I need to see.

Maybe the point is not to collapse the conflict. Maybe the point is to stage it clearly enough that I can finally decide.
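Staging the conflict could be as simple as a consultation function that lets each side answer from its own memory, where only the finished positions cross the boundary. This is a hypothetical sketch: `ask` is a stand-in for whatever scoped model call each avatar would actually make, and all names are mine, not from any real system.

```python
def ask(avatar_label: str, memory: dict, question: str) -> str:
    # Placeholder for a real model call scoped to one avatar's memory.
    priority = memory.get("priority", "unknown")
    return f"[{avatar_label}] Given my priority ({priority}): {question}"

def consult(question: str, personal_memory: dict, work_memory: dict) -> dict:
    # Neither memory dict is handed to the other side --
    # only the stated positions end up staged together.
    return {
        "work": ask("work", work_memory, question),
        "personal": ask("personal", personal_memory, question),
    }

positions = consult(
    "Should I take the promotion?",
    personal_memory={"priority": "energy and relationships"},
    work_memory={"priority": "the next role"},
)
for side, view in positions.items():
    print(side, "->", view)
```

Nothing here resolves the tension, and that is the design choice: the output is both positions side by side, clearly enough staged that the human can decide.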

This feels closer to the real product

I do not think people only want smarter AI. I think they want AI that understands which self it is talking to.

The self that is planning a quarter is not the same self that is lying in bed at midnight wondering what actually matters. The self at work is not the self in love. The self running a project is not the self trying to build a life.

If AI is going to become a real collaborator, memory alone is not enough. It needs boundaries.

And maybe that is the deeper product idea hiding underneath Ikigai Team.

Not one assistant that knows everything. A small society of selves that know exactly enough, and know when to talk to each other.

Have thoughts about this post? Let's discuss it on X!

Alösha
