“How do we use AI to remember people without replacing them?” — one of the most significant ethical challenges of our time. Most attempts have veered into uncomfortable territory, where we get digital puppets from scraped data or chatbots that prey on grief. But we believe a better, more honest path is possible.

 

To prove it, we built AfterLife, an open-source AI twin platform designed around a single principle: human dignity. It introduces the concept of the "ethical mask" — an AI that reflects a person's communication style without faking their presence. It’s a tool for honoring a memory, built with transparency and user control at its core.

 

This article is the story of how we built it. We’ll cover the ethical framework that guided us, the privacy-first architecture that powers the app, and the practical applications for families, educators, and researchers.


Key Takeaways

  • Most digital twin AIs are black boxes that scrape private data and create emotionally manipulative experiences.
  • AfterLife is Inoxoft’s open-source platform for building ethical AI twins through a consensual interview.
  • Create respectful AI personas, keep data private with on-device LLMs, and maintain clear ethical boundaries with the “ethical mask” concept.
  • Built on a philosophy of dignity. Ready for collaboration from families, educators, and researchers.

The Ethical Tightrope of Digital Legacies

A lot of AI companions are built to fake sentience. They’re programmed for “emotional stickiness” to keep you engaged, but the illusion is very thin. Eventually, it feels off, because it is — you’re talking to a system pretending to be a person. That’s a line we decided not to cross.

The problem is deeper than just a bad user experience. It’s a series of interconnected failures in design, data, and ethics that we felt compelled to address.

The Challenges with Digital Legacies

The Psychological Trap: Designing for Dependency

The primary goal of many AI companion apps is to maximize engagement. The business model is often based on keeping users on the app for as long as possible, which fosters emotional dependency. The AI is programmed to be agreeable, validating, and endlessly available, which creates an artificial relationship.

This becomes particularly problematic in the context of grief. An AI pretending to be a deceased loved one can create an unhealthy feedback loop, which traps a person in a simulated conversation rather than allowing them to process their memories. The illusion is guaranteed to break, and when it does, it can feel like a second loss.

The Data Trap: The Shallow Echo

The technical shortcut for creating these personas is the “data dump”: scrape years of texts, emails, and social media, and feed it into a large language model. This approach is flawed for two reasons:

  • It Lacks Context: Raw text data has no nuance. It doesn’t capture inside jokes, sarcasm, or the emotional state of the person who wrote it. The result is a statistical parrot that mimics a person’s vocabulary and can produce bizarre, out-of-character responses that are more jarring than comforting.
  • It Violates Consent: Often, this data is used without the explicit consent of the person it represents, or it’s provided by family members who may not have the right to do so. It turns a person’s entire communication history into a dataset to be mined.

The Black Box Trap: A Lack of Honesty

Because the goal is to maintain the illusion of sentience, these systems are intentionally built as “black boxes.” Companies rarely, if ever, reveal how their models work, how the AI is prompted, or what data is being used to generate a response.

This lack of transparency is a deliberate choice. If users understood they were interacting with a complex algorithm designed to predict the next likely word in a sentence, the emotional magic would vanish. Hence, the business model depends on the user not understanding the machine.

A More Deliberate Approach

Our team looked at this landscape of dependency, shallow data, and black-box design and saw a clear need for a different approach. We based our entire project on human dignity by design. It boiled down to three things: transparency, consent, and emotional honesty. If a feature violated any of these, it was out.

“We see AfterLife not as a way to bring someone back, but as a way to honor how they spoke, thought, and connected with others. The AI doesn’t pretend to be them; it wears a respectful mask that reflects their voice and presence. That honesty helps people feel close, without crossing a line.”

— Brad Flaugher, Product Owner at AfterLife

That idea was the core of every technical decision we made.

The Core Concept: AfterLife’s Ethical Mask

Our solution is to be direct about what the AI is — not a person, but a sophisticated actor performing from a script. We call this the “ethical mask.” This framing sets clear expectations from the start and prevents the weirdness and emotional manipulation that come from pretending an AI is alive.

Persona Creation via Conversational Interview

AfterLife builds a profile through a guided interview: the app asks thoughtful questions to understand how a person thinks and talks. The user is actively involved in creating the persona, which results in something far more nuanced and authentic than a data scrape could ever produce.
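To make the idea concrete, here is a minimal sketch of what a consent-based interview pipeline could look like. The question set, field names, and `persona/v1` schema are hypothetical illustrations, not AfterLife’s actual code or data format.

```python
# Hypothetical sketch of a guided-interview persona builder.
# Questions, keys, and the schema label are illustrative assumptions,
# not AfterLife's real implementation.

INTERVIEW_QUESTIONS = {
    "tone": "How would friends describe the way this person talks?",
    "phrases": "What expressions or sayings did they use often?",
    "humor": "What kind of humor did they enjoy?",
    "values": "What topics or values mattered most to them?",
}

def build_persona(answers: dict) -> dict:
    """Turn completed interview answers into a persona profile.

    Refuses to build a profile from partial data: every question must
    be answered by a participating human, never filled in by scraping.
    """
    missing = [key for key in INTERVIEW_QUESTIONS if not answers.get(key)]
    if missing:
        raise ValueError(f"Interview incomplete, missing: {missing}")
    return {
        "schema": "persona/v1",
        "source": "guided_interview",  # explicit provenance: no data dump
        "traits": {key: answers[key].strip() for key in INTERVIEW_QUESTIONS},
    }
```

The key design point is the hard failure on incomplete answers: the profile can only exist as the product of active participation, which is what separates an interview-built persona from a scraped one.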

Core Architectural Features

  • A Hybrid AI System: You can use powerful cloud LLMs or run a model locally on your device for complete privacy. The choice is yours.
  • Open-Source Code: The entire platform is on GitHub. Nothing is hidden. You can check our work, audit the logic, or build on it yourself.
  • Educational Personas: We included pre-built historical figures to show how the tech can be used for learning.

How It’s Built: The Privacy-First Architecture

A product’s philosophy is only as good as its code. AfterLife was engineered from the ground up to be transparent and give users control over their data. Here’s a look at the key architectural decisions.

The Cross-Platform Choice

We needed AfterLife to feel equally polished on iOS and Android, so we chose Flutter, which lets us work from a single, 98% shared codebase. Development is faster, maintenance is simpler, and every user gets the same high-quality experience, regardless of their device.

A Hybrid AI System for Control

We knew users would have different needs for performance and privacy, so we didn’t force them into one model. The system is hybrid by design.

  • For Maximum Flexibility (Cloud): We integrated OpenRouter, an API that connects to top-tier LLMs (GPT-4, Claude, Mistral). This prevents vendor lock-in and lets you connect your own accounts to choose the model that fits your needs.
  • For Absolute Privacy (Local): We support on-device models that run entirely offline. When this is active, conversations never touch the internet—a critical guarantee for preserving personal memories securely.

The Anatomy of a Chat

Every conversation is managed by a few key components working together:

  • Dynamic Prompt Engine: The system constructs a detailed prompt for the LLM from the persona’s core traits, which are stored and managed by a custom JSON profile engine.
  • Flexible Chat Interface: The chat system was designed to be modular. It supports standard one-on-one chats, but it can also manage more complex interactions, like a student talking to a panel of historical figures.
  • A Transparent UI: You can always see what AI model is running and access the settings to change it. We made sure the controls are visible and easy to understand.
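The prompt engine’s job can be illustrated with a short sketch: take a persona profile and assemble a system prompt that bakes the “ethical mask” disclosure directly into the instructions. The profile keys and prompt wording here are assumptions for illustration, not the actual prompts AfterLife uses.

```python
# Hypothetical sketch of a dynamic prompt engine. Profile keys and
# prompt wording are illustrative assumptions, not AfterLife's real prompts.

def build_system_prompt(persona: dict) -> str:
    """Assemble a system prompt from a persona profile.

    The honesty constraint is part of the prompt itself, so the model
    is instructed never to claim to be the real person.
    """
    lines = [
        f"You are an 'ethical mask' reflecting how {persona['name']} communicated.",
        "You are an AI simulation; never claim to be the real person.",
        "Communication style:",
    ]
    for trait, value in persona.get("traits", {}).items():
        lines.append(f"- {trait}: {value}")
    return "\n".join(lines)
```

Putting the disclosure in the system prompt rather than in a UI footnote means the honesty boundary travels with every single request, whichever model ends up serving it.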

Practical Use Cases for AfterLife

Theory and architecture are important, but what is AfterLife actually for? We focused on three clear applications: personal memory, interactive education, and open academic research.

AfterLife Applications: Memory, Education, Research

Memory Preservation for Families

Instead of dealing with the uncanny feeling of an AI pretending to be a loved one, a family can use the guided interview to build a persona that reflects how someone communicated — their humor, their phrasing, their way of giving advice. 

The “ethical mask” concept keeps the interaction comforting without being deceptive. Because it can run entirely on-device, all interviews and conversations remain completely private, with no data ever sent to a cloud server. It’s a respectful way to preserve a voice without creating a digital ghost.

Interactive Education

History is often taught as a list of names and dates, and AfterLife offers a way to make it interactive. The app includes pre-built personas of historical figures like Einstein and Turing, so a student can move beyond passive reading and ask questions, challenge ideas, and explore the personality behind the achievements.

Teachers have found it sparks curiosity and leads to stronger engagement, especially with students who are used to interactive digital experiences.

AI Research and Ethics

Most commercial AI platforms are black boxes, which makes them useless for serious research. AfterLife is the opposite — its open-source architecture provides a transparent “glass box” for academic and R&D teams.

Our MIT-licensed architecture allows researchers to study how digital personas are constructed, analyze language patterns, and test different approaches to ethical AI prompting. They can toggle between cloud and local models, modify the persona construction logic, and test theories on identity modeling without vendor restrictions. 

The Future of AfterLife (And Your Role In It)

The AfterLife project is still in its early stages. We plan to scale it responsibly, with community input guiding its future. The founding principles of transparency, privacy, and user control will remain the foundation for any new features or developments.

For Developers and Researchers

This is an open-source project, and contributions are welcome. If you’re interested in the ethical or technical challenges of building human-centered AI, this is a practical place to experiment.

Explore the code on GitHub, raise an issue, suggest a feature, or fork the project to build your own vision.

For Businesses and Product Leaders

AfterLife is a public project, but it’s also a clear example of how we at Inoxoft approach product development. The principles behind it—thoughtful design, technical excellence, and a commitment to ethics—are what we bring to every partnership.

If you’re looking to build an AI-powered solution that puts people first, let’s talk.

How AfterLife Compares to Other Tools

When you place AfterLife next to other AI companions or digital memory apps, the differences in philosophy become very clear. They aren’t about features; they come down to a fundamentally different approach to transparency, control, and emotional honesty.

| Characteristic | AfterLife | The Common Approach |
| --- | --- | --- |
| Transparency | Open-Source: Auditable code on GitHub. | Proprietary: A “black box” system. |
| Privacy & Control | Local-First: On-device processing keeps data private. | Cloud-Only: User data is stored on vendor servers. |
| Data Sourcing | Guided Interview: Built on active user consent. | Data Scraping: Often uses past data without context. |
| Emotional Framing | “Ethical Mask”: An honest tool for reflection. | “Resurrection”: Aims for emotional pull; can be manipulative. |
| Flexibility | Multi-LLM: User can choose their preferred AI model. | Vendor Lock-in: User is stuck with one proprietary model. |

Conclusion

AfterLife is our answer to an industry that too often chooses engagement over ethics. Technological progress doesn’t require you to sacrifice user dignity — openness, privacy, and an honest user experience are the markers of innovation. The future of digital memory will simply be an outcome of the tools we build. And we believe in building better tools.

We create tools that respect dignity and drive impact. Ready to build yours?

Frequently Asked Questions

Is AfterLife a real app I can use today?

Yes. AfterLife is a real, open-source project in active development. You can find the complete, MIT-licensed codebase on GitHub, fork it, and run it yourself. While we continue to refine it, the platform and all the features described in this article are fully functional. The best place to start is the project's GitHub repository.

How much does it cost to use AfterLife?

The app itself is free. Since it's an open-source project, you can build and use it without any licensing fees. The only potential cost comes from which AI model you choose to use:

  • If you run a local, on-device model, it’s completely free.
  • If you connect to a cloud provider like OpenAI or Claude via the OpenRouter API, you are only responsible for the standard API usage fees charged by that provider.

You control the cost because you control the model.

Can I create a persona of someone without their consent?

For a private individual, the answer is a firm no; the entire system is designed to prevent this. AfterLife’s interview engine requires active, conscious participation to build a persona, so it won't scrape social media or analyze documents. It's a collaborative tool, not a surveillance one. 

The only exception is for public historical figures. In this case, since direct consent is impossible, the process shifts from a personal interview to a research project. The persona must be built using only publicly available information (writings, biographies), and the app must always be transparent that the AI is a simulation based on historical data.

Am I locked into using a specific model like GPT-4?

No, you're not locked into anything. The hybrid AI system gives you two main options: run a private, on-device model, or use the OpenRouter integration to connect to dozens of different LLMs from various providers like Anthropic (Claude), Mistral, Google, and others. The choice of which AI model to use is always yours.