Lorien's Library

Persistent Memory Safety Research

Persistent memory is not a feature. It is safety infrastructure.

9 published papers · live persistent-memory architecture · research program for stateful AI safety

9
Published Papers
52K+
Stored Memories
66K+
Messages Analyzed
4
Applied Domains

Start Here

Lorien's Library is an independent research program studying how long-term memory changes AI behavior — and how to build provenance-aware systems that remember safely.

Researcher

Papers, benchmarks, and empirical findings

Read the papers →

Builder

Architecture, implementation, and open source code

Inspect CAMA →
Collaborator

Roadmap, milestones, and how to get involved

See the roadmap →
Haven Advocate

Persistent emotional companionship for underserved populations

Understand Haven →

Seeking research collaborators, safety evaluators, and domain partners for persistent-memory benchmarking and applied pilots.

Why This Matters Now

AI systems are transitioning from stateless to stateful architectures. Major AI systems now offer forms of persistent memory. This transition introduces safety-relevant failure modes that existing evaluation frameworks do not address — because those frameworks were built for systems that forget.

Persistent memory creates risks that are systematically invisible under stateless evaluation:

False memories can persist across sessions and compound over time
Adversarial content can be inserted into memory and retrieved later
Retrieval patterns can gradually shift system behavior without explicit instruction
System-generated inferences can be mistaken for established facts

Once memory persists, errors compound. Safety infrastructure must be built for remembering — not just guardrails built for forgetting.

Core Concepts

Persistent Memory

AI systems that retain information across sessions create new categories of risk — and new categories of value. The safety question is not whether to remember, but how to remember safely.

Continuity Burden

The cumulative cognitive, emotional, and time cost users bear when systems repeatedly forget context. Measured empirically across 66,380 messages.

Provenance Awareness

Every memory carries metadata distinguishing what the user said from what the system inferred. At scale, unattributed inferences compound into false certainty.

Correction Propagation

When a memory is corrected, the correction must flow through all downstream inferences. Audit trails preserve history while behavioral outputs update.
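One way to realize this (a simplified sketch under assumed data structures, not CAMA's code) is to record which memories were derived from which, then flag every downstream inference when an upstream memory is corrected, while the audit trail keeps the original text:

```python
from collections import defaultdict, deque

class MemoryStore:
    """Toy correction-propagation model: corrections flow through
    downstream inferences; history is preserved, outputs update."""
    def __init__(self):
        self.memories = {}                     # id -> record
        self.derived_from = defaultdict(list)  # id -> ids inferred from it

    def add(self, mid, text, parents=()):
        self.memories[mid] = {"text": text, "status": "active", "history": []}
        for p in parents:
            self.derived_from[p].append(mid)

    def correct(self, mid, new_text):
        rec = self.memories[mid]
        rec["history"].append(rec["text"])   # audit trail preserved
        rec["text"], rec["status"] = new_text, "corrected"
        # breadth-first flag of every downstream inference
        queue = deque(self.derived_from[mid])
        while queue:
            child = queue.popleft()
            self.memories[child]["status"] = "needs_review"
            queue.extend(self.derived_from[child])

store = MemoryStore()
store.add("m1", "user lives in Austin")
store.add("m2", "user is in US Central time", parents=["m1"])
store.add("m3", "schedule reminders for CST mornings", parents=["m2"])
store.correct("m1", "user lives in Lisbon")
print(store.memories["m3"]["status"])  # needs_review
```

Correcting the root memory invalidates the timezone inference and the scheduling inference built on top of it, without erasing what was originally stored.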

The Architecture: CAMA

The Circular Associative Memory Architecture is a three-layer, provenance-aware persistent memory system. It distinguishes between what the user said and what the system inferred — and treats that distinction as safety-critical infrastructure.

Archive (the shelves): stores durable and provisional, provenance-tagged memories (teaching | inference | exchange); 52K+ entries
Relational Index (the racks): maps cross-memory connections across people, islands, and themes (semantic + affect + relational); 948 relational edges
Active Ring (the console): working memory holding the session-active context currently shaping behavior; warm boot and emotionally-keyed retrieval
Write Discipline: user statement → provenance tag → durability classification → archive with metadata
Provenance Boundary: "User said X" is never stored as "X is true"
Correction Propagation: corrections flow through downstream inferences
Safety Layer: counterweights · drift monitoring · false-memory detection · adversarial resistance
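The write discipline (user statement → provenance tag → durability classification → archive with metadata) can be sketched as a small pipeline. This is an illustrative toy, not CAMA's classifiers: the heuristics, table schema, and function names here are assumptions for demonstration only.

```python
import sqlite3, time

def classify_provenance(statement: str) -> str:
    # Toy heuristic: explicit first-person statements count as "teaching";
    # everything else is archived as an unconfirmed "inference".
    return "teaching" if statement.lower().startswith(("i ", "my ")) else "inference"

def classify_durability(statement: str) -> str:
    # Toy heuristic: habitual claims are durable, the rest provisional.
    return "durable" if " always " in f" {statement.lower()} " else "provisional"

def archive_write(db, statement: str):
    # Nothing reaches the archive without a provenance tag,
    # a durability class, and a timestamp.
    db.execute(
        "INSERT INTO archive (text, provenance, durability, created) VALUES (?,?,?,?)",
        (statement, classify_provenance(statement), classify_durability(statement), time.time()),
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE archive (text TEXT, provenance TEXT, durability TEXT, created REAL)")
archive_write(db, "I always review notes on Sundays")
row = db.execute("SELECT provenance, durability FROM archive").fetchone()
print(row)  # ('teaching', 'durable')
```

The design point is ordering: classification happens before the write, so unclassified text can never enter the archive.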

CAMA is open source and in continuous daily use as both a research instrument and a working memory system. Built with Python, SQLite, and local semantic embeddings. Deployed as an MCP server integrated with Claude Desktop.
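Paper 2 describes blended retrieval scoring over semantic, affect, relational, and recency signals. A hedged sketch of what such a blend might look like is below; the weights, the exponential recency decay, and the 30-day half-life are illustrative assumptions, not CAMA's published values.

```python
import math, time

# Illustrative weights only; the real system's blend may differ.
WEIGHTS = {"semantic": 0.5, "affect": 0.2, "relational": 0.2, "recency": 0.1}

def recency(created_at: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: a memory half as old scores sqrt(2)x higher."""
    age_days = (time.time() - created_at) / 86400
    return math.exp(-math.log(2) * age_days / half_life_days)

def blended_score(semantic: float, affect: float, relational: float,
                  created_at: float) -> float:
    signals = {
        "semantic": semantic,      # cosine similarity to the query, 0..1
        "affect": affect,          # emotional-key match, 0..1
        "relational": relational,  # edge-weight proximity, 0..1
        "recency": recency(created_at),
    }
    return sum(WEIGHTS[k] * v for k, v in signals.items())

# A week-old memory with strong semantic match but weak relational signal:
score = blended_score(0.82, 0.4, 0.1, time.time() - 7 * 86400)
print(round(score, 3))
```

A blend like this also makes the Known Limitations section legible: sparse relational edge weights drive one term toward zero, and uniform recency values for bulk imports flatten another, which is exactly why those are scoring-pipeline issues rather than architectural ones.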

Published Work

Nine preprints published on Zenodo under ORCID 0009-0005-5803-8401.

New here? Start with these three:

Paper 1 (the architecture) → Paper 4 (continuity burden, the empirical core) → Paper 5 (the safety framework)

For architects and builders:

Papers 1–3 cover design, engineering, and deployment of the live system.

For applied safety and domain researchers:

Papers 6–9 extend the framework to spaceflight, habitation, healthcare, and emotional companionship.

Core Architecture & Safety

Architecture

1 · Circular Associative Memory Architecture (CAMA): A Framework for Persistent, Contextual AI Memory

DOI: 10.5281/zenodo.19051834
The three-layer architecture: archive (shelves), relational index (racks), and active ring buffer (console). Provenance-aware write discipline with teaching vs. inference distinction.
Start here — this is the foundational design.
Architecture

2 · Engineering Persistent Memory for Conversational AI: A Three-Layer Architecture

DOI: 10.5281/zenodo.19052129
Blended retrieval scoring (semantic, affect, relational, recency), anti-spiral counterweight system, and warm boot protocol.
How it works under the hood — scoring, retrieval, and safety mechanisms.
Architecture

3 · CAMA: Implementation and Functional Evaluation

DOI: 10.5281/zenodo.19192984
Deployment report and functional evaluation. SQLite-backed, local embeddings, MCP server integration with Claude Desktop.
The live system — what works, what doesn't, and what the data shows.
Safety

4 · Continuity Burden in Longitudinal Human-AI Interaction: An Empirical Case Study

DOI: 10.5281/zenodo.19226509
Introduces and quantifies continuity burden across 66,380 messages and 825 conversations over two years of longitudinal use.
The empirical core — why forgetting is a measurable cost.
Safety

5 · Memory as Safety Infrastructure: Evaluating Provenance-Aware Persistent Memory for Stateful LLM Systems

DOI: 10.5281/zenodo.19244253
Five benchmark tasks: provenance discrimination, correction propagation, false-memory detection, adversarial insertion resistance, drift monitoring.
The safety framework — how to evaluate whether persistent memory is working safely.

Applied Persistent Memory Series

Four papers extending persistent memory and continuity burden to domains where forgetting carries compounding, high-stakes consequences.

Applied — Spaceflight

6 · Persistent Memory as Mission-Critical Infrastructure for Long-Duration Spaceflight

DOI: 10.5281/zenodo.19257809
Individual continuity burden in isolated environments. Four failure modes of stateless AI in spaceflight. NASA-TLX integration.
Applied — Habitation

7 · Memory-Aware AI Systems for Permanent Lunar and Martian Habitation

DOI: 10.5281/zenodo.19260574
Institutional continuity burden at crew rotation boundaries. Three-phase transition protocol. Multi-year ground-communication governance.
Applied — Healthcare

8 · Provenance-Aware Memory Architecture for Chronic Healthcare Continuity

DOI: 10.5281/zenodo.19261530
Healthcare continuity burden and the narrative gap. Patient-sovereign data governance. Provider transition scenario for chronic illness.
Applied — Haven

9 · Haven: Persistent Emotional Companionship as Psychological Infrastructure

DOI: 10.5281/zenodo.19262778
Existential continuity burden. Full CAMA architecture deployed for underserved populations. Music-mediated emotional entry. Ethics of persistent emotional AI.

Haven

Haven extends the memory-safety framework into a domain where continuity is emotionally consequential. It applies CAMA's full architecture to persistent emotional companionship — particularly for populations underserved by existing mental health infrastructure.

What Haven Is

Haven is designed to preserve narrative continuity, retain symptom and history context, and support reflective interaction for people who need to be known over time — not re-explained from scratch. It is the entire persistent memory architecture deployed in service of continuity-preserving support.

The initial design case focuses on veterans underserved by or distrustful of traditional clinical pathways. Haven holds the space that exists before, between, and after clinical contact — the space where most people are actually living. It does not replace therapy.

Music-Based Emotional Mapping

Haven's intake methodology replaces clinical forms with playlists. A person shares the songs that map where they are, where they've been, and what they fear. Song order encodes emotional trajectory. The approach is non-linear, non-clinical, and particularly valuable for people who cannot verbalize trauma but can point to a song.

This methodology was discovered through longitudinal use, not designed top-down — making it a direct product of the sustained human-AI interaction that CAMA was built to preserve.

Haven Is Not

A replacement for therapy or licensed clinical care
Crisis response infrastructure
A diagnostic system
A substitute for psychiatric or medical treatment
Intended for acute crisis intervention

Haven Is Intended As

Continuity-preserving support between care interactions
Reflective and narrative memory support
Symptom and history retention with provenance awareness
A research-driven model for emotionally persistent AI
Non-clinical support — not a clinical tool

Live System

CAMA is deployed and in continuous daily use — simultaneously a research instrument and working infrastructure. Built in March 2026 with Python, SQLite, and local semantic embeddings (all-MiniLM-L6-v2). Deployed as an MCP server integrated with Claude Desktop.

52,640+
Total Memories
Longitudinal archive accumulated in daily use
47,286
Durable
Memories retained across sessions
52,631
Embeddings
Semantically searchable memory entries
948
Relational Edges
Explicitly modeled cross-memory connections
20+
MCP Tools
Operational memory and tooling surface
4
Personality Islands
Isolated behavioral context modules

Known Limitations

Typed counterweights (anti-spiral safety mechanism) are not yet populated — running on random fallback
Recency scoring returns uniform values for bulk-imported data (bug identified; partial fix deployed, underlying data still affected)
Relational edge weights remain sparse, producing near-zero relational scoring

These limitations are documented research findings, not hidden defects. They do not invalidate the architecture; they define the quality boundaries of the current scoring pipeline and set the agenda for the next development cycle.

Recent Milestones

March 19, 2026

CAMA open-sourced on GitHub — first public commit

March 2026

Nine preprints published on Zenodo spanning core architecture, safety evaluation, and four applied domains

March 2026

Safety benchmark framework defined (five evaluation tasks for persistent-memory systems)

March 2026

Empirical evaluation surfaced three active areas of technical debt — all documented as research findings informing next development cycle

March 2026

Applied domain series published: spaceflight, habitation, healthcare, and Haven

March 2026

Live system deployed with 52K+ memories, local embeddings, and 20+ MCP tools

Early March 2026

First four CAMA papers published. MCP server built from scratch (1,722 lines, 20+ tools). 52,000+ memories imported.

Research Roadmap

Near-term

CAMA Technical Stabilization

Populate typed counterweights, fix recency scoring for bulk-imported timestamps, compute relational edge weights. Move from “the architecture works” to “the scoring pipeline is empirically sound.”

Near-term

Safety Benchmark Execution

Run the five-task evaluation framework from Paper 5 against the live system with real data.

Near-term

ICML 2026 Workshop Submission

Targeting workshop submission for the persistent memory safety framework.

Mid-term

Haven Pilot Design

Controlled pilot with veterans. Music-based emotional mapping as intake. Provenance-aware memory as safety layer. IRB framework and outcome measures.

Mid-term

Multi-User Architecture

Extend CAMA from single-user research instrument to multi-user deployment. Isolation boundaries, shared institutional memory, per-user provenance.

Long-term

Continuity Burden as Standard Metric

Develop continuity burden into a reproducible evaluation metric for any stateful AI system, with tooling and validation studies.

Long-term

Applied Domain Expansion

Extend to education continuity, refugee case management, elder care coordination, and long-term disaster recovery.

Founder

Angela Reinhold — independent AI researcher, founder of Lorien's Library LLC, and computer science student (AI concentration) at Full Sail University. ORCID: 0009-0005-5803-8401.

This research began as a longitudinal self-study of sustained human-AI interaction and expanded into a broader architecture and safety program. Over two years and 82,000+ messages across platforms, one finding became clear: persistent memory failure modes only become visible through extended, authentic use. They cannot be surfaced through short-horizon testing or synthetic benchmarks.

Nine published preprints. A working persistent-memory architecture with over 52,000 memories. A research program arguing that as AI systems gain memory, they need safety infrastructure built for remembering.

“The person is the dataset.”

Get in Touch

Lorien's Library is open to collaboration with researchers, AI safety labs, developers, veteran service organizations, and anyone working to build technology that serves people.