We also release a live demo of this algorithm as an early research preview. Note that the visualization may not exactly reflect the underlying algorithm.
Traditional memory mechanisms in AI agent systems typically fall into two categories. The first is reasoning-based memory [1], where the model actively summarizes memory segments after each conversational turn. Conceptually, this mirrors the reasoning process: information is reconsidered, recomposed, and stored as a summary. While intuitive, repeated summarization is computationally costly, and critical details often degrade over successive turns.
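To make the cost pattern concrete, here is a minimal sketch of a summarization-based memory loop. Everything here is illustrative: `llm_summarize` is a stand-in for a real model call, and the character budget is an arbitrary assumption, not a detail from the method above.

```python
def llm_summarize(text: str, max_chars: int = 200) -> str:
    # Placeholder for an LLM call; a real system would prompt a model here.
    # Truncation stands in for lossy compression.
    return text[:max_chars]

class SummaryMemory:
    """Rolling summary that is recomputed after every conversational turn."""

    def __init__(self) -> None:
        self.summary = ""

    def update(self, user_msg: str, assistant_msg: str) -> None:
        # Each turn re-summarizes the old summary plus the new exchange.
        # This repeated pass is where compute cost accrues, and details
        # present only in early turns can degrade with each rewrite.
        combined = f"{self.summary}\nUser: {user_msg}\nAssistant: {assistant_msg}"
        self.summary = llm_summarize(combined)

    def context(self) -> str:
        return self.summary

mem = SummaryMemory()
mem.update("What is the capital of France?", "Paris.")
mem.update("And its population?", "About 2.1 million in the city proper.")
print(len(mem.context()) <= 200)  # → True: context stays bounded, lossily
```

Note the feedback loop: the summary is an input to its own next revision, so any detail dropped at turn *t* is unrecoverable at turn *t+1*.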
The second approach is tool-use-based memory [2,3]. Here, memory is stored in external databases. When recall is needed, the model queries this storage and retrieves relevant interactions. Although easy to integrate, this often leads to fragmented understanding, as the retrieval and reintegration process can strip away crucial nuance and context.
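The retrieval pattern can be sketched with a trivial keyword index. This is an assumption-laden toy: real systems typically use a vector database and embedding similarity, and the word-overlap scoring below exists only to show how snippets come back detached from their surrounding context.

```python
class ExternalMemoryStore:
    """Toy external store: past interactions go in, top-k snippets come out."""

    def __init__(self) -> None:
        self.records: list[str] = []

    def write(self, interaction: str) -> None:
        self.records.append(interaction)

    def query(self, question: str, k: int = 2) -> list[str]:
        # Rank stored interactions by word overlap with the query.
        # Retrieved snippets return in isolation, which is how the
        # surrounding conversational context gets stripped away.
        q_words = set(question.lower().split())
        scored = sorted(
            self.records,
            key=lambda r: len(q_words & set(r.lower().split())),
            reverse=True,
        )
        return scored[:k]

store = ExternalMemoryStore()
store.write("User prefers metric units for all measurements.")
store.write("User is planning a trip to Japan in October.")
store.write("User's favorite language is Rust.")
print(store.query("What units does the user prefer?", k=1))
```

The fragmentation problem is visible in the return type: the model receives a ranked list of disconnected strings, not the dialogue they came from.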
We have developed a fundamentally different approach: instead of treating memory as a separate storage task, we view the entire trajectory as the memory itself, managed through a continuous process of intelligent forgetting. Our method operates in three steps: Mask, Allocate, and Refill.
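Since this section names the three steps but does not yet define them, the skeleton below is highly speculative: the salience heuristic, the word-count budget, and the `[forgotten]` placeholder are all assumptions made for illustration, not the released algorithm. It only shows the shape of "trajectory as memory with intelligent forgetting": low-value turns are masked in place while the trajectory's structure is preserved.

```python
TOKEN_BUDGET = 12  # assumed context budget, counted in words for simplicity

def salience(turn: str) -> int:
    # Assumed heuristic: longer turns are treated as more informative.
    return len(turn.split())

def mask(trajectory: list[str], budget: int) -> list[int]:
    """Pick indices of low-salience turns to forget until the budget fits."""
    order = sorted(range(len(trajectory)), key=lambda i: salience(trajectory[i]))
    masked, used = [], sum(salience(t) for t in trajectory)
    for i in order:
        if used <= budget:
            break
        masked.append(i)
        used -= salience(trajectory[i])
    return masked

def refill(trajectory: list[str], masked: list[int]) -> list[str]:
    """Fill each freed slot with a placeholder, keeping trajectory shape."""
    out = list(trajectory)
    for i in masked:
        out[i] = "[forgotten]"
    return out

traj = [
    "hi",
    "Please book a flight to Tokyo for October 12, economy class.",
    "ok",
    "Also reserve a hotel near Shinjuku for three nights.",
]
compressed = refill(traj, mask(traj, TOKEN_BUDGET))
print(compressed)
```

The key contrast with the two traditional approaches: nothing leaves the trajectory for an external store, and nothing is globally re-summarized; forgetting happens in place, turn by turn.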