Memori: an open-source framework that gives large models "long-term memory"

When using ChatGPT, Claude, or other LLMs, we often hit the same pain point:
the conversation ends, and the model forgets everything.
No matter how much you share about your preferences, personal details, habits, or context, the next conversation starts with a brand-new AI.
But if we want a real AI assistant, AI agent, or knowledge-management system, the model has to remember things.
That is exactly the problem Memori (GibsonAI/Memori) sets out to solve.

What is Memori? Summarized in one sentence

Memori is an open-source "AI long-term memory system"
that lets large models automatically extract, store, manage, and recall memories.

It makes the model behave more like a "brain with memory" than a one-shot conversation tool.

Why is it needed?

Limitations of Traditional LLMs:

  • Conversation information is not saved automatically
  • Once the chat is closed, its history is forgotten
  • Unable to remember your preferences across sessions
  • The context window is limited, so earlier text is often "forgotten"

Memori offers:

  • Automatically recognize “important information” and write it into memory
  • Keep your facts, habits, preferences for a long time
  • Automatically recall relevant memories when answering questions
  • “Time decay management” for old memories
  • Support text + image + video + audio

It effectively combines an AI memory manager, a multimodal knowledge base, and a memory-retrieval system.
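The "time decay management" mentioned above can be sketched as a freshness weight that halves at a fixed interval. This is a minimal illustration, not Memori's actual implementation; `freshness_weight` and `half_life_days` are hypothetical names.

```python
import time

def freshness_weight(priority: float, created_at: float,
                     half_life_days: float = 30.0) -> float:
    """Score a memory by its priority, decayed exponentially with age:
    it loses half its weight every `half_life_days` days."""
    age_days = (time.time() - created_at) / 86400
    return priority * 0.5 ** (age_days / half_life_days)
```

At retrieval time, such a weight can be multiplied into the similarity score so that stale memories rank lower than fresh ones.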

Core features of Memori

1. Automatic Memory Extraction

It automatically extracts the following from your input:

  • Your personal preferences
  • Factual information
  • Events and their timing
  • Intentions and relationships
  • Content of images/videos

Just as humans automatically pick out the "key points" of a conversation.
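A common way to implement this kind of extraction is to ask the model itself to return structured JSON. The prompt template and parser below are a hypothetical sketch, not Memori's actual prompt or schema.

```python
import json

# Hypothetical extraction prompt: the LLM is asked to reply with JSON only.
EXTRACTION_PROMPT = """\
Extract durable facts from the user message below.
Return a JSON list of objects with "type" (preference|fact|event|intent)
and "content". Return [] if nothing is worth remembering.

Message: {message}
"""

def parse_extraction(llm_response: str) -> list[dict]:
    """Parse the model's JSON reply, discarding malformed output."""
    try:
        items = json.loads(llm_response)
    except json.JSONDecodeError:
        return []
    if not isinstance(items, list):
        return []
    return [m for m in items
            if isinstance(m, dict) and {"type", "content"} <= m.keys()]
```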

2. Automatic memory writing

Memori labels each memory with:

  • Timestamp
  • Tags (e.g. preference / event / fact)
  • A short summary
  • Priority and “freshness” weight

The storage layer is built on vector databases (e.g. PGVector / Chroma / Milvus).

3. Auto-retrieve + auto-complete context

When an LLM needs to answer a question, it will:

  • Automatically search for relevant memories
  • Feed them back to the model as "additional context"
  • Make answers more coherent, sustainable, and personalized

Unlike regular RAG, Memori is a “long-term learner”.
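The retrieval step boils down to ranking stored memories by vector similarity and prepending the best matches to the prompt. A stdlib-only sketch of that idea; in a real deployment the vector database would do the ranking:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_context(query_vec: list[float],
                  memories: list[tuple[str, list[float]]],
                  k: int = 3) -> str:
    """Rank (text, vector) memories against the query and format the
    top-k as an extra context block for the LLM prompt."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]),
                    reverse=True)
    lines = [f"- {text}" for text, _ in ranked[:k]]
    return "Relevant memories:\n" + "\n".join(lines)
```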

4. Multimodal support

It remembers not only text, but also:

  • Image content (OCR + image embedding)
  • Video frames, motion information
  • Audio content (speech-to-text + embedding)

This makes it ideal for applications such as digital humans, educational assistants, or monitoring analytics.
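A multimodal pipeline like this typically normalizes every input to text before embedding it. The router below is a hypothetical sketch: the per-modality handlers (OCR, speech-to-text, frame analysis) are stand-ins for real models.

```python
from pathlib import Path
from typing import Callable

def ingest(path: str, handlers: dict[str, Callable[[str], str]]) -> str:
    """Route a file to a modality-specific converter by extension;
    each handler returns text that can be embedded and stored."""
    kind = {".png": "image", ".jpg": "image", ".mp4": "video",
            ".wav": "audio", ".mp3": "audio"}.get(
        Path(path).suffix.lower(), "text")
    return handlers[kind](path)
```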

What can it be used for?

1) Be a real personal AI assistant

  • Remember what you like to drink
  • Remember your work plan
  • Remember your learning progress
  • Remember the articles you have read in the past
  • Remember the topics you care about and relationships

Stronger and more open than ChatGPT’s Memory feature.

2) Add long-term memory to the intelligent agent

For the bots and workflows you often build:

  • Telegram Bot
  • WordPress automation workflows
  • Cloudflare Worker + Webhook bot
  • Dify + external database
  • Automated blogging / summarizing documents / knowledge pipelines

Make agents less of a “one-time tool.”

3) Knowledge Base System (AI PKM)

Memori can continuously absorb:

  • Your PDFs
  • Your notes
  • Your web-page clippings
  • Your conversation transcripts

Build a “second brain” that belongs to you.

Project Structure (Brief)

Memori is roughly made up of three layers:

  1. Extraction Layer: The LLM extracts key information from the input
  2. Storage layer: Vector database holds memory + tags
  3. Retrieval layer: Automatically retrieves memories and injects context

The entire system can be deployed with Docker, on Cloudflare, or on a self-hosted server.
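The three layers above can be wired together in a few lines. A toy end-to-end sketch, where `extract` and `embed` are injected stand-ins for the LLM extractor and the embedding model (both hypothetical):

```python
import math
from typing import Callable

def _cos(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def remember(message: str, store: list,
             extract: Callable, embed: Callable) -> None:
    """Extraction layer -> storage layer: pull facts out of a message
    and persist them as (text, vector) pairs."""
    for fact in extract(message):
        store.append((fact, embed(fact)))

def recall(question: str, store: list,
           embed: Callable, k: int = 2) -> list[str]:
    """Retrieval layer: return the k stored facts closest to the question."""
    qv = embed(question)
    ranked = sorted(store, key=lambda m: _cos(qv, m[1]), reverse=True)
    return [fact for fact, _ in ranked[:k]]
```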

GitHub: https://github.com/GibsonAI/Memori