Memobase is a user profile-based memory system designed to bring long-term user memory to your Generative AI (GenAI) applications. Whether you're building virtual companions, educational tools, or personalized assistants, Memobase empowers your AI to remember, understand, and evolve with your users.
Memobase provides structured user profiles. Check out the result below (compared with mem0) from a 900-turn real-world chat:
Profile Output
{
  "basic_info": {
    "language_spoken": ["English", "Korean"],
    "name": "오*영"
  },
  "demographics": {
    "marital_status": "married"
  },
  "education": {
    "notes": "Had an English teacher who emphasized capitalization rules during school days",
    "major": "국어국문학과 (Korean Language and Literature)"
  },
  "interest": {
    "games": "User is interested in Cyberpunk 2077 and wants to create a game better than it",
    "youtube_channels": "Kurzgesagt",
    ...
  },
  "psychological": {...},
  "work": {"working_industry": ..., "title": ...},
  ...
}
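Since the profile is plain nested JSON (topics mapping to sub-topic fields), it is straightforward to consume in code. A minimal sketch, using a hand-written stand-in for the output above:

```python
# Hand-written stand-in for the profile JSON shown above (truncated fields omitted).
profile = {
    "basic_info": {"language_spoken": ["English", "Korean"], "name": "오*영"},
    "demographics": {"marital_status": "married"},
    "interest": {"games": "User is interested in Cyberpunk 2077"},
}

# Walk topics and sub-topics generically, e.g. to render them in a prompt.
for topic, fields in profile.items():
    for sub_topic, content in fields.items():
        print(f"{topic}::{sub_topic}: {content}")
```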
- 🎯 Memory for User, not Agent: Define and control exactly what user information your AI captures.
- ➡️ Time-aware Memory: Memobase saves specific dates in profiles to prevent outdated information from affecting your AI. Check out Memobase events for sequential events (episodic memory).
- 🖼️ Controllable Memory: Among all types of memory, only some will enhance your product experience. Memobase offers flexible configuration so you can design the profile yourself.
- 🔌 Easy Integration: Minimal code changes to integrate with your existing LLM stack, via the API or the Python/Node/Go SDKs.
- ⚡️ Batch Processing: Industry-leading speeds via non-embedding system and session buffer. Fast & Cheap.
- 🚀 Production Ready: Battle-tested by our partners in production.

How does Memobase work?
-
Start your Memobase backend. You should have the following two things to continue:
- A project URL (defaults to http://localhost:8019)
- A project token (defaults to secret)

Install the Python SDK:
pip install memobase

You're now ready to make your AI remember its users.
Here's a step-by-step guide and breakdown for you.
Tip
You can just run this equivalent quickstart script
Or you can keep things super easy by using the OpenAI SDK with Memobase, or Ollama with Memobase.
from memobase import MemoBaseClient, ChatBlob
mb = MemoBaseClient(
    project_url=PROJECT_URL,
    api_key=PROJECT_TOKEN,
)
assert mb.ping()
uid = mb.add_user({"any_key": "any_value"})
mb.update_user(uid, {"any_key": "any_value2"})
u = mb.get_user(uid)
print(u)
# mb.delete(uid)
In Memobase, all data is attached to a user as blobs, which you can insert, get, and delete:
messages = [
    {
        "role": "user",
        "content": "Hello, I'm Gus",
    },
    {
        "role": "assistant",
        "content": "Hi, nice to meet you, Gus!",
    },
]
bid = u.insert(ChatBlob(messages=messages))
print(u.get(bid)) # not found once you flush the memory.
# u.delete(bid)
By default, Memobase removes blobs once they're processed. This means that, apart from the extracted memory, your data is not stored in Memobase. You can persist the blobs by adjusting the configuration file.
u.flush()
And what will you get?
print(u.profile())
# [UserProfile(topic="basic_info", sub_topic="name", content="Gus",...)], []
u.profile() will return a list of profiles learned from this user, each including a topic, sub_topic, and content. As you insert more blobs, the profiles will become richer.
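Because each profile entry carries a topic and a sub_topic, a common pattern is to regroup the flat list into a nested dict before rendering it. A sketch, using a hypothetical lightweight `Profile` stand-in (the real SDK returns `UserProfile` objects with the same fields):

```python
from collections import defaultdict
from dataclasses import dataclass

# Lightweight stand-in for the SDK's UserProfile (topic/sub_topic/content fields).
@dataclass
class Profile:
    topic: str
    sub_topic: str
    content: str

def group_profiles(profiles):
    """Regroup a flat profile list into {topic: {sub_topic: content}}."""
    grouped = defaultdict(dict)
    for p in profiles:
        grouped[p.topic][p.sub_topic] = p.content
    return dict(grouped)

flat = [
    Profile("basic_info", "name", "Gus"),
    Profile("interest", "foods", "Mexican cuisine"),
]
print(group_profiles(flat))
```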
Why is a flush needed?
In Memobase, we don't process user memory on the hot path. Recently inserted blobs are held in a buffer zone. When the buffer grows too large (e.g., 1024 tokens) or stays idle for an extended period (e.g., 1 hour), Memobase flushes the entire buffer into memory. Alternatively, you can call flush() to decide manually when to flush, such as when a chat session is closed in your app.
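The buffer behavior can be pictured as follows. This is an illustrative sketch of the flush policy (size threshold or idle timeout), not the server's actual implementation:

```python
import time

class BufferZone:
    """Illustrative model of the session buffer: collect blobs, then flush
    them to long-term memory when the buffer grows too large or goes idle."""

    def __init__(self, max_tokens=1024, idle_seconds=3600):
        self.max_tokens = max_tokens
        self.idle_seconds = idle_seconds
        self.blobs, self.token_count = [], 0
        self.last_insert = time.monotonic()

    def insert(self, blob, tokens):
        self.blobs.append(blob)
        self.token_count += tokens
        self.last_insert = time.monotonic()
        if self.token_count >= self.max_tokens:
            return self.flush()  # size-triggered flush
        return None

    def maybe_flush_idle(self, now=None):
        now = time.monotonic() if now is None else now
        if self.blobs and now - self.last_insert >= self.idle_seconds:
            return self.flush()  # idle-triggered flush
        return None

    def flush(self):
        # Hand the buffered blobs off to memory extraction and reset.
        processed, self.blobs, self.token_count = self.blobs, [], 0
        return processed
```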
Memobase has a context API that packs everything you need into a single string, which you can insert directly into your prompt:
print(u.context(max_token_size=500, prefer_topics=["basic_info"]))
Something like:
<memory>
# Below is the user profile:
- basic_info::name: Gus
...
# Below is the latest events of the user:
2025/02/24 04:25PM:
- work::meetings: Scheduled a meeting with John.
...
</memory>
Please provide your answer using the information within the <memory> tag at the appropriate time.
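Injecting the context string into a prompt is plain string work. A minimal sketch (the `<memory>` block below is hand-written in the same shape as the example above, not a live `u.context()` call):

```python
SYSTEM_TEMPLATE = """You are a helpful assistant.

{memory_context}

Please provide your answer using the information within the <memory> tag at the appropriate time."""

def build_system_prompt(memory_context: str) -> str:
    """Drop the Memobase context string straight into the system prompt."""
    return SYSTEM_TEMPLATE.format(memory_context=memory_context)

# Hand-written context in the same shape u.context() returns.
context = "<memory>\n# Below is the user profile:\n- basic_info::name: Gus\n</memory>"
messages = [
    {"role": "system", "content": build_system_prompt(context)},
    {"role": "user", "content": "What's my name?"},
]
```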
Check out the detailed parameters here.
- Check out the quickstart script for more details.
- You may want to explore Memobase's customization options to make sure the system works as you expect.
- If you want to test Memobase on your own data, we offer a script that lets you set up multiple chat sessions and see how the memory grows.
By placing profiles into your AI's prompt (e.g., the system prompt).
Demo
PROFILES = "\n".join([p.describe for p in u.profile()])
print(PROFILES)
# basic_info: name - Gus
# basic_info: age - 25
# ...
# interest: foods - Mexican cuisine
# psychological: goals - Build something that may be useful
# ...
Too much information is hidden in the conversations between users and AI; that's why you need a new way of tracking data to record user preferences and behavior.
Demo
profiles = u.profile()

def under_age_30(p):
    return p.sub_topic == "age" and int(p.content) < 30

def love_cat(p):
    return p.topic == "interest" and p.sub_topic == "pets" and "cat" in p.content

is_user_under_30 = any(under_age_30(p) for p in profiles)
is_user_love_cat = any(love_cat(p) for p in profiles)
...
Not everyone is looking for Grammarly; it's always better to offer something your users actually want.
Demo
def pick_an_ad(profiles):
    work_titles = [p for p in profiles if p.topic == "work" and p.sub_topic == "title"]
    if not work_titles:
        return None
    wt = work_titles[0].content
    if wt == "Software Engineer":
        return "Deep Learning Stuff"
    elif wt == "some job":
        return "some ads"
...
For detailed usage instructions, visit the documentation.
Star Memobase on GitHub to support us and receive instant notifications!
Join the community for support and discussions:
Or Just email us ❤️
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.