Persistent memory system for LLMs that actually learns mid-conversation
Every LLM conversation starts from zero. RAG helps, but it can't learn from what's happening right now. MDA is my attempt to fix that. MDA encodes knowledge as associative entity networks, updates in real time via the Oja rule (no backprop, no reindexing), and retrieves context by activating concept graphs rather than by similarity search. It runs CPU-first, is model-agnostic, and works with Ollama/OpenAI/Anthropic out of the box. Available as an MCP server, with GPU acceleration support for batch workloads.
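For readers unfamiliar with it, the Oja rule is a classic Hebbian-style local learning rule: each weight vector is nudged toward the current input in proportion to its activation, with a decay term that keeps the norm bounded, so no backprop pass or index rebuild is needed. The sketch below is a minimal, generic illustration of that update (it is not MDA's actual code; the function name and learning rate are my own choices):

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Oja-rule step: Δw = lr * y * (x - y * w), where y = w·x.
    The -y*w decay term keeps ||w|| bounded near 1, unlike plain
    Hebbian learning, which diverges."""
    y = np.dot(w, x)                  # post-synaptic activation
    return w + lr * y * (x - y * w)

# Toy run: under repeated updates, w converges toward the dominant
# direction of the input distribution (here the first axis).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
w /= np.linalg.norm(w)
cov = np.diag([4.0, 1.0, 0.5, 0.1])   # variance is largest on axis 0
for _ in range(5000):
    x = rng.multivariate_normal(np.zeros(4), cov)
    w = oja_update(w, x, lr=0.005)
print(abs(w[0]), np.linalg.norm(w))    # |w[0]| near 1, norm near 1
```

The appeal for an online memory system is that each update is O(d) per association and strictly local, so the network can strengthen co-activated entities mid-conversation without touching the rest of the graph.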
LocalLlama