
Run Hindsight with Ollama: Local AI Memory, No API Keys Needed
Running Hindsight with Ollama gives you a fully local AI memory system. No API keys, no cloud costs, no data leaving your machine. If you want persistent agent memory powered by open-source models on your own hardware, this tutorial walks through the complete setup.
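Before wiring Hindsight in, it helps to confirm Ollama's OpenAI-compatible endpoint is up, since a local memory stack talks to the model through that interface. A minimal sketch: the localhost:11434 URL and the `ollama pull` step are standard Ollama conventions, but the HINDSIGHT_* environment variables at the end are hypothetical placeholders, not documented Hindsight config.

```python
# Sketch: confirm the local Ollama endpoint works before pointing Hindsight at it.
# Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1 and
# ignores the API key, so any placeholder string works. No cloud involved.
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama server
    api_key="ollama",                      # placeholder; Ollama ignores it
)

resp = client.chat.completions.create(
    model="llama3.1",  # pull it first: `ollama pull llama3.1`
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(resp.choices[0].message.content)

# Hypothetical: environment variables a Hindsight deployment might read to
# target the same local endpoint. Names are illustrative; check the docs.
os.environ["HINDSIGHT_LLM_BASE_URL"] = "http://localhost:11434/v1"
os.environ["HINDSIGHT_LLM_MODEL"] = "llama3.1"
```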

The Open-Source MCP Memory Server Your AI Agent Is Missing
AI agents forget everything between sessions. Hindsight gives them persistent, structured memory via MCP. One Docker command to run the full stack locally. Connect any MCP-compatible client. Three core operations: retain (store), recall (search), reflect (reason) — plus mental models that auto-update as memories grow.
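To make retain, recall, and reflect concrete, here is a sketch of driving them through the official MCP Python SDK (`pip install mcp`). The three tool names come from the description above; the Docker image name and the argument keys are assumptions, so read this as the shape of the interaction rather than Hindsight's documented schema.

```python
# Sketch: exercising retain / recall / reflect over MCP.
# Assumes a Hindsight MCP server runnable via Docker; the image name and
# the tool argument keys below are illustrative guesses.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "hindsight/mcp-server"],  # hypothetical image
)

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # retain: store a memory
            await session.call_tool(
                "retain", {"content": "User prefers concise answers."}
            )

            # recall: search stored memories
            hits = await session.call_tool(
                "recall", {"query": "user preferences"}
            )
            print(hits)

            # reflect: reason over accumulated memories
            insight = await session.call_tool(
                "reflect", {"topic": "communication style"}
            )
            print(insight)

asyncio.run(main())
```

Because the client only ever sees three tools, any MCP-compatible agent framework can use the same calls without knowing how memories are stored.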

I Gave 100+ LLMs a Permanent Memory With One Python Package
hindsight-litellm adds persistent memory to any LLM provider via LiteLLM — OpenAI, Anthropic, Groq, Azure, Bedrock, Vertex AI, and 100+ more. Three lines of setup, and every LLM call automatically gets context from past conversations.
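What those three lines might look like, sketched under assumptions: `hindsight_litellm.enable()` and its keyword argument are hypothetical stand-ins for the package's real setup call (check its README), while `litellm.completion()` is LiteLLM's standard API.

```python
import litellm
import hindsight_litellm

# Hypothetical setup call; the real function name and kwargs may differ.
# The idea: hook Hindsight into LiteLLM so every completion stores the
# exchange and injects relevant past context automatically.
hindsight_litellm.enable(user_id="demo-user")  # hypothetical API

# A plain LiteLLM call; swap the model string for any supported provider.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What did we decide last session?"}],
)
print(response.choices[0].message.content)
```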

Your CrewAI Agents Forget Everything Between Runs. Here's the Fix.
CrewAI agents lose all memory when a crew finishes. hindsight-crewai plugs into CrewAI's ExternalMemory to persist knowledge across runs: three lines of setup, and your agents automatically store task outputs and recall relevant context.
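A sketch of the wiring, with one loud assumption: CrewAI's `ExternalMemory` and its pluggable `storage=` parameter are real CrewAI APIs, but `HindsightStorage` is a guessed class name for whatever hindsight-crewai actually exports.

```python
from crewai import Agent, Crew, Task
from crewai.memory.external.external_memory import ExternalMemory

# Hypothetical import; the actual class name in hindsight-crewai may differ.
from hindsight_crewai import HindsightStorage

researcher = Agent(
    role="Researcher",
    goal="Summarize new findings on local LLM tooling",
    backstory="A meticulous analyst who builds on past research.",
)

task = Task(
    description="Summarize what we learned in previous runs and extend it.",
    expected_output="A short summary that references earlier findings.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    # ExternalMemory is CrewAI's hook for pluggable memory backends;
    # a Hindsight-backed storage persists memories across crew runs.
    external_memory=ExternalMemory(storage=HindsightStorage()),
)

crew.kickoff()
```

Because the storage lives outside the crew, a second `kickoff()` on a fresh crew can still recall what the first run stored.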