Quick Start

Get up and running with Hindsight in 60 seconds.

Start the API Server

pip install hindsight-api
export OPENAI_API_KEY=sk-xxx
export HINDSIGHT_API_LLM_API_KEY=$OPENAI_API_KEY

hindsight-api

API available at http://localhost:8888
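
Once the server is running, you can sanity-check it from Python. A minimal sketch, assuming only that the server answers plain HTTP on the base URL above (the exact response body and status are defined by the Hindsight API):

import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:8888") as resp:
        print("Server reachable, HTTP status:", resp.status)
except urllib.error.HTTPError as exc:
    # An HTTP error response still means the server is up and answering.
    print("Server reachable, HTTP status:", exc.code)
except urllib.error.URLError as exc:
    print("Server not reachable:", exc.reason)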

LLM Provider

Hindsight requires an LLM with structured output support. Recommended: Groq with gpt-oss-20b for fast, cost-effective inference. See LLM Providers for more details.


Use the Client

pip install hindsight-client

from hindsight_client import Hindsight

client = Hindsight(base_url="http://localhost:8888")

# Retain: Store information
client.retain(bank_id="my-bank", content="Alice works at Google as a software engineer")

# Recall: Search memories
client.recall(bank_id="my-bank", query="What does Alice do?")

# Reflect: Generate disposition-aware response
client.reflect(bank_id="my-bank", query="Tell me about Alice")
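
Each call also returns a value you can inspect. A minimal sketch, continuing from the client above and assuming only that the return values are printable Python objects (their exact shape is defined by hindsight_client):

# Recall returns the matching memories; reflect returns a generated answer.
memories = client.recall(bank_id="my-bank", query="What does Alice do?")
print(memories)

answer = client.reflect(bank_id="my-bank", query="Tell me about Alice")
print(answer)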

What's Happening

Operation   What it does
Retain      Content is processed, facts are extracted, entities are identified and linked in a knowledge graph
Recall      Four search strategies (semantic, keyword, graph, temporal) run in parallel to find relevant memories
Reflect     Retrieved memories are used to generate a disposition-aware response
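
To see this in action, retain a few separate facts that mention the same entity and then ask a question that spans them. This sketch reuses only the retain and recall calls shown above; how well facts are extracted and linked depends on the configured LLM:

# Two separate facts about the same entity ("Alice") are linked in the knowledge graph.
client.retain(bank_id="my-bank", content="Alice mentors new engineers on the search team")
client.retain(bank_id="my-bank", content="Alice presented the team's roadmap last week")

# A single query can now draw on all of the linked memories about Alice.
memories = client.recall(bank_id="my-bank", query="What is Alice responsible for?")
print(memories)  # result structure is defined by hindsight_client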

Next Steps

  • Retain — Advanced options for storing memories
  • Recall — Search and retrieval strategies
  • Reflect — Disposition-aware reasoning
  • Memory Banks — Configure disposition and background
  • Server Deployment — Docker Compose, Helm, and production setup