Configuration

Complete reference for configuring Hindsight server through environment variables and configuration files.

Environment Variables

Hindsight is configured entirely through environment variables, making it easy to deploy across different environments and container orchestration platforms.

All environment variable names and defaults are defined in hindsight_api.config. You can use MemoryEngine.from_env() to create a MemoryEngine instance configured from environment variables:

from hindsight_api import MemoryEngine

# Create from environment variables
memory = MemoryEngine.from_env()
await memory.initialize()

LLM Provider Configuration

Configure the LLM provider used for fact extraction, entity resolution, and reasoning operations.

Common LLM Settings

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| HINDSIGHT_API_LLM_PROVIDER | LLM provider: groq, openai, gemini, ollama | groq | Yes |
| HINDSIGHT_API_LLM_API_KEY | API key for the LLM provider | - | Yes (except ollama) |
| HINDSIGHT_API_LLM_MODEL | Model name | Provider-specific | No |
| HINDSIGHT_API_LLM_BASE_URL | Custom LLM endpoint | Provider default | No |

Provider-Specific Examples

Groq (Recommended for Fast Inference)

export HINDSIGHT_API_LLM_PROVIDER=groq
export HINDSIGHT_API_LLM_API_KEY=gsk_xxxxxxxxxxxx
export HINDSIGHT_API_LLM_MODEL=openai/gpt-oss-20b

OpenAI

export HINDSIGHT_API_LLM_PROVIDER=openai
export HINDSIGHT_API_LLM_API_KEY=sk-xxxxxxxxxxxx
export HINDSIGHT_API_LLM_MODEL=gpt-4o

Gemini

export HINDSIGHT_API_LLM_PROVIDER=gemini
export HINDSIGHT_API_LLM_API_KEY=xxxxxxxxxxxx
export HINDSIGHT_API_LLM_MODEL=gemini-2.0-flash

Ollama (Local, No API Key)

export HINDSIGHT_API_LLM_PROVIDER=ollama
export HINDSIGHT_API_LLM_BASE_URL=http://localhost:11434/v1
export HINDSIGHT_API_LLM_MODEL=llama3.1
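
If the model is not already available locally, pull it before starting Hindsight. A minimal sketch, assuming the Ollama CLI is installed:

# Download the model to the local Ollama store (one-time step)
ollama pull llama3.1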

OpenAI-Compatible Endpoints

export HINDSIGHT_API_LLM_PROVIDER=openai
export HINDSIGHT_API_LLM_BASE_URL=https://your-endpoint.com/v1
export HINDSIGHT_API_LLM_API_KEY=your-api-key
export HINDSIGHT_API_LLM_MODEL=your-model-name

Database Configuration

Configure the PostgreSQL database connection and behavior.

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| HINDSIGHT_API_DATABASE_URL | PostgreSQL connection string | - | Yes* |

*Note: If HINDSIGHT_API_DATABASE_URL is not set, the server falls back to pg0, an embedded PostgreSQL instance.
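
To point Hindsight at an existing PostgreSQL instance instead, set the connection string explicitly. The credentials and database name below are illustrative:

# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
export HINDSIGHT_API_DATABASE_URL=postgresql://hindsight:hindsight_dev@localhost:5432/hindsight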

MCP Server Configuration

Configure the Model Context Protocol (MCP) server for AI assistant integrations.

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| HINDSIGHT_API_MCP_ENABLED | Enable the MCP server | true | No |

# Enable MCP server (default)
export HINDSIGHT_API_MCP_ENABLED=true

# Disable MCP server
export HINDSIGHT_API_MCP_ENABLED=false

Embeddings Configuration

Configure the embeddings provider for semantic search. By default, Hindsight uses local SentenceTransformers models.

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| HINDSIGHT_API_EMBEDDINGS_PROVIDER | Provider: local or tei | local | No |
| HINDSIGHT_API_EMBEDDINGS_LOCAL_MODEL | Model name for the local provider | BAAI/bge-small-en-v1.5 | No |
| HINDSIGHT_API_EMBEDDINGS_TEI_URL | TEI server URL | - | Yes (if provider is tei) |

Local Provider (Default)

Uses SentenceTransformers to run embedding models locally. Good for development and smaller deployments.

export HINDSIGHT_API_EMBEDDINGS_PROVIDER=local
export HINDSIGHT_API_EMBEDDINGS_LOCAL_MODEL=BAAI/bge-small-en-v1.5

TEI Provider (HuggingFace Text Embeddings Inference)

Uses a remote TEI server for high-performance inference. Recommended for production deployments.

export HINDSIGHT_API_EMBEDDINGS_PROVIDER=tei
export HINDSIGHT_API_EMBEDDINGS_TEI_URL=http://localhost:8080

Warning: All embedding models must produce 384-dimensional vectors to match the database schema.
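
When using the TEI provider, you can verify the dimension of a running server before wiring it up. A quick sketch against TEI's /embed endpoint, assuming curl and jq are installed and the server from the example above is listening on port 8080:

# POST a test input and count the elements of the returned vector; expect 384
curl -s http://localhost:8080/embed \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "dimension check"}' | jq '.[0] | length'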

Reranker Configuration

Configure the cross-encoder reranker that improves search result relevance. By default, Hindsight uses local SentenceTransformers models.

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| HINDSIGHT_API_RERANKER_PROVIDER | Provider: local or tei | local | No |
| HINDSIGHT_API_RERANKER_LOCAL_MODEL | Model name for the local provider | cross-encoder/ms-marco-MiniLM-L-6-v2 | No |
| HINDSIGHT_API_RERANKER_TEI_URL | TEI server URL | - | Yes (if provider is tei) |

Local Provider (Default)

Uses SentenceTransformers CrossEncoder to run reranking locally.

export HINDSIGHT_API_RERANKER_PROVIDER=local
export HINDSIGHT_API_RERANKER_LOCAL_MODEL=cross-encoder/ms-marco-MiniLM-L-6-v2

TEI Provider (HuggingFace Text Embeddings Inference)

Uses a remote TEI server with a reranker model.

export HINDSIGHT_API_RERANKER_PROVIDER=tei
export HINDSIGHT_API_RERANKER_TEI_URL=http://localhost:8081

Tip: When using TEI, you can run separate servers for embeddings and reranking, or use a single server if it supports both operations with your chosen model.
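
For example, a two-server setup might look like the following. This is an illustrative sketch, not an official deployment recipe; the Docker image tag and host ports are assumptions, so check the TEI documentation for current tags:

# Embeddings server on port 8080 (image tag is illustrative)
docker run -d -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id BAAI/bge-small-en-v1.5

# Reranker server on port 8081
docker run -d -p 8081:80 ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id cross-encoder/ms-marco-MiniLM-L-6-v2

# Point Hindsight at both
export HINDSIGHT_API_EMBEDDINGS_PROVIDER=tei
export HINDSIGHT_API_EMBEDDINGS_TEI_URL=http://localhost:8080
export HINDSIGHT_API_RERANKER_PROVIDER=tei
export HINDSIGHT_API_RERANKER_TEI_URL=http://localhost:8081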

Configuration Files

.env File

On startup, the Hindsight API also looks for a .env file and loads any variables it defines:

# .env

# Database
HINDSIGHT_API_DATABASE_URL=postgresql://hindsight:hindsight_dev@localhost:5432/hindsight

# LLM
HINDSIGHT_API_LLM_PROVIDER=groq
HINDSIGHT_API_LLM_API_KEY=gsk_xxxxxxxxxxxx

# Embeddings (optional, defaults to local)
# HINDSIGHT_API_EMBEDDINGS_PROVIDER=local
# HINDSIGHT_API_EMBEDDINGS_LOCAL_MODEL=BAAI/bge-small-en-v1.5

# Reranker (optional, defaults to local)
# HINDSIGHT_API_RERANKER_PROVIDER=local
# HINDSIGHT_API_RERANKER_LOCAL_MODEL=cross-encoder/ms-marco-MiniLM-L-6-v2

For configuration issues not covered here, please open an issue on GitHub.