# Configuration

## Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `LLM_PROVIDER` | No | `openai` | LLM provider to use (`openai` or `bedrock`) |
| `OPENAI_API_KEY` | Yes* | — | OpenAI API key (*required when `LLM_PROVIDER=openai`) |
| `PORT` | No | `3001` | Server port |
| `MONGODB_URI` | Yes | — | MongoDB connection string |
| `AUTH_SECRET` | Yes | — | Secret for JWT auth verification |
| `LOG_LEVEL` | No | `info` | Logging level (`debug`, `info`, `warn`, `error`) |
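At startup, the server can validate these variables and apply defaults before anything else runs. The sketch below is illustrative, not the project's actual loader; the `loadConfig` function and `Config` interface are hypothetical, while the variable names and defaults match the table above.

```typescript
// Hypothetical config loader: variable names and defaults come from the
// table above; the function itself is an illustrative sketch.
interface Config {
  llmProvider: "openai" | "bedrock";
  openaiApiKey?: string;
  port: number;
  mongodbUri: string;
  authSecret: string;
  logLevel: "debug" | "info" | "warn" | "error";
}

function loadConfig(env: Record<string, string | undefined>): Config {
  // Fail fast on unconditionally required variables.
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
    return value;
  };

  const config: Config = {
    llmProvider: (env.LLM_PROVIDER ?? "openai") as Config["llmProvider"],
    openaiApiKey: env.OPENAI_API_KEY,
    port: Number(env.PORT ?? 3001),
    mongodbUri: required("MONGODB_URI"),
    authSecret: required("AUTH_SECRET"),
    logLevel: (env.LOG_LEVEL ?? "info") as Config["logLevel"],
  };

  // OPENAI_API_KEY is only required when the OpenAI provider is selected.
  if (config.llmProvider === "openai" && !config.openaiApiKey) {
    throw new Error("OPENAI_API_KEY is required when LLM_PROVIDER=openai");
  }
  return config;
}
```

Validating once at boot, rather than reading `process.env` throughout the codebase, surfaces a missing `MONGODB_URI` or `AUTH_SECRET` immediately instead of on the first request.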
## Provider Configuration

### OpenAI
```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
```

Supported models: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-3.5-turbo`
### AWS Bedrock (Planned)
```env
LLM_PROVIDER=bedrock
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
BEDROCK_MODEL_ID=anthropic.claude-3-haiku-20240307-v1:0
```

## Rate Limiting
Rate limits are enforced per user based on token consumption. Configure via:

```env
RATE_LIMIT_RPM=60      # Requests per minute per user
RATE_LIMIT_TPM=100000  # Tokens per minute per user
```

## Docker Compose
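One way to enforce both limits at once is a per-user sliding one-minute window that tracks request count and tokens consumed. The class below is a hypothetical sketch of that idea, not the engine's actual implementation; only the RPM/TPM semantics come from the settings above.

```typescript
// Illustrative per-user sliding-window limiter combining the RPM and TPM
// limits described above. Names here are hypothetical.
class UserRateLimiter {
  private events: { at: number; tokens: number }[] = [];

  constructor(private rpm: number, private tpm: number) {}

  // Returns true if a request consuming `tokens` tokens is allowed now,
  // recording it against the user's one-minute window if so.
  allow(tokens: number, now: number = Date.now()): boolean {
    const windowStart = now - 60_000; // one-minute sliding window
    this.events = this.events.filter((e) => e.at > windowStart);

    const requests = this.events.length;
    const used = this.events.reduce((sum, e) => sum + e.tokens, 0);
    if (requests + 1 > this.rpm || used + tokens > this.tpm) return false;

    this.events.push({ at: now, tokens });
    return true;
  }
}
```

In practice the server would keep one limiter per authenticated user (keyed by the JWT subject) and reject over-limit requests with HTTP 429.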
```yaml
version: '3.8'
services:
  norman-engine:
    build: .
    ports:
      - "3001:3001"
    environment:
      - LLM_PROVIDER=openai
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - MONGODB_URI=mongodb://mongo:27017/norman-engine
      - AUTH_SECRET=${AUTH_SECRET}
    depends_on:
      - mongo
  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
```
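Since the Compose file references `${OPENAI_API_KEY}` and `${AUTH_SECRET}`, those values must come from the shell environment or a `.env` file next to `docker-compose.yml`. A typical launch might look like this (the `.env` contents shown are placeholders, not real credentials):

```shell
# .env file alongside docker-compose.yml, e.g.:
#   OPENAI_API_KEY=sk-...
#   AUTH_SECRET=change-me
docker compose up -d                   # start the engine and MongoDB
docker compose logs -f norman-engine   # follow server logs
```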