# Architecture

## Memory Architecture
Norman Agent implements a multi-layer memory system:
### Conversation Memory (Short-term)
Stored per `userId` + `chatId` pair. Recent messages are injected as system context before each LLM call:

```
Previous conversation context:
User: What's the weather?
Assistant: It's sunny and 22°C.
```

Configurable via `CONVERSATION_HISTORY_LIMIT` (default: 50 messages).
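The injection step above can be sketched as follows. This is an illustrative assumption, not the actual Norman Agent code: the `ConversationTurn` shape and `buildConversationContext` name are hypothetical, but the trimming to the last `CONVERSATION_HISTORY_LIMIT` turns and the `User:`/`Assistant:` line format follow the example in the docs.

```typescript
// Hypothetical sketch: building the system-context block from recent turns.
interface ConversationTurn {
  role: "User" | "Assistant";
  content: string;
}

const CONVERSATION_HISTORY_LIMIT = 50; // documented default

function buildConversationContext(turns: ConversationTurn[]): string {
  // Keep only the most recent turns, then render them as labeled lines.
  const recent = turns.slice(-CONVERSATION_HISTORY_LIMIT);
  const lines = recent.map((t) => `${t.role}: ${t.content}`);
  return ["Previous conversation context:", ...lines].join("\n");
}
```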
### Long-term Memory
Persistent across conversations. Searched semantically when a new user message arrives — relevant memories are injected as context:

```
Relevant long-term knowledge:
User prefers metric units for temperature.
```

### Memory Types
| Type | Description | Storage |
|---|---|---|
| `conversation` | Chat history turns | Per `chatId` |
| `long_term` | Persistent knowledge | Per `userId` |
| `task` | Task execution context | Per `taskId` |
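The three scopes in the table can be expressed as a keying scheme. This is a minimal sketch under assumptions — the `MemoryRecord` shape and `scopeKey` helper are hypothetical, not Norman Agent's storage API; only the type names and per-ID scoping come from the table.

```typescript
// Hypothetical sketch of how each memory type could be keyed in storage.
type MemoryType = "conversation" | "long_term" | "task";

interface MemoryRecord {
  type: MemoryType;
  content: string;
  userId: string;
  chatId?: string; // set for conversation memories
  taskId?: string; // set for task memories
}

// Derive the storage key from the record's type, per the table above.
function scopeKey(m: MemoryRecord): string {
  switch (m.type) {
    case "conversation":
      return `chat:${m.chatId}`;
    case "long_term":
      return `user:${m.userId}`;
    case "task":
      return `task:${m.taskId}`;
  }
}
```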
## Tool System
Tools are registered via a `ToolRegistry` with a standard interface:
```typescript
interface Tool {
  name: string;
  description: string;
  enabled: boolean;
  parameters: JSONSchema;
  execute(params: any, context: ToolContext): Promise<ToolResponse>;
}
```

### Tool Execution Flow
- Tool schemas injected into system prompt
- LLM responds with `TOOL_CALL:tool_name({"param":"value"})`
- Agent parses calls, executes each tool
- Results replace tool call patterns in response
- Follow-up LLM call synthesizes final answer
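The parse-and-substitute steps of the flow can be sketched as below. This assumes only the documented `TOOL_CALL:tool_name({"param":"value"})` pattern; the `ToolFn` signature, regex, and `resolveToolCalls` function are simplified illustrations, not the actual agent implementation.

```typescript
// Hypothetical sketch: find TOOL_CALL patterns, execute each tool, and
// replace the pattern with the tool's result.
type ToolFn = (params: Record<string, unknown>) => Promise<string>;

const TOOL_CALL_RE = /TOOL_CALL:(\w+)\((\{.*?\})\)/g;

async function resolveToolCalls(
  llmResponse: string,
  registry: Map<string, ToolFn>,
): Promise<string> {
  let result = llmResponse;
  for (const match of llmResponse.matchAll(TOOL_CALL_RE)) {
    const [pattern, name, rawParams] = match;
    const tool = registry.get(name);
    const output = tool
      ? await tool(JSON.parse(rawParams))
      : `[unknown tool: ${name}]`;
    // Results replace the tool call pattern in the response.
    result = result.replace(pattern, output);
  }
  return result;
}
```

In the real flow, the substituted text would then go back to the LLM for the follow-up synthesis call.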
### Built-in Tools
- `file_access` — Read/write files via Norman Library
- `search` — Search knowledge base via Norman Search
- `memory` — Query and store long-term memories
- `web_search` — External web search
## Task System (v1)

### Task Lifecycle
```
pending → running → completed
                  → failed
                  → cancelled
```

### Step Dependencies
Steps declare dependencies by title. The execution engine:
- Identifies all independent steps (no dependencies)
- Runs them in parallel
- On completion, unlocks dependent steps
- Continues until all steps complete or a failure occurs
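The engine loop above can be sketched as follows. The `Step` shape and `executeSteps` function are illustrative assumptions, not the actual engine API; the logic mirrors the four listed steps: find unlocked steps, run them in parallel, mark them done, repeat.

```typescript
// Hypothetical sketch of the dependency-resolution execution loop.
interface Step {
  title: string;
  dependsOn: string[]; // dependencies declared by title
  run: () => Promise<void>;
}

async function executeSteps(steps: Step[]): Promise<void> {
  const done = new Set<string>();
  let remaining = [...steps];
  while (remaining.length > 0) {
    // Steps whose declared dependencies are all satisfied are unlocked.
    const ready = remaining.filter((s) => s.dependsOn.every((d) => done.has(d)));
    if (ready.length === 0) {
      throw new Error("Dependency cycle or unknown dependency title");
    }
    // Unlocked steps run in parallel; any rejection fails the whole task.
    await Promise.all(ready.map((s) => s.run()));
    ready.forEach((s) => done.add(s.title));
    remaining = remaining.filter((s) => !done.has(s.title));
  }
}
```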
### Agent Assignment
Each task spawns worker agents with roles:
- `coordinator` — Manages overall task execution
- `worker` — Executes individual steps
- `synthesizer` — Combines results into final output
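As a rough illustration, the role split might map to one assignment record per agent. Only the three role names come from the docs; the `AgentAssignment` shape, `assignAgents` helper, and one-worker-per-step layout are assumptions.

```typescript
// Hypothetical sketch: one coordinator and one synthesizer per task,
// plus one worker per step.
type AgentRole = "coordinator" | "worker" | "synthesizer";

interface AgentAssignment {
  taskId: string;
  role: AgentRole;
  stepTitle?: string; // workers are bound to an individual step
}

function assignAgents(taskId: string, stepTitles: string[]): AgentAssignment[] {
  return [
    { taskId, role: "coordinator" },
    ...stepTitles.map((stepTitle): AgentAssignment => ({
      taskId,
      role: "worker",
      stepTitle,
    })),
    { taskId, role: "synthesizer" },
  ];
}
```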