Production-ready serverless backend for a personal knowledge graph: DynamoDB, Lambda, Bedrock, MCP, and Step Functions. This repository is the production implementation of the serverless second brain described in the 'From Prototype to Production' essay. While the essay defines the architecture (memory, compute, and interface layers, with "two doors" for humans and agents), this repository is the code that brings it to life.
The system is organized into three layers with clear responsibilities: memory, compute, and interface.
Bedrock provides classification (Claude) and embeddings (Titan, 1,024-dimensional vectors). EventBridge routes events between components. SNS delivers daily digest notifications.
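The embedding call can be sketched with boto3. The model ID, payload shape, and helper names below assume Amazon Titan Text Embeddings V2 and are illustrative; the repository may use different versions or parameters.

```python
import json

# Assumed model ID for Titan Text Embeddings V2 (supports a "dimensions" field).
TITAN_MODEL_ID = "amazon.titan-embed-text-v2:0"

def build_embedding_request(text: str, dimensions: int = 1024) -> dict:
    """Build the invoke_model arguments for a Titan embedding call."""
    return {
        "modelId": TITAN_MODEL_ID,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"inputText": text, "dimensions": dimensions}),
    }

def embed(text: str) -> list[float]:
    """Call Bedrock and return the embedding vector (requires AWS credentials)."""
    import boto3  # AWS SDK; only needed for the live call, not to build the request
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(**build_embedding_request(text))
    return json.loads(response["body"].read())["embedding"]
```

Separating request construction from the network call keeps the payload logic unit-testable without AWS credentials.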
The project follows the four phases defined in the essay, plus additional workstreams: CloudFront for static hosting (#10), data migration (#11), benchmarks (#12), domain-agnostic configuration (#13), observability (#14), and MCP write safety (#15).
All infrastructure is defined with Terraform using reusable modules:
```
terraform/
  environments/dev/   → per-environment configuration
  modules/
    dynamodb/         → single-table design
    lambda/           → compute functions
    api-gateway/      → human door
    step-functions/   → orchestration
    s3/               → content and frontend
    cloudfront/       → CDN + headers
    agentcore/        → agent door
    monitoring/       → dashboards + alarms
```
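The dynamodb module's single-table design can be illustrated with composite key helpers: every entity lives in one table, distinguished by partition and sort key prefixes. The entity names and key formats below are assumptions for illustration, not the repository's actual schema.

```python
# Hypothetical key scheme for a single-table design: notes and their tag
# edges share one table, grouped under the same partition key.
def note_keys(note_id: str) -> dict:
    """Primary key for a note's metadata item."""
    return {"PK": f"NOTE#{note_id}", "SK": "METADATA"}

def note_tag_keys(note_id: str, tag: str) -> dict:
    """A tag relationship stored as a sibling item in the same partition,
    so one Query on PK fetches the note and all of its tags together."""
    return {"PK": f"NOTE#{note_id}", "SK": f"TAG#{tag}"}
```

Prefixed sort keys let a single Query with `begins_with(SK, "TAG#")` retrieve just the relationships, which is the core trade of single-table design: fewer round trips at the cost of more deliberate key modeling.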
This project translates a reference architecture into deployable code. The goal is for any builder to take the repository, configure their domain (legal, research, education) in terraform.tfvars, and deploy a complete second brain with terraform apply. The essay explains the "why" behind each decision; the code implements the "how."
Key concepts referenced in this project:

Serverless: Cloud computing model where the provider manages infrastructure automatically, allowing code execution without provisioning or managing servers, paying only for actual usage.

AWS Lambda: Serverless compute service that runs code in response to events without provisioning or managing servers, automatically scaling from zero to thousands of concurrent executions.

Amazon DynamoDB: Serverless NoSQL database with single-digit millisecond latency at any scale, ideal for applications requiring high performance and automatic scalability.

Amazon Bedrock: Serverless service providing access to foundation models from multiple providers (Anthropic, Meta, Mistral, Amazon) via a unified API, without managing ML infrastructure.

Amazon API Gateway: Managed service for creating, publishing, and managing REST, HTTP, and WebSocket APIs that act as entry points to Lambda functions and other backend services.

AWS Step Functions: Serverless orchestration service that coordinates multiple services into visual workflows using Amazon States Language (ASL), with built-in error handling, retries, and parallel execution.
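A minimal Amazon States Language definition can be built as plain data; the state and function names below are hypothetical, showing only the built-in retry mechanism mentioned above.

```python
import json

# Sketch of an ASL state machine: one Lambda task with exponential-backoff
# retries. "ClassifyNote" and the function name are illustrative.
state_machine = {
    "StartAt": "ClassifyNote",
    "States": {
        "ClassifyNote": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "classify-note"},
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "End": True,
        }
    },
}

# Step Functions accepts the definition as a JSON document.
definition = json.dumps(state_machine)
```

Declaring retries in the state machine, rather than inside the Lambda code, keeps failure handling visible and configurable without redeploying functions.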
Amazon EventBridge: Serverless event bus connecting applications using events, enabling decoupled event-driven architectures with rule-based routing.

Amazon SNS: Pub/sub messaging service that distributes messages to multiple subscribers simultaneously, enabling fan-out patterns and notifications at scale.

Amazon S3: Object storage service with 99.999999999% durability, unlimited scalability, and multiple storage classes for cost optimization.

AWS IAM: Identity and access management service controlling who can do what in your account, with granular policies based on the principle of least privilege.

Knowledge graph: Data structure representing knowledge as a network of entities and relationships, enabling reasoning, connection discovery, and semantic queries over complex domains.
Model Context Protocol (MCP): Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.

Infrastructure as Code (IaC): Practice of defining and managing infrastructure through versioned configuration files instead of manual processes. Foundation of modern operations automation.

Cost optimization (FinOps): Practices and strategies to minimize cloud spending without sacrificing performance, including right-sizing, reservations, spot instances, and eliminating idle resources.

AWS Well-Architected Framework: AWS framework with six pillars of best practices for designing and operating reliable, secure, efficient, and cost-effective cloud systems.