Storage Brain

Architecture

Storage Brain is an edge-native file storage service. This page covers the system design, tech stack, and deployment architecture.

Two-Layer Design

Client App
    |
    v
+-----------------------+
|   Gatekeeper API      |   Cloudflare Worker (Hono)
|   - Auth & quota      |
|   - Presigned URLs    |
|   - File management   |
|   - Workspace CRUD    |
|   - Signed URLs       |
|   - Webhook dispatch  |
+-----------------------+
    |              |
    v              v
+--------+   +-----------+
| Storage|   | Database  |
| Adapter|   |  Adapter  |
+--------+   +-----------+
  |                |
  v                v
R2 / S3       D1 / Postgres

Layer 1: Gatekeeper API

The API layer handles all client-facing requests. Built with Hono on Cloudflare Workers, it provides:

  • Authentication -- API key validation via hashed lookup
  • Quota enforcement -- Per-tenant and per-workspace storage limits checked before upload
  • Upload handshake -- Returns presigned URLs for direct-to-storage uploads
  • File management -- CRUD operations on file records
  • Workspace management -- Create, list, get, update, and delete workspaces within a tenant
  • Signed download URLs -- HMAC token-based URLs for time-limited, unauthenticated downloads
  • Webhooks -- Notification delivery to client-specified URLs after upload
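The signed download URLs above can be sketched with an HMAC token scheme. This is a minimal illustration, not the actual Storage Brain implementation: the function names, token format (`fileId:expires` signed with SHA-256), and query parameter names are assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generate a time-limited, unauthenticated download URL.
// Token = HMAC-SHA256(secret, "fileId:expires"), hex-encoded.
function signDownloadUrl(
  baseUrl: string,
  fileId: string,
  secret: string,
  ttlSeconds: number,
  now: number = Math.floor(Date.now() / 1000),
): string {
  const expires = now + ttlSeconds;
  const token = createHmac("sha256", secret)
    .update(`${fileId}:${expires}`)
    .digest("hex");
  return `${baseUrl}/api/v1/files/${fileId}/download?expires=${expires}&token=${token}`;
}

// Verify a presented token: reject expired links, then compare
// against the recomputed HMAC in constant time.
function verifyToken(
  fileId: string,
  expires: number,
  token: string,
  secret: string,
  now: number = Math.floor(Date.now() / 1000),
): boolean {
  if (now > expires) return false; // link has expired
  const expected = createHmac("sha256", secret)
    .update(`${fileId}:${expires}`)
    .digest("hex");
  const a = Buffer.from(token, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Because the token is derived from the secret, the API can verify a download request without a database lookup or client authentication.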

Layer 2: Storage & Database (Adapter Pattern)

Storage and database access are abstracted via adapter interfaces, allowing different backends:

  • Storage adapters -- R2 (Cloudflare edge) or S3 (self-hosted / AWS). Files are organized by tenant: tenants/{tenantId}/files/{fileId}/{fileName}
  • Database adapters -- D1 (Cloudflare SQLite at the edge) or Postgres (self-hosted). Stores tenant records, workspace records, file metadata, and upload sessions
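The adapter pattern above can be sketched as a pair of TypeScript interfaces. Method and type names here are illustrative assumptions, not the actual Storage Brain API; only the object-key layout is taken from the text.

```typescript
// Placeholder row types for the sketch (real code would use the schema below).
type TenantRecord = Record<string, unknown>;
type FileRow = Record<string, unknown>;

// One implementation per backend: R2 or S3.
interface StorageAdapter {
  createPresignedUploadUrl(key: string, expiresInSeconds: number): Promise<string>;
  getObject(key: string): Promise<Uint8Array | null>;
  deleteObject(key: string): Promise<void>;
}

// One implementation per backend: D1 or Postgres.
interface DatabaseAdapter {
  findTenantByApiKeyHash(hash: string): Promise<TenantRecord | null>;
  insertFileRecord(row: FileRow): Promise<void>;
}

// Object keys follow the tenant-scoped layout described above.
function objectKey(tenantId: string, fileId: string, fileName: string): string {
  return `tenants/${tenantId}/files/${fileId}/${fileName}`;
}
```

The API layer depends only on these interfaces, so swapping R2 for S3 (or D1 for Postgres) is a deployment-time choice rather than a code change.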

Tech Stack

Component        Edge (Cloudflare)        Self-Hosted (Docker)
---------        -----------------        --------------------
Runtime          Cloudflare Workers       Node.js
API Framework    Hono                     Hono
Object Storage   Cloudflare R2            S3-compatible (MinIO)
Database         Cloudflare D1 (SQLite)   PostgreSQL
Validation       Zod                      Zod
Language         TypeScript               TypeScript

Database Schema

Storage Brain uses four tables:

tenants

Column               Type            Description
------               ----            -----------
id                   TEXT (PK)       UUID
name                 TEXT (UNIQUE)   Tenant display name
api_key_hash         TEXT            Bcrypt hash of API key
quota_bytes          INTEGER         Storage quota in bytes (default: 500 MB)
used_bytes           INTEGER         Current storage usage
allowed_file_types   TEXT            JSON array of allowed MIME types
created_at           INTEGER         Unix timestamp
updated_at           INTEGER         Unix timestamp

workspaces

Column        Type        Description
------        ----        -----------
id            TEXT (PK)   UUID
tenant_id     TEXT (FK)   References tenants.id
name          TEXT        Workspace display name
slug          TEXT        URL-safe identifier (unique per tenant)
quota_bytes   INTEGER     Optional per-workspace quota in bytes
used_bytes    INTEGER     Current workspace storage usage (default: 0)
metadata      TEXT        Optional JSON metadata
created_at    INTEGER     Unix timestamp
updated_at    INTEGER     Unix timestamp

files

Column              Type        Description
------              ----        -----------
id                  TEXT (PK)   UUID
tenant_id           TEXT (FK)   References tenants.id
workspace_id        TEXT (FK)   References workspaces.id (nullable)
original_name       TEXT        Original filename
stored_path         TEXT        Storage object key
file_type           TEXT        MIME type
size_bytes          INTEGER     File size
context             TEXT        Optional free-form string (max 100 chars)
tags                TEXT        JSON key-value pairs
metadata            TEXT        JSON object ({ [key: string]: unknown })
processing_status   TEXT        completed (set immediately after upload)
webhook_url         TEXT        Optional callback URL
created_at          INTEGER     Unix timestamp
updated_at          INTEGER     Unix timestamp
deleted_at          INTEGER     Soft delete timestamp (nullable)
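As a sketch, a row from the files table maps onto a TypeScript type like the following. Field names mirror the columns above; the sample values are purely illustrative. Note that tags and metadata are TEXT columns, so they hold JSON as strings.

```typescript
// Illustrative row type for the `files` table (not the actual codebase types).
interface FileRow {
  id: string;                     // UUID primary key
  tenant_id: string;              // FK -> tenants.id
  workspace_id: string | null;    // FK -> workspaces.id, nullable
  original_name: string;
  stored_path: string;            // storage object key
  file_type: string;              // MIME type
  size_bytes: number;
  context: string | null;         // free-form, max 100 chars
  tags: string;                   // JSON key-value pairs, stored as TEXT
  metadata: string;               // JSON object, stored as TEXT
  processing_status: "completed"; // set immediately after upload
  webhook_url: string | null;
  created_at: number;             // Unix timestamp
  updated_at: number;             // Unix timestamp
  deleted_at: number | null;      // soft delete timestamp
}

// Illustrative example row.
const example: FileRow = {
  id: "file-123",
  tenant_id: "tenant-1",
  workspace_id: null,
  original_name: "report.pdf",
  stored_path: "tenants/tenant-1/files/file-123/report.pdf",
  file_type: "application/pdf",
  size_bytes: 2048,
  context: null,
  tags: JSON.stringify({ team: "finance" }),
  metadata: JSON.stringify({}),
  processing_status: "completed",
  webhook_url: null,
  created_at: 1700000000,
  updated_at: 1700000000,
  deleted_at: null,
};
```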

upload_sessions

Column          Type        Description
------          ----        -----------
id              TEXT (PK)   UUID
file_id         TEXT (FK)   References files.id
presigned_url   TEXT        Presigned upload URL
expires_at      INTEGER     Expiration timestamp
status          TEXT        pending, completed, expired, or failed
created_at      INTEGER     Unix timestamp

Upload Flow

  1. Client calls POST /api/v1/upload/request with file metadata (and optional workspaceId)
  2. API validates auth, checks tenant quota (and workspace quota if applicable), creates file record and upload session
  3. API returns a presigned URL (valid for 15 minutes)
  4. Client uploads file directly to storage using the presigned URL
  5. API marks the file as completed and sends a file.uploaded webhook if configured
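The client side of this flow can be sketched as follows. The endpoint path comes from the text, but the request-body field names, the Bearer auth scheme, and the `uploadUrl` response field are assumptions for illustration.

```typescript
// Illustrative request body for POST /api/v1/upload/request.
interface UploadRequestBody {
  fileName: string;
  fileType: string;
  sizeBytes: number;
  workspaceId?: string;
}

// Build the handshake payload from file metadata (and optional workspace).
function buildUploadRequestBody(
  meta: { name: string; type: string; size: number },
  workspaceId?: string,
): UploadRequestBody {
  return {
    fileName: meta.name,
    fileType: meta.type,
    sizeBytes: meta.size,
    ...(workspaceId ? { workspaceId } : {}),
  };
}

// Steps 1-4 of the upload flow from the client's perspective.
async function uploadFile(
  apiBase: string,
  apiKey: string,
  meta: { name: string; type: string; size: number },
  data: BodyInit,
  workspaceId?: string,
): Promise<void> {
  // 1-3. Request a presigned URL (valid for 15 minutes).
  const res = await fetch(`${apiBase}/api/v1/upload/request`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // auth scheme is an assumption
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildUploadRequestBody(meta, workspaceId)),
  });
  if (!res.ok) throw new Error(`upload request failed: ${res.status}`);
  const { uploadUrl } = (await res.json()) as { uploadUrl: string };

  // 4. Upload the bytes directly to storage via the presigned URL.
  const put = await fetch(uploadUrl, {
    method: "PUT",
    body: data,
    headers: { "Content-Type": meta.type },
  });
  if (!put.ok) throw new Error(`storage upload failed: ${put.status}`);
}
```

Because the file bytes go straight to R2/S3 via the presigned URL, they never pass through the API layer, which keeps Worker CPU and bandwidth costs low.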

Deployment

Edge (Cloudflare)

All functionality deploys as a single Cloudflare Worker with R2 and D1 bindings:

Cloudflare Worker
+-- API Routes (Hono)
|   +-- POST /api/v1/upload/request
|   +-- GET  /api/v1/files
|   +-- GET  /api/v1/files/:fileId
|   +-- DELETE /api/v1/files/:fileId
|   +-- GET  /api/v1/files/:fileId/download
|   +-- GET  /api/v1/files/:fileId/signed-url
|   +-- GET  /api/v1/workspaces
|   +-- POST /api/v1/workspaces
|   +-- GET  /api/v1/workspaces/:id
|   +-- PATCH /api/v1/workspaces/:id
|   +-- DELETE /api/v1/workspaces/:id
|   +-- GET  /api/v1/tenant/quota
|   +-- GET  /api/v1/tenant/info
|   +-- POST /api/v1/admin/tenants
|   +-- POST /api/v1/admin/tenants/:tenantId/regenerate-key
+-- Bindings
    +-- D1 Database
    +-- R2 Bucket
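The D1 and R2 bindings are declared in the Worker's wrangler.toml. A minimal sketch, where the binding names, bucket/database names, and placeholder ID are illustrative assumptions:

```toml
name = "storage-brain"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"
database_name = "storage-brain-db"
database_id = "<your-d1-database-id>"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "storage-brain-files"
```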

Self-Hosted (Docker)

Run the same API with S3-compatible storage (MinIO) and PostgreSQL via docker-compose up:

docker-compose.yml
+-- api        (Node.js, port 3000)
+-- postgres   (PostgreSQL 16)
+-- minio      (S3-compatible storage)

Environment variables configure the S3 endpoint, credentials, database URL, and admin API key.
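A hedged sketch of what such an environment file might look like (the variable names are illustrative assumptions, not the service's actual configuration keys):

```
# .env -- illustrative names and values
S3_ENDPOINT=http://minio:9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=storage-brain
DATABASE_URL=postgres://postgres:postgres@postgres:5432/storage_brain
ADMIN_API_KEY=change-me
PORT=3000
```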