Hardware requirements
For a small team — under ten simultaneous users, a handful of open sessions — a single-core VPS with 2 GB of RAM is enough to run everything. A $6/month Hetzner or DigitalOcean droplet has been the baseline target since day one.
Scale-up math is linear until you hit the sync tier: budget about 40 MB of RAM per active session and a CPU hyperthread for every 20 concurrently-typing participants. Postgres is light — the storage grows with document history, not user count, and the built-in periodic snapshot keeps the table from bloating.
- Tiny (< 10 users): 1 vCPU, 2 GB RAM, 20 GB disk.
- Small (< 50 users): 2 vCPU, 4 GB RAM, 40 GB disk.
- Medium (< 250 users): 4 vCPU, 8 GB RAM, 100 GB disk + managed Postgres.
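As a worked example of the sizing rule above (roughly 40 MB of RAM per active session, plus fixed headroom; the 1 GB base figure here is an assumption, not a measured number):

```shell
# Rough RAM budget for the sync tier, using the ~40 MB/session rule of thumb.
# base_mb (OS + Postgres + services headroom) is an assumed figure, not measured.
sessions=50
base_mb=1024
ram_mb=$(( base_mb + sessions * 40 ))
echo "budget ~${ram_mb} MB RAM for ${sessions} sessions"
# → budget ~3024 MB RAM for 50 sessions
```

That lands comfortably inside the 4 GB "Small" tier, which is why the table above steps up when it does.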
Docker Compose quickstart
The repo ships a docker-compose.yml that brings up Postgres 16, the API server, the Hocuspocus sync server, the terminal PTY server, and the Next.js web app behind a single network. To start:
```shell
git clone https://github.com/huddle-dev/huddle.git
cd huddle
cp .env.example .env.docker
# edit .env.docker — set JWT_SECRET and INTERNAL_TOKEN at minimum
openssl rand -hex 32  # handy generator
docker compose --env-file .env.docker up -d
docker compose --env-file .env.docker --profile tools run --rm migrate
```

Once the migrate profile exits cleanly, open http://localhost:3000 and sign up for the first admin account. The first user created against a fresh database is automatically granted platform admin — subsequent signups are regular users until you promote them.
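The two required secrets can be generated and appended in one step. This is a convenience sketch, equivalent to running `openssl rand -hex 32` twice and pasting by hand; it assumes the freshly copied .env.docker does not already define them:

```shell
# Generate the two required secrets and append them to .env.docker.
# Assumes the file does not already contain JWT_SECRET / INTERNAL_TOKEN lines.
JWT_SECRET=$(openssl rand -hex 32)
INTERNAL_TOKEN=$(openssl rand -hex 32)
printf 'JWT_SECRET=%s\nINTERNAL_TOKEN=%s\n' "$JWT_SECRET" "$INTERNAL_TOKEN" >> .env.docker
```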
Environment variables
The full list lives in .env.example with inline comments. The short version: you must set JWT_SECRET, INTERNAL_TOKEN, DATABASE_URL, and the three public URL variables (HUDDLE_PUBLIC_URL, HUDDLE_API_URL, HUDDLE_SYNC_URL). Everything else is opt-in.
Optional surfaces you probably want before opening the app to real users:
- Email: `RESEND_API_KEY` for magic-link sign-ins, password resets, and invite emails.
- OAuth: `GITHUB_OAUTH_CLIENT_ID`/`CLIENT_SECRET` and the equivalents for Google.
- Rate limiting: `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN`. Without Redis we fall back to an in-process limiter, which is fine for dev but will drift under multi-process deployments.
- Billing: `STRIPE_SECRET_KEY` and the per-tier `STRIPE_PRICE_*` variables, if you want paid plans.
- Observability: `SENTRY_DSN` and `POSTHOG_KEY`.
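Putting the required variables together, a minimal .env.docker might look like the following. Every value is a placeholder, and the API/sync port numbers and Postgres credentials are assumptions, not defaults shipped by the repo; check .env.example for the real ones:

```shell
# Minimal .env.docker sketch; all values below are placeholders
JWT_SECRET=replace-with-openssl-rand-hex-32
INTERNAL_TOKEN=replace-with-openssl-rand-hex-32
DATABASE_URL=postgres://huddle:huddle@postgres:5432/huddle
HUDDLE_PUBLIC_URL=http://localhost:3000
HUDDLE_API_URL=http://localhost:4000
HUDDLE_SYNC_URL=ws://localhost:4100
```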
Running migrations
Migrations are plain SQL files in server/drizzle/. Apply them in order against DATABASE_URL_UNPOOLED — the pooled connection string will reject DDL. If you are running the Compose stack, the migrate profile handles this for you:
```shell
docker compose --env-file .env.docker --profile tools run --rm migrate
```

For a managed Postgres, use psql directly:

```shell
for f in server/drizzle/*.sql; do
  psql "$DATABASE_URL_UNPOOLED" -v ON_ERROR_STOP=1 -f "$f"
done
```

Using Neon for Postgres
Neon is the recommended managed Postgres for small and medium deployments. Create a branch, grab the pooled and unpooled connection strings from the dashboard, and set them as DATABASE_URL (pooled) and DATABASE_URL_UNPOOLED (unpooled) respectively. The pooled connection is what the API and sync servers use for queries; the unpooled one is for migrations and the occasional long transaction.
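For reference, Neon's two connection strings typically differ only by the `-pooler` suffix on the host. The hostnames below are placeholders from a hypothetical dashboard, not real endpoints:

```shell
# Placeholder Neon connection strings; note the -pooler host on the pooled URL
export DATABASE_URL='postgres://user:pass@ep-example-pooler.us-east-2.aws.neon.tech/huddle?sslmode=require'
export DATABASE_URL_UNPOOLED='postgres://user:pass@ep-example.us-east-2.aws.neon.tech/huddle?sslmode=require'
```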
If you would rather run your own database, the built-in Postgres 16 container in docker-compose.yml is production-ready as long as you mount a real volume and set a strong POSTGRES_PASSWORD.
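If you go the self-hosted route, a compose override along these lines pins a named volume and a password. The service and volume names here are assumptions; match them to the shipped docker-compose.yml:

```yaml
# docker-compose.override.yml sketch; service/volume names are assumptions
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: change-me-to-something-strong
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```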
Production hardening
A few things to do before you hand the URL to anyone who does not work for you:
- Put the stack behind a reverse proxy with TLS termination — Caddy or Traefik are painless. The WebSocket endpoints (`/sync`, `/terminal`) need `wss://`, not `ws://`, in production.
- Remove the `5432` port mapping from the Postgres service once you have run migrations. Nothing outside the Docker network should reach the database.
- Rotate `JWT_SECRET` and `INTERNAL_TOKEN` at least once a quarter. Both can be rotated without downtime by deploying the new value first and then reissuing tokens.
- Enable backups. Neon does this automatically; for self-hosted Postgres, wire up `pg_dump` to object storage on a daily cron.
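For the reverse-proxy step, a minimal Caddyfile sketch follows. The hostname and upstream ports are assumptions (match them to your deployment); Caddy obtains TLS certificates automatically for real domains and upgrades WebSocket connections on `reverse_proxy` without extra configuration:

```
# Caddyfile sketch; hostname and upstream ports are assumptions
huddle.example.com {
	reverse_proxy /sync* localhost:4100
	reverse_proxy /terminal* localhost:4200
	reverse_proxy localhost:3000
}
```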
If you hit a snag, get in touch. Early self-hosters get a direct line to us.