Jonathan Cutrer · Tools · 3 min read
A Local PostgreSQL Dev Environment with Docker Compose That Actually Works
Stop installing Postgres directly on your machine. A compose file you can copy across projects and a few patterns that prevent the usual pain.
Every project I work on needs a database. For a long time I ran Postgres directly on my machine, which meant remembering which version I had, dealing with service startup, and occasionally breaking something while updating. Two years ago I moved everything to Docker Compose and I’m not going back.
Here’s the setup I use. It’s not clever — it’s boring in the good way.
Step 1: The Base Compose File
Create docker-compose.yml at the project root:
services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME:-appdb}
      POSTGRES_USER: ${DB_USER:-appuser}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-devpassword}
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-appuser} -d ${DB_NAME:-appdb}"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:

The postgres:16-alpine image is significantly smaller than the default, and I've never hit a missing dependency. The healthcheck block is essential: without it, any service that depends on the database might start before Postgres is ready to accept connections.
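To actually use the health check, an application service can gate its startup on it via depends_on. A minimal sketch, assuming your app service is named app and builds from the project root:

```yaml
services:
  app:
    build: .
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
```

Note that condition: service_healthy requires the long-form depends_on syntax shown here; the short list form (`depends_on: [db]`) only waits for the container to start, not for Postgres to be ready.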
Step 2: The .env File
# .env -- never commit this
DB_NAME=appdb
DB_USER=appuser
DB_PASSWORD=devpassword
DB_HOST=localhost
DB_PORT=5432
DATABASE_URL=postgresql://appuser:devpassword@localhost:5432/appdb

The DATABASE_URL variable is what SQLAlchemy, Django, and most ORMs expect. Keeping it in .env means you can use the same variable name in dev and production by changing only the value.
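If a tool wants the host, port, and user separately rather than a single URL, the standard library can split DATABASE_URL back into its parts. A small sketch using the URL above:

```python
from urllib.parse import urlparse

# Split a DATABASE_URL into the parts some tools want individually.
url = urlparse("postgresql://appuser:devpassword@localhost:5432/appdb")

host = url.hostname            # "localhost"
port = url.port                # 5432
user = url.username            # "appuser"
password = url.password        # "devpassword"
dbname = url.path.lstrip("/")  # "appdb"
```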
Add .env to .gitignore. Add .env.example to the repo with the variable names and placeholder values.
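The committed .env.example only needs the variable names and obviously fake values; a sketch matching the variables above:

```
# .env.example -- copy to .env and fill in real values
DB_NAME=appdb
DB_USER=appuser
DB_PASSWORD=changeme
DB_HOST=localhost
DB_PORT=5432
DATABASE_URL=postgresql://appuser:changeme@localhost:5432/appdb
```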
Step 3: The Init Script Directory
Create db/init/ and put SQL files there. Postgres runs them alphabetically on first container start:
-- db/init/01_extensions.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Note: postgis is not bundled with postgres:16-alpine; switch to the
-- postgis/postgis image if you actually need it.
CREATE EXTENSION IF NOT EXISTS "postgis";

-- db/init/02_schema.sql
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email TEXT NOT NULL UNIQUE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- The UNIQUE constraint above already creates an index on email,
-- so this explicit index is redundant; shown here for illustration.
CREATE INDEX IF NOT EXISTS idx_users_email ON users (email);

These run exactly once, on the first docker compose up. After that, the volume persists the data. If you need to reinitialize:
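One gotcha with the alphabetical ordering: it is plain string sorting, so 10_ sorts before 2_. Zero-pad your prefixes. A quick illustration of how the files get ordered:

```python
# The init directory runs files in lexicographic (string) order,
# which is what sorted() gives you -- numeric intent is not honored.
bad = sorted(["2_schema.sql", "10_seed.sql"])
good = sorted(["02_schema.sql", "10_seed.sql"])

print(bad)   # ['10_seed.sql', '2_schema.sql'] -- seed would run before schema
print(good)  # ['02_schema.sql', '10_seed.sql']
```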
docker compose down -v   # destroys the volume
docker compose up -d

Step 4: Add a Test Database
For running tests against a real database (which you should — mocked databases hide real bugs):
db_test:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: appdb_test
    POSTGRES_USER: appuser
    POSTGRES_PASSWORD: devpassword
  ports:
    - "5433:5432"
  tmpfs:
    - /var/lib/postgresql/data

Port 5433 avoids collision with the dev database. The tmpfs mount means the test database lives entirely in memory: it's faster and automatically empty after each container restart. No cleanup between test runs.
Step 5: Add pgAdmin (Optional but Useful)
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: dev@local.dev
    PGADMIN_DEFAULT_PASSWORD: devpassword
  ports:
    - "5050:80"
  depends_on:
    db:
      condition: service_healthy

Open http://localhost:5050 after startup. The depends_on with condition: service_healthy means pgAdmin only starts after Postgres passes its health check. Without this, pgAdmin often tries to connect before Postgres is ready and throws misleading startup errors.
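To avoid re-registering the server by hand in every fresh container, pgAdmin can pre-load server definitions from a JSON file mounted at /pgadmin4/servers.json (check the dpage/pgadmin4 docs for your image version). A sketch of db/pgadmin-servers.json matching the db service above:

```json
{
  "Servers": {
    "1": {
      "Name": "local dev",
      "Group": "Servers",
      "Host": "db",
      "Port": 5432,
      "Username": "appuser",
      "MaintenanceDB": "appdb",
      "SSLMode": "prefer"
    }
  }
}
```

Mount it with `- ./db/pgadmin-servers.json:/pgadmin4/servers.json` under the pgadmin service's volumes. Note the Host is db, not localhost: inside the compose network, containers reach each other by service name.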
Step 6: Bring It Up
docker compose up -d
docker compose logs -f db   # watch startup
docker compose ps           # check all services healthy

Step 7: Connect From Your Application
Using the DATABASE_URL env var:
# Python / SQLAlchemy
import os
from sqlalchemy import create_engine

engine = create_engine(
    os.environ["DATABASE_URL"],
    pool_size=5,
    max_overflow=10,
    pool_pre_ping=True,
)

pool_pre_ping=True sends a lightweight query before returning a connection from the pool, which catches stale connections after the database container restarts.
What This Buys You
Every developer on the project gets the same database version, same schema, same extensions — regardless of what’s installed on their machine. CI/CD runs the same compose file. Production uses the same DATABASE_URL pattern with real credentials. The local dev environment is disposable; destroying and recreating it takes 30 seconds.
The init scripts double as documentation of what the schema requires on a fresh install. I’ve used them to debug production issues by spinning up a local copy with prod’s SQL dump and the init scripts, then reproducing the problem in minutes.