Prerequisites
Before deploying, make sure you have:
- Node.js 22.x (minimum 20.x)
- pnpm (latest)
- Turso account for the production database (or local LibSQL for development)
- Environment variables configured (copy `.env.example` to `.env.local`)
Database Setup
Create a Turso database and push the schema:
```bash
# Create a database
turso db create your-db-name

# Get connection details
turso db show your-db-name --url
turso db tokens create your-db-name

# Push schema (creates all tables)
pnpm run db:push
```
After your first production deploy with real data, use incremental migrations instead:
```bash
pnpm run db:generate  # creates migration files
pnpm run db:migrate   # applies migrations safely
```
All `db:*` scripts auto-load `.env.local` — no manual export needed.
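For the first setup, the values returned by `turso db show` and `turso db tokens create` go into `.env.local`. A placeholder sketch (substitute your own URL and token):

```bash
# .env.local — placeholder values, substitute your own
TURSO_DATABASE_URL="libsql://your-db.turso.io"
TURSO_AUTH_TOKEN="eyJhbGciOi..."
```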
Vercel (Recommended)
Vercel is the recommended platform for Next.js. Codapult is optimized for Vercel's serverless functions and Edge Runtime.
1. Connect Repository
- Go to vercel.com and sign in.
- Click Add New... → Project and import your Git repository.
- Vercel auto-detects Next.js.
2. Configure Build Settings
| Setting | Value |
| ---------------- | ---------------- |
| Framework Preset | Next.js |
| Build Command | pnpm run build |
| Install Command | pnpm install |
| Node.js Version | 22.x |
3. Set Environment Variables
Add these in Settings → Environment Variables:
Required:
- `TURSO_DATABASE_URL` — Turso connection string (`libsql://...`)
- `TURSO_AUTH_TOKEN` — Turso auth token
- `BETTER_AUTH_SECRET` — generate with `openssl rand -base64 32`
- `BETTER_AUTH_URL` — your production URL (e.g., `https://app.example.com`)
- `NEXT_PUBLIC_APP_URL` — your production URL
Conditional (based on enabled features):
- Stripe: `STRIPE_SECRET_KEY`, `STRIPE_WEBHOOK_SECRET`
- Email: `RESEND_API_KEY`, `EMAIL_FROM`
- Analytics: `NEXT_PUBLIC_POSTHOG_KEY`, `NEXT_PUBLIC_POSTHOG_HOST`
- Sentry: `NEXT_PUBLIC_SENTRY_DSN`, `SENTRY_ORG`, `SENTRY_PROJECT`, `SENTRY_AUTH_TOKEN`
Set variables for Production, Preview, and Development as needed.
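To sanity-check the secret generation command mentioned above, note that 32 random bytes base64-encode to a 44-character string:

```bash
# Generate a 32-byte random secret, base64-encoded (44 characters)
SECRET=$(openssl rand -base64 32)
echo "${#SECRET}"  # 44
```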
4. Deploy
Push to your main branch. Vercel builds and deploys automatically.
5. CI-Gated Deploy (Recommended)
Codapult includes a GitHub Actions CI pipeline (`.github/workflows/ci.yml`) that runs lint, unit tests, type-check, build, and E2E tests on every push and PR.
To ensure only tested code gets deployed, protect the main branch in GitHub:
- GitHub → repository → Settings → Rules → Rulesets → New ruleset → New branch ruleset.
- Set target branches to `main`.
- Add rule "Require status checks to pass".
- Add required checks: `Lint & Format`, `Unit Tests`, `Type-check & Build`, `E2E Tests`.
With this setup, PRs can only be merged into `main` after all CI checks pass. Since Vercel auto-deploys from `main`, only tested code reaches production.
The CI also includes an optional DB Migrate job that automatically applies schema changes to your production Turso database on push to `main`. To enable it, add a GitHub Actions variable `ENABLE_DB_MIGRATE=true` and set `TURSO_DATABASE_URL` / `TURSO_AUTH_TOKEN` as GitHub Secrets.
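The gating could be expressed in the workflow roughly as follows. This is a hypothetical sketch (job name, condition, and steps are illustrative, and setup steps such as installing pnpm are elided), so check `.github/workflows/ci.yml` for the actual definition:

```yaml
# Hypothetical sketch of the migrate job's gating condition
db-migrate:
  if: github.ref == 'refs/heads/main' && vars.ENABLE_DB_MIGRATE == 'true'
  runs-on: ubuntu-latest
  env:
    TURSO_DATABASE_URL: ${{ secrets.TURSO_DATABASE_URL }}
    TURSO_AUTH_TOKEN: ${{ secrets.TURSO_AUTH_TOKEN }}
  steps:
    - uses: actions/checkout@v4
    # ...pnpm/Node setup elided...
    - run: pnpm run db:migrate
```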
6. Seed Database
Codapult ships with two seed scripts and a GitHub Actions workflow for seeding:
```bash
pnpm db:seed       # 5 users, 2 orgs, subscriptions, feature flags
pnpm db:seed:demo  # 1 admin account with enterprise subscription
```
You can also run seeding from GitHub Actions: Actions → "Seed Database" → Run workflow. Choose target (production/demo), seed type (full/demo), and whether to push schema first.
7. Demo Mode
To deploy a live demo with a banner and one-click sign-in, set `NEXT_PUBLIC_DEMO_MODE=true` in Vercel environment variables and seed the database with `pnpm db:seed:demo`. The demo account is [email protected] / admin123.
8. Custom Domain
- Go to Settings → Domains and add your domain.
- Follow the DNS configuration instructions.
- SSL is auto-provisioned.
Docker
Codapult includes a production-ready multi-stage Dockerfile with standalone output.
Build and Run
```bash
# Build the image
docker build -t codapult:latest .

# Run the container
docker run -p 3000:3000 --env-file .env.local codapult:latest
```
Docker Compose
The included `docker-compose.yml` provides three profiles:
| Service | Description |
| ------- | ---------------------------------------- |
| app | Production build |
| dev | Development server with hot-reload |
| ws | WebSocket notification server (optional) |
```bash
# Start production
docker compose up -d app

# Start with WebSocket server
docker compose --profile ws up -d
```
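Profiles work by tagging services so they start only when requested with `--profile`. A minimal hypothetical sketch of the shape (the repo's `docker-compose.yml` is authoritative):

```yaml
# Hypothetical sketch — see the repo's docker-compose.yml for the real definitions
services:
  app:
    build: .
    ports:
      - '3000:3000'
  dev:
    profiles: ['dev']  # only starts with --profile dev
    command: pnpm run dev
    volumes:
      - .:/app         # bind mount for hot-reload
  ws:
    profiles: ['ws']   # optional WebSocket notification server
```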
Health Check
The container exposes a health endpoint:
```bash
curl http://localhost:3000/api/health
# {"status":"ok","timestamp":"2026-04-01T12:00:00.000Z"}
```
Health check configuration for orchestration platforms:
```yaml
healthcheck:
  test: ['CMD', 'wget', '--quiet', '--tries=1', '--spider', 'http://localhost:3000/api/health']
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s
```
Push to Registry
```bash
docker tag codapult:latest ghcr.io/your-org/codapult:latest
docker login ghcr.io
docker push ghcr.io/your-org/codapult:latest
```
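If you prefer CI to publish the image, a job using the official Docker actions might look like this hypothetical sketch (not part of the shipped workflow):

```yaml
# Hypothetical GitHub Actions job — adapt to your workflow
publish-image:
  runs-on: ubuntu-latest
  permissions:
    packages: write
  steps:
    - uses: actions/checkout@v4
    - uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - uses: docker/build-push-action@v6
      with:
        context: .
        push: true
        tags: ghcr.io/your-org/codapult:latest
```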
For production file uploads, use `STORAGE_PROVIDER=s3` or `STORAGE_PROVIDER=r2` instead of local storage.
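A production storage configuration might look like the following. `STORAGE_PROVIDER` and `S3_PUBLIC_URL` appear elsewhere in this guide; the credential variable names are illustrative, so verify them against `.env.example`:

```bash
STORAGE_PROVIDER=s3
S3_PUBLIC_URL=https://files.example.com

# Illustrative credential names — verify against .env.example
S3_BUCKET=codapult-uploads
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
```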
Kubernetes (Helm)
Codapult includes a Helm chart in `infra/helm/codapult/`.
Prerequisites
- Kubernetes 1.24+
- Helm 3.x
- kubectl configured
1. Build and Push the Image
```bash
docker build -t ghcr.io/your-org/codapult:latest .
docker push ghcr.io/your-org/codapult:latest
```
2. Create Secrets
```bash
kubectl create secret generic codapult-secrets \
  --from-literal=TURSO_DATABASE_URL="libsql://your-db.turso.io" \
  --from-literal=TURSO_AUTH_TOKEN="your-token" \
  --from-literal=BETTER_AUTH_SECRET="your-secret" \
  --from-literal=STRIPE_SECRET_KEY="sk_live_..." \
  --from-literal=STRIPE_WEBHOOK_SECRET="whsec_..." \
  --from-literal=RESEND_API_KEY="re_..."
```
Use a secrets management tool (Sealed Secrets, External Secrets Operator, or your cloud provider's secrets manager) for production.
3. Configure `values.yaml`
```yaml
image:
  repository: ghcr.io/your-org/codapult
  tag: latest

app:
  url: 'https://app.example.com'
  name: 'Codapult'

existingSecret: 'codapult-secrets'

env:
  PAYMENT_PROVIDER: 'stripe'
  AUTH_PROVIDER: 'better-auth'
  STORAGE_PROVIDER: 's3'
  JOB_PROVIDER: 'bullmq'

ingress:
  enabled: true
  className: 'nginx'
  hosts:
    - host: app.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: codapult-tls
      hosts:
        - app.example.com

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

redis:
  enabled: true

worker:
  enabled: true
  replicaCount: 1
```
4. Install
```bash
helm install codapult ./infra/helm/codapult \
  --namespace default \
  --create-namespace \
  --values values.yaml
```
5. Verify
```bash
kubectl get pods -l app.kubernetes.io/name=codapult
kubectl get svc codapult
kubectl get ingress codapult
kubectl logs -l app.kubernetes.io/name=codapult --tail=100
```
Worker Deployment
The Helm chart includes a separate worker deployment for background jobs (BullMQ). Enable it with `worker.enabled: true` in `values.yaml`. The worker runs the same image and processes jobs from the Redis queue.
HPA (Horizontal Pod Autoscaler)
Enable autoscaling in `values.yaml`:

```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
```
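The HPA's scaling decision follows the standard Kubernetes formula: desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization). For example, 4 replicas averaging 105% CPU against a 70% target scale to 6:

```bash
# HPA formula: desired = ceil(current * utilization / target)
current=4; utilization=105; target=70

# Integer ceiling division: (a + b - 1) / b
desired=$(( (current * utilization + target - 1) / target ))
echo "$desired"  # 6
```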
AWS with Terraform
Codapult includes Terraform templates in `infra/terraform/`.
Infrastructure Created
- VPC with public subnets across multiple AZs
- ECR repository for container images
- ECS Fargate cluster and service
- Application Load Balancer with health checks
- S3 bucket for file uploads
- ACM certificate and Route53 DNS
- SSM Parameter Store for secrets
- CloudWatch log groups
1. Configure Variables
Create `infra/terraform/terraform.tfvars`:
```hcl
project = "codapult"
environment = "production"
aws_region = "us-east-1"
vpc_cidr = "10.0.0.0/16"
availability_zones = ["us-east-1a", "us-east-1b"]
container_cpu = 512
container_memory = 1024
desired_count = 2
domain_name = "app.example.com"
route53_zone_id = "Z1234567890ABC"
turso_database_url = "libsql://your-db.turso.io"
turso_auth_token = "your-token"
better_auth_secret = "your-secret"
stripe_secret_key = "sk_live_..."
stripe_webhook_secret = "whsec_..."
resend_api_key = "re_..."
openai_api_key = "sk-..."
```
Never commit `terraform.tfvars` to version control.
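One way to enforce this is a `.gitignore` entry; ignoring all `*.tfvars` files is a common convention (adjust if you keep non-secret tfvars in the repo):

```
# .gitignore — keep secret-bearing variable files out of version control
*.tfvars
*.tfvars.json
```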
2. Deploy
```bash
cd infra/terraform
terraform init
terraform plan
terraform apply
```
3. Push Image to ECR
```bash
ECR_REPO=$(terraform output -raw ecr_repository_url)

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin $ECR_REPO

docker build -t codapult:latest .
docker tag codapult:latest $ECR_REPO:latest
docker push $ECR_REPO:latest
```
4. Update ECS Service
```bash
CLUSTER_NAME=$(terraform output -raw ecs_cluster_name)
SERVICE_NAME=$(terraform output -raw ecs_service_name)

aws ecs update-service \
  --cluster $CLUSTER_NAME \
  --service $SERVICE_NAME \
  --force-new-deployment \
  --region us-east-1
```
Destroy Infrastructure
```bash
terraform destroy
```
AWS with Pulumi
Codapult includes Pulumi templates in `infra/pulumi/` — the same infrastructure as Terraform, written in TypeScript.
1. Initialize Stack
```bash
cd infra/pulumi
pulumi stack init production
```
2. Configure
```bash
pulumi config set aws:region us-east-1
pulumi config set project codapult
pulumi config set environment production

# Secrets (stored encrypted)
pulumi config set --secret tursoDatabaseUrl "libsql://your-db.turso.io"
pulumi config set --secret tursoAuthToken "your-token"
pulumi config set --secret betterAuthSecret "your-secret"
```
3. Deploy
```bash
pulumi preview  # review changes
pulumi up       # apply
```
4. Build and Push Image
Follow the same ECR steps as the Terraform section above.
Access Outputs
```bash
pulumi stack output albDnsName
pulumi stack output  # all outputs
```
Post-Deployment Checklist
Database
- [ ] Schema applied (`db:push` or `db:migrate`)
- [ ] Connection working (check application logs)
- [ ] Multi-region replication configured (if using Turso)
Environment Variables
- [ ] All required variables set
- [ ] Secrets stored securely (not in code or logs)
- [ ] `NEXT_PUBLIC_APP_URL` matches your production domain
- [ ] `BETTER_AUTH_URL` matches your production domain
Authentication
- [ ] Auth provider configured (Better-Auth or Kinde)
- [ ] OAuth credentials set (Google, GitHub)
- [ ] OAuth redirect URLs configured in provider dashboards:
  - Google: `https://app.example.com/api/auth/callback/google`
  - GitHub: `https://app.example.com/api/auth/callback/github`
- [ ] Magic link emails working (if enabled)
- [ ] 2FA TOTP working (if enabled)
Payments
- [ ] Payment provider configured (Stripe or LemonSqueezy)
- [ ] Webhook endpoint configured:
  - Stripe: `https://app.example.com/api/webhooks/stripe`
  - LemonSqueezy: `https://app.example.com/api/webhooks/lemonsqueezy`
- [ ] Webhook secret set in environment variables
- [ ] Test webhook delivery in provider's test mode
Email
- [ ] Resend API key configured
- [ ] `EMAIL_FROM` domain verified in Resend
- [ ] Test email sending (sign-up, password reset)
Analytics and Monitoring
- [ ] PostHog key and host URL configured
- [ ] Sentry DSN, org, project, and auth token configured
- [ ] Source maps uploaded to Sentry
- [ ] Error tracking working (trigger a test error)
Storage
- [ ] Storage provider set to `s3` or `r2` for production
- [ ] Bucket created with correct permissions
- [ ] S3/R2 credentials configured
- [ ] `S3_PUBLIC_URL` set for public file access
Background Jobs
- [ ] `JOB_PROVIDER` set to `bullmq` (not `memory`)
- [ ] Redis configured and `REDIS_URL` set
- [ ] Worker processes running
- [ ] Jobs processing successfully (check logs)
Rate Limiting
- [ ] Redis-backed rate limiter configured for multi-instance deployments
- [ ] Rate limit headers working (`X-RateLimit-Remaining`)
Domain and SSL
- [ ] Custom domain configured
- [ ] DNS records pointing to deployment
- [ ] SSL certificate issued and valid
- [ ] HTTPS redirect working
Health Checks
- [ ] `GET /api/health` responding
- [ ] Load balancer health checks passing
- [ ] Monitoring alerts configured
Performance
- [ ] CDN configured (Vercel or CloudFront)
- [ ] Image optimization working
- [ ] Static assets cached
- [ ] Core Web Vitals acceptable (LCP, INP, CLS)
Security
- [ ] Security headers configured (CSP, HSTS)
- [ ] CORS configured correctly
- [ ] API routes protected with authentication
- [ ] Rate limiting enabled on sensitive endpoints
- [ ] Secrets rotation schedule established
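If the app does not already send these headers, they can be added at the platform layer. A hypothetical `vercel.json` fragment for Vercel deployments (values are illustrative starting points, not a complete policy):

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Strict-Transport-Security", "value": "max-age=63072000; includeSubDomains; preload" },
        { "key": "X-Content-Type-Options", "value": "nosniff" }
      ]
    }
  ]
}
```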