Codapult integrates with industry-standard observability tools — PostHog for product analytics, Sentry for error monitoring, OpenTelemetry for distributed tracing, a built-in structured logger, and a Core Web Vitals dashboard for performance tracking. Each integration is optional and activated by setting the corresponding environment variables.
Server Logs (Vercel)
When the app is deployed to Vercel, all console.* output and everything written by @/lib/logger appears in the platform logs. You can inspect them in three ways.
Vercel Dashboard
Open the project on vercel.com and switch to Logs (or Observability → Logs) to see:
- Runtime logs — live function output (API routes, server actions, middleware)
- Build logs — `pnpm build` output
- Filters by status code, path, region, time window
Retention depends on the plan — Hobby keeps runtime logs for ~1 hour, Pro up to 3 days. For longer retention use a Log Drain (below).
Vercel CLI
pnpm logs # last deployment
pnpm logs:follow # stream live (tail -f style)
pnpm logs:errors # live stream filtered to errors only
These scripts wrap vercel logs; you can also call it directly:
vercel logs <deployment-url>
vercel logs --since 1h
vercel logs --follow --output raw | grep requestId=abc123
Run vercel link once per workspace to bind the local directory to its Vercel project.
Log Drains (long-term storage & search)
A Log Drain forwards every runtime/edge/build log line to an external log aggregator. Use one when you need searchable history beyond the dashboard retention window or alerts based on log content.
Enable one in Project Settings → Log Drains on Vercel. Supported destinations include Better Stack / Logtail, Axiom, Datadog, Grafana Loki, S3, and any HTTP endpoint accepting NDJSON.
Recommended free-tier option for most SaaS projects: Better Stack (1 GB/month on the free plan, simple search UI, built-in alerting). Setup:
- Create a Sources → Vercel integration in Better Stack to get a source token.
- In Vercel Project Settings → Log Drains → Add Log Drain, choose Better Stack (or HTTP JSON / NDJSON with the token).
- Trigger a log line in production — it should appear in Better Stack within seconds.
Because @/lib/logger emits JSON in production, every log line already has level, time, msg, service, env, and your custom fields. Better Stack indexes these automatically — search level:error AND feature:billing out of the box.
Structured Logger
The logger at src/lib/logger.ts wraps pino on the Node.js runtime and falls back to a compatible JSON-console shim on Edge Runtime and in the browser. Public API is identical across runtimes. Use it everywhere server-side instead of console.log so logs stay machine-readable in production.
- Node.js routes → pino with PII redaction, `pino-pretty` in dev, NDJSON to stdout in prod.
- Edge Runtime (middleware, `export const runtime = 'edge'`) → minimal JSON-console implementation (pino can't run in Edge — it needs `worker_threads`).
- Browser → pretty console output; unhandled errors go to Sentry via the browser SDK.
PII redaction
Every log line is redacted against a list of sensitive keys before it leaves the process: password, token, apiKey, authorization, cookie, secret, creditCard, cvv, ssn, and their common variants. Both top-level and nested paths are covered. Redaction happens before the payload reaches Vercel logs, log drains, or Sentry — so secrets in request bodies or error details never escape.
Extend the list by editing REDACT_PATHS in src/lib/logger/node.ts and REDACT_KEYS in src/lib/logger/edge.ts.
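The key-matching pass can be sketched as a small recursive function. This is an approximation of what the Edge logger's `REDACT_KEYS` check does, not the actual implementation — the key list here is abbreviated and the real code may differ:

```typescript
// Sketch of key-based redaction (assumption: approximates the Edge logger's
// REDACT_KEYS pass; the real implementation may differ).
const SENSITIVE_KEYS = ["password", "token", "apikey", "authorization", "cookie", "secret"];

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, v] of Object.entries(value as Record<string, unknown>)) {
      out[key] = SENSITIVE_KEYS.includes(key.toLowerCase())
        ? "[REDACTED]" // replace the value, keep the key visible for debugging
        : redact(v);   // recurse so nested paths are covered too
    }
    return out;
  }
  return value;
}
```

Lowercasing the key before comparison is what catches common variants like `apiKey` vs `ApiKey`.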
Basic usage
import { logger } from '@/lib/logger';
logger.info('user signed in', { userId });
logger.warn('payment retry', { orderId, attempt });
logger.error('checkout failed', { err, orderId });
- Development: colorized pretty output from `pino-pretty` (Node) or `[LEVEL] msg` (Edge/browser).
- Production: NDJSON — `{"level":"info","time":"...","msg":"user signed in","userId":"..."}` — parsed natively by Vercel Logs and every Log Drain target.
Request-scoped logger
In API routes bind a requestId early so every log for the same request is correlated:
import { getRequestLogger } from '@/lib/logger';
export async function POST(req: Request) {
const log = getRequestLogger(req, { feature: 'billing' });
log.info('checkout requested');
// ...
log.error('stripe failed', { err });
}
getRequestLogger reuses upstream x-request-id / x-vercel-id headers when present, otherwise generates a new UUID.
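That header-fallback order can be sketched as follows. This mirrors the behavior described above but is not the actual `getRequestLogger` source — the real function may normalize or prefix the id differently:

```typescript
import { randomUUID } from "node:crypto"; // Edge/browser would use crypto.randomUUID()

// Sketch of the id resolution described above (assumption: the real
// getRequestLogger may differ in details).
function resolveRequestId(headers: Headers): string {
  return (
    headers.get("x-request-id") ?? // reuse an upstream proxy/client id when present
    headers.get("x-vercel-id") ??  // else Vercel's per-invocation id
    randomUUID()                   // else mint a fresh UUID
  );
}
```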
Child loggers
const log = logger.child({ jobId, queue: 'emails' });
log.info('job started');
log.info('job completed', { durationMs });
Levels & filtering
| Level | When to use |
|---|---|
| trace | Extremely noisy diagnostics |
| debug | Local debugging, never critical |
| info | Business events (sign-in, checkout created) |
| warn | Recoverable issues (retry, degraded mode) |
| error | Failed operation — also forwarded to Sentry |
| fatal | Unrecoverable failure — also forwarded to Sentry |
Override the minimum level via LOG_LEVEL=debug|info|warn|error. Default: info in production, debug elsewhere.
error and fatal calls that include err: Error in the context are automatically sent to Sentry as exceptions when NEXT_PUBLIC_SENTRY_DSN is set.
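The forwarding rule stated above can be written out as a predicate. This is a sketch of the decision logic, not the logger's actual code (the real logger presumably wires this into its error/fatal methods rather than exposing a helper):

```typescript
// Sketch of the Sentry-forwarding rule (assumption: the real logger embeds
// this check rather than exposing it as a standalone function).
function shouldForwardToSentry(
  level: "trace" | "debug" | "info" | "warn" | "error" | "fatal",
  context: Record<string, unknown>,
  dsn: string | undefined = process.env.NEXT_PUBLIC_SENTRY_DSN,
): boolean {
  const severe = level === "error" || level === "fatal";
  // All three conditions must hold: DSN configured, severe level, real Error.
  return Boolean(dsn) && severe && context.err instanceof Error;
}
```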
PostHog Analytics
PostHog provides product analytics with event tracking, funnels, session replays, and feature flags.
Setup
Set two environment variables in .env.local:
NEXT_PUBLIC_POSTHOG_KEY="phc_your_project_key"
NEXT_PUBLIC_POSTHOG_HOST="https://us.i.posthog.com"
PostHog is automatically initialized when these variables are present. No code changes required.
Components
Analytics components live in src/components/analytics/ and handle:
- Page view tracking — automatic on route changes
- Custom event tracking — call `posthog.capture()` for specific user actions
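A capture call looks like the sketch below. In the app you would use the real client (`import posthog from "posthog-js"`); a recording stub stands in here so the call shape is clear on its own. The event and property names are illustrative, not part of Codapult:

```typescript
// Stub that records events in memory, standing in for the real posthog-js
// client so this sketch is self-contained.
type CapturedEvent = { event: string; properties?: Record<string, unknown> };
const captured: CapturedEvent[] = [];
const posthog = {
  capture(event: string, properties?: Record<string, unknown>) {
    captured.push({ event, properties });
  },
};

// Typical call site: a handler that runs after a successful plan upgrade.
// Event name and properties are hypothetical examples.
posthog.capture("plan_upgraded", { plan: "pro", seats: 5 });
```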
Built-in Analytics (Alternative)
If you prefer not to use PostHog, Codapult includes a first-party analytics module:
NEXT_PUBLIC_ANALYTICS_ENABLED="true"
This activates the self-hosted analytics engine in src/lib/analytics/, which tracks page views and custom events without sending data to a third party.
Sentry Error Monitoring
Sentry captures errors across the full stack — client-side React errors, server-side exceptions, and edge runtime failures.
Setup
| Variable | Description |
|---|---|
| `NEXT_PUBLIC_SENTRY_DSN` | Sentry DSN — activates error tracking when set |
| `SENTRY_ORG` | Organization slug (for source map uploads) |
| `SENTRY_PROJECT` | Project name (for source map uploads) |
| `SENTRY_AUTH_TOKEN` | Auth token (for source map uploads during build) |
| `SENTRY_RELEASE` | Optional release identifier. Defaults to `VERCEL_GIT_COMMIT_SHA` |
| `SENTRY_ENVIRONMENT` | Optional environment name. Defaults to `VERCEL_ENV` or `NODE_ENV` |
| `NEXT_PUBLIC_SENTRY_TRACE_TARGETS` | Extra hosts for distributed tracing (CSV). Your `NEXT_PUBLIC_APP_URL` is included already |
| `SENTRY_IGNORE_ERRORS` | Extra substrings to ignore in error messages (CSV). Adds to the built-in browser-noise list |
Configuration Files
| File | Purpose |
|---|---|
| `src/instrumentation.ts` | Server-side Sentry initialization (Node.js runtime) |
| `src/instrumentation-client.ts` | Client-side Sentry initialization (browser) |
| `src/sentry.server.config.ts` | Server-side Sentry configuration |
| `src/sentry.edge.config.ts` | Edge runtime Sentry configuration |
| `src/lib/sentry/filters.ts` | Shared release/env resolution, PII scrubbing, noise filter |
| `src/components/analytics/SentryUser.ts` | Client component that tags every event with the current user id |
| `src/app/global-error.tsx` | Global error boundary — catches unhandled React errors and reports to Sentry |
Source Maps & Security
When SENTRY_ORG, SENTRY_PROJECT, and SENTRY_AUTH_TOKEN are set, source maps are generated during pnpm build, uploaded to Sentry for readable stack traces, and then deleted from the build output (deleteSourcemapsAfterUpload: true) so they are never served to end users or CDNs. Combined with productionBrowserSourceMaps: false in next.config.ts, this ensures the public only ever receives minified bundles while Sentry still gets full stack traces.
widenClientFileUpload: true uploads additional client chunks to Sentry for higher-quality symbolication. Files live inside your private Sentry project — this option does not affect what is served publicly.
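For orientation, the options discussed above live in the `withSentryConfig` wrapper in `next.config.ts`. The sketch below follows recent `@sentry/nextjs` versions; exact option names vary by SDK version, and Codapult ships its own config, so treat this as illustrative only:

```typescript
// next.config.ts — illustrative sketch, not Codapult's actual config.
import { withSentryConfig } from "@sentry/nextjs";
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  productionBrowserSourceMaps: false, // never serve source maps publicly
};

export default withSentryConfig(nextConfig, {
  org: process.env.SENTRY_ORG,
  project: process.env.SENTRY_PROJECT,
  authToken: process.env.SENTRY_AUTH_TOKEN,
  widenClientFileUpload: true, // upload extra client chunks for symbolication
  sourcemaps: {
    deleteSourcemapsAfterUpload: true, // strip maps from the build output
  },
});
```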
What ships by default
- Release tracking via `VERCEL_GIT_COMMIT_SHA` — regressions are visible per deploy.
- Distributed tracing — `sentry-trace`/`baggage` headers are propagated to same-origin API routes and the configured app URL, so client clicks and server spans are stitched into one transaction.
- Session Replay with `maskAllText: true` + `blockAllMedia: true` — PII is masked in replays by default.
- PII scrubbing — the shared `sanitizeEvent` hook strips passwords, tokens, cookies, authorization headers, and email addresses from `extra`, `contexts`, `request`, and breadcrumbs before sending.
- Browser noise filter — common `ResizeObserver`, extension-origin, and canceled-fetch errors are dropped automatically.
- Anonymous by default — no name/email/IP is attached to events. To correlate issues with users, mount `<SentryUser id={session.userId} role={session.role} />` inside your authenticated layout. Only the stable user id is sent.
Client checklist
After setting NEXT_PUBLIC_SENTRY_DSN in production, verify:
- Events arrive — trigger a test error (`throw new Error('sentry test')` inside a button handler) and confirm it appears in the Sentry Issues tab.
- Stack traces are readable — the frame shows your source file names (e.g. `src/app/.../page.tsx`), not `chunks/abc123.js`. If not, check that `SENTRY_AUTH_TOKEN` was available at build time.
- Release is populated — the issue lists a release (commit SHA or `SENTRY_RELEASE`). If it says `none`, the CI environment is missing `VERCEL_GIT_COMMIT_SHA`; set `SENTRY_RELEASE` manually.
- Replay plays back — open the issue, play the Replay, confirm text is masked.
- No public source maps — run `curl -I https://your-app.com/_next/static/chunks/main-<hash>.js.map` and expect `404`.
- User id tagged — sign in, trigger a test error, confirm the issue shows the user id (if `<SentryUser />` is mounted).
OpenTelemetry
OpenTelemetry provides distributed tracing for debugging request flows across services. The integration lives in src/lib/telemetry/.
Setup
| Variable | Required | Description |
|---|---|---|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Yes | OTLP collector endpoint, e.g. `http://localhost:4318` |
| `OTEL_SERVICE_NAME` | No | Service name in traces. Defaults to `"codapult"` |
| `OTEL_TRACES_SAMPLE_RATE` | No | Sampling rate from 0 to 1. Defaults to `"0.1"` (10%) |
| `OTEL_EXPORTER_OTLP_HEADERS` | No | Custom headers for the OTLP exporter, e.g. `Authorization=Bearer token` |
Tracing is disabled when OTEL_EXPORTER_OTLP_ENDPOINT is not set.
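The sample-rate handling described in the table can be sketched as a small parser. This is an assumption about how `src/lib/telemetry/` might read its env var, shown for the semantics (default 10%, clamp to a valid range); the real code may differ:

```typescript
// Sketch (assumption: approximates how src/lib/telemetry/ might read
// OTEL_TRACES_SAMPLE_RATE).
function parseSampleRate(raw: string | undefined, fallback = 0.1): number {
  const n = Number(raw);
  if (!Number.isFinite(n)) return fallback; // unset or malformed → default 10%
  return Math.min(1, Math.max(0, n));       // clamp into the valid [0, 1] range
}
```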
Compatible Backends
OpenTelemetry traces can be sent to any OTLP-compatible backend:
- Jaeger — open-source, self-hosted
- Datadog — commercial APM
- New Relic — commercial observability platform
- Grafana Tempo — open-source, pairs with Grafana dashboards
- Honeycomb — commercial observability with BubbleUp analysis
Example: Local Jaeger
# Start Jaeger with Docker
docker run -d --name jaeger \
-p 16686:16686 \
-p 4318:4318 \
jaegertracing/all-in-one:latest
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_SERVICE_NAME="my-saas"
OTEL_TRACES_SAMPLE_RATE="1.0"
Open http://localhost:16686 to view traces in the Jaeger UI.
Core Web Vitals
Codapult includes a performance reporter and in-memory metric store in src/lib/perf/. The admin panel displays a Core Web Vitals dashboard with real user metrics.
Tracked Metrics
| Metric | Full Name | What It Measures | Good Threshold |
|---|---|---|---|
| LCP | Largest Contentful Paint | Loading performance | ≤ 2.5s |
| INP | Interaction to Next Paint | Responsiveness | ≤ 200ms |
| CLS | Cumulative Layout Shift | Visual stability | ≤ 0.1 |
| FCP | First Contentful Paint | Initial render speed | ≤ 1.8s |
| TTFB | Time to First Byte | Server response time | ≤ 800ms |
Metrics are collected from real user sessions in the browser and reported to the server. View the aggregated results at Admin → Performance (/admin/performance).
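The "good" thresholds from the table can be expressed as a simple lookup. This is a sketch of how a sample could be rated against those thresholds; the actual reporter in src/lib/perf/ may bucket values differently (e.g. into good / needs-improvement / poor):

```typescript
// Sketch of rating a real-user sample against the "good" thresholds above
// (assumption: illustrative only, not the src/lib/perf/ implementation).
type Metric = "LCP" | "INP" | "CLS" | "FCP" | "TTFB";

// Time-based metrics are in milliseconds; CLS is a unitless score.
const GOOD_THRESHOLD: Record<Metric, number> = {
  LCP: 2500,
  INP: 200,
  CLS: 0.1,
  FCP: 1800,
  TTFB: 800,
};

function isGood(metric: Metric, value: number): boolean {
  return value <= GOOD_THRESHOLD[metric];
}
```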
Choosing a Setup
| Stage | Recommended Setup |
|---|---|
| Local development | Built-in analytics + console errors (no external services needed) |
| Staging | PostHog (free tier) + Sentry (free tier) |
| Production | PostHog + Sentry + OpenTelemetry (with your preferred backend) |
All integrations are optional and independent — enable only what you need.
Production Considerations
- Sentry sampling: Set `tracesSampleRate` below `1.0` in production to control event volume and costs. A rate of `0.1`–`0.2` is typical.
- PostHog billing: PostHog charges by event volume. Use the `NEXT_PUBLIC_ANALYTICS_ENABLED` first-party analytics as a free alternative for basic tracking.
- OTEL volume: Set `OTEL_TRACES_SAMPLE_RATE` to `0.1` or lower in production. Full sampling (`1.0`) generates significant data at scale.
Next Steps
- Environment Variables — full reference for all monitoring-related env vars
- Admin Panel — view Core Web Vitals and manage experiments
- Security — rate limiting and error response guidelines