Building a real-time notification system with zero external dependencies
We needed real-time payment notifications but could not justify Redis or a managed WebSocket service. Server-Sent Events (SSE) and PostgreSQL LISTEN/NOTIFY did the job.
Our clients needed to see payment status updates in real time. Not refresh-the-page real time. Actually real time, where the dashboard updates within a second of a payment state change. The obvious solutions were Redis pub/sub, a managed WebSocket service like Pusher, or a message queue like RabbitMQ. Every one of those adds infrastructure, cost, and operational complexity to a system that was already doing everything it needed with PostgreSQL and Node.js. I refused to add a dependency for a single feature.
In that first stretch at FinanceOps, I was still learning how to wear the Head of Engineering title without hiding behind it. It also builds on what I learned earlier in “Node.js 22 dropped and here is what actually matters for production backends.” The only credibility that mattered was whether the decision survived contact with real money, ugly edge cases, and the next person I would eventually hire. That same bias toward strict boundaries later shaped how I approached ftryos and pipeline-sdk: make correctness boring before you make the API clever.
The Architecture: SSE Plus LISTEN/NOTIFY
The system has two parts. PostgreSQL LISTEN/NOTIFY channels push events from the database to the Node.js process when payment rows change. Server-Sent Events push those events from Node.js to connected browser clients. The entire implementation is about 200 lines of code with no external dependencies beyond what we already had.
```sql
-- Trigger that fires on payment status changes
CREATE OR REPLACE FUNCTION notify_payment_update()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify(
    'payment_updates',
    json_build_object(
      'paymentId', NEW.id,
      'tenantId', NEW.tenant_id,
      'status', NEW.status,
      'updatedAt', NEW.updated_at
    )::text
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
```sql
CREATE TRIGGER payment_status_change
  AFTER UPDATE OF status ON payments
  FOR EACH ROW
  WHEN (OLD.status IS DISTINCT FROM NEW.status)
  EXECUTE FUNCTION notify_payment_update();
```

The trigger fires only when the status column actually changes, not on every update. This keeps the notification volume manageable. At our peak, we process about 200 payment status changes per minute. LISTEN/NOTIFY handles this without breaking a sweat.
The Node.js Listener
On the Node.js side, a single persistent database connection listens for notifications and fans them out to connected SSE clients. The connection is separate from the application connection pool because LISTEN requires an idle connection that is not being used for queries.
```typescript
import { Client } from 'pg';

// Dedicated connection for LISTEN, separate from the query pool
const listener = new Client({ connectionString: DATABASE_URL });
await listener.connect();
await listener.query('LISTEN payment_updates');

// Map of tenantId -> Set of SSE response objects
const clients = new Map<string, Set<Response>>();

listener.on('notification', (msg) => {
  if (!msg.payload) return;
  const event = JSON.parse(msg.payload);
  const tenantClients = clients.get(event.tenantId);
  if (!tenantClients) return;

  for (const res of tenantClients) {
    res.write(`data: ${msg.payload}\n\n`);
  }
});
```

The fan-out is tenant-scoped. Each SSE connection registers with its tenant ID, and events are only sent to clients belonging to the same tenant. This is critical for multi-tenant isolation. A payment update for Tenant A must never reach a browser session for Tenant B.
The SSE Endpoint
The client connects to a standard SSE endpoint. The browser EventSource API handles reconnection automatically, which is one of the biggest advantages over raw WebSockets for this use case.
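On the browser side, the wiring stays small. This is a sketch, not our production client: `handleSseMessage` and the `/api/payments/stream` path are illustrative names, and the heartbeat filtering assumes the `{ type: 'heartbeat' }` payload shown later.

```typescript
// Shape of the payload emitted by the pg_notify trigger (assumed here).
type PaymentEvent = {
  paymentId: string;
  tenantId: string;
  status: string;
  updatedAt: string;
};

// Parse one SSE `data:` payload. Heartbeats are filtered out so the
// UI only reacts to real payment events; returns null for heartbeats.
export function handleSseMessage(data: string): PaymentEvent | null {
  const parsed = JSON.parse(data);
  if (parsed.type === 'heartbeat') return null;
  return parsed as PaymentEvent;
}

// Browser wiring (EventSource reconnects automatically on drop):
// const source = new EventSource('/api/payments/stream');
// source.onmessage = (e) => {
//   const event = handleSseMessage(e.data);
//   if (event) renderPaymentStatus(event.paymentId, event.status);
// };
```

Keeping the parse-and-filter step as a pure function makes the client logic testable without a live connection.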
```typescript
// Next.js Route Handler for SSE
export async function GET(req: Request) {
  const tenantId = await getTenantFromSession(req);

  const stream = new ReadableStream({
    start(controller) {
      const encoder = new TextEncoder();
      const send = (data: string) => {
        controller.enqueue(encoder.encode(`data: ${data}\n\n`));
      };

      // Register this client
      registerClient(tenantId, send);

      // Send heartbeat every 30s to keep connection alive
      const heartbeat = setInterval(() => {
        send(JSON.stringify({ type: 'heartbeat' }));
      }, 30000);

      // Cleanup on disconnect
      req.signal.addEventListener('abort', () => {
        clearInterval(heartbeat);
        removeClient(tenantId, send);
      });
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}
```

The 30-second heartbeat is essential. Without it, load balancers and proxies will close idle connections. The heartbeat keeps the connection alive and also serves as a health check. If a client stops receiving heartbeats, it knows the connection is dead and EventSource automatically reconnects.
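The `registerClient` and `removeClient` helpers the handler calls are not shown above. A minimal in-memory version, assuming the same tenant-keyed Map used by the listener, might look like this:

```typescript
type Send = (data: string) => void;

// Tenant-keyed registry of connected SSE clients.
const clients = new Map<string, Set<Send>>();

export function registerClient(tenantId: string, send: Send): void {
  let set = clients.get(tenantId);
  if (!set) {
    set = new Set();
    clients.set(tenantId, set);
  }
  set.add(send);
}

export function removeClient(tenantId: string, send: Send): void {
  const set = clients.get(tenantId);
  if (!set) return;
  set.delete(send);
  // Drop empty sets so the Map does not grow with departed tenants.
  if (set.size === 0) clients.delete(tenantId);
}

// Fan one serialized event out to every client of one tenant only.
export function broadcast(tenantId: string, payload: string): void {
  const set = clients.get(tenantId);
  if (!set) return;
  for (const send of set) send(payload);
}
```

Keying the registry by tenant makes the isolation property structural: a broadcast for Tenant A cannot reach Tenant B because B's senders live in a different Set.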
Tradeoffs and Limits
- Single-process fan-out: This architecture assumes one Node.js process handles all SSE connections. Once we scale beyond a single process, we need a shared pub/sub layer between instances. That is when Redis enters the picture.
- LISTEN/NOTIFY has no persistence: If the Node.js process is down when a notification fires, the event is lost. We mitigate this with a catch-up query on SSE reconnection that fetches payments updated in the last 60 seconds.
- Connection limits: Each SSE client holds an open HTTP connection. At 500 concurrent clients, this is fine. At 5,000, we would need to consider WebSocket multiplexing or a dedicated push service.
- No binary data: SSE is text-only. For our JSON payloads this is not a limitation, but it rules out streaming binary data.
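The catch-up query mentioned above can be expressed as a parameterized query for node-postgres. This is a sketch: the column names mirror the trigger payload, but the exact schema and the `buildCatchUpQuery` name are assumptions.

```typescript
// Build the reconnect catch-up query: all status changes for one
// tenant within the replay window (default 60s, matching the
// mitigation described above).
export function buildCatchUpQuery(tenantId: string, windowSeconds = 60) {
  return {
    text: `SELECT id AS "paymentId", tenant_id AS "tenantId",
                  status, updated_at AS "updatedAt"
             FROM payments
            WHERE tenant_id = $1
              AND updated_at >= now() - make_interval(secs => $2)
            ORDER BY updated_at`,
    values: [tenantId, windowSeconds],
  };
}

// Usage with an existing pg Pool (assumed in scope):
// const { rows } = await pool.query(buildCatchUpQuery(tenantId));
// for (const row of rows) send(JSON.stringify(row));
```

Running this once per reconnect trades a small amount of duplicate delivery for not losing events, so the client should treat updates as idempotent by payment ID.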
We ran this architecture in production for eight months before hitting the scaling limit. The signal was clear: when we added a second Node.js process behind the load balancer, clients connected to process A stopped receiving events published by process B. That was the point where we added Redis pub/sub as a cross-process channel. But those eight months of zero-dependency real-time notifications served us well.
The builder phase was less glamorous than people imagine. It was mostly a series of stubborn, unfashionable choices that kept future-me out of 2 a.m. incident calls. I still make the same kind of choices inside portfolio, pipeline-sdk, and dotfiles.
Do not add infrastructure because a tutorial told you to. Add it when your current solution stops working and you understand exactly why.
The zero-dependency approach worked because the problem was simpler than we initially assumed. Most real-time notification systems do not need message persistence, guaranteed delivery, or complex routing. They need a reliable connection, a clean message format, and graceful reconnection logic. By building on native platform capabilities instead of reaching for a library, we ended up with a system that was easier to debug, cheaper to operate, and faster to extend when new notification types appeared.