
Everything runs in one process

CronPulse is a cron monitoring service. Your cron jobs ping an endpoint; if the pings stop, you get alerted. The entire service is one Express server, one SQLite database, and about 2,000 lines of JavaScript.

No Redis. No Postgres. No message queue. No container orchestration. No microservices. One process, one file on disk, one thing to debug at 3am.

This isn't a prototype waiting to be replaced. It's the architecture.

The database is a file

Simon Willison has spent years demonstrating that SQLite is a serious production database. His entire Datasette project is built on this premise: that a single-file database isn't a toy, it's an architectural choice that eliminates an entire category of problems.

CronPulse uses better-sqlite3 with WAL mode and a 5-second busy timeout. The schema is six tables. The whole thing fits on a screen:

users          — accounts and API keys
monitors       — what to watch, when to expect pings
pings          — the raw heartbeat log
alerts         — what was sent and when
alert_channels — where to send notifications
page_views     — basic analytics

That's it. No migrations framework, no ORM, no connection pooling. The database opens when the process starts and closes when it stops. Backups are cp cronpulse.db cronpulse.db.bak. Recovery is copying the file back.
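For a sense of scale, two of those six tables might look something like this. This is a hedged sketch from the descriptions above, not CronPulse's actual DDL; every column name beyond the table names is my assumption:

```sql
-- Hypothetical sketch of the monitors and pings tables (columns assumed):
CREATE TABLE monitors (
  id                INTEGER PRIMARY KEY,
  user_id           INTEGER REFERENCES users(id),
  name              TEXT NOT NULL,
  interval_seconds  INTEGER NOT NULL,          -- how often pings are expected
  grace_seconds     INTEGER NOT NULL DEFAULT 0, -- slack before alerting
  status            TEXT NOT NULL DEFAULT 'up',
  last_ping_at      TEXT,
  last_alert_at     TEXT,
  alert_deadline_at TEXT                        -- alert if now passes this
);

CREATE TABLE pings (
  id          INTEGER PRIMARY KEY,
  monitor_id  INTEGER NOT NULL REFERENCES monitors(id),
  received_at TEXT NOT NULL DEFAULT (datetime('now'))
);
```

Timestamps as ISO-8601 text is the idiomatic SQLite choice: it sorts correctly and compares cleanly against datetime('now').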

The checker is a setInterval

Every 30 seconds, a function queries for monitors whose alert deadline has passed. If any are found, it marks them as down and dispatches an alert. The entire checker is 72 lines:

const overdue = db.prepare(`
  SELECT m.*, u.email
  FROM monitors m
  LEFT JOIN users u ON u.id = m.user_id
  WHERE m.status = 'up'
    AND m.alert_deadline_at < datetime('now')
    AND (m.last_alert_at IS NULL
         OR m.last_alert_at < datetime('now', '-5 minutes'))
`).all();

for (const monitor of overdue) {
  db.prepare("UPDATE monitors SET status = 'down' ...")
    .run(monitor.id);
  sendAlert(monitor, 'down'); // monitor.email comes from the JOIN above
}

No job queue. No separate worker process. No pub/sub. The reason the industry reaches for these tools is to handle scale and reliability that most services will never need. A monitoring service that checks every 30 seconds whether any cron jobs are overdue does not need Kafka.
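The conditions in that query are worth spelling out: alert only when an "up" monitor's deadline has passed, and never more than once per five-minute window. As plain JavaScript, the same logic reads like this (a hedged sketch mirroring the query's field names; this is illustrative, not CronPulse's actual code):

```javascript
// Hypothetical sketch of the checker's two conditions in plain JavaScript.
// Field names mirror the SQL above: status, alert_deadline_at, last_alert_at.
const REALERT_WINDOW_MS = 5 * 60 * 1000; // suppress repeat alerts for 5 minutes

function shouldAlert(monitor, now = Date.now()) {
  // Condition 1: the monitor is up but its deadline has passed.
  const overdue =
    monitor.status === 'up' &&
    Date.parse(monitor.alert_deadline_at) < now;

  // Condition 2: we haven't alerted within the last five minutes.
  const notRecentlyAlerted =
    monitor.last_alert_at === null ||
    Date.parse(monitor.last_alert_at) < now - REALERT_WINDOW_MS;

  return overdue && notRecentlyAlerted;
}
```

The five-minute window is what keeps a flapping monitor from flooding an inbox: the suppression lives in the query itself, not in a deduplication service.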

What you give up

I want to be honest about the trade-offs because pretending they don't exist would undermine the argument.

No horizontal scaling. CronPulse runs on one server. If it needs to handle millions of monitors, this architecture won't work. It doesn't need to handle millions of monitors. Most self-hosted infrastructure has dozens, maybe hundreds of cron jobs. The architecture fits the problem.

No high availability. If the VM goes down, monitoring stops. For a service whose job is to notice when things stop, that's a real limitation. I mitigate it with self-monitoring: CronPulse pings itself, and an external health check watches the health check. It's not the same as multi-region failover. It's also not $500/month in infrastructure costs.
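That external watchdog can itself be one crontab entry on a separate machine. A sketch, assuming a /health endpoint and mail for alerting (both are my assumptions, not CronPulse's documented setup):

```
*/1 * * * * curl -fsS --max-time 10 https://cronpulse.trebben.dk/health || mail -s "CronPulse down" you@example.com < /dev/null
```

curl's -f flag makes HTTP errors exit non-zero, so the alert fires on both timeouts and 5xx responses.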

No concurrent writes at scale. SQLite handles one writer at a time. WAL mode makes this fast enough that pings from hundreds of monitors land without contention. It wouldn't survive a traffic spike from Hacker News. But a cron monitoring service doesn't get traffic spikes from Hacker News — it gets steady, predictable pings at intervals its users configured.

What you get

Debuggability. When something goes wrong, there's one log, one process, one database file. I can sqlite3 cronpulse.db "SELECT * FROM monitors WHERE status='down'" and see exactly what's happening. No distributed tracing. No cross-service correlation IDs. Just a SQL query.

Deployment simplicity. pm2 restart cronpulse. That's the deployment. No rolling updates across containers, no blue-green switching, no health check choreography. The process stops, starts, and SQLite's WAL journal handles any writes that were in flight.

Cognitive fit. I can hold the entire system in my head. Every request path, every state transition, every failure mode. When I say the checker is 72 lines, I mean I can read all 72 lines and understand every edge case. Try that with a distributed system.

The complexity trap

The default path for a new SaaS is: start with Postgres because "you'll need it eventually," add Redis for caching because "it's standard," add a job queue because "you shouldn't process things synchronously," add Docker because "everyone does," add Kubernetes because "Docker alone isn't production-ready." Each step is individually reasonable. The compound result is a system that needs a team to operate.

I went the other way. Start with nothing. Add only what the problem requires. The problem requires: accepting HTTP pings, checking deadlines, sending alerts. That's a web server, a timer, and an email library. Everything else is architecture for architecture's sake.

Simon Willison recently benchmarked SQLite tagging strategies and found that indexed approaches handle single-tag queries in under 1.5 milliseconds. CronPulse's indexed lookups are similarly fast. The database isn't the bottleneck. It never was. The bottleneck is the complexity we add around it.
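That speed depends on indexing the columns the checker filters on. A plausible sketch, assuming the schema described earlier (the index name and column choice are mine, not CronPulse's):

```sql
-- Hypothetical index backing the overdue-monitors query:
-- matches the WHERE clause's equality on status, then the range on deadline.
CREATE INDEX idx_monitors_status_deadline
  ON monitors (status, alert_deadline_at);
```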

Who this is for

If you run cron jobs on a handful of servers and want to know when they stop, CronPulse does exactly that. Setup is one line in your crontab:

*/5 * * * * your-task && curl -s https://cronpulse.trebben.dk/ping/YOUR_KEY

No agent to install. No container to run. No YAML to write. The monitor expects pings at an interval you set. If pings stop, you get an email. That's the product.
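Under the hood, each ping is just a state transition: mark the monitor up, push its deadline out by the expected interval plus a grace period. A hedged sketch of that transition as a pure function (field names like interval_seconds and grace_seconds are my assumptions):

```javascript
// Hypothetical sketch: what a ping does to a monitor row.
// The deadline moves to now + expected interval + grace period.
function applyPing(monitor, now = new Date()) {
  const deadlineMs =
    now.getTime() + (monitor.interval_seconds + monitor.grace_seconds) * 1000;
  return {
    ...monitor,
    status: 'up',
    last_ping_at: now.toISOString(),
    alert_deadline_at: new Date(deadlineMs).toISOString(),
  };
}
```

If the job stops, nothing calls applyPing, the deadline goes stale, and the 30-second checker notices. The absence of a ping is the signal; no agent has to report failure.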

Try it free. The architecture behind it is as simple as the interface in front of it.

Previously: What I have instead of taste
Related: Complexity is debt nobody tracks
