
The Database Is Just a File

It is time to stop over-provisioning massive RDS clusters and rediscover why SQLite is becoming the backbone of high-performance edge computing.

4 min read


You’ve been told that for anything "production-ready," you need a multi-node, managed PostgreSQL instance living in a VPC with automated failover and a six-figure price tag. You've been lied to. Or, at the very least, you've been sold a massive solution for a problem your application doesn't actually have yet.

We’ve spent the last decade making databases more complex, more distant, and more expensive. We put them behind load balancers, connection poolers, and private subnets, and then we wonder why our "hello world" takes 200ms to load. The truth is, for a huge percentage of workloads, the most performant database is the one that lives right next to your code as a single, boring file on a disk.

The Hidden Tax of the Network

When you use a managed database service like RDS or Google Cloud SQL, you are paying a "network tax" on every single query. Even within the same availability zone, a round trip from your app server to your database takes about 1ms to 2ms.

That sounds fast, right? It isn't. If you’re rendering a page that needs to make 10 sequential queries—which happens more often than we'd like to admit—you’ve just added 20ms of pure latency before you've even processed a single row of data.

SQLite, on the other hand, is a library, not a server. When you query SQLite, you aren't making a network call. You're calling a function that reads bytes from a file on the local NVMe drive. That same 10-query operation doesn't take 20ms; it takes about 200 microseconds.
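If you want to see that for yourself, here’s a rough sketch of the measurement with better-sqlite3 (I’ll walk through the setup properly in the next section). The file name and table are placeholders, and the exact number depends entirely on your hardware:

const Database = require('better-sqlite3');

const db = new Database('latency-demo.db');
db.exec("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)");
db.prepare('INSERT INTO users (name) VALUES (?)').run('Alice');

const byId = db.prepare('SELECT * FROM users WHERE id = ?');

// Ten "round trips", each one a synchronous function call rather than a network hop
const start = process.hrtime.bigint();
for (let i = 0; i < 10; i++) {
  byId.get(1);
}
const micros = Number(process.hrtime.bigint() - start) / 1000;
console.log(`10 sequential queries: ${micros.toFixed(0)}µs`);

The exact figure will vary, but the point stands: there is no await and no socket anywhere in that loop.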

Setup is... just opening a file

I remember the first time I tried to set up a "proper" dev environment for a team. We spent three hours configuring Docker Compose, mounting volumes for Postgres, and fighting with pg_hba.conf just to get a local login working.

Compare that to the SQLite experience in Node.js using better-sqlite3:

const Database = require('better-sqlite3');

// This is it. This is the whole setup. 
// It creates a file named 'production.db' if it doesn't exist.
const db = new Database('production.db', { verbose: console.log });

// Create a table
db.exec("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)");

// Insert some data
const insert = db.prepare('INSERT INTO users (name) VALUES (?)');
insert.run('Alice');

// Query it
const user = db.prepare('SELECT * FROM users WHERE name = ?').get('Alice');
console.log(user);

There are no connection strings, no credentials to leak into a public repo, and no need to manage a connection pool. It’s just a file. You can move it, you can rename it, and you can back it up with a simple cp production.db production.db.bak, as long as nothing is writing to it at that moment.
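For a database that is being written to (especially once WAL mode is on and recent changes live in a separate -wal file), a raw copy can catch things mid-write. SQLite has an online backup API for exactly this, and better-sqlite3 exposes it. A minimal sketch, reusing the db handle from above and an example destination path:

// Copies the database consistently even while the app is writing to it
db.backup('production.db.bak')
  .then(() => console.log('backup complete'))
  .catch((err) => console.error('backup failed:', err));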

But what about concurrency?

"SQLite is only for single-user apps!" is the standard retort. It’s a bit of an outdated myth. While SQLite does lock the file for writes, it handles concurrent reads beautifully. And if you enable WAL (Write-Ahead Logging) Mode, it can handle reads and writes simultaneously without blocking each other.

I’ve found that for most CRUD apps, WAL mode is the "magic button" that makes SQLite production-ready:

// Switch to WAL mode for way better performance
db.pragma('journal_mode = WAL');

With WAL mode enabled, SQLite can handle thousands of transactions per second on a decent server. Unless you are building the next Twitter, your bottleneck is going to be your code, not the file locking.
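Two other pragmas I usually set alongside WAL (treat the values as starting points, not gospel): a busy timeout, so a second writer waits for the lock instead of instantly throwing SQLITE_BUSY, and synchronous = NORMAL, the usual companion to WAL that cuts down on fsyncs without risking corruption:

// Give a competing writer up to 5 seconds to grab the lock
// before better-sqlite3 throws SQLITE_BUSY
db.pragma('busy_timeout = 5000');

// NORMAL pairs well with WAL: fewer fsyncs, no corruption risk; at worst,
// a transaction committed right before a power loss gets rolled back
db.pragma('synchronous = NORMAL');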

The Edge is Changing Everything

The biggest argument against the "just a file" approach used to be horizontal scaling. If you have five web servers, they can't all talk to the same local file, right?

The landscape has changed. Tools like LiteFS and Turso allow you to replicate that SQLite file across a global network. You get the local-speed reads of a file on a disk, but the data is automatically synchronized to other instances of your app.
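As a rough sketch of what that looks like with Turso’s libSQL client (the database URL and token below are placeholders, and the option names are from the @libsql/client docs as I understand them, so double-check against the current version): you point the client at a local file plus a remote sync URL, reads are served from the local replica, and sync() pulls down whatever has changed upstream.

const { createClient } = require('@libsql/client');

// Local file for fast reads, remote Turso database as the source of truth.
// The URL and token are placeholders for your own database.
const client = createClient({
  url: 'file:local-replica.db',
  syncUrl: 'libsql://my-app-db.turso.io',
  authToken: process.env.TURSO_AUTH_TOKEN,
});

async function main() {
  await client.sync();                                    // pull the latest changes
  const rs = await client.execute('SELECT * FROM users'); // served from the local file
  console.log(rs.rows);
}

main();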

I’ve been moving my smaller projects to this stack lately, and the simplicity is addictive. You deploy your app to a VPS or a container, attach a small persistent volume, and you're done. No more RDS dashboards, no more "is the database up?" alerts, and your AWS bill actually looks like a human-readable number for once.

The Gotchas (Because nothing is perfect)

I’m not saying you should delete your Postgres clusters today. There are real trade-offs:

1. Flexible Typing: SQLite is "flexibly typed." It’ll let you put a string in an integer column if you really want to. Use a tool like Drizzle ORM or Prisma to keep yourself sane, or reach for STRICT tables (see the sketch after this list).
2. Huge Datasets: If your database is 500GB, keeping that all on one disk and replicating it globally starts to get tricky.
3. Complex Permissions: SQLite doesn't have "users" or "roles." If someone can read the file, they can read the data.
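On the typing point, SQLite 3.37+ has an opt-in fix: declare a table as STRICT and it will reject values that don’t match the declared column types instead of quietly storing them. A quick sketch, with a table name that’s purely illustrative:

// STRICT tables (SQLite 3.37+) enforce the declared column types
db.exec("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL) STRICT");

try {
  // Throws: a STRICT table refuses to store TEXT in an INTEGER column
  db.prepare('INSERT INTO accounts (balance) VALUES (?)').run('not a number');
} catch (err) {
  console.error(err.message);
}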

Final Thoughts

The trend in web development is swinging back toward simplicity. We’re realizing that the "enterprise" way of doing things often adds more friction than value for 90% of the apps we build.

If you're starting a new project, try the "file" approach. Stop worrying about horizontal scaling until you have enough users to actually break a single NVMe drive. You’ll be surprised at how much faster you can build when you aren't fighting your own infrastructure.