Development · 3 min read

The SQLite Renaissance: Why It's Back in the Spotlight

A PostgreSQL loyalist tries SQLite for a side project and gets surprised

"SQLite? Isn't That Just for Mobile?"

When I told a colleague I used SQLite for my side project's database, that was the reaction. Honestly, I would've said the same thing a year ago. SQLite is the local DB on Android, or something you use temporarily in dev environments. Production? Come on.

But the vibe has shifted lately.

What's Happening

Turso, Litestream, LiteFS, Cloudflare D1. Since 2024, SQLite-based services have been appearing everywhere. Rails 8 now treats SQLite as a production-ready default rather than a dev-only convenience. Laravel strengthened its SQLite support. "SQLite in production" stopped being a meme on Twitter.

Why the sudden change? A few reasons.

First, serverless and edge computing. In environments like Cloudflare Workers or Vercel Edge Functions, PostgreSQL connections are inefficient. Connection pooling, cold starts, network latency. SQLite is a single file, so none of these problems exist.

Second, simplicity. PostgreSQL setup requires a server: Docker locally, or something like RDS in the cloud. SQLite is just a file. Backups? Copy the file. Done.
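One caveat on "copy the file": copying a database that's mid-write can capture an inconsistent snapshot, so a naive copy is only safe when nothing is writing. A minimal sketch in TypeScript (file paths are hypothetical; for a live database, prefer SQLite's `VACUUM INTO` or a driver's backup API):

```typescript
import { copyFileSync, existsSync } from "node:fs";

// Naive backup: copy the database file. Safe only when no writer is
// active (app stopped, or writes paused). For a live database, run
// `VACUUM INTO 'backup.db'` or use the online backup API instead.
function backupDatabase(dbPath: string, backupPath: string): void {
  if (!existsSync(dbPath)) {
    throw new Error(`database file not found: ${dbPath}`);
  }
  copyFileSync(dbPath, backupPath);
}
```

If you're in WAL mode, remember the `-wal` and `-shm` sidecar files exist too; a consistent offline copy should be taken after a checkpoint.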

I Tried It Myself

Side project: a "books I've read" tracker. Next.js + Drizzle ORM + SQLite. Used better-sqlite3 as the driver.

Setup was embarrassingly easy. npm install better-sqlite3 drizzle-orm and done. No database server to start. A data.db file appears in the project root.

Defining schemas and running migrations with Drizzle felt nearly identical to PostgreSQL. At the ORM layer, the difference is almost invisible.

import { sqliteTable, integer, text } from 'drizzle-orm/sqlite-core';

const books = sqliteTable('books', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  title: text('title').notNull(),
  author: text('author').notNull(),
  finishedAt: text('finished_at'),
  rating: integer('rating'),
});

You could tell me this was PostgreSQL and I'd believe it.

Performance Was Surprising

Ran some benchmarks. Inserting 1,000 books: 43ms. Same data into PostgreSQL (Docker, local): 187ms. SQLite was 4x faster. No network hop, so it makes sense.
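The harness behind those numbers was nothing fancy. A minimal sketch of the approach (the workload shown here is a stand-in, not the actual Drizzle inserts):

```typescript
// Crude micro-benchmark: time a synchronous workload in milliseconds.
// `work` stands in for the real insert loop, which is driver-specific.
function timeMs(work: () => void): number {
  const start = performance.now();
  work();
  return performance.now() - start;
}

// Example: time 1,000 iterations of a placeholder workload.
const elapsed = timeMs(() => {
  const rows: { title: string }[] = [];
  for (let i = 0; i < 1000; i++) rows.push({ title: `book ${i}` });
});
```

One thing that matters enormously for insert numbers: wrap the 1,000 inserts in a single transaction (better-sqlite3 exposes `db.transaction()`). Row-at-a-time autocommit pays a disk sync per insert and looks far slower than SQLite actually is.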

SELECT was similar. Full scan of 1,000 rows: 2ms. PostgreSQL: 8ms. With joins, the gap narrows, but SQLite still wins.

But this is single-user. Concurrent access changes the picture entirely.

The Limits Are Clear

SQLite's write lock covers the whole database file. WAL mode allows reads to run concurrently with a writer, but writes are still serialized, one at a time. At around 100 concurrent users, write contention becomes real.
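Since SQLite serializes writes anyway, one common app-level mitigation is to serialize them yourself, so a busy moment queues writes instead of surfacing busy errors. A minimal promise-chain sketch (an illustration, not code from the project):

```typescript
// Serialize writes through a single promise chain: at most one write
// callback runs at a time, in FIFO order. Reads bypass the queue,
// since WAL lets them run alongside the single writer.
class WriteQueue {
  private tail: Promise<void> = Promise.resolve();

  enqueue<T>(write: () => Promise<T>): Promise<T> {
    const result = this.tail.then(write);
    // Keep the chain alive even if this write rejects.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

In practice you'd also set a `busy_timeout` pragma so that any stray concurrent writer waits briefly instead of failing immediately.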

My side project has exactly one user: me. So this isn't an issue. (The sad part is that it's just me.)

JSON query support is weaker than PostgreSQL's. There's no equivalent of Postgres's jsonb operators and indexes -- you use json_extract, and complex JSON queries get awkward.
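To make the json_extract point concrete, here's the shape such a query takes, assuming a hypothetical metadata TEXT column on books holding JSON:

```typescript
// Hypothetical `metadata` TEXT column holding JSON like
// {"genre": "sf", "pages": 412}. SQLite reads into it with
// json_extract and a '$.path' expression; there is no Postgres-style
// jsonb index behind it, so a filter like this is a full scan.
const byLongBooks = `
  SELECT title, json_extract(metadata, '$.genre') AS genre
  FROM books
  WHERE json_extract(metadata, '$.pages') > 300
`;
```

It works, but chaining several extracts in one WHERE clause gets verbose fast compared to Postgres's jsonb operators.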

Full-text search is available through the FTS5 extension, but it's less capable than PostgreSQL's tsvector-based search.

The Turso Option

Turso is a service that lets you use SQLite over the network. Built on libSQL, a SQLite fork. Supposedly good read performance by placing replicas at the edge.

The free tier is generous -- 9GB storage, 500 databases. Plenty for side projects. But in practice, latency is higher than local SQLite. Obviously. Network overhead.

"Then why not just use PostgreSQL?" is a fair question. Turso's advantage is easy edge deployment, but if you don't need that scale, it's overkill.

Can SQLite Run in Production?

My answer: "conditionally, yes."

Works for: low-traffic services, read-heavy apps, personal projects, internal tools. Blogs, documentation sites, admin panels.

Doesn't work for: write-heavy concurrent access, large datasets, multi-server architectures.

Most side projects fall in the first category. And honestly, most early-stage startups do too. The moment you actually need PostgreSQL comes later than you'd think.

My new default for side projects is SQLite. PostgreSQL when I need it. Migration will be annoying, sure, but carrying unnecessary complexity from day one is worse.
