The Day We Put SQL vs NoSQL to the Test
It started as a Slack argument.
One of our backend devs insisted MongoDB would crush PostgreSQL in read-heavy scenarios. Another swore by Postgres's indexes and JOIN performance. So we did what any responsible engineers would do:
We ran 1 million queries. Side by side. Real schema. Real traffic.
And the results?
The internet tells half the story. The other half lives in your indexes, your query patterns, and your architecture.
This article will show you what really happens when PostgreSQL and MongoDB face off under real-world pressure — and why your choice should depend on how you query, not just what you store.
The Setup
Before we dive into benchmarks, here's our architecture:
+--------------+      +---------------+      +----------------------+
|   Load Gen   +----->|  Go Service   +----->|       Database       |
|  (wrk + Go)  |      |  (API Layer)  |      | (Postgres / MongoDB) |
+--------------+      +---------------+      +----------------------+
- We used wrk to simulate high-throughput reads/writes (an example invocation is sketched below)
- Backend: lightweight Go service exposing REST endpoints
- Schema: simple users, orders, products with lookups and filters
- Data Volume: 1 million rows / docs in each DB
- Tests: 500K reads, 500K writes (concurrent, real-world queries)
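For reference, the load generator was driven with commands along these lines; the thread count, connection count, duration, and endpoint shown here are illustrative, not our exact configuration:
wrk -t8 -c200 -d60s http://localhost:8080/users/123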
Schema (PostgreSQL) vs Collection (MongoDB)
Here's the basic schema we used for PostgreSQL:
-- PostgreSQL schema
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT,
email TEXT UNIQUE,
created_at TIMESTAMP
);
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES users(id),
amount DECIMAL,
status TEXT,
created_at TIMESTAMP
);
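The primary keys and the UNIQUE constraint on email already give Postgres indexes to work with. The lookups below lean on a few more; this is a sketch of what the "index tuning" mentioned in the lessons section means, not the exact DDL we ran:
-- Illustrative secondary indexes (not the exact tuning used in the runs)
CREATE INDEX idx_users_created_at ON users (created_at);
CREATE INDEX idx_orders_user_id   ON orders (user_id);
CREATE INDEX idx_orders_status    ON orders (status);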
Equivalent MongoDB structure:
// MongoDB document
{
_id: ObjectId(),
name: "Amit",
email: "amit@example.com",
created_at: ISODate(),
orders: [
{ amount: 1200, status: "paid", created_at: ISODate() },
{ amount: 300, status: "failed", created_at: ISODate() }
]
}
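On the Mongo side the document is the unit, so the secondary indexes land on the embedded fields we filter on. Again a sketch rather than the exact setup:
// Illustrative indexes for the queries below
db.users.createIndex({ email: 1 }, { unique: true })
db.users.createIndex({ created_at: 1 })
db.users.createIndex({ "orders.status": 1 })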
Query 1: Read by ID
Let's test the simplest case — fetch a user by ID.
PostgreSQL (Go)
// Select explicit columns so the Scan order is guaranteed to match
row := db.QueryRow("SELECT id, name, email, created_at FROM users WHERE id = $1", userID)
err := row.Scan(&id, &name, &email, &createdAt)
MongoDB (Go)
var user User
err := coll.FindOne(ctx, bson.M{"_id": userID}).Decode(&user)
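The Decode call assumes a struct whose bson tags line up with the document shown earlier; a rough sketch (field names are our guess, and primitive comes from go.mongodb.org/mongo-driver/bson/primitive). Note that the _id filter only matches if userID has the same type as the stored _id, i.e. a primitive.ObjectID here.
// Sketch of the struct FindOne decodes into; bson tags map Go fields to document keys
type User struct {
    ID        primitive.ObjectID `bson:"_id"`
    Name      string             `bson:"name"`
    Email     string             `bson:"email"`
    CreatedAt time.Time          `bson:"created_at"`
}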
Benchmark (Read by ID)
| Database | P50 Latency | P99 Latency | RPS |
| ---------- | ----------- | ----------- | ------ |
| PostgreSQL | 1.2 ms | 5.4 ms | 14,800 |
| MongoDB | 1.5 ms | 6.8 ms | 12,300 |
Winner: PostgreSQL. Postgres edges ahead on pure indexed lookups.
Query 2: Filter with JOINs / Embedded Arrays
Now for something more realistic: show all paid orders for users created in the last 30 days.
PostgreSQL (JOIN)
SELECT o.*
FROM orders o
JOIN users u ON o.user_id = u.id
WHERE u.created_at >= NOW() - INTERVAL '30 days'
  AND o.status = 'paid';
MongoDB (Nested Filter)
db.users.aggregate([
  { $match: { created_at: { $gte: new Date(Date.now() - 30*24*60*60*1000) } } },
  { $unwind: "$orders" },
  { $match: { "orders.status": "paid" } }
])
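The pipeline above is mongosh syntax; inside the Go service the same stages go through the driver's Aggregate call. A minimal sketch, assuming the usual go.mongodb.org/mongo-driver packages and a coll handle like the one in Query 1:
// Same pipeline, expressed with the official Go driver
cutoff := time.Now().AddDate(0, 0, -30) // "last 30 days"
pipeline := mongo.Pipeline{
    bson.D{{Key: "$match", Value: bson.M{"created_at": bson.M{"$gte": cutoff}}}},
    bson.D{{Key: "$unwind", Value: "$orders"}},
    bson.D{{Key: "$match", Value: bson.M{"orders.status": "paid"}}},
}
cursor, err := coll.Aggregate(ctx, pipeline)
if err != nil {
    log.Fatal(err)
}
defer cursor.Close(ctx)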
Benchmark (Relational Query / Aggregation)
| Database | P50 Latency | P99 Latency | RPS |
| ---------- | ----------- | ----------- | ----- |
| PostgreSQL | 3.8 ms | 12.1 ms | 7,200 |
| MongoDB | 6.1 ms | 18.9 ms | 4,100 |
Winner: PostgreSQL. Complex relational queries remain Postgres's stronghold, especially when JOINs are unavoidable.
Query 3: Insert at Scale
Insert 1,000 new users with 5 orders each.
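Both snippets below assume a users slice that carries each user's orders inline. Roughly this shape (type and field names are our guesses, not the benchmark's actual structs):
// Assumed input shape for the bulk-insert snippets; bson tags matter only for the Mongo path
type Order struct {
    Amount float64 `bson:"amount"`
    Status string  `bson:"status"`
}

type UserWithOrders struct {
    Name   string  `bson:"name"`
    Email  string  `bson:"email"`
    Orders []Order `bson:"orders"`
}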
PostgreSQL (Bulk Insert)
tx, err := db.Begin()
if err != nil {
    log.Fatal(err)
}
for _, user := range users {
    // RETURNING id captures the generated key so the orders can reference it
    var userID int
    tx.QueryRow("INSERT INTO users(name, email) VALUES($1, $2) RETURNING id",
        user.Name, user.Email).Scan(&userID)
    for _, order := range user.Orders {
        tx.Exec("INSERT INTO orders(user_id, amount) VALUES($1, $2)", userID, order.Amount)
    }
}
tx.Commit()
MongoDB (Nested Document)
var docs []interface{}
for _, user := range users {
docs = append(docs, bson.M{
"name": user.Name,
"email": user.Email,
"orders": user.Orders,
})
}
coll.InsertMany(ctx, docs)
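One knob worth calling out: InsertMany is ordered by default, so it stops at the first failed document. For pure throughput loads, unordered inserts are a common tweak; a sketch using the driver's options package (we are not claiming this was part of the benchmark config):
// Unordered inserts keep going past individual failures and can improve bulk throughput
opts := options.InsertMany().SetOrdered(false)
if _, err := coll.InsertMany(ctx, docs, opts); err != nil {
    log.Fatal(err)
}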
Benchmark (Writes at scale)
| Database | P50 Latency | P99 Latency | RPS |
| ---------- | ----------- | ----------- | ----- |
| PostgreSQL | 5.9 ms | 15.2 ms | 2,800 |
| MongoDB | 4.3 ms | 10.1 ms | 3,700 |
Winner: MongoDB. Mongo's document model shines with nested data and bulk inserts, with far less overhead.
Summary Table — The Real Winner?
| Query Type | Winner | Reason |
| ------------------ | ---------- | ------------------------------------- |
| Read by ID | PostgreSQL | Better index performance |
| Relational Query | PostgreSQL | Natural JOIN support, optimizer usage |
| Bulk Insert | MongoDB | Faster nested writes, lower overhead |
| Aggregations | PostgreSQL | SQL planner beats Mongo's pipeline |
| Schema Flexibility | MongoDB | Dynamic documents = faster iteration |
Lessons We Learned
- If you model your data relationally, don't force it into Mongo.
- MongoDB works best for document-first designs — logs, user activity, cache layers.
- PostgreSQL wins for transactional, normalized, multi-table systems.
- Index tuning made a huge difference on both sides.
Final Thoughts
The winner isn't the database. It's the model that fits your access pattern.
If you're doing strict transactions, complex joins, or anything financial — go Postgres. If you're logging, caching, or storing flexible blobs — Mongo can be a beast.
We didn't write this to pick a side. We wrote it to show you what the numbers actually say — so you can pick the right one for your system, not your resume.