Real Results, Real Databases

Anonymised case studies from real engagements. These are production databases — the problems were real, the fixes were measurable, and the impact was immediate.

  • 94% average query time reduction on performance engagements
  • 62% AWS RDS cost reduction on cloud optimisation projects
  • <2h average production incident resolution time
  • 99.98% uptime maintained across all monitored client databases
Performance Tuning · PostgreSQL

SaaS Platform: 18-Second Queries Down to 200ms

A B2B SaaS company's core reporting feature was so slow that customers had stopped using it. Within one week, every query was under 200ms.

PostgreSQL 14 · AWS RDS · pgBadger · EXPLAIN ANALYZE · Partial Indexes

  • 94% query time reduction
  • 18s → 200ms worst-case query
  • 5 days time to resolution

The Problem

The company's SaaS platform had a core reporting module that generated custom usage dashboards for each of their enterprise customers. As their data grew past 50 million rows, the dashboards became unusably slow — sometimes taking 18–25 seconds to load.

They had already tried throwing hardware at the problem, moving to a more powerful RDS instance class with more RAM. It didn't help. Their engineering team didn't have a DBA with deep PostgreSQL expertise, and they were starting to lose enterprise customers who complained about the reporting UX.

They contacted us after reading a blog post about PostgreSQL slow query diagnosis. The initial audit uncovered the core issue within 2 hours of getting access.

Root Cause Analysis

  • Sequential scans on a 50M-row table. The reporting queries filtered by tenant_id and created_at, but the only index covered tenant_id alone, so PostgreSQL fell back to scanning the entire table and discarding most of the rows on every dashboard load (see the EXPLAIN sketch after this list).
  • No partial indexes. 80% of queries filtered on active records (status = 'active'), but there was no partial index for this common case.
  • Table bloat from under-tuned autovacuum. The table carried 40% dead-tuple bloat — autovacuum was running too infrequently for this write-heavy workload, inflating table size.
  • N+1 query pattern in ORM. The dashboard loaded each widget in a separate query. With 12 widgets per dashboard, that was 12+ round trips on each page load.
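
The diagnosis came straight out of the query plans. The sketch below is illustrative rather than taken from the engagement: the table and column names (usage_events, tenant_id, created_at) are assumed stand-ins, but the tell-tale Seq Scan node is what the real plans showed.

    -- Run the slow reporting query with EXPLAIN ANALYZE to see how it actually executes.
    -- (usage_events and its columns are hypothetical stand-ins for the client's schema.)
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT date_trunc('day', created_at) AS day, count(*) AS events
    FROM usage_events
    WHERE tenant_id = 42
      AND created_at >= now() - interval '30 days'
    GROUP BY 1;

    -- A plan node like:
    --   Seq Scan on usage_events ...
    --     Filter: (tenant_id = 42)
    --     Rows Removed by Filter: <tens of millions>
    -- means the planner is reading the whole table for every dashboard load
    -- instead of using an index that matches both filter columns.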

What We Fixed

We created a composite index on (tenant_id, created_at DESC) and a partial index on (tenant_id, created_at) WHERE status = 'active'. This immediately eliminated the sequential scans.
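
In DDL terms the change looked roughly like the following. Index and table names are illustrative (the real schema is the client's); CONCURRENTLY is used so the index builds don't block writes on a production table.

    -- Composite index matching the per-tenant, most-recent-first access pattern.
    CREATE INDEX CONCURRENTLY idx_usage_events_tenant_created
        ON usage_events (tenant_id, created_at DESC);

    -- Partial index for the ~80% of queries that only touch active records.
    CREATE INDEX CONCURRENTLY idx_usage_events_tenant_created_active
        ON usage_events (tenant_id, created_at)
        WHERE status = 'active';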

We tuned autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor for the high-churn tables, then ran a manual VACUUM ANALYZE to clear the bloat immediately.
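
Per-table autovacuum thresholds can be set as storage parameters. The values below are a sketch, not the exact figures from the engagement; the right scale factors depend on the table's row count and write rate.

    -- Trigger vacuum/analyze after roughly 2% of rows change, instead of the
    -- defaults of 20% (vacuum) and 10% (analyze), which are far too lazy at 50M rows.
    ALTER TABLE usage_events SET (
        autovacuum_vacuum_scale_factor  = 0.02,
        autovacuum_analyze_scale_factor = 0.02
    );

    -- One-off manual pass to clear the existing dead-tuple bloat and refresh
    -- planner statistics immediately, rather than waiting for the next autovacuum run.
    VACUUM (ANALYZE) usage_events;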

We worked with their engineering team to collapse the 12 separate dashboard queries into 2 using CTEs and window functions — eliminating the N+1 completely.
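
The consolidated query looked broadly like the sketch below. The widget metrics are invented for illustration; the point is that one set-based query with CTEs and a window function returns several widgets' worth of data in a single round trip.

    -- Hypothetical consolidated dashboard query: one round trip computes
    -- multiple widgets' aggregates instead of one query per widget.
    WITH recent_events AS (
        SELECT created_at, duration_ms
        FROM usage_events
        WHERE tenant_id = 42
          AND created_at >= now() - interval '30 days'
          AND status = 'active'
    ),
    daily AS (
        SELECT date_trunc('day', created_at) AS day,
               count(*)         AS events_per_day,
               avg(duration_ms) AS avg_duration_ms
        FROM recent_events
        GROUP BY 1
    )
    SELECT day,
           events_per_day,
           avg_duration_ms,
           sum(events_per_day) OVER (ORDER BY day) AS running_total
    FROM daily
    ORDER BY day;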

The Result

Worst-case query time dropped from 18.4 seconds to 190ms — a 94% reduction. Average dashboard load time fell from 12 seconds to under 800ms.

The customer who had been most vocal about leaving renewed their contract the following month. The engineering team said this was the single biggest UX improvement they'd shipped in 18 months — without writing a line of new application code.

As an ongoing benefit, CPU utilisation on their RDS instance dropped 40%, which let them downsize the instance class and reduce their monthly AWS spend.

"We'd spent three weeks trying to fix this ourselves and were starting to think we needed to rewrite the whole feature. Mughees diagnosed the real problem in a two-hour session and had everything fixed within a week. The dashboard literally feels like a different product now." — Engineering Manager, B2B SaaS Platform (name withheld at client's request)
Cost Optimisation · AWS RDS · MySQL

E-Commerce Platform: AWS RDS Bills Cut by 62%

A fast-growing e-commerce company's AWS RDS bill had climbed just as fast as its traffic. After a two-week optimisation engagement, the same workload ran at 62% lower monthly cost, delivering major annual savings.

MySQL 8.0 · AWS RDS · Read Replicas · Aurora Serverless · Parameter Groups

  • 62% cost reduction
  • Major annual savings
  • 2 weeks time to deliver

The Problem

This mid-size e-commerce business had scaled quickly during a period of strong online retail demand, and their AWS bill had scaled with them — but inefficiently. Their monthly RDS spend had ballooned across a multi-AZ instance, multiple read replicas, and heavy storage I/O costs.

Their CTO reached out after the board flagged the AWS bill in a quarterly review. They suspected they were over-provisioned but couldn't confidently downsize without risking performance degradation during peak sale periods.

What We Found

  • Massively over-provisioned instance. The db.r5.2xlarge (64GB RAM) was running at 8–12% CPU and 15% memory utilisation on average. The large instance class was solving a query efficiency problem with brute-force hardware.
  • Three read replicas, two barely used. Their application was routing all reads to a single replica. Two others sat near-idle and added unnecessary monthly cost.
  • Over-provisioned storage IOPS. They were paying for 3,000 provisioned IOPS, but peak utilisation never exceeded 400.
  • Missing indexes causing full table scans. Several frequently-run product search queries were scanning the full 8M-row products table because composite indexes were missing.
  • Binary logging retention set too high. Binlogs were being retained for 7 days, inflating storage by 180GB unnecessarily.

What We Changed

After adding the missing indexes and fixing the worst-performing queries, CPU utilisation dropped to 4–5% on the existing instance. This gave us the confidence to downsize — we moved to a db.r5.xlarge (32GB RAM), which handled the workload comfortably with 25% headroom at peak.
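
The index side of that work was ordinary MySQL DDL. The table and column names below are illustrative stand-ins for the client's product catalogue, not their actual schema.

    -- Composite index matching the most common product-search predicate,
    -- so the 8M-row table is no longer scanned end to end.
    ALTER TABLE products
        ADD INDEX idx_products_category_status_price (category_id, status, price);

    -- Verify the new plan: the `type` column should now show ref/range
    -- rather than ALL (a full table scan).
    EXPLAIN
    SELECT id, name, price
    FROM products
    WHERE category_id = 12 AND status = 'published'
    ORDER BY price
    LIMIT 50;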

We decommissioned the two near-idle read replicas and updated the application's database connection configuration so read traffic was distributed properly across the remaining database instances instead of concentrating on a single endpoint.

We switched from provisioned IOPS to gp3 storage, which provided better baseline performance at a fraction of the cost. We also reduced binlog retention to 1 day, reclaiming the wasted storage.
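
On RDS for MySQL the binlog window is controlled through Amazon's stored procedure rather than a server variable, and the gp3 storage switch is made through the RDS console or API rather than in SQL. A sketch of the retention change:

    -- Check the current RDS-level settings, then cut binlog retention from 7 days to 24 hours.
    CALL mysql.rds_show_configuration;
    CALL mysql.rds_set_configuration('binlog retention hours', 24);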

The Result

Monthly RDS spend dropped by 62%. The changes were rolled out over two maintenance windows with zero downtime and no performance regressions.

As a bonus, query response times on the product search feature improved by 65% once the missing indexes were in place — a performance improvement the business hadn't even asked for.

The engagement paid for itself within the first month of savings. The CTO went back to the board with a clear annual savings figure and a written report explaining the technical changes.

"We knew we were probably overspending on AWS but we were nervous about touching anything in case it broke. The audit gave us a clear picture of exactly what to change and why, and the changes went in smoothly. We've since expanded the engagement to ongoing monthly support." — CTO, E-Commerce Platform (name withheld at client's request)

What Could We Fix in Your Database?

Start with a free 30-minute assessment call. We'll ask about your setup, identify your biggest risks and opportunities, and provide three actionable recommendations — regardless of whether you hire us.

Book Your Free Assessment