Building Quickerdomain: Sub-100ms Domain Search
The Challenge: Making Domain Search Feel Instant
When you're brainstorming a business name, waiting even a second between queries kills the creative flow. Most domain search tools feel sluggish: they make you wait, show spinners, and by the time results load, you've lost your train of thought.
At Seaional, I was tasked with building something different. The goal: make domain search feel instant. Not just fast, but perceptibly instant, the kind of fast where results appear before you're done thinking.
Starting with BadgerDB (and Why I Moved Away)
My first iteration used BadgerDB, a popular embedded key-value store in the Go ecosystem. It's fast, pure Go, and has a clean API. For the initial prototype, it worked great.
But when we started processing real TLD zone files, gigabytes of data with millions of domain records, problems emerged. BadgerDB's LSM tree compaction was causing memory spikes during bulk writes. We'd be cruising at 300MB, then suddenly spike to 1.2GB during compaction. In a Kubernetes environment with strict pod limits, this meant OOMKills.
I spent a week trying to tune it: adjusting compaction thresholds, limiting table sizes, tweaking the value log. Nothing worked reliably.
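For a sense of what that tuning looked like, here is a minimal sketch against badger v3's options API. The knob names are real, but the values are illustrative, not settings we actually shipped.

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Illustrative tuning attempt: smaller memtables (32MB), smaller
	// value log segments (256MB), and fewer concurrent compactors, all
	// aimed at smoothing out memory during bulk writes. Example values.
	opts := badger.DefaultOptions("/data/badger").
		WithNumCompactors(2).
		WithNumMemtables(2).
		WithMemTableSize(32 << 20).
		WithValueLogFileSize(256 << 20)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```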
Enter PebbleDB: The CockroachDB Solution
PebbleDB is CockroachDB's from-scratch Go rewrite of the LevelDB/RocksDB design, built specifically for their needs: high write throughput with predictable memory usage. When I read their engineering blog post about why they built it, I knew it was worth trying.
The migration was surprisingly smooth. PebbleDB's API is similar enough to BadgerDB's that most changes were mechanical. The difference in production was dramatic: memory stayed flat during zone file processing, compaction happened in the background without spikes, and query latency actually improved slightly.
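The bulk-write path gives a flavor of what the port looked like. This is a minimal sketch against Pebble's batch API; the path, keys, and values are placeholders for our real record layout.

```go
package main

import (
	"log"

	"github.com/cockroachdb/pebble"
)

func main() {
	db, err := pebble.Open("/data/domains", &pebble.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Load a chunk of zone file records in a single batch. The keys
	// and values here stand in for the real parsed records.
	batch := db.NewBatch()
	for _, domain := range []string{"example.com", "example.io"} {
		if err := batch.Set([]byte(domain), []byte("registered"), nil); err != nil {
			log.Fatal(err)
		}
	}

	// NoSync trades durability for throughput during bulk loads; if the
	// pod dies mid-load, we simply re-process the zone file.
	if err := batch.Commit(pebble.NoSync); err != nil {
		log.Fatal(err)
	}
}
```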
Server-Sent Events for Streaming Results
Traditional REST: user types → request → wait → all results at once.
With SSE: user types → request → results stream in one by one.
The implementation is simple. The server holds the connection open and, for each matching domain, writes an event to the stream and flushes. The client receives results incrementally.
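In Go, that handler needs little more than the standard library. A minimal sketch, where findDomains is a hypothetical stand-in for the real index lookup:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
)

// findDomains is a hypothetical stand-in for the real index lookup. It
// emits results on a channel and stops if the client disconnects.
func findDomains(ctx context.Context, query string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, tld := range []string{".com", ".io", ".co"} {
			select {
			case out <- query + tld:
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}

func searchHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	// One "data:" line per event, terminated by a blank line, flushed
	// immediately so the client can render each result as it arrives.
	for result := range findDomains(r.Context(), r.URL.Query().Get("q")) {
		fmt.Fprintf(w, "data: %s\n\n", result)
		flusher.Flush()
	}
}

func main() {
	http.HandleFunc("/search", searchHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

On the browser side, an EventSource picks these events up one at a time.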
This alone made searches feel faster, even when the total time was the same. Users see results appearing immediately instead of waiting for a complete response.
The Frontend Trick: Perceived Speed
Here's where it gets interesting. The fastest network request is the one you don't make.
When a user types "mycoolstartup", we don't wait for the backend. Instead, we immediately display a list pairing the query with common TLDs appended: mycoolstartup.com, mycoolstartup.io, mycoolstartup.co, and so on.
These appear grayed out with a subtle loading indicator. As real results stream in from the backend, we update the status: a green checkmark for available, a red X for taken. But the user sees "results" instantly.
Is it cheating? Maybe. But it creates the illusion of zero latency. The user's eye has something to focus on while real data loads. Psychology matters as much as engineering.
Lessons Learned
Memory predictability beats raw speed. BadgerDB might be faster in benchmarks, but production systems need consistent behavior. PebbleDB's boring predictability was exactly what we needed.
Perceived performance is performance. Users don't measure milliseconds. They measure how the experience feels. Streaming and progressive loading make things feel faster.
Go is excellent for this. Goroutines made the concurrent zone file processing trivial. The compiled binary runs with minimal overhead. The standard library's HTTP/2 support made SSE implementation clean.
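To illustrate that last point, the zone file ingestion followed the classic fan-out shape. A simplified sketch; the real version parses full records and writes them to Pebble:

```go
package main

import (
	"bufio"
	"log"
	"os"
	"strings"
	"sync"
)

// processZoneFile fans zone file lines out to a pool of worker goroutines.
// Simplified sketch: record parsing and the store write are stubbed out.
func processZoneFile(path string, workers int) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	lines := make(chan string, 1024)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for line := range lines {
				fields := strings.Fields(line)
				if len(fields) == 0 {
					continue
				}
				_ = fields[0] // the domain name; real code writes it to the store
			}
		}()
	}

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines <- scanner.Text()
	}
	close(lines)
	wg.Wait()
	return scanner.Err()
}

func main() {
	if err := processZoneFile("example.zone", 8); err != nil {
		log.Fatal(err)
	}
}
```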
The final system serves queries in 12ms p50, 67ms p99, across 1,000+ TLDs. More importantly, it feels instant-which is what actually matters.