Engineering · April 30, 2026 · 7 min read

Four leaderboards, one cron: scaling a content engine off a single aggregate

In 21 build rounds we shipped four leaderboards, three history archives, and eight dedicated share cards on top of one saves table and one daily cron. Here's the architecture pattern that made it cheap, and the trap I keep seeing teams fall into.

The setup

StudioMode has one user-facing primitive that matters: the save. An artist hits ♡, a row lands in saved_beats. That's it. Every ranking, every "what's hot," every weekly recap, every share card — everything — is a different lens on that one table.

We started with a producer leaderboard. Then someone said "what about the biggest mover this week?" Then "what about the hottest beat?" Then "what about the dominant genre?" Each one sounded like its own feature. Each one tried to drag in its own table, its own cron, its own share card pipeline.

Resisting that instinct is the whole game.

The architecture, top down

Four ranking lenses on the same data:

Leaderboard    Aggregate                  RPC
Top producer   GROUP BY producer          producer_leaderboard_window
Top mover      Compare last week's rank   producer_top_mover
Top beat       GROUP BY beat_id           beat_of_the_week
Top genre      GROUP BY genre             top_genres_of_the_week
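The lenses are simpler than four separate features suggest. As an illustration only — the real aggregates are Postgres RPCs, and the row shape here is hypothetical — this is roughly what the producer leaderboard and top-mover lenses compute:

```typescript
// Hypothetical row shape; the real saved_beats table has more columns.
interface Save {
  producerId: string;
  beatId: string;
  genre: string;
}

// The essence of producer_leaderboard_window: GROUP BY producer, COUNT(*),
// ORDER BY count DESC.
function producerLeaderboard(saves: Save[]): { producerId: string; saves: number }[] {
  const counts = new Map<string, number>();
  for (const s of saves) {
    counts.set(s.producerId, (counts.get(s.producerId) ?? 0) + 1);
  }
  return Array.from(counts.entries())
    .map(([producerId, n]) => ({ producerId, saves: n }))
    .sort((a, b) => b.saves - a.saves);
}

// Top mover: biggest rank improvement versus last week's ordering.
function topMover(current: string[], lastWeek: string[]): string | null {
  let best: string | null = null;
  let bestDelta = 0;
  current.forEach((id, rank) => {
    const prev = lastWeek.indexOf(id);
    if (prev === -1) return;        // new entrants have no previous rank
    const delta = prev - rank;      // positive = climbed the board
    if (delta > bestDelta) {
      best = id;
      bestDelta = delta;
    }
  });
  return best;
}
```

The point of the sketch: each "feature" is a handful of lines over the same rows, not a new pipeline.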

Three history tables — one append-only row per snapshot:

History          Snapshot RPC                     Lookup RPC
Producer ranks   snapshot_producer_rank_history   producer_rank_history_recent
Beat winners     snapshot_beat_of_the_week        beat_of_the_week_history_recent
Genre winners    snapshot_top_genres              top_genre_history_recent

One cron — the existing daily-digest at 14:00 UTC — calls all three snapshot RPCs inline, before doing its actual notification work:

// /api/notifications/daily-digest.ts
await supabaseAdmin.rpc('snapshot_producer_rank_history');
await supabaseAdmin.rpc('snapshot_beat_of_the_week');
await supabaseAdmin.rpc('snapshot_top_genres');
// ...digest work below

Roughly 50ms total. One function slot. Three new history tables filling in.
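One refinement worth considering — an assumption on my part, not how the snippet above ships — is isolating the piggy-backed work from its host: if one snapshot RPC throws, the other two and the digest should still run. A sketch, with `rpc` calls abstracted into thunks so the pattern is visible:

```typescript
// Run the snapshot calls so that one failure can't block the rest.
// In the real handler each thunk would be () => supabaseAdmin.rpc('...').
async function runSnapshots(calls: Array<() => Promise<unknown>>): Promise<string[]> {
  const results = await Promise.allSettled(calls.map((fn) => fn()));
  const failures: string[] = [];
  results.forEach((r, i) => {
    if (r.status === 'rejected') {
      failures.push(`snapshot #${i} failed: ${r.reason}`);
    }
  });
  return failures; // log these, then carry on with the digest work regardless
}
```

`Promise.allSettled` is the key choice: unlike `Promise.all`, it never rejects, so a bad snapshot degrades to a log line instead of a missed digest.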

The tally: 12 function cap · 1 cron used · 4 leaderboards · 8 share cards.

The trap I keep seeing

When a team sees one feature working and someone proposes a similar one, the default move is to copy the architecture. "Producer leaderboard works — let's build the same thing for beats." And so a second snapshot table appears, a second cron runs ten minutes after the first, a third API endpoint shows up to serve it.

It feels productive. Each new feature ships fast because the pattern is familiar. The team celebrates velocity.

Two months later you have ten cron jobs running on overlapping schedules, four overlapping aggregates that almost-but-not-quite agree, and zero conviction about which one is the source of truth. Welcome to the long tail of accidental complexity.

The first time you copy an architecture, you save time. The fourth time, you've created a maintenance contract.

The thing that makes this work

The reason we got eight ranked surfaces out of one table is that we kept asking "what's the smallest piece of new state we need?" at every step.

The total schema delta after 21 rounds: 4 small tables, 12 RPCs, 0 new crons. Most other systems I've worked on would have added 3× that for the same surface area.

Generalizing

If you're sitting on one table that's accruing all your user-generated state — saves, likes, votes, plays, follows — these are the questions worth asking before you build the second feature on top:

  1. Can the new ranking come straight out of the existing table as a live aggregate? (Usually yes, if your indexes are right.)
  2. Does the feature actually need history? Or is "this week" always good enough? Skip the snapshot table until you're sure.
  3. If history is needed, is the snapshot small (≤100 rows) and idempotent on a date key? If yes, piggy-back it on whatever cron you already run, even if that cron's nominal purpose is unrelated.
  4. Is the share card a view on data that already exists? Or are you tempted to write a new "share-renders" table to precompute it? If the latter, stop. The card is a view.
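Point 3 is what makes piggy-backing safe: if the host cron ever double-fires, the second snapshot must be a no-op. The real snapshots are Postgres upserts keyed on the date, but the invariant is easy to see in miniature (the names here are illustrative, not our schema):

```typescript
// An in-memory stand-in for INSERT ... ON CONFLICT (snapshot_date) DO NOTHING.
type RankRow = { producerId: string; rank: number };
type SnapshotStore = Map<string, RankRow[]>;

function snapshotRanks(
  store: SnapshotStore,
  date: string, // e.g. '2026-04-30' — the idempotency key
  ranks: RankRow[],
): boolean {
  if (store.has(date)) return false; // already snapshotted today: no-op
  store.set(date, ranks);
  return true;
}
```

Because the write is keyed on the date, "run the cron again" and "run the cron once" produce identical history — which is exactly the property that lets you borrow an unrelated cron without worrying about its retry behavior.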

We answered yes to all four every time. That's why eight surfaces came out of a single cron run.

What I'd do differently

Add a single observability column to every snapshot table — created_at timestamptz default now() — from day one. That lets you write one query that tells you whether each daily snapshot fired. Without it, you find out about a missed snapshot when the history page has a gap, which is two weeks too late.
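With created_at in place, gap detection is just a diff against the expected date sequence. A sketch of the check — the actual query would run in SQL, but the logic is the same once you've pulled the fired dates out:

```typescript
// Given the UTC dates (YYYY-MM-DD) that actually got a snapshot row,
// report any days in the range that have no snapshot.
function missedSnapshots(firedDates: string[], from: string, to: string): string[] {
  const fired = new Set(firedDates);
  const missing: string[] = [];
  for (
    const d = new Date(from);
    d <= new Date(to);
    d.setUTCDate(d.getUTCDate() + 1)
  ) {
    const key = d.toISOString().slice(0, 10);
    if (!fired.has(key)) missing.push(key);
  }
  return missing;
}
```

Run it daily against each history table and a silent snapshot failure becomes a same-day alert instead of a hole you discover on the history page.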

See the full ecosystem

The /weekly-recap page consolidates all four leaderboards in one shareable surface.
