Hunting Now: how a search bug became a public demand queue
Chef tested with a brand-new account, searched "ebk jaaybo", got zero results. The next two scrapes ignored it. Tracing why turned into a P0 fix, then four feature rounds, then a public page that recruits visitors into searching the artists they want — turning the gap between "what users want" and "what we have" into the most active surface on the site.
The bug
We have a missing_artist_requests table. The iOS
Find My Sound flow is supposed to log a row whenever a user
searches an artist we don't have, with a priority that bumps
every repeat search. The scraper's queue function pulls top-
priority rows first, runs three query variants per artist on
YouTube, and lands beats in the catalog. End-to-end loop:
search → demand row → next scrape → catalog grows.
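The loop's data side can be sketched in memory. This is only a model — the real thing is the missing_artist_requests table plus the log_missing_artist_request RPC, and makeDemandQueue is an invented stand-in — but it captures the two behaviors the rest of this post depends on: repeat searches bump priority, and the scraper pulls highest priority first.

```javascript
// In-memory model of the demand queue (illustrative; real code is SQL).
function makeDemandQueue() {
  const rows = new Map(); // artistName -> { priority, status }
  return {
    // Repeat searches bump priority instead of inserting duplicates.
    log(artist) {
      const row = rows.get(artist);
      if (row && row.status === 'pending') row.priority += 1;
      else rows.set(artist, { priority: 1, status: 'pending' });
    },
    // Scraper pulls pending rows, highest priority first.
    nextBatch(limit) {
      return [...rows.entries()]
        .filter(([, r]) => r.status === 'pending')
        .sort((a, b) => b[1].priority - a[1].priority)
        .slice(0, limit)
        .map(([name]) => name);
    },
  };
}
```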
Chef's test broke the loop. EBK Jaaybo is a Stockton drill artist not in our master list. He searched. Nothing in the catalog. The next scrape passed the row over. Then the one after. Three different bugs were stacked on top of each other.
Bug 1: the search rescue path
The search code's demand-logging gate read:
const lowResults = beats.length < 10;
let demandArtistName = detectedArtist;
if (lowResults && !demandArtistName) {
demandArtistName = extractArtistFromQuery(query);
}
const buildingYourSound = lowResults && !!demandArtistName;
if (buildingYourSound && demandArtistName) {
supabase.rpc('log_missing_artist_request', { ... });
}
Looks fine. But there's a multi-word fallback before this
block: when an artist isn't found in the master list, the code
runs search_artist_beats_v2 against each
significant word in the query. So "ebk jaaybo type beats"
became three separate searches: "ebk", "jaaybo", "beats". The
"ebk" search returned 12+ beats with "ebk" anywhere in the
description — random producers, unrelated tracks. Result:
lowResults = false. The log never fired. The user
saw irrelevant beats and assumed the platform didn't have what
they wanted.
Fix: log demand whenever the user typed an explicit unknown artist (extracted but not in the master list), regardless of result count. The substring rescue can fool the result counter, but it can't fool the "did the user type an artist name we don't have" check.
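A minimal sketch of the fixed gate, with a toy extractArtistFromQuery standing in for the real extractor (the filler-word list and both function names here are assumptions, not the production code):

```javascript
// Toy extractor: strip filler words, treat what's left as the artist name.
const FILLER = new Set(['type', 'beat', 'beats', 'instrumental']);

function extractArtistFromQuery(query) {
  const words = query.toLowerCase().split(/\s+/).filter(w => !FILLER.has(w));
  return words.length ? words.join(' ') : null;
}

function shouldLogDemand(query, resultCount, masterList) {
  const artist = extractArtistFromQuery(query);
  // Old, buggy gate: only fired when resultCount < 10 — the substring
  // rescue's 12 "ebk" matches suppressed it.
  // Fixed gate: an explicit artist we don't have logs demand no matter
  // how many beats the rescue path padded the results with.
  const unknownArtist = artist !== null && !masterList.has(artist);
  return { logDemand: unknownArtist, artist };
}
```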
Bug 2: rows stuck in 'queued'
Even when the row landed, the scraper's queue function had a
classic dangling-state bug. queue_missing_requests
flipped rows to status='queued' before running
their search queries — defensive, in case a crash mid-loop
caused duplicate work. But there was no
mark_scraped() step at end of phase. (Actually:
the function existed, it just wasn't called anywhere.) So:
- Row lands in the queue with status='pending'
- Next scrape: queue_missing_requests picks it up, marks it 'queued'
- YouTube queries run, beats save
- ...nothing marks the row done
- Row stays 'queued' forever, invisible to the next pending fetch. Search again? The RPC inserts a brand-new duplicate row at priority 1 because the old one isn't pending.
Fix: at the end of phase 1, if the quota wasn't dead, mark all queued rows as 'scraped'. They've done their job.
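The fix can be sketched over plain objects. scrapeArtist and the quota arithmetic are stand-ins for the real scraper; the part that matters is the end-of-phase pass that closes out every row the run touched.

```javascript
// Sketch of phase 1 with the fix applied. In production mark_scraped()
// existed as a SQL function — it just was never called.
function runPhase1(rows, scrapeArtist, quota) {
  const queued = [];
  for (const row of rows) {
    if (row.status !== 'pending') continue;
    row.status = 'queued';            // defensive flip before work starts
    queued.push(row);
    quota = scrapeArtist(row, quota); // three query variants per artist
    if (quota <= 0) break;            // quota died mid-run
  }
  // The fix: if quota survived, mark everything this run touched as done.
  // Before, rows stayed 'queued' forever and blocked re-queueing.
  if (quota > 0) for (const row of queued) row.status = 'scraped';
  return quota;
}
```

Note the deliberate gap: a quota death mid-run still strands rows in 'queued' — that's Bug 3.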
Bug 3: stranded rows
Bug 2's fix only handles the happy path. If the YouTube quota
runs out partway through phase 1, or the scraper crashes, or
the run gets killed mid-job, the rows that were
flipped to 'queued' never get marked
'scraped' AND never get retried. Stuck forever
until someone runs SQL by hand.
Fix: at the start of queue_missing_requests,
recycle any row in 'queued' with
updated_at older than 6 hours back to
'pending'. The 6-hour window is comfortably longer
than any single scrape run (under 30 minutes) so it only
catches genuine orphans. After 6 hours, a stuck row reappears
in the queue automatically.
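The recycle pass is a single UPDATE in the real function; modeled over plain objects it looks like this (field names are assumptions mirroring the post's description):

```javascript
// Recycle orphaned rows at the top of queue_missing_requests.
const SIX_HOURS_MS = 6 * 60 * 60 * 1000;

function recycleOrphans(rows, now = Date.now()) {
  let recycled = 0;
  for (const row of rows) {
    // Any row stuck in 'queued' far longer than a scrape run could last
    // (runs finish in under 30 minutes) is a genuine orphan.
    if (row.status === 'queued' && now - row.updatedAt > SIX_HOURS_MS) {
      row.status = 'pending';
      row.updatedAt = now;
      recycled += 1;
    }
  }
  return recycled;
}
```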
Three bugs stacked. Each one alone would have been a 1% reliability hit. Together they made the demand loop look broken to anyone whose first search wasn't in the master list.
From bug to feature
The P0 fix shipped. EBK Jaaybo (and everyone else who hit the same gap) would now land in the queue, get scraped, get a row in the catalog. Loop closed. But the bug surfaced something bigger: the demand queue was a hidden treasure. Users were searching specific artists and that data was sitting in a Supabase table only Chef could see.
Four feature rounds came out of that observation:
- Round 154 — admin demand widget. Upgraded the existing thin widget into a top-25 list ordered by priority (the actual scraper-fetch order), with the original query text the user typed, age, and tap-to-bump-priority action. Lets Chef triage demand without writing SQL.
- Round 155 — search empty-state rewrite. The "Hunting Your Sound" card used to wave its hands ("Our AI is hunting your next drop"). New copy is concrete: "Saved to the priority queue. Your search bumped a row. Scraper runs every 12 hours; check back after." Plus a tip: "Searching the same artist again bumps it higher in the queue." Now users have a credible reason to search again.
- Round 156 — /hunting-now public page. A public RPC, public_demand_queue, returns the anonymized top 30 of the queue. The page mirrors /trending-now's pattern: rank medals, vote counts, age, status badge. Anyone visiting can see which artists are getting demand right now.
- Round 157 — cross-link from the homepage. A green "🎯 Hunting now" chip in the homepage explore cluster, plus footer links and community nav entries. The public page is only useful if people find it.
- Round 158 — dynamic OG card. Shares of /hunting-now on Twitter / Discord / Slack now preview with the live top-5 queue. Each share becomes a recruitment ad: "EBK Jaaybo · 1 search · IN QUEUE — search him too to bump priority."
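The post only says public_demand_queue returns the "anonymized top 30," so the field names below are guesses; the sketch shows the shape such an RPC plausibly has — ranked, capped, and stripped of anything user-identifying (the raw query text stays admin-only).

```javascript
// Hypothetical shape of the public_demand_queue result (field names assumed).
function publicDemandQueue(rows, now, limit = 30) {
  return rows
    .filter(r => r.status === 'pending' || r.status === 'queued')
    .sort((a, b) => b.priority - a.priority)
    .slice(0, limit)
    .map((r, i) => ({
      rank: i + 1,
      artist: r.artistName, // no user_id, no raw query text leaves the RPC
      searches: r.priority,
      status: r.status,
      ageHours: Math.round((now - r.createdAt) / 3.6e6),
    }));
}
```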
The recruitment loop
The interesting bit isn't the bug fix. It's that exposing the demand queue publicly creates a flywheel:
- User searches an artist we don't have.
- Search lands in queue at priority 1, scraper picks it up within 12 hours.
- That same row appears on /hunting-now immediately (it's a public RPC, no caching).
- Someone shares the URL. The OG card shows "EBK Jaaybo · 1 search · IN QUEUE."
- A viewer thinks "I want him too" and searches him in the app. Priority +1.
- Higher priority means the scraper runs his queries with more pages. More beats land. Catalog grows in the direction of demand.
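The post doesn't give the exact priority-to-depth tiers, so the cutoffs below are invented; the only claim taken from the text is that higher priority buys the artist's queries more result pages.

```javascript
// Invented priority→pages tiers, illustrating the demand flywheel's payoff.
function pagesForPriority(priority) {
  if (priority >= 10) return 3; // heavy demand: dig deep
  if (priority >= 3) return 2;  // a few searchers: one extra page
  return 1;                     // single search: one page of results
}
```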
Without the OG card, /hunting-now is just a status page. With
the OG card, every share is a recruitment ad for the demand
queue itself. The loop closes when the artist lands in the
catalog and the row is marked 'scraped'.
What this isn't
Hunting Now isn't a leaderboard. It's a queue. The top of the
list isn't "best artist" — it's "highest priority for the next
scrape." Once an artist gets enough beats in the catalog, they
drop off (status flips to scraped). The page
always reflects what's missing, not what's most
popular. That's deliberate: trending lives at
/trending-now, demand lives at
/hunting-now.
It's also not a social network. There's no "follow this
search" or "notify me when found" button yet. Those exist
partway in the schema (the missing_artist_requests
table has a nullable user_id column from when the
RPC was first written), but no notification path on top of
them. That's the next round.
What's next
Push notifications. When a queued artist actually lands in the
catalog (rows go from queued → scraped
and the scraper's save_beat succeeds), notify
every user whose user_id was attached to that
demand row. Closes the loop fully: search → queue → public
surface → catalog → push notification → playback. Probably the
next ~50 lines on the scraper plus a new push handler.
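A sketch of what that handler might look like, keyed off the queued → scraped flip. notifyOnLanding and notifyUser are hypothetical names — only the nullable user_id column exists today, and the notification path is still unbuilt.

```javascript
// Hypothetical notify step for the planned push-notification round.
function notifyOnLanding(row, savedBeats, notifyUser) {
  if (row.status !== 'scraped' || savedBeats === 0) return false;
  if (row.userId == null) return false; // anonymous search: nobody to ping
  notifyUser(row.userId, `${row.artistName} just landed — ${savedBeats} beats live now`);
  return true;
}
```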