Lead Playbook

5,497 Reachable Leads
14-Step Pipeline

Complete step-by-step operational playbook for Jamie Schneiderman, targeting venture-backed SaaS founders who've plateaued -- from Sales Navigator filters to tiered Instantly + HeyReach campaigns with dossier-based personalization.

3,935
Email Leads (Instantly)
1,562
LinkedIn-Only (HeyReach)
487
T1 Leads (Personalized)
3
Waterfall Tools

Sending Infrastructure: Ready

6 new sending domains purchased January 2026, fully warmed up and actively sending with replies coming in. Domains: unstuckyourbusinessconsultants.com, withunstuckyourbusiness.com, tryunstuckyourbusiness.com, unstuckyourbusinesspartners.com, unstuckyourbusinessgrowth.com, growunstuckyourbusiness.com. Campaigns can launch immediately.

Previous HeyReach Lead Quality Issue

215 leads in HeyReach rated 1-3/5, zero 4/5 or 5/5. Root cause: no revenue/funding verification, industry too broad (Financial Services founders slipping through). Fix: This playbook adds Crunchbase verification + tighter industry filters to prevent this going forward.

Enrichment & Verification Strategy

All leads are cross-referenced against existing HeyReach campaigns (215 active) and Instantly DNC lists. Crunchbase serves as a verification layer to confirm funding stage and revenue -- NOT as a primary source. Expandi personal emails are prioritized because founders read their personal inbox. Only emails MillionVerifier rates "OK" are kept; risky and invalid results are discarded.

Full Pipeline Funnel -- Actual Results

Every number below is real. This is what happened when we ran this SOP for Jamie in March 2026.

| Step | Input | Output | Hit Rate | Cumulative Reachable | What This Step Added |
| --- | --- | --- | --- | --- | --- |
| Phase 1: Sales Nav + HeyReach Scrape | 7 segments | 5,187 leads | -- | 0 | Raw data, no emails yet |
| Phase 1B: Crunchbase + Prospeo Founders | 3,257 domains | 2,997 founders | -- | 0 | Raw data, no emails yet |
| Phase 2: Dedup + Qualify | 8,184 raw | 6,180 unique | 75.5% | 0 | Removed 2,004 dupes/bad fits |
| Step 4: Expandi (LinkedIn email enrichment) | 6,180 | 2,692 emails | 43.6% | 2,692 | First 2,692 reachable by email |
| Step 5: Apollo (~4,000 credits) | 3,488 misses | 2,114 emails | 60.6% | 4,806 | +2,114 more emails (+78.5% vs Expandi alone) |
| Step 6: Prospeo (106 credits) | 1,374 Apollo misses | 313 emails | 31.3% | 5,119 | +313 emails Apollo missed |
| Step 8: MillionVerifier | 5,119 emails | 3,935 verified | 76.9% | 3,935 email | Removed 1,173 + 11 Prospeo bad = 1,184 risky/invalid |
| Step 9A: Dedup LinkedIn-only leads | 1,062 (after Prospeo) | 1,562 | -- | 3,935 + 1,562 | 302 recovered by Prospeo, rest deduped + qualified |
| Step 9B: Tier LinkedIn-only | 1,562 | T1: 295 / T2: 1,217 / T3: 50 | -- | 3,935 + 1,562 | 295 high-priority LinkedIn targets |
| Step 9C: Research T1 LinkedIn-only | 487 T1 (incl. Prospeo) | 487/487 personalized (100%) | 97.1% hook rate | 3,935 + 1,562 | Personalized connection requests + email outreach |
| TOTAL REACHABLE | -- | 5,497 leads | -- | 5,497 | 3,935 email + 1,562 LinkedIn |
Key insight: The waterfall enrichment (Expandi -> Apollo -> Prospeo) recovered 3,935 verified emails from 6,180 leads. Prospeo alone found 302 verified emails that both Expandi and Apollo missed. The remaining 1,562 LinkedIn-only leads are reachable via HeyReach -- including 295 T1 leads with personalized outreach. Total reachable: 5,497 leads across email + LinkedIn.
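As a sanity check, the compounding logic of the waterfall can be reproduced from the hit rates alone. This is a sketch in Python: the rates come from the table above and are rounded percentages, so the totals drift a couple of percent from the actual run.

```python
def waterfall(total_leads, stages):
    """Each tool only sees the previous tools' misses, so coverage
    compounds: found grows by remaining * rate at every stage."""
    found, remaining = 0, total_leads
    for tool, rate in stages:
        hits = round(remaining * rate)
        found += hits
        remaining -= hits
        print(f"{tool:>8}: +{hits:,} emails, {remaining:,} still missing")
    return found, remaining

# Hit rates from the Actual Results table (rounded, hence the small drift)
total_found, linkedin_only = waterfall(6180, [
    ("Expandi", 0.436),
    ("Apollo", 0.606),
    ("Prospeo", 0.313),
])
```

With the rounded rates this lands within a few percent of the actual 5,119 pre-verification emails; MillionVerifier then trims that figure to the 3,935 verified.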
Phase 1: Source
Steps 1-2 -- Sales Nav searches + HeyReach scraping (free)
Why this order matters

HeyReach scraping is free. Qualify before enriching to avoid wasting credits. Jamie's previous leads rated 1-3/5 because filters were too loose -- this plan tightens them.

Lead Source URL Tracker (Master Sheet)

All lead source URLs (Sales Navigator, Crunchbase, Apollo, etc.) are tracked in one master Google Sheet. Every URL you build in Step 1 must be saved here before scraping. Open the "Jamie - Unstuck" tab, find the matching sub-search row, and paste the URL. This is the single source of truth for all lead scraping URLs across all clients.

Open Lead Source URL Tracker

1 Build Sales Navigator Searches (Click-by-Click)

How to build each search (do this for every sub-search in the tables below):

Go to linkedin.com/sales/search/people (Sales Navigator Lead Search)

You will see a search bar at the top and filter panels on the left sidebar. DO NOT type anything into the top search bar -- leave it blank always.

On the left sidebar, click "Company headcount". Check the box for the headcount range listed in the sub-search table (e.g., 11-50). Click "Show results".

Click "Current job title" in the sidebar. A text box appears. Type or paste the INCLUDE titles: CEO OR Founder OR Co-Founder OR Chief Executive. Press Enter. They will appear as green pills (green = included). Confirm the toggle at the top of the title filter says "Include."

Now add the EXCLUDE titles. There are two ways to do this.

Method A: Click the "Include" dropdown at the top of the title filter and switch it to "Exclude". Type or paste: GTM OR Growth OR Advisor OR Board OR Consultant OR Coach OR Fractional OR Intern OR Associate. Press Enter -- they appear as red pills (excluded). Switch the dropdown back to "Include" when done.

Method B: Type the values while still on "Include" so they appear as green pills. Then, for EACH one, click the circle-with-line-through-it icon (looks like a stop sign, next to the X on the green pill). The pill turns red = excluded.

Either method works. You will know it is correct when all exclude titles are red and all include titles are green.

Click "Geography" in the sidebar. Type United States and select it from the dropdown. (Add Canada as a second geography run later if needed.)

PAUSE. Scroll down to the segment tables below. Find the segment you are currently building (Segment 1, 2, or 3). Each segment has a bold line that says "Industry filter for all Segment X sub-searches" with the exact industries to input. Read that line, then come back here and continue.

Click "Industry" in the sidebar. Type and select the industries from the segment table you just read. For example, Segments 1-2 use: Software Development and Technology, Information and Internet. They will appear as green pills (included). Now add the exclude: type Financial Services and press Enter. It will appear as a green pill. Click the circle-with-line-through-it icon (stop sign icon, next to the X) on the green pill. It turns red, confirming it is excluded. Alternatively, switch the dropdown to "Exclude" first, then type the industry.

Click "Posted on LinkedIn" in the sidebar. If the sub-search says "ON," toggle this to Yes. If "OFF," leave it untouched (default = all).

Click "Seniority level" in the sidebar. Check only the boxes listed in the sub-search table. For Jamie, this is Owner only. Do NOT check CXO -- Jamie only works with founders, not C-suite executives like COOs or CFOs. CXO inflates results by 5-6x with wrong-fit leads.

CHECK THE RESULT COUNT in the top-right area of the results. If it says "2,500+ results", you must split further -- go back and narrow headcount or add the "Posted on LinkedIn" filter. If it shows a number under 2,500, you are good.

Copy the full URL from your browser's address bar. The URL contains all your filter selections. Open the Lead Source URL Tracker (Jamie tab), find the matching sub-search row, and paste the URL into the "Sales Nav URL" column. Also enter the actual result count in the "Actual Count" column.

Repeat the steps above for each sub-search in the segment tables below.

Tip: When you change filters, Sales Nav updates the URL automatically. You do NOT need to click "Save Search" -- just copy the URL.
Tip: If a filter doesn't appear in the left sidebar, click "All filters" at the top to expand the full filter panel.

Shared base filters (quick reference):

| Filter | Value | Where to Find It |
| --- | --- | --- |
| Geography | United States | Left sidebar > Geography |
| Title INCLUDE | CEO OR Founder OR Co-Founder OR Chief Executive | Left sidebar > Current job title > Include |
| Title EXCLUDE | GTM OR Growth OR Advisor OR Board OR Consultant OR Coach OR Fractional OR Intern OR Associate | Left sidebar > Current job title > Exclude |
| Industry | Varies per segment -- see each segment's Complete Filter Set below | Left sidebar > Industry |
| Top keyword bar | LEAVE BLANK ALWAYS | Top of page -- do not type here |

Critical fix from past campaigns: Excluding "Financial Services" industry and "GTM"/"Growth"/"Coach" titles. These were flooding Jamie's lists with wrong-fit leads.

Sales Navigator 2,500 Lead Limit

LinkedIn Sales Navigator caps every search at 2,500 results. HeyReach and Expandi enforce this -- you cannot scrape more than 2,500 from a single search URL. If a search returns more than 2,500, you MUST split it further using these volume controls:

1. Headcount split -- break "11-200" into "11-50" and "51-200" (two separate searches)

2. "Posted on LinkedIn" toggle -- ON = only people who've posted recently (smaller, more active set). OFF = everyone else. Two searches per segment.

3. Seniority filter -- narrow from "Owner+CXO+VP+Director" to just "Owner"

4. Geography split -- break "USA" into East Coast / West Coast, or specific states

Check the result count BEFORE scraping. If it says "2,500+ results," your search is too broad -- split it.

Segment 1: SaaS Founders -- Early Stage (11-50 employees)
Industries: Software Development · Technology, Information and Internet

The sweet spot. Smaller SaaS companies (typically Series A) who've proven PMF but growth has stalled. Board pressure to hit metrics.

Complete Filter Set (apply to ALL Segment 1 sub-searches)

| Filter | Value | How to Set |
| --- | --- | --- |
| Company Headcount | 11-50 | Left sidebar > Company headcount > check "11-50" |
| Geography | United States | Left sidebar > Geography > type and select |
| Title INCLUDE | CEO OR Founder OR Co-Founder OR Chief Executive | Current job title > Include dropdown > paste |
| Title EXCLUDE | GTM OR Growth OR Advisor OR Board OR Consultant OR Coach OR Fractional OR Intern OR Associate | Current job title > switch to Exclude > paste (red pills) |
| Industry INCLUDE | Software Development + Technology, Information and Internet | Industry filter > type each one > green pills |
| Industry EXCLUDE | Financial Services | Industry filter > type > click stop-sign icon > red pill |
| Seniority | Owner | Seniority level > check Owner ONLY (not CXO) |
| Posted on LinkedIn | Varies per sub-search (see table below) | Left sidebar > Posted on LinkedIn |
| Top keyword bar | LEAVE BLANK | Do not type anything in the top search bar |

Sub-Searches

| # | Sub-Search | Posted | Est. Count | Sales Nav URL |
| --- | --- | --- | --- | --- |
| 1a | SaaS Early Stage -- Posted (11-50) | ON | ~1,200 * | Build search |
| 1b | SaaS Early Stage -- Not Posted (11-50) | OFF | ~1,800 * | Build search |
After building each search: Copy the full URL from your browser, open the Lead Source URL Tracker (Jamie tab), and paste it in the matching row. Enter the actual result count too.
* Estimated counts. These are rough approximations. Check the actual result count in Sales Navigator before scraping. If any sub-search shows 2,500+, split it further.

~3,000 raw across 2 sub-searches. After dedup: ~300-400 unique qualified leads. Heavy overlap between Posted/Not Posted -- that's expected.

Segment 2: SaaS Founders -- Growth Stage (51-200 employees)
Industries: Software Development · Technology, Information and Internet

Different plateau. Larger SaaS companies (typically Series B) with more resources and team, but growth per dollar invested is declining. Strategy problem hiding behind busy work.

Complete Filter Set (apply to ALL Segment 2 sub-searches)

| Filter | Value | How to Set |
| --- | --- | --- |
| Company Headcount | 51-200 | Left sidebar > Company headcount > check "51-200" |
| Geography | United States | Left sidebar > Geography > type and select |
| Title INCLUDE | CEO OR Founder OR Co-Founder OR Chief Executive | Current job title > Include dropdown > paste |
| Title EXCLUDE | GTM OR Growth OR Advisor OR Board OR Consultant OR Coach OR Fractional OR Intern OR Associate | Current job title > switch to Exclude > paste (red pills) |
| Industry INCLUDE | Software Development + Technology, Information and Internet | Industry filter > type each one > green pills |
| Industry EXCLUDE | Financial Services | Industry filter > type > click stop-sign icon > red pill |
| Seniority | Owner | Seniority level > check Owner ONLY (not CXO) |
| Posted on LinkedIn | Varies per sub-search (see table below) | Left sidebar > Posted on LinkedIn |
| Top keyword bar | LEAVE BLANK | Do not type anything in the top search bar |

Sub-Searches

| # | Sub-Search | Posted | Est. Count | Sales Nav URL |
| --- | --- | --- | --- | --- |
| 2a | SaaS Growth Stage -- Posted (51-200) | ON | ~800 * | Build search |
| 2b | SaaS Growth Stage -- Not Posted (51-200) | OFF | ~1,200 * | Build search |
After building each search: Copy the full URL from your browser, open the Lead Source URL Tracker (Jamie tab), and paste it in the matching row.
* Estimated counts. These are rough approximations. Check the actual result count in Sales Navigator before scraping. If any sub-search shows 2,500+, split it further.

~2,000 raw across 2 sub-searches. After dedup: ~200-300 unique qualified leads.

Segment 3: Tech-Adjacent Founders -- Non-SaaS ($2-15M Revenue)
Industries: IT Services and IT Consulting · Telecommunications

Not all stuck founders are SaaS. IT services, telecom, and consulting companies hit the same plateau. Jamie's frameworks apply to any venture-backed tech-adjacent company.

Segment overlap rule: Segments 1-2 already scraped Software Development and Technology, Information and Internet. This segment uses COMPLETELY DIFFERENT industries to avoid duplicate leads.

Complete Filter Set (apply to ALL Segment 3 sub-searches)

| Filter | Value | How to Set |
| --- | --- | --- |
| Company Headcount | Varies per sub-search: 11-50 or 51-200 | Left sidebar > Company headcount > check one range |
| Geography | United States | Left sidebar > Geography > type and select |
| Title INCLUDE | CEO OR Founder OR Co-Founder OR Chief Executive | Current job title > Include dropdown > paste |
| Title EXCLUDE | GTM OR Growth OR Advisor OR Board OR Consultant OR Coach OR Fractional OR Intern OR Associate | Current job title > switch to Exclude > paste (red pills) |
| Industry INCLUDE | IT Services and IT Consulting + Telecommunications | Industry filter > type each one > green pills |
| Industry EXCLUDE | Financial Services | Industry filter > type > click stop-sign icon > red pill |
| Seniority | Owner | Seniority level > check Owner ONLY |
| Posted on LinkedIn | Varies per sub-search (see table below) | Left sidebar > Posted on LinkedIn |
| Top keyword bar | LEAVE BLANK | Do not type anything in the top search bar |

Verify industry names first. Open the Industry filter dropdown in Sales Nav, type each name, and confirm it appears exactly as written. If it does not appear, use the closest matching name.

Sub-Searches

| # | Sub-Search | Headcount | Posted | Est. Count | Sales Nav URL |
| --- | --- | --- | --- | --- | --- |
| 3a | IT Services + Telecom -- Posted (11-50) | 11-50 | ON | Verify * | Build search |
| 3b | IT Services + Telecom -- Posted (51-200) | 51-200 | ON | Verify * | Build search |
| 3c | IT Services + Telecom -- Not Posted (11-50) | 11-50 | OFF | Verify * | Build search |
After building each search: Copy the full URL from your browser, open the Lead Source URL Tracker (Jamie tab), and paste it in the matching row.
* Estimated counts must be verified. Run the search in Sales Nav and check the actual result count before scraping. If any sub-search shows 2,500+, split further.

Raw count: Verify in Sales Nav. After dedup: ~150-250 unique qualified leads.

2 Scrape via HeyReach (Click-by-Click)
HeyReach (Free)
Workspace matters: HeyReach scraping requires a connected LinkedIn Sales Navigator account. Log in to Jamie's HeyReach workspace if Jamie has Sales Nav -- this keeps all lead lists organized under his account. If Jamie does NOT have a Sales Navigator subscription, use the Dopamine Digital workspace instead (we have Sales Nav). Either way, the CSV export is the same -- but keeping it in the client workspace is cleaner for ongoing management.

Do this for each sub-search URL you saved in Step 1:

Log in to app.heyreach.io. Make sure you are in the correct workspace (check the workspace name in the top-left corner).

In the left sidebar, click "Lead Lists"

Click the "+ Create New List" button (top right)

Give the list a name that matches the sub-search. Example: Jamie - SaaS Series A - Posted 11-50 (1a). Use the sub-search ID from the tables above so you can track it later.

Select "Import from Sales Navigator URL" (NOT "Upload CSV" -- that's a different option)

Paste the full Sales Nav URL you copied in Step 1. Click "Import".

HeyReach will show a progress bar. Scraping takes 5-15 minutes depending on list size. You can start another sub-search while this one runs -- HeyReach allows multiple imports at once.

When status changes to "Completed", click on the list name to open it.

Click the "Export" button (top right of the list view). Select "Download as CSV".

The CSV will contain: First Name, Last Name, Company Name, Job Title, LinkedIn Profile URL. Save the file as Jamie_Raw_1a.csv (matching the sub-search ID).

Repeat steps 3-10 for all remaining sub-searches (1b, 2a, 2b, 3a, 3b, 3c).

If HeyReach shows an error: The Sales Nav URL may have expired (LinkedIn session timed out). Go back to Sales Nav, re-run the same search, copy the new URL, and try again.
If the import stalls at 0%: Your LinkedIn account may be disconnected from HeyReach. Go to Settings > LinkedIn Accounts and check the connection status. Reconnect if needed.
Output: 7 raw CSVs (one per sub-search), each with up to 2,500 leads. Total raw: ~8,800 across all sub-searches. After dedup in Step 3: ~2,000-3,000 unique leads.
Phase 2: Qualify + Source Funded Founders
Steps 3A-3C -- Two parallel tracks: Sales Nav leads + Crunchbase funded founder discovery
Two lead sources, one pipeline

Track A (Step 3A): Clean and dedup your Sales Nav leads from HeyReach. Track B (Steps 3B-3C): Source funded startup companies from Crunchbase, then use Prospeo to find the founders' personal LinkedIn profiles. Both tracks produce separate lead lists that merge at Expandi in Phase 3.

3A Dedup, Filter to Founders/CEOs, Split for Expandi
Claude Code

Clean, filter, and split your HeyReach leads before uploading to Expandi. This is a 3-part process: dedup across segments, filter to only founders/CEOs, then split into parts under 2,500 for Expandi upload.

How this works: Open Claude Code in your terminal, drag in all CSVs from Step 2, paste the prompt below, and Claude does everything automatically. No manual spreadsheet work needed.

Part 1: Merge and Dedup

Open your terminal and type claude to start Claude Code

Drag all your raw CSVs from Step 2 into the Claude Code window (one per sub-search segment). You can drag them all at once.

Copy the prompt below and paste it into Claude Code. Hit Enter.

Claude Code Prompt: Merge + Dedup
I've dragged in my raw HeyReach CSVs from multiple Sales Nav segments. Please:
1. Find and merge all the Jamie CSV files I provided into one dataframe
2. Dedup by LinkedIn Profile URL (normalized, lowercased, trailing slash stripped)
3. Export the deduped file to ~/Downloads/Jamie_Deduped_Clean.csv
4. Print: total raw rows, duplicates removed, final unique count
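Under the hood, the merge-and-dedup the prompt asks for reduces to a few lines of pandas. This is a sketch, assuming the exports share the `LinkedIn Profile URL` column that HeyReach CSVs contain per Step 2:

```python
import pandas as pd

def normalize_url(u: str) -> str:
    """Lowercase and strip the trailing slash so the same profile
    always produces the same dedup key."""
    return str(u).strip().lower().rstrip("/")

def merge_and_dedup(frames, url_col="LinkedIn Profile URL"):
    """Concatenate all raw exports, then keep the first row per
    normalized LinkedIn URL."""
    raw = pd.concat(frames, ignore_index=True)
    key = raw[url_col].map(normalize_url)
    return raw.loc[~key.duplicated()].reset_index(drop=True)
```

In the real run you would load the frames with `pd.read_csv("Jamie_Raw_1a.csv")` (one per sub-search) and write the result with `to_csv("Jamie_Deduped_Clean.csv", index=False)`.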

Part 2: Filter to Founders/CEOs Only

This is critical. HeyReach exports contain ALL titles from Sales Nav -- Partners, Managing Partners, Presidents, VPs, etc. You MUST filter down to only decision-makers before spending Expandi credits.

Claude Code Prompt: Filter to founders/CEOs
Take the deduped file Jamie_Deduped_Clean.csv and filter it to ONLY keep founders, co-founders, and CEOs:

KEEP titles containing: founder, co-founder, cofounder, ceo, chief executive officer

REMOVE even if they contain "founder":
- founding engineer, founding designer, founding sales, founding member, founding partner, founding product, founding recruiter, founding bdr, founding gtm, founding software, founding strategic, founding business dev, associate founder
- executive assistant, assistant to, ea to, chief of staff, office of the ceo
- founder's (possessive), founders office

EXCEPTION: If title has "co-founder" alongside another role (e.g. "Co-Founder & CTO"), KEEP it.

Also remove: junk companies (Facebook, Meta, Google, Amazon, Microsoft, Apple, LinkedIn), names shorter than 3 chars.

Check BOTH the Job Title field AND the Headline field -- HeyReach sometimes puts headlines in the title column.

Output to ~/Downloads/Jamie_Deduped_Clean_Updated.csv
Print: original count, kept count, removed count
Jamie's first run: 5,187 deduped leads filtered to 4,232 founders/CEOs (removed 955 non-founders).
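The keep/drop logic the prompt describes can be sketched as a pair of regexes. These patterns approximate the prompt's rules and would need tuning against real titles; apply the function to both the Job Title and Headline columns:

```python
import re

KEEP = re.compile(r"\b(co[- ]?founder|founder|ceo|chief executive officer)\b", re.I)

# Titles that mention "founder"/"founding" but are not decision-makers,
# plus EA/chief-of-staff roles -- approximating the prompt's remove list.
DROP = re.compile(
    r"founding (engineer|designer|sales|member|partner|product|recruiter|"
    r"bdr|gtm|software|strategic|business dev)|associate founder|"
    r"executive assistant|assistant to|\bea to\b|chief of staff|"
    r"office of the ceo|founder'?s office", re.I)

def is_decision_maker(title: str) -> bool:
    """Keep founders/co-founders/CEOs; drop founding-team ICs and
    EA/CoS roles. Co-founder combos ("Co-Founder & CTO") are kept,
    matching the prompt's exception."""
    t = str(title)
    low = t.lower()
    if DROP.search(t) and "co-founder" not in low and "cofounder" not in low:
        return False
    return bool(KEEP.search(t))
```

The junk-company and short-name checks from the prompt would be separate column filters on top of this.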

Part 3: Split for Expandi Upload

Expandi has a ~2,500 lead limit per CSV upload. If your filtered list exceeds 2,500, split it into parts.

Claude Code Prompt: Split for Expandi
Split Jamie_Deduped_Clean_Updated.csv into 2 equal CSVs (approximately 2,000-2,100 each) for Expandi upload. Name them:
- Jamie_Deduped_Clean_Updated_Part1.csv
- Jamie_Deduped_Clean_Updated_Part2.csv
Jamie's result: 4,232 leads split into 2 x 2,116
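The split itself is plain ceiling arithmetic -- a sketch of what the prompt asks for, assuming Expandi's ~2,500-row upload limit:

```python
import pandas as pd

def split_for_expandi(df, max_rows=2500):
    """Split into the fewest roughly-equal parts that each stay
    under the per-upload limit."""
    n_parts = -(-len(df) // max_rows)   # ceiling division: parts needed
    size = -(-len(df) // n_parts)       # balanced rows per part
    return [df.iloc[i:i + size] for i in range(0, len(df), size)]
```

For Jamie's 4,232 rows this yields two parts of 2,116 each, matching the result above; each part would then be written out with `to_csv(..., index=False)`.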
Output: 2 CSVs of ~2,000 founders/CEOs each, ready for Expandi upload in Phase 3.
3B Source Funded Companies from Crunchbase (Click-by-Click)
DataScraper Chrome ExtensionCrunchbase

Purpose: Build a list of Series A and Series B funded companies that match Jamie's ICP. This is a sourcing step -- Crunchbase gives you the companies, then Step 3C finds the founders. Crunchbase does NOT have founder contact info -- it only has company-level data (funding, revenue, headcount).

Crunchbase is GOOD at: Funding stage, last raise date, revenue estimates, company age, investor names
Crunchbase does NOT give you people. It gives you companies. Step 3C (Prospeo) finds the founders at those companies.

Part 1: Install Instant Data Scraper (one-time setup)

Open Google Chrome. Go to Instant Data Scraper on Chrome Web Store

Click "Add to Chrome" > "Add Extension". This is Instant Data Scraper (free, no credit limits, no account needed).

You will see a small Instant Data Scraper icon in your Chrome toolbar (top right). If you don't see it, click the puzzle piece icon and pin it.

Why Instant Data Scraper instead of Crunchbase Export? Crunchbase Pro export costs credits (1,000 rows/export, 2,000/month). Instant Data Scraper is free, scrapes directly from the results table, auto-paginates through all pages, and exports to CSV/XLSX with zero credit cost. This is the recommended approach.

Part 2: Set up Crunchbase search with exact Jamie ICP filters

Go to crunchbase.com/discover/organization.companies. Log in if prompted. You will see the Companies tab selected at the top left (next to Contacts and Investors). Below it is a row of filter category cards: Overview, Contacts, Predictions, Insights, Competitors, Financials, Company Status, Deals, Notes, Lists, Tags, Partner Filters. Each card has a small icon on the right side.

Open the Overview card: Click the icon on the "Overview" card. A panel opens showing: Description Keywords, Headquarters Location, Industry, Number of Employees, Founded, Actively Hiring, and more.

Set Industry: In the Overview panel, find the "Industry" search box. Type Software and select it from the dropdown. It appears as a blue pill with an X. (For Jamie, "Software" alone is sufficient -- it captures SaaS, software dev, and tech companies without pulling in irrelevant results.)

Set Headquarters Location: In the same Overview panel, find the "Headquarters Location" search box. Type Canada and select it (blue pill). Then type United States and select it (blue pill). Both should appear as removable pills.

Set Number of Employees: In the same Overview panel, find "Number of Employees". This is a slider with two drag handles (NOT checkboxes). Drag the left handle to 11 and the right handle to 100. The numbers display above the slider as you drag. This targets companies with 11-100 employees.

Close the Overview panel by clicking the X in the top right of the panel. Your Overview card should now show "Overview: Canada, United States, +3" (indicating 3 additional filters active).

Open the Financials card: Click the icon on the "Financials" card. A panel opens showing: Last Funding Date, Last Funding Type, Last Funding Amount, Total Funding Raised, Valuation, Investors.

Set Last Funding Type: In the Financials panel, find "Last Funding Type" on the right side. You will see checkboxes: Pre-Seed, Seed, Series A, Series B, and a "+ More Options" link. Check Series B. (For a separate search, you can also check Series A -- run them as separate saved searches to stay under the 1,000 export limit.)

Set Last Funding Date (optional): In the same Financials panel, find "Last Funding Date" on the left side. You will see radio buttons: Past 30 Days, Past 60 Days, Past 90 Days, Past Year, Custom Date Range. Select "Past Year" to target companies that raised within the last 12 months. Or leave it unset to capture all recently-funded companies regardless of when they raised.

Close the Financials panel. Your Financials card should now show "Financials: Series B".

Open the Company Status card: Click the icon on the "Company Status" card. A panel opens showing: Type, Operating Status, M&A Status, and IPO Status.

Set Type: Check For Profit (uncheck Non-profit).

Set Operating Status: Check Active (uncheck Closed).

Set IPO Status: Check Private (uncheck Public and Delisted -- you want pre-IPO companies only).

Close the Company Status panel. Your card should show "Company Status: For Profit, Active, +1".

Save the search: Click "Save Changes" (top right, next to the green "Sync" button) or click the 🔖 pin icon next to the search title to save it to My Dashboard > Saved Searches. Name it descriptively, e.g., Jamie software, 11 to 100 employees, Series B, for-profit, Canada/USA.

Tip: Use the "Filters" button (top right, next to "Query") to see all active filters at once. The green "Sync" button refreshes results. You can also click "Save Search" in the results area to bookmark this search for later.
| Filter Card | Filter Name | Value to Set | UI Control |
| --- | --- | --- | --- |
| Overview | Industry | Software | Search box > type > select pill |
| Overview | Headquarters Location | Canada, United States | Search box > type > select pills |
| Overview | Number of Employees | 11 to 100 | Drag slider handles (left=11, right=100) |
| Financials | Last Funding Type | Series B (or Series A for second search) | Checkbox |
| Financials | Last Funding Date | Past Year (optional) | Radio button |
| Company Status | Type | For Profit | Checkbox |
| Company Status | Operating Status | Active | Checkbox |
| Company Status | IPO Status | Private | Checkbox |

After setting all filters, check the result count at the top left of the results table (e.g., "1-50 of 765 results"). The results table shows columns: Organization Name, employee count, Headquarters Location, Description, Website, LinkedIn, Contact Email, Full Description, Estimated Revenue Range, Founded Date. Your target is under 1,000 results. If it shows more than 1,000, tighten filters (see warning below) before proceeding to Part 3.

Part 3: Scrape the results with Instant Data Scraper

Make sure you are on the Crunchbase search results page and the results table is visible (showing Organization Name, employee count, etc.).

Click the Instant Data Scraper icon in your Chrome toolbar. A popup window opens showing a preview of the scraped data from the current page. It auto-detects the table columns (Organization Name, Headquarters Location, Description, Website, LinkedIn, Contact Email, etc.).

If Instant Data Scraper does not detect the table, scroll down the Crunchbase page first so the table fully loads, then click the icon again.

Click "Locate 'Next' button" (blue button, top left of the popup). Instant Data Scraper will highlight the Crunchbase "Next >" pagination link. If it highlights the correct button, you are ready. If it highlights the wrong element, click "Locate 'Next' button" again until it finds the right one.

Set the delay between pages: Min delay: 2 sec, Max delay: 20 sec. This prevents Crunchbase from rate-limiting you. Leave the "Infinite scroll" checkbox unchecked (Crunchbase uses pagination, not infinite scroll).

Click "Start crawling". Instant Data Scraper will automatically: scrape the current page (50 rows), click "Next", wait 2-20 seconds, scrape the next page, and repeat through all pages. The status shows: Pages scraped, Rows collected, and Working time. Let it run until it reaches the last page.

When it finishes (or you click "Stop crawling"), click the "CSV" button (green, top right of popup) to download all collected data as a CSV. Save as Jamie_Crunchbase_Companies.csv. You can also click "XLSX" for Excel format or "COPY ALL" to paste into Google Sheets.

This is free and has no credit limits. Instant Data Scraper scrapes directly from the visible table -- it does NOT use Crunchbase's export feature, so it costs zero Crunchbase credits. You can scrape all 765 results in ~5 minutes.
Source additional leads while you're here. Since you're already on Crunchbase with Jamie's ICP filters set, run additional searches with Last Funding Date = Past Year to find recently funded companies (6-12 months post-raise = when the plateau hits hardest). These companies have raised capital but growth hasn't matched -- exactly Jamie's sweet spot. Export those too and drop them into the CSV cleaner below alongside your verification export. The Crunchbase data IS the verification for these leads -- no need to re-verify them separately.
Keep search results under ~1,000. Although Instant Data Scraper can handle 1,000+ rows, Crunchbase pagination slows significantly past 1,000 results and may timeout. For best results, tighten your filters to keep under 1,000.
How to get under 1,000 results: If your search returns more than 1,000, tighten filters in this order: (1) Set Last Funding Date to "Past Year" instead of leaving it unset, (2) Split into separate searches by funding type (Series A only, then Series B only), (3) Narrow the Number of Employees slider (e.g., 11-50 then 51-100 as separate searches), (4) In the Overview panel, select "Founded" > "Custom Date Range" and set to 2018-2024. Re-check the count after each change. The goal is 500-1,000 high-quality results per search.

Part 3B: Clean Your Raw Crunchbase CSV

The CSV from Instant Data Scraper has raw CSS class names as column headers. The cleaner below fixes this automatically -- drop your raw CSV in and it renames columns, removes junk, and shows a preview.
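What the cleaner does can be approximated in a few lines. This is a sketch: the CSS-class header names below are hypothetical placeholders -- Instant Data Scraper emits whatever class names Crunchbase's table happens to use, so build the real mapping from your own export's first row:

```python
import pandas as pd

# Hypothetical scraped headers -> readable names. Inspect your own
# CSV's first row to build the actual mapping for your run.
HEADER_MAP = {
    "identifier-label": "Company Name",
    "field-type-link": "Website",
    "field-type-enum": "Funding Type",
    "field-type-money": "Total Funding",
}

def clean_headers(df, mapping):
    """Rename mapped columns and drop anything unmapped (scraper junk)."""
    out = df.rename(columns=mapping)
    return out[[c for c in out.columns if c in mapping.values()]]
```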


Output: A clean CSV of funded companies with proper column names (Company Name, Website, Funding Type, Total Funding, Employees, etc.). You will use this in Step 3C.

3C Find Founders at Crunchbase Companies (Prospeo)
Prospeo APIClaude Code

The problem: Crunchbase gives you funded companies but NOT the people running them. You can't load company URLs into HeyReach or Expandi -- those tools need personal LinkedIn profile URLs.

The solution: Use Prospeo's Search Person API to find CEOs, Founders, and Co-Founders at each company by domain. Prospeo returns their name, title, and personal LinkedIn URL -- exactly what you need for outreach.

How it works: Prospeo searches 200M+ professional profiles by company website domain. You give it a list of company domains, filter for Founder/CEO seniority, and it returns the decision-makers with their LinkedIn URLs. No enrichment credits needed -- the search itself returns LinkedIn URLs.

Step-by-step:

1. Open your terminal and type claude to start Claude Code

2. Drag in your clean Crunchbase CSV from Step 3B (the one with proper column names)

3. Copy the prompt below and paste it into Claude Code. Hit Enter.

4. Claude extracts the company domains, calls the Prospeo API in batches, filters results to only CEOs/Founders/Co-Founders, and outputs a clean CSV with personal LinkedIn URLs.

5. This takes 20-40 minutes depending on how many companies (rate limits). Claude handles the waiting automatically.

Claude Code Prompt: paste this after dragging in your Crunchbase CSV
I have a clean Crunchbase CSV with funded companies. I need to find the founders/CEOs at these companies. Please:
1. Extract company website domains from the CSV
2. Use the Prospeo Search Person API (key in ~/.env as PROSPEO_API_KEY) to search for people at each domain
3. Filter by seniority: Founder/Owner and C-Suite
4. From the results, ONLY keep people whose title contains: founder, co-founder, cofounder, CEO, or chief executive officer
5. Remove noise: executive assistants, chiefs of staff, founding engineers (unless they're also co-founders), Facebook/Meta profiles, fake LinkedIn profiles
6. Deduplicate by LinkedIn URL
7. Output to ~/Downloads/Jamie_CB_Founders_Clean.csv with columns: First Name, Last Name, Full Name, Job Title, LinkedIn URL, Company Name, Company Website, Company Industry, Company Headcount
8. Print summary: domains searched, founders found, unique companies covered

Use chunks of 10 domains per API call, max 4 pages per chunk, 2 second delays between pages, 45 second wait on rate limits.
The Prospeo API costs 1 credit per search call (not per domain). ~3,000 companies costs ~500-700 credits. Results come back with LinkedIn URLs included -- no separate enrichment step needed.
Prospeo rate limits: The API allows ~50 calls before throttling. Claude handles this with automatic retries and pauses. If the script gets stuck, you can kill it and restart -- it will write whatever results it has so far.
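The batching and filtering logic Claude builds from the prompt above can be sketched as follows. This is a minimal sketch: the chunk size, pauses, and title filter come straight from the prompt, while the actual Prospeo request/response shape is left behind a `search_fn` placeholder -- check Prospeo's API docs before wiring that part up.

```python
import time

# Titles to keep and noise to drop, per the prompt above.
KEEP_TITLES = ("founder", "co-founder", "cofounder", "ceo", "chief executive officer")
NOISE_TITLES = ("executive assistant", "chief of staff", "founding engineer")

def chunk(domains, size=10):
    """Split the domain list into batches of `size` (the SOP uses 10 per API call)."""
    return [domains[i:i + size] for i in range(0, len(domains), size)]

def keep_person(title):
    """True if this person is a founder/CEO and not a noise title."""
    t = (title or "").lower()
    noisy = any(n in t for n in NOISE_TITLES)
    cofounder = "co-founder" in t or "cofounder" in t
    if noisy and not cofounder:  # founding engineers stay only if also co-founders
        return False
    return any(k in t for k in KEEP_TITLES)

def search_all(domains, search_fn, pause=2):
    """Run search_fn (a wrapper around Prospeo's Search Person API) per batch,
    pausing between calls as the prompt specifies."""
    people = []
    for batch in chunk(domains):
        people.extend(p for p in search_fn(batch) if keep_person(p.get("title")))
        time.sleep(pause)
    return people
```

The point of the filter: "Founding Engineer" alone is dropped, but "Founding Engineer & Co-Founder" survives, matching rule 5 in the prompt.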

Expected results: ~3,000 founders/CEOs across ~1,700 companies (55-60% hit rate). This is a separate lead list from your Sales Nav leads -- these are guaranteed funded startup founders.

Output: Jamie_CB_Founders_Clean.csv -- founders/CEOs with personal LinkedIn URLs, ready to load into Expandi as its own segment.

Phase 2 Summary: Two Lists Going Into Expandi

List | Source | What It Contains | Size
Track A | Sales Nav + HeyReach | SaaS founders/CEOs, deduped + filtered, split into 2 parts for upload | 4,232 (2 x 2,116)
Track B | Crunchbase + Prospeo | CEOs/Founders at Series A/B funded companies with LinkedIn URLs | 2,997

Both lists go into Expandi as separate segments in Phase 3. Track A = "SaaS Growth Founders" (split into Part 1 and Part 2). Track B = "CB Founders Series A/B". This lets you measure which source converts better and tailor messaging accordingly.

Phase 3: Enrich
Steps 4-7 -- Waterfall enrichment: Expandi first, Apollo second, Prospeo third
Both tracks merge here

Your two lead lists from Phase 2 -- Track A (Sales Nav leads) and Track B (Crunchbase funded founders) -- both go into Expandi as separate segments. Expandi finds personal Gmail/Outlook addresses from LinkedIn profiles. After Expandi, Apollo backfills the misses using its 5K monthly credits. Prospeo runs third on whatever Apollo missed. Each tool catches leads the previous one couldn't find.

Enrichment Waterfall -- Actual Results (Jamie, March 2026)

This is exactly what happened at each step. Every number is real.

Step | Tool | Input | Emails Found | Hit Rate | Still Missing
4 | Expandi | 6,180 leads | 2,692 | 43.6% | 3,488
5 | Apollo (4K credits) | 3,488 leads | 2,114 | 60.6% | 1,374
6 | Prospeo | Not yet run -- tested 50% hit rate on a 10-lead sample; rate limited, pending re-run.
-- | Total after enrichment | 6,180 | 4,806 | 77.8% | 1,374
8 | MillionVerifier | 4,806 emails | 3,633 verified OK | 75.6% | 1,173 failed

Track | Total Leads | Expandi Found | Apollo Found | Verified OK | No Email
Track A (Sales Nav) | 3,335 | 1,329 (40%) | 1,084 (32%) | 1,796 | 2,006
Track B (Crunchbase) | 2,845 | 1,363 (48%) | 1,030 (36%) | 1,837 | 1,482
Combined | 6,180 | 2,692 | 2,114 | 3,633 | 3,488
What this means: Expandi alone found 43.6% of emails. Apollo added another 34.2%, bringing the total to 77.8%. After MillionVerifier removed bad emails, 3,633 leads were campaign-ready. The remaining 3,488 leads have LinkedIn URLs but no email -- these go to HeyReach as LinkedIn-only campaigns (Step 9).
4 Upload to Expandi + Email Enrichment (Click-by-Click)
Expandi · Personal Inbox

Why Expandi first: Expandi finds personal Gmail/Outlook addresses from LinkedIn profiles. Founders read personal inbox more than corporate. Personal emails get 2-3x higher reply rates.

Expandi upload limit: ~2,500 leads per CSV. If your list exceeds 2,500, you must split it first (Step 3A Part 3 handles this). Upload each part as a separate list.

Upload Process

Log in to app.expandi.io

In the left sidebar, click "People" > "Import" > "Upload CSV"

Select your first file. For Track A, upload Jamie_Deduped_Clean_Updated_Part1.csv first.

Map columns CAREFULLY -- this is the most important step:

CSV Column | Map to Expandi Field | Priority
Profile URL / LinkedIn URL | profile_link | CRITICAL -- without this, nothing works
First Name | first_name | Required
Last Name | last_name | Required
Full Name | name | Required
Job Title | job_title | Required
Company / Company Name | company_name | Required
Company Website | dynamic placeholder | Optional (for personalization)

Name the list clearly so you can tell lists apart later. Examples: Jamie - SaaS Growth Founders Part 1 - Mar 2026, Jamie - CB Founders Series A/B - Mar 2026

Leave "Auto-refresh" and "Auto-assign to campaign" unchecked. Click "Confirm".

Repeat for each remaining CSV: Part 2 of Track A, and Track B (Jamie_CB_Founders_Clean.csv).

Expandi processes the lists in the background. Email enrichment happens automatically -- wait 1-4 hours for large lists.

When complete, export each list from Expandi as CSV. The export includes all original fields PLUS the enriched email, phone, location, and company data from LinkedIn.

If you see "Email lookup failed": Some profiles are private or have no public email trail. This is normal -- that's why we have Steps 5-7.

Expected hit rate: roughly 40-50% of leads will have emails found. Jamie's results: Track A = 1,329 of 3,335 (40%), Track B = 1,363 of 2,845 (48%).

Output: Expandi export CSVs with enriched emails. Keep these -- you will merge them in the next steps.
4B Recombine Expandi Exports + Merge Funding Data (Track B Only)
Claude Code

After Expandi, recombine your split files and (for Track B) merge back the Crunchbase funding data that Expandi doesn't carry.

Track A: Recombine Split Parts

Claude Code Prompt: Recombine Track A
I have two Expandi export CSVs from my Sales Nav leads (they were split for upload). Combine them into one master list, dedup by profile_link, and save as Jamie_SaaS_Growth_Founders_Master.csv
Jamie's result: 2,116 + 2,116 = 4,232 leads, zero duplicates between parts.
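Under the hood this is a straightforward CSV concatenation with URL-keyed dedup. A minimal sketch (the profile_link column name follows the Expandi export; adjust if your export differs):

```python
import csv

def recombine(paths, out_path, key="profile_link"):
    """Concatenate Expandi export CSVs, dropping rows whose profile link was seen before."""
    seen, rows, fields = set(), [], None
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            fields = fields or reader.fieldnames
            for row in reader:
                # Normalize so trailing slashes and case don't create false uniques
                k = (row.get(key) or "").strip().lower().rstrip("/")
                if k and k not in seen:
                    seen.add(k)
                    rows.append(row)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Usage: `recombine(["Part1.csv", "Part2.csv"], "Jamie_SaaS_Growth_Founders_Master.csv")` returns the final dedup count so you can sanity-check it against the part sizes.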

Track B: Merge Crunchbase Funding Data

The Expandi export has emails and LinkedIn data but lost the Crunchbase funding data (revenue, total funding, last raise, founded year, growth score). Merge it back by matching on company website domain.

Claude Code Prompt: Merge funding data
I have two files: 1. My Expandi export of Track B leads (has emails, LinkedIn data) 2. My original Crunchbase company list (has funding data: revenue, total funding, last funding amount/type/date, founded year, growth score) Merge the Crunchbase funding data into the Expandi export by matching on normalized company website domain. Use clean column names. Save as Jamie_CB_Founders_Master.csv Print: total leads, how many matched to funding data, how many unmatched.
Jamie's result: 95% match rate -- 2,847 of 2,997 leads matched to Crunchbase funding data. 22 columns covering person details, contact info, company info, and all funding data.
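The core of this merge is domain normalization, so that "https://www.acme.io/" in one file matches "acme.io" in the other. A minimal sketch -- the funding column names here are illustrative; map them to whatever your Crunchbase export actually has:

```python
from urllib.parse import urlparse

# Funding columns to carry over -- illustrative names, match to your CSV headers.
FUNDING_COLS = ("Total Funding", "Last Funding Type", "Last Funding Date", "Founded Year")

def norm_domain(url):
    """Normalize a website URL or bare domain to a lowercase matching key."""
    if not url:
        return ""
    host = urlparse(url if "//" in url else "//" + url).netloc or url
    return host.strip().lower().removeprefix("www.").rstrip("/")

def merge_funding(leads, companies, lead_col="Company Website", comp_col="Website"):
    """Copy funding columns onto leads whose domain matches; return the match count."""
    by_domain = {norm_domain(c.get(comp_col)): c for c in companies if c.get(comp_col)}
    matched = 0
    for lead in leads:
        company = by_domain.get(norm_domain(lead.get(lead_col)))
        if company:
            matched += 1
            for col in FUNDING_COLS:
                lead[col] = company.get(col, "")
    return matched
```

Printing `matched` against `len(leads)` gives you the match-rate check the prompt asks for.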
Output: Jamie_SaaS_Growth_Founders_Master.csv (Track A) + Jamie_CB_Founders_Master.csv (Track B with funding data)
4C Dedup Against Master Contacted Lists
Claude Code · DNC Check

Before spending credits on email backfill, remove anyone you have already contacted. Check against ALL outreach history: Instantly campaigns, HeyReach LinkedIn outreach, and any manual sends.

Do NOT skip this step. Contacting someone twice from different channels (email + LinkedIn) is fine and intentional. But sending the same cold email twice destroys credibility. This step removes people already in your Instantly and HeyReach databases.
Claude Code Prompt: Dedup against contacted lists
I have two master lead lists and I need to remove anyone I've already contacted. Here are my contacted lists: [Drag in your Instantly Mastersheet CSV, Instantly Database CSV, and any HeyReach export CSVs] Match on email address (normalized lowercase) AND LinkedIn profile URL (normalized to /in/username slug). Do NOT remove based on name alone -- too many false positives. For each master list, remove matches and save: - Jamie_SalesNav_Founders_Uncontacted.csv (Track A cleaned) - Jamie_CB_Founders_Uncontacted.csv (Track B cleaned) Print: original count, matches found, clean count for each list.
Jamie's result: Track A removed 897 (21% overlap with existing campaigns). Track B removed only 152 (5% -- Crunchbase founders are mostly fresh leads).
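The matching logic from the prompt -- normalize emails to lowercase, reduce LinkedIn URLs to their /in/username slug, never match on name alone -- can be sketched like this (the email/linkedin_url field names are illustrative):

```python
import re

def linkedin_slug(url):
    """Reduce any LinkedIn profile URL to its /in/<slug> so URL variants match."""
    match = re.search(r"linkedin\.com/in/([^/?#]+)", (url or "").lower())
    return match.group(1) if match else ""

def remove_contacted(leads, contacted, email_col="email", url_col="linkedin_url"):
    """Drop leads whose email OR LinkedIn slug appears in any contacted list."""
    emails = {(c.get(email_col) or "").strip().lower() for c in contacted} - {""}
    slugs = {linkedin_slug(c.get(url_col)) for c in contacted} - {""}
    return [
        lead for lead in leads
        if (lead.get(email_col) or "").strip().lower() not in emails
        and linkedin_slug(lead.get(url_col)) not in slugs
    ]
```

Slug matching is what catches the common false negatives: trailing slashes, tracking query strings, and mobile vs desktop LinkedIn URLs all collapse to the same key.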

Final Uncontacted Lists

List | Total | With Email | Missing Email
Track A -- Sales Nav Founders | 3,335 | 1,329 (40%) | 2,006
Track B -- CB Founders (with funding data) | 2,845 | 1,363 (48%) | 1,482
Total | 6,180 | 2,692 | 3,488 need backfill
Output: Jamie_SalesNav_Founders_Uncontacted.csv + Jamie_CB_Founders_Uncontacted.csv -- these are your final clean lists ready for email backfill in Steps 5-7.
5 Apollo -- Primary Email Backfill (Click-by-Click)
Apollo ($109/mo, 5K credits)
Credit optimization: Apollo has the most credits (5,000/mo) so it runs FIRST on all Expandi misses. Prospeo (2,000/mo) runs second on Apollo misses only. This order maximizes email coverage while preserving the smaller Prospeo budget.

Go to app.apollo.io and log in

Click "People" in the left sidebar

Click "Import" > "Upload CSV". Select your uncontacted list (leads WITHOUT email from Expandi).

Map columns: First Name, Last Name, Company, LinkedIn URL. Click "Import".

Once imported, select all contacts. Click "Enrich" or "Find Emails" (each lookup costs 1 Apollo credit -- you will NOT be charged if no email is found).

When enrichment completes, filter by contacts that have an email. Export as CSV: Jamie_Apollo_Found.csv.

Filter by contacts with NO email. Export: Jamie_Apollo_Missing.csv.

Cost note: Apollo Basic = $109/mo for 5,000 credits (includes 2,500 add-on). At ~3,500 lookups you will use most of one month's credits. Plan billing cycle: renews on the 5th of each month.
Jamie's Actual Apollo Results (March 2026)
Metric | Track A (Sales Nav) | Track B (Crunchbase) | Combined
Input (Expandi misses) | 2,006 | 1,482 | 3,488
Emails found | 1,084 | 1,030 | 2,114
Hit rate | 54% | 70% | 60.6%
Credits used | -- | -- | ~4,000 (close to monthly cap)
Still missing email | 922 | 452 | 1,374

Actual hit rate: 60.6% -- Apollo found emails for roughly 3 out of every 5 Expandi misses. Crunchbase leads had a higher hit rate (70%) because company data was richer.

Output: Jamie_Apollo_Found.csv (2,114 leads with email) + Jamie_Apollo_Missing.csv (1,374 leads, no email found)
6 Prospeo -- Fill Apollo Gaps (Click-by-Click)
Prospeo (2K credits/mo)
Metric | Value
Input | 1,374 Apollo misses (deduped + qualified subset: 1,864)
Leads processed | 1,000 (hit daily rate limit at batch 22/38)
Emails found by Prospeo | 313
Hit rate (processed) | 31.3%
MillionVerifier pass | 302 verified (11 rejected as bad/invalid)
MV breakdown | 99 good (31.6%) + 204 catch-all (65.2%) + 10 bad (3.2%)
Credits used | 106 Prospeo ($0.34/email) + 313 MV
Tier breakdown | T1: 192, T2: 110
T1 with personalization | 192/192 (merged from Exa research)
Prospeo only charges for successful matches -- no credit used if no email found. However, always run Prospeo results through MillionVerifier before sending. Prospeo marked all 313 as "VERIFIED" but MV caught 11 bad emails (3.5%). The $0.001/email MV cost is worth avoiding bounces that tank sender reputation.

Option A: Use the Prospeo Dashboard (Easier)

Go to app.prospeo.io and log in

Click "Enrich" in the top navigation

Click "Bulk Enrich" > "Upload CSV". Select your Jamie_Apollo_Missing.csv file (1,374 leads without email).

Map the "LinkedIn URL" column. Prospeo uses LinkedIn URLs as the primary lookup method. If a lead has no LinkedIn URL, it falls back to "First Name" + "Last Name" + "Company Website".

Click "Start". Processing takes 15-30 minutes for ~1,400 leads.

When complete, click "Download Results". The CSV will have an "Email" column.

Split the results: rows WITH email = Jamie_Prospeo_Found.csv. Rows WITHOUT = Jamie_No_Email_Final.csv.

Option B: Use the Prospeo API (Faster for Large Batches)

The API endpoint is POST https://api.prospeo.io/bulk-enrich-person with header X-KEY: your_api_key.

The endpoint accepts up to 50 leads per batch. The script at 7-Scripts/waterfall_enrich.py handles batching, rate limiting, and output automatically.

Run: python3 7-Scripts/waterfall_enrich.py --prospeo-only

Rate limit warning: Prospeo allows 2,000 API calls/day and 300/minute. For 1,864 leads at 50/batch, that's 38 calls -- well within daily limits. If you hit a rate limit, the script auto-retries with 60s backoff. If daily limit is exhausted, wait until reset (check x-daily-reset-seconds header).
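The batching-and-backoff behavior the script implements can be sketched independently of the HTTP layer. Here `send` would wrap a POST to the bulk-enrich endpoint above (with the X-KEY header) and raise RateLimited on a 429; the rest is a generic skeleton matching the 50/batch and 60s-backoff numbers in this section:

```python
import time

class RateLimited(Exception):
    """Raise from `send` when the API answers HTTP 429."""

def run_batches(leads, send, batch_size=50, backoff=60, max_retries=3, sleep=time.sleep):
    """Feed `leads` to `send` in batches; on a rate limit, wait and retry.
    Partial results are returned if the daily limit is exhausted."""
    results = []
    for i in range(0, len(leads), batch_size):
        batch = leads[i:i + batch_size]
        for attempt in range(max_retries + 1):
            try:
                results.extend(send(batch))
                break
            except RateLimited:
                if attempt == max_retries:
                    return results  # give up -- write what we have so far
                sleep(backoff)
    return results
```

Returning partial results instead of crashing is what lets you "kill it and restart" as noted earlier without losing the emails already found.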

Actual hit rate: 31.3% on processed leads. Lower than Apollo (60.6%) because these are the hardest-to-find contacts -- two tools already failed on them. Still recovered 312 new emails for 106 credits.

Credit cost: 106 credits for 312 emails = $0.34/email. Extremely efficient since Prospeo only charges for successful matches.

Output: Jamie_Prospeo_Found.csv (313 emails) -- these go through MillionVerifier in Step 7 before being added to campaign CSVs. Jamie_Prospeo_Missed.csv (remaining leads go to LinkedIn-only pipeline in Step 9)
9 LinkedIn-Only Pipeline -- Dedup, Qualify, Tier, Research, Upload
HeyReach · LinkedIn Only
Do NOT skip these leads. After the waterfall (Expandi -> Apollo -> Prospeo) and MillionVerifier, you will still have leads with LinkedIn URLs but no verified email. For Jamie, that was 1,562 leads. These are reachable via LinkedIn (HeyReach) and represent 28% of your total pipeline. Dedup them against your email campaigns (don't LinkedIn-message someone you're already emailing), qualify, tier, research T1s, then upload to HeyReach.

9A: Dedup Against Existing Campaigns

Before doing anything else, remove leads that already exist in your email campaign CSVs or Instantly. Many of these leads have email addresses in the campaign files -- you don't want to LinkedIn-message someone you're already emailing.

Claude Code Prompt: dedup LinkedIn-only leads
[Drag in your no-email CSV files from 6-Uncontacted-Ready/ AND all campaign CSVs from 10-Instantly-Campaigns/] These leads have LinkedIn URLs but no email. Before processing them, dedup against: 1. All existing campaign CSVs (match on LinkedIn URL AND name+company combo) 2. Any existing Instantly lead exports 3. Internal duplicates between CB and SalesNav sources Print: how many removed at each dedup pass, how many remain.
Jamie's LinkedIn-Only Dedup Results (March 2026)
Stage | Count | Notes
Starting pool (no email) | 3,488 | CB: 1,482 + SalesNav: 2,006
Removed (already in campaigns) | -1,488 | Already being emailed via Instantly
After dedup | 2,000 | CB: 782 + SalesNav: 1,218

9B: Qualify by ICP

Filter out leads that don't match Jamie's ICP. Same disqualification criteria as the email pipeline, applied here before tiering to avoid wasting research time on bad fits.

Disqualify leads with bad titles: intern, student, retired, former, freelance, consultant, professor, advisor, board member, volunteer, assistant, coordinator, analyst, associate, recruiter

Disqualify leads in wrong industries: staffing, recruiting, real estate, construction, healthcare, banking, insurance, airline, mining, oil, gas, agriculture, food, restaurant, hospitality, retail, fashion, fitness, entertainment, government, military, nonprofit, legal, accounting, architecture

Disqualify leads at large companies: 500+ employees (Jamie targets 11-100)

Keep everyone else -- even partial matches go to T2/T3.

Jamie's ICP Qualification Results (March 2026)
Category | Count | Notes
Input (after dedup) | 2,000 | --
Qualified (good title, not excluded) | 1,814 | Founders/CEOs in acceptable industries
Marginal (partial match) | 50 | Good industry but not founder title, kept for T2/T3
Disqualified -- bad title | 26 | Advisors, interns, board members
Disqualified -- wrong industry | 110 | Healthcare, construction, legal, etc.
Moving forward | 1,864 | Qualified + marginal

9C: Tier the Qualified Leads

Score each lead using the ICP signals below. Higher score = better fit = higher tier.

Signal | Points
Founder/CEO/President title | +30
Software/SaaS/Tech industry | +20
11-200 employees | +15
Series A or B funding | +15
$1M-$50M last funding amount | +10
Growth score >= 7 (CB only) | +5
Jamie's LinkedIn-Only Tiering Results (March 2026)
Tier | Score | Count | Source | Action
T1 | >= 50 | 487 | CB: 227, SN: 260 | Research + personalize, then priority HeyReach campaign
T2 | 20-49 | 1,327 | CB: 494, SN: 833 | Standard HeyReach campaign, no individual research
T3 | < 20 | 50 | SN: 50 | Low priority. Upload last or skip.
Total | -- | 1,864 | -- | --

9D: Research T1 Leads (Exa Intelligence)

Run Exa web searches on each T1 lead to find personalization hooks: podcast appearances, press mentions, funding announcements, blog posts. This is the same research process as Steps 10-11 for the email leads.

Exa is for research, not email finding. We tested Exa for email discovery and it returned 0%. But it's excellent at finding podcast appearances and press mentions. Use it for what it's good at.
Metric | Count | Rate
Total T1 leads researched | 487 | --
Podcast mentions found | 380 | 78.0%
Press mentions found | 93 | 19.1%
Fallback personalization | 14 | 2.9%
Total with hooks (podcast or press) | 473 | 97.1%
Total personalized (all types) | 487 | 100%
Claude Code Prompt: research T1 LinkedIn-only leads
[Drag in Jamie_LinkedIn_Only_T1.csv] These are 487 T1 LinkedIn-only leads. For each one, search Exa for: 1. Podcast appearances: "[Name]" podcast OR interview OR keynote 2. Press/news: "[Company]" funding OR launch OR raised OR award Build a personalization_line for each lead: - If podcast found: 'Caught your appearance on "[title]" -- really liked your take' - If press found: 'Saw the news about [company] -- "[headline]" -- congrats' - Fallback: use funding round data or generic company reference Export to Jamie_LinkedIn_Only_T1_Personalized.csv with personalization_line, hook, podcast_mention, and press_mention columns added.
The script at 7-Scripts/t1_linkedin_research.py handles this. Takes ~8 minutes for 487 leads. Uses 2 Exa calls per lead.
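The hook-to-line logic from the prompt can be sketched as one small function. Field names like podcast_title are illustrative stand-ins for whatever the Exa results get parsed into; the priority order (podcast, then press, then fallback) is from the prompt above:

```python
def personalization_line(lead):
    """Pick the best hook per the prompt's priority: podcast > press > fallback."""
    if lead.get("podcast_title"):
        return f'Caught your appearance on "{lead["podcast_title"]}" -- really liked your take'
    if lead.get("press_headline"):
        return f'Saw the news about {lead["company"]} -- "{lead["press_headline"]}" -- congrats'
    # Fallback: funding round data, else a generic company reference
    if lead.get("last_funding_type"):
        return f'Congrats on the {lead["last_funding_type"]} -- big milestone for {lead["company"]}'
    return f"Came across {lead['company']} and liked what you're building"
```

Every lead gets a line (that's the 100% "total personalized" figure); the hook columns just record which branch fired.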

9E: Upload to HeyReach

Go to app.heyreach.io and log in

Click "Lists" > "Create List" > "Upload CSV"

Upload Jamie_LinkedIn_Only_T1_Personalized.csv first. Name it: Jamie - LinkedIn Only T1 - Mar 2026

Column mapping:

CSV Column | HeyReach Field | Notes
linkedin_url | LinkedIn URL | CRITICAL -- without this, nothing works
first_name | First Name | Maps to built-in {FIRST_NAME}
last_name | Last Name | Maps to built-in {LAST_NAME}
company_name | Company | Maps to built-in {COMPANY}
title | Position | Maps to built-in {POSITION}
personalization_line | Click "Add custom variable" | Becomes {PERSONALIZATION_LINE} in message editor

Important: When uploading, click "Add custom variable" at the bottom of the column mapping screen. Map the personalization_line column to a new custom variable called PERSONALIZATION_LINE. This becomes {PERSONALIZATION_LINE} in the message editor alongside the built-in variables ({FIRST_NAME}, {COMPANY}, {POSITION}, etc.).

Repeat for T2 (without personalization) and T3 as separate lists.

9F: Create LinkedIn-Only Campaigns

Create a new campaign. Name: Jamie - LinkedIn Only T1 - [Month]

Add the T1 list. Set campaign type to connection request + follow-up.

Sequence: Connection request with note > Wait 3 days > Follow-up message 1 (after accept) > Wait 5 days > Follow-up message 2

Daily limit: 20-25 connection requests per sender account.

This is NOT the same as the T1 multi-channel overlay (Step 14). Step 14 adds LinkedIn on top of email for your best leads who already have email addresses. This step catches the leads that email couldn't reach at all. Different lists, different campaigns, no overlap.
T1 Connection Request (Personalized)

Hi {FIRST_NAME}, {PERSONALIZATION_LINE}. I host a podcast for venture-backed founders who've hit that growth ceiling between 1 and 10M. Would love to connect.

T2 Connection Request (Generic)

Hi {FIRST_NAME}, saw you're running {COMPANY} in the SaaS space. I host a podcast for venture-backed founders who've hit that growth ceiling between 1 and 10M. Would love to connect.

Follow-Up 1 (After Connection Accept -- All Tiers)

Thanks for connecting {FIRST_NAME}. Quick question -- have you hit a point where pushing harder on growth actually made things worse? That's a pattern I keep seeing with Series A/B founders. Would love to hear your take, even if it's just a 2 min voice note.

Follow-Up 2 (5 Days Later -- All Tiers)

Hey {FIRST_NAME}, no worries if you're slammed. I'm interviewing a few founders this month for the podcast about that exact "stuck" moment. Zero prep, just a real conversation. Would that interest you?

Expected results: At 30-40% connection acceptance on 487 T1 leads, expect 145-195 new connections. At 20-30% reply rate on follow-ups, that's 30-60 conversations from leads that would otherwise have been lost entirely.

Phase 4: Verify
Step 7 -- Verify ALL emails (Expandi + Apollo + Prospeo) before tiering
Especially critical for Jamie

Jamie's new domains are actively sending -- protect their reputation. Every bounce hurts deliverability. Verify aggressively -- OK status ONLY.

7 MillionVerifier -- Verify ALL Emails (Click-by-Click)
MillionVerifier
This step protects Jamie's new domains. Every bounced email hurts deliverability. Verify before sending -- no exceptions.
Claude Code Prompt: run this before uploading to MillionVerifier
[Drag in all your enriched lead CSVs -- Track A, Track B, AND the Prospeo recovery CSV (Jamie_Prospeo_Found.csv)] Please merge them into one file, dedup by email address (keep the row with the most data if duplicates exist), and export to ~/Downloads/Jamie_All_Emails_For_Verification.csv. Print: total rows per source file, duplicates removed, final count. Important: Include Prospeo emails even though Prospeo marks them "VERIFIED" -- we found 3.5% of Prospeo "verified" emails were actually bad when checked by MillionVerifier.
Run this before uploading to MillionVerifier. One clean merged file is easier to upload than several separate ones. Always verify ALL sources -- no tool's built-in verification is 100% reliable.

Take your merged Jamie_All_Emails_For_Verification.csv (from the Claude Code step above).

Go to millionverifier.com and log in

Click "Bulk Verifier" in the top menu

Click "Upload File". Select your merged CSV.

MillionVerifier will ask which column contains the email addresses. Select the email column. Click "Start Verification".

Verification takes 5-30 minutes. When complete, you will see a breakdown: OK, Catch-All, Risky, Unknown, Invalid, Disposable.

Click "Download Results". The CSV will have a new "Result" column next to each email.

Open the results CSV. Filter the "Result" column to show ONLY "ok". Delete every other row.

DO NOT keep "catch-all" (spam traps hide here). DO NOT keep "risky" or "unknown." Only "ok" status. No exceptions.

Save as Jamie_Verified_Master.csv.

Claude Code Prompt: run this after downloading MillionVerifier results
[Drag in your MillionVerifier results CSV -- it will have a "Result" or "quality" column with values like "ok", "catch_all", "risky", "unknown", "invalid", "disposable"] Please: 1. Keep ONLY rows where Result/quality = "ok". Delete everything else 2. Remove the Result/quality column (not needed anymore) 3. Export to ~/Downloads/Jamie_Verified_Clean.csv 4. Print a breakdown: how many ok, catch_all, risky, unknown, invalid, disposable, and the final verified count
Never keep "catch_all". Spam traps hide behind these. OK-only is non-negotiable.
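If you'd rather script the filtering than click through it, a minimal sketch: it auto-detects whether the status column is named Result or quality, keeps only "ok" rows, and drops the status column afterward.

```python
import csv

def keep_ok_only(in_path, out_path, statuses=("ok",)):
    """Keep only rows whose MillionVerifier status is in `statuses`; drop the status column."""
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0
    # MillionVerifier exports vary in casing/naming for the status column
    col = next((c for c in ("Result", "result", "quality") if c in rows[0]), None)
    kept = [r for r in rows if (r.get(col) or "").strip().lower() in statuses]
    for row in kept:
        row.pop(col, None)
    fields = [c for c in rows[0] if c != col]
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(kept)
    return len(kept)
```

The `statuses` default enforces the OK-only rule; nothing catch-all, risky, or unknown survives unless you explicitly widen it.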

Option B: Use the MillionVerifier API (Faster)

The API endpoint is GET https://api.millionverifier.com/api/v3/?api=YOUR_KEY&email=EMAIL

Single email verification -- loop through your CSV, 0.1s between calls to be polite.

The response JSON has a quality field: "good", "risky", "bad", or "unknown".

Keep "good" only. "risky" with result: "catch_all" can be kept for LinkedIn-aware domains but monitor bounce rates.
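A minimal loop over the single-email endpoint described above (the endpoint URL and quality values come from this section; the `check` hook exists so you can stub the HTTP call when testing):

```python
import json
import time
from urllib.parse import urlencode
from urllib.request import urlopen

def mv_quality(email, api_key):
    """Call MillionVerifier's v3 single-email endpoint; return its quality field."""
    query = urlencode({"api": api_key, "email": email})
    with urlopen(f"https://api.millionverifier.com/api/v3/?{query}", timeout=30) as resp:
        return json.load(resp).get("quality")

def verify_emails(emails, api_key, delay=0.1, check=None):
    """Return only emails rated "good". `check` defaults to the live API call."""
    check = check or (lambda e: mv_quality(e, api_key))
    good = []
    for email in emails:
        if check(email) == "good":
            good.append(email)
        time.sleep(delay)  # 0.1s between calls, per the guidance above
    return good
```

Keeping only "good" here mirrors the bulk flow's OK-only rule; if you decide to keep catch-alls for specific domains, handle that as an explicit exception, not the default.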

Metric | Expandi + Apollo Batch | Prospeo Batch | Total
Input | 4,806 | 313 | 5,119
Passed (good + catch-all) | 3,633 | 302 | 3,935
Rejected (bad/invalid) | 1,173 | 11 | 1,184
Pass rate | 75.6% | 96.5% | 76.9%
Cost: ~$5-10 per 1,000 verifications via API (we have 17,973 credits). Bulk upload costs $29-49.

Actual pass rate: 76.9% overall. Prospeo emails passed at 96.5% (much higher than Expandi+Apollo's 75.6%).

Output: Jamie_Verified_Master.csv -- 3,935 clean, verified emails ready for tiering. Leads without email go to the LinkedIn-only pipeline.
Phase 5: Tier + Split
Step 8 -- Score all leads, split into Email (Instantly) vs LinkedIn-only (HeyReach)
Tiering for maximum ROI

T1 hyper-personalized = 15-25% reply. T2 segment-personalized = 5-10%. T3 volume = 2-5%. Concentrate dossier research on T1 where $15K deals close.

8 Score + Tier All Leads (Email + LinkedIn-Only)
Claude Code

Scoring signals (use what's available per track):

Signal | Points | Available In

Title Signals
Founder + CEO combo title | +25 | Both tracks
Founder / Co-Founder | +20 | Both tracks
CEO only | +15 | Both tracks

Industry Signals
SaaS / Software / Tech / AI / Cloud / Data | +30 | Both tracks
Adjacent (Marketing, Consulting, E-commerce, Financial Services) | +15 | Both tracks

Company Size
11-200 employees (sweet spot) | +25 | Both tracks
2-10 employees | +15 | Both tracks
201-500 employees | +10 | Both tracks

Funding (Track B only)
Venture-backed Series A/B | +40 | Track B
$1-10M ARR range | +25 | Track B
Raised in last 12 months | +20 | Track B
Total funding $1-50M | +10 | Track B

Location & Legitimacy
English-speaking market (US, UK, Canada, Australia) | +10 | Both tracks
Has company website | +5 | Both tracks
Has phone number | +5 | Both tracks
Tier | Score (Track B) | Score (Track A) | Target Volume | Treatment
T0 | Hand-selected from T1 pool -- highest VIP Score + Workshop Fit + Reachability | 10-20 | MANUAL ONLY. Dedicated Sherlock dossier page, Loom audit, direct outreach by Jamie. Not in Instantly or HeyReach.
T1 | 120+ | 90+ (tech founder, right size, English-speaking) | ~300-650 | Dossier + hyper-personalized + LinkedIn
T2 | 80-119 | 50-89 | ~1,500-2,500 | Segment-personalized templates
T3 | <80 | <50 | ~500-1,000 | Volume play, generic but targeted
T0 VIP Leads (Manual Only): After scoring, the top 10-20 highest-value T1 leads are promoted to T0. These are hand-selected based on VIP Score, Workshop Fit ($15K engagement potential), and Reachability. T0 leads are removed from all Instantly and HeyReach campaigns and handled with white-glove manual outreach by Jamie. Each T0 lead gets a dedicated Sherlock dossier page at jamie.dopaminedigital.io/vip-pipeline/ with Loom audit scripts, multi-channel outreach plans, and customized messaging. In Airtable, T0 leads are marked: Tier=T0, Priority=Urgent, Outreach Queue=Manual, Lead Status="VIP - Manual Outreach", with Subject Hook and Personalization Line cleared (Jamie uses the full dossier instead).
Track A vs Track B: Track A (Sales Nav) has industry (100%), employee count (80%), location (75%), and website (76%) data -- enough to score up to ~100 points. Track B (Crunchbase) also has funding, revenue, and growth data so can score up to ~140. Both tracks can produce T1 leads. Track A T1 = tech founder + right company size + English-speaking market. Track B T1 = all that plus confirmed funding and revenue signals.
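Condensed into code, the scoring rules look roughly like this. A partial sketch only: column names (title, industry, employees, country, last_funding) and the keyword lists are illustrative, and several Track B signals (revenue, raise recency, growth score) are omitted for brevity -- the full prompt remains the source of truth.

```python
import re

def score_lead(lead, track):
    """Partial scoring sketch: title + industry + size + location (+ funding for Track B)."""
    score = 0
    title = (lead.get("title") or "").lower()
    is_founder = "founder" in title              # matches Founder and Co-Founder
    is_ceo = "ceo" in title or "chief executive" in title
    if is_founder and is_ceo:
        score += 25
    elif is_founder:
        score += 20
    elif is_ceo:
        score += 15
    # Tokenize so "ai" doesn't false-match inside words like "retail"
    words = set(re.split(r"[^a-z]+", (lead.get("industry") or "").lower()))
    if words & {"saas", "software", "tech", "ai", "cloud", "data"}:
        score += 30
    elif words & {"marketing", "consulting", "ecommerce"}:
        score += 15
    employees = lead.get("employees") or 0
    if 11 <= employees <= 200:
        score += 25
    elif 2 <= employees <= 10:
        score += 15
    elif 201 <= employees <= 500:
        score += 10
    if (lead.get("country") or "") in {"US", "UK", "Canada", "Australia", "Ireland", "NZ"}:
        score += 10
    if track == "B" and (lead.get("last_funding") or "").lower() in {"series a", "series b"}:
        score += 40
    return score

def tier(score, track):
    """Track-relative thresholds: Track B tops out higher because of funding signals."""
    t1, t2 = (120, 80) if track == "B" else (90, 50)
    return "T1" if score >= t1 else "T2" if score >= t2 else "T3"
```

Worked example: a US-based Co-Founder & CEO of a 45-person SaaS company with a Series B scores 25 + 30 + 25 + 10 + 40 = 130 on Track B, clearing the 120-point T1 bar.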
Claude Code Prompt: paste this with your verified master CSV
[Drag in your verified lead CSVs from MillionVerifier -- both Track A (Sales Nav) and Track B (Crunchbase) OK-only files] These two files have different column names. Read both files first and show me the headers before scoring. SCORING RULES — apply what's available per track. Read every column header and use ALL available data. TITLE SIGNALS (both tracks): - Founder + CEO combo in title (e.g. "Co-Founder & CEO"): +25 - Founder / Co-Founder: +20 - CEO / Chief Executive: +15 INDUSTRY SIGNALS (both tracks — check "industries" or "Company Industry" column): - SaaS / Software / Tech / IT / AI / Cloud / Data / Automation / Cybersecurity / Fintech / Platform: +30 - Adjacent (Marketing, Advertising, Consulting, E-commerce, Financial Services, HR, Recruiting): +15 COMPANY SIZE (both tracks — check "employee_count_start/end" OR "Employees Min/Max" OR "CB Headcount"): - 11-200 employees: +25 (sweet spot) - 2-10 employees: +15 (small but viable) - 201-500 employees: +10 (bigger, still viable) - 500+: +0 LOCATION (both tracks — check "location" or "Location" column): - English-speaking market (US, UK, Canada, Australia, Ireland, NZ): +10 - Any other location with data: +3 LEGITIMACY (both tracks): - Has company website: +5 - Has phone number: +5 FUNDING SIGNALS (Track B only — check "Total Funding", "Last Funding Type", "Last Funding Date", "Estimated Revenue"): - Venture-backed Series A or B: +40 - Early Stage Venture: +30 - Seed: +15 - Estimated $1-10M revenue: +25 - $10M+ revenue: +15 - Raised in last 12 months: +20 - Raised in last 24 months: +10 - Total funding $1-50M: +10 - Growth Score High: +10 | Medium: +5 TIER THRESHOLDS (track-relative): Track A (Sales Nav — max ~100 points): - T1 (90+): Tech founder + right company size + English-speaking market - T2 (50-89): Good signals but missing one or two dimensions - T3 (below 50): Weak signals or missing data Track B (Crunchbase — max ~140 points): - T1 (120+): Full ICP match with funding confirmation - T2 (80-119): Solid 
match, weaker funding or size signals - T3 (below 80): Volume play Target distribution: ~300-650 T1 (dossier), ~1,500-2,500 T2 (segment), ~500-1,000 T3 (volume). If T1 exceeds 700, raise thresholds. If under 200, lower them. Please: 1. Score every lead across both files using ALL available columns 2. Add Score, Tier, and Track (A or B) columns 3. Update each CSV in place (keep them separate -- do NOT combine) 4. Print: total per track, T1/T2/T3 counts per track and combined, score distribution, sample T1 leads from each track
This is the most important step. Both tracks can produce T1 leads. Track A uses industry + headcount + location + title. Track B adds funding + revenue on top. Review the T1 list manually before spending credits on dossier research.
Output: Both CSVs updated in place with Score and Tier columns added. Keep them separate -- Track A and Track B stay as individual files. Filter by Tier within each file to work with each group.
Phase 6: Tier 1 Deep Research
Steps 10-12 -- Deep research for your 634 T1 leads
What to look for in Jamie's T1 research

Plateau signals: "hiring freeze" in blog, flat team size on LinkedIn, job posts removed, founder posting about "refocusing." These are personalization gold -- "I noticed [Company] seems to be at an inflection point..."

Steps 10-11 COMPLETE (Mar 6, 2026)

634 T1 leads enriched with FireCrawl + Exa. Results:

Track B (263 T1): 99% hooks, 27% press, 37% podcasts, 26% hiring, 82% LinkedIn posts, 66% website signals

Track A (371 T1): 98% hooks, 12% press, 16% podcasts, 60% LinkedIn posts

Cost: ~2,074 FireCrawl credits + ~$9.51 Exa. Track B produces richer signals (funded companies have more press/content).

Step 12 COMPLETE (Mar 7, 2026) -- 95 Deep Dossiers

Sherlock ran for ~12 hours on Mac Mini, completing 95 deep-dive dossiers (~7.5 min each). Stopped early -- full 634 would have taken 3+ days, and the podcast invite template works without heavy personalization. The 95 dossiers cover the highest-value T1 leads.

Dossiers location: ~/Downloads/Jamie-Lead-Gen/6a-Sherlock-Dossiers/ (95 markdown files, copied from Mini)

Key learning: Deep dossiers are overkill for podcast invite outreach. FireCrawl + Exa hooks from Steps 10-11 give enough for a one-liner. Reserve Sherlock deep dives for top 50-100 leads only -- not the full T1 pool.

10 FireCrawl -- Website Intelligence
FireCrawl · Claude Code

What this does: For each T1 lead's company website, FireCrawl scrapes the site and extracts structured signals -- hiring activity, team size, blog freshness, product updates. Claude Code runs the script and writes the results back to your CSV.

How it works (what Claude Code does behind the scenes):

1. Map the site -- FireCrawl's /map endpoint gets all pages on the domain in one call (1 credit). This finds the about, careers, blog, and team pages automatically.

2. Scrape key pages -- From the sitemap, it picks the 2-3 most useful pages (about/team, careers/jobs, blog/news) and scrapes them as markdown (1 credit each).

3. Extract signals -- Claude reads the scraped content and extracts: current team size, open job postings (hiring = growth), blog last updated date (stale = plateau), product launches, and any "refocusing" language.

4. Write to CSV -- Adds columns to your T1 file: Hiring_Active (yes/no/unknown), Blog_Last_Updated, Team_Size_Estimate, Website_Signals (free text summary).

Rate limits: FireCrawl allows ~3 requests/second. The script auto-throttles. For 300 T1 leads at 2-3 pages each, expect ~15-30 minutes runtime.
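Two of the extraction steps -- picking key pages from the /map sitemap and flagging hiring activity in the scraped markdown -- can be sketched independently of the API calls. The keyword lists are illustrative assumptions, not the exact heuristics the generated script will use:

```python
KEY_PAGE_HINTS = ("about", "team", "careers", "jobs", "blog", "news")
HIRING_PHRASES = ("we're hiring", "open roles", "open positions", "join our team")

def pick_pages(sitemap_urls, limit=3):
    """From the /map result, keep the 2-3 most useful pages; fall back to the homepage."""
    hits = [u for u in sitemap_urls if any(h in u.lower() for h in KEY_PAGE_HINTS)]
    return hits[:limit] if hits else sitemap_urls[:1]

def hiring_signal(scraped_markdown):
    """Return "yes" if hiring language found, "no" if pages scraped but no hiring,
    "unknown" if nothing useful was scraped -- matching the Hiring_Active column spec."""
    text = (scraped_markdown or "").lower()
    if not text.strip():
        return "unknown"
    return "yes" if any(p in text for p in HIRING_PHRASES) else "no"
```

This keeps the credit spend predictable: one /map call plus at most `limit` /scrape calls per company, exactly as the step list above describes.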
Claude Code Prompt: paste this to run FireCrawl + Exa on ALL T1 leads (combined batch)
[Drag in your tiered CSV files -- both Track A and Track B with Score and Tier columns] I need you to build a Python script that enriches ALL T1 leads with FireCrawl (website intelligence) and Exa (founder/company intelligence) in one batch. Here's the spec: CRITICAL: Both FireCrawl and Exa APIs require a User-Agent header or Cloudflare blocks with 403 error code 1010. Add "User-Agent": "DopamineDigital/1.0" to ALL API requests. FOR EACH T1 LEAD, DO: 1. FIRECRAWL - Website scraping: a) POST /map to get sitemap (1 credit). Find pages with "about", "team", "careers", "jobs", "blog", "news" in URL b) POST /scrape on 2-3 key pages as markdown (1 credit each). Fallback to homepage if no key pages c) Extract from scraped text: - Hiring_Active: "yes" (hiring keywords found), "no" (company pages found but no hiring), "unknown" (no useful pages) - Open_Roles_Count: count of role-type words near hiring context - Website_Signals: one-line summary of what was found 2. EXA - Three searches per lead: a) Founder search: "{Name} {Company} founder" -- type: "auto", numResults: 5, contents.text.maxCharacters: 500 b) Company search: "{Company} startup" -- same params c) Social search: "{Name} {Company}" with includeDomains: ["linkedin.com", "twitter.com", "x.com", "medium.com", "substack.com", "youtube.com"] d) Extract from combined results: - Podcast_Appearances: episode titles found (or "none_found") - Recent_Press: funding/launch articles (or "none_found") - Personalization_Hook: best one-liner for email opening. Priority: press > LinkedIn post > podcast > company page - Hook_Source_URL: where the hook came from - LinkedIn_Posts_Found: count of LinkedIn post results API DETAILS: - FireCrawl: POST https://api.firecrawl.dev/v1/map and /scrape. Auth: "Authorization: Bearer $FIRECRAWL_API_KEY" - Exa: POST https://api.exa.ai/search. 
Auth: "x-api-key: $EXA_API_KEY" - Both keys in ~/.env - MUST include "User-Agent: DopamineDigital/1.0" on ALL requests (Cloudflare blocks without it) - Rate: 0.5s pause for FireCrawl, 1.0s pause for Exa. Auto-retry on 429/403 with 30s wait. SCRIPT BEHAVIOR: - Process Track B first (richer data), then Track A - Run with python3 -u (unbuffered) so log file updates in real time - Print progress per lead: [N/total] name @ company + key findings - Print summary every 25 leads - Update each CSV in place -- add 10 new columns to T1 rows, leave T2/T3 rows blank - Handle errors gracefully -- log and skip, don't crash the batch - Estimated runtime: ~15 seconds per lead, ~2.5 hours for 600+ T1 leads Run it in the background: python3 -u script.py > enrichment.log 2>&1 &
This is a combined batch -- FireCrawl and Exa run together for each lead. Faster and simpler than running them as separate steps. Always check the first 5-10 results in the log before walking away.
Output: Both CSVs updated with 10 new columns: Hiring_Active, Open_Roles_Count, Website_Signals, Podcast_Appearances, Recent_Press, Personalization_Hook, Hook_Source_URL, LinkedIn_Posts_Found, Blog_Last_Updated, Team_Size_Estimate. Only T1 rows populated.
11 Exa -- Founder Intelligence + Personalization Hooks (skip if done in Step 10)
ExaClaude Code

What this does: Exa is an AI-powered web search. It finds things Google misses -- podcast appearances, niche press coverage, conference talks, and recent company news. This is where your best personalization hooks come from.

How it works:

1. Search for the founder -- Exa neural search for "[Founder Name] [Company Name]" finds podcast episodes, interviews, LinkedIn posts, and press mentions.

2. Search for the company -- Second search for "[Company Name] launch OR funding OR pivot OR growth" finds recent news, product announcements, and pivot signals.

3. Extract hooks -- Claude reads the Exa results (titles + URLs + text snippets) and pulls out the best personalization angles: "I heard your episode on [podcast]", "Saw [Company] just launched [feature]", "Congrats on the Series A".

4. Write to CSV -- Adds columns: Podcast_Appearances, Recent_Press, Personalization_Hook (the best opening line), Hook_Source_URL (link to the source).

Best hooks by type: Podcast appearance ("loved your take on X") > Press/conference ("saw the announcement about Y") > Product launch ("the new feature Z looks interesting") > Generic funding ("congrats on the raise"). The more specific, the higher the reply rate.
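The extraction priority the prompts use (press/funding > LinkedIn posts > podcasts > company page > fallback) can be sketched as a ranked pick over candidate hooks. Field names here are illustrative, not the actual script's:

```python
# Extraction priority from the Step 10/11 prompts; lower index = better hook
HOOK_PRIORITY = ["press", "linkedin_post", "podcast", "company_page", "fallback"]

def pick_best_hook(candidates):
    """candidates: list of {"source", "hook", "url"} dicts built from Exa results."""
    if not candidates:
        return {"Personalization_Hook": "none_found", "Hook_Source_URL": ""}
    best = min(candidates, key=lambda c: HOOK_PRIORITY.index(c["source"]))
    return {"Personalization_Hook": best["hook"], "Hook_Source_URL": best["url"]}
```

Keeping the priority in one list makes it trivial to reorder per client without touching the selection logic.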
Already handled: The combined batch script from Step 10 runs Exa alongside FireCrawl for each lead. If you used the Step 10 prompt, Exa columns are already in your CSVs. This step only applies if you ran FireCrawl separately and need to add Exa data afterwards.
Claude Code Prompt: run Exa ONLY (if not already done in Step 10 combined batch)
[Drag in your CSVs -- should already have FireCrawl columns from Step 10] Run Exa research on all T1 leads that don't have Personalization_Hook data yet. For each: THREE Exa searches (MUST include "User-Agent: DopamineDigital/1.0" header or get 403): 1. "{Name} {Company} founder" -- type: "auto", numResults: 5 2. "{Company} startup" -- finds funding news, press 3. "{Name} {Company}" with includeDomains: ["linkedin.com", "twitter.com", "x.com", "medium.com", "substack.com"] -- finds social posts Extract: Podcast_Appearances, Recent_Press, Personalization_Hook (best opener), Hook_Source_URL, LinkedIn_Posts_Found Hook priority: press/funding news > LinkedIn posts > podcast mentions > company page > fallback from CSV data Fallback hooks (when Exa finds nothing): - Has recent funding? -> "Congrats on the recent raise" - Hiring from FireCrawl? -> "Noticed you're scaling the team" - Tech + right size? -> "Building a [industry] company to [headcount] people is no small feat" API: POST https://api.exa.ai/search. Auth: x-api-key: $EXA_API_KEY. Cost: ~$0.015/lead (3 searches).
Exa finds small companies too -- LinkedIn profiles, posts, and niche press. The key is using type "auto" and including the User-Agent header. Without it, every request returns 403.
Output: Both CSVs updated with 5 Exa columns: Podcast_Appearances, Recent_Press, Personalization_Hook, Hook_Source_URL, LinkedIn_Posts_Found.
12 Sherlock Deep Research -- OpenClaw Agent (Opus)
Sherlock (Opus)OpenClawMac Mini

What this does: Sherlock is a deployed OpenClaw agent running Opus on the Mac Mini. It gets ALL the enrichment data from Steps 10-11 as starting context, then uses its own web search, browser, and scraping tools to go deeper. It doesn't rediscover what Exa already found -- it follows threads, reads podcast transcripts, discovers pivot stories, corrects wrong data, and writes genuine intelligence briefs.

Proven results (Mar 6, 4 deep dives completed):

Matthew Peters / Envive AI (ICP 8.3/10): Corrected CEO title (he's Chief Architect), found ELMo paper (18,000+ citations), identified "invisible co-founder" pain signal. 14 sources, 12/15 searches.

Pratap Ranade / Arena (ICP 8.5/10): Discovered $62M double-raise pivot, Palantir acquisition backstory, ~$25.7M revenue (not in our data), active Substack. 15/15 searches.

Ben Borton / PodPlay (ICP 8.0/10): $8M Series A, spun out of PingPod ($19M Sequoia Heritage). 1M+ users, 200+ venues. Done 4+ niche podcasts but zero general founder shows -- media gap flagged.

Rob Hayden / Renew (ICP 8.8/10): $33M raised, created "retention management" category. Only 2 media appearances despite funding -- enormous media gap. Soundbite identified: "Renters don't churn from renting, they churn from relationships."

How it works:

1. Feed Sherlock the enriched lead data -- All CSV columns (name, company, title, funding, headcount, industry) PLUS the FireCrawl signals (hiring, website intel) PLUS the Exa signals (press, podcasts, hooks, LinkedIn posts). Sherlock starts from a strong position.

2. Sherlock investigates independently -- Using web_search (Brave API), web_fetch, and browser tools, Opus decides what to research. It might search for a founder's podcast appearance, scrape the episode page, read what they said, then search for their company's recent pivot. Each lead gets a unique research path.

3. Sherlock writes the brief -- A full intelligence memo saved as markdown: executive summary, ICP fit score table, company profile with pivot history, decision-maker profile with career timeline, pain signals with confidence levels, podcast fit assessment with objection reframes, recommended approach with specific hooks.

4. Batch script collects results -- Extracts summary columns (Email_Opening, Outreach_Angle, ICP_Fit_Score, Plateau_Score, Pain_Signal) from each brief and writes them back to the CSV.
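The base64 trigger that avoids shell-quoting issues can be sketched like this -- encode the message, ship it over SSH, decode to a temp file. Slug logic, paths, and message wording are simplified; the real batch script may differ:

```python
import base64
import re

def build_sherlock_command(lead: dict, mode_budget: str) -> str:
    """Build the SSH command that triggers a Sherlock run for one lead (Step 12)."""
    slug = re.sub(r"[^a-z0-9]+", "-", lead["company"].lower()).strip("-")
    msg = (
        "Lead Deep-Dive mode. Use enrichment data as starting point. "
        f"{mode_budget} LEAD: {lead['name']}, {lead['title']} at {lead['company']}."
    )
    # Base64 means the message survives two layers of shell quoting untouched
    b64 = base64.b64encode(msg.encode()).decode()
    return (
        f"ssh mini 'echo {b64} | base64 -d > /tmp/sherlock-msg-{slug}.txt && "
        f"openclaw --profile sherlock agent --agent main --timeout 300 --json "
        f"--session-id lead-{slug} -m \"$(cat /tmp/sherlock-msg-{slug}.txt)\" && "
        f"rm -f /tmp/sherlock-msg-{slug}.txt'"
    )
```

Each company slug doubles as the session ID, so every lead gets a fresh 15-search budget.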

Scale tiers (DEPLOYED -- running now):

Mode | Scope | Depth | Runtime
Deep dive | Top 50 per track (100 total) | Full 15-search investigation, complete dossier (~3 min each) | ~5 hrs
Quick scan | Next 150 per track (300 total) | 5 searches, executive summary + top pain signals + email opener (~1 min) | ~5 hrs
Synthesis | Remaining ~234 T1 | No web searches -- analyzes existing FireCrawl + Exa data into brief format | ~1 hr
Total | 634 T1 leads | Autonomous overnight batch on dedicated Mac Mini | ~11 hrs
Claude Code Prompt: build and run the Sherlock batch researcher
[Drag in your enriched CSV files -- both Track A and Track B with FireCrawl + Exa columns from Step 11] Build sherlock_batch.py -- a script that feeds each T1 lead to the Sherlock OpenClaw agent on Mac Mini for deep research. Sherlock runs Opus with web search, browser, and scraping tools. TRIGGERING SHERLOCK (base64 encoding to avoid shell quoting issues): 1. Base64-encode the full lead message 2. SSH to mini, decode to temp file, pass via $(cat file) 3. Each lead gets a unique --session-id for a fresh session (15 web searches per session) ssh mini 'export PATH="$HOME/.nvm/versions/node/v22.22.0/bin:$PATH" && \ echo [BASE64_MSG] | base64 -d > /tmp/sherlock-msg-[slug].txt && \ openclaw --profile sherlock agent --agent main --timeout 300 --json \ --session-id lead-[company-slug] \ -m "$(cat /tmp/sherlock-msg-[slug].txt)" && \ rm -f /tmp/sherlock-msg-[slug].txt' MESSAGE FORMAT (what Sherlock receives): "Lead Deep-Dive mode. Here is everything from our enrichment pipeline. Use as starting point -- do not waste searches rediscovering what we have. [budget instruction based on mode] LEAD DATA: Name, Title, Company, Website, Email (verified/none), LinkedIn, Location, Industry, Headcount FUNDING/COMPANY DATA: Revenue, Total Funding, Last Funding Amount/Type/Date, Founded, Growth Score ENRICHMENT SIGNALS: Hiring_Active, Open_Roles_Count, Website_Signals, Podcast_Appearances, Recent_Press, Personalization_Hook, Hook_Source_URL, LinkedIn_Posts_Found YOUR MISSION: Investigate what [first name] actually cares about, what [Company] is going through right now, pain signals, and why they would take a meeting about appearing on a business growth podcast for venture-backed founders. Write full intelligence brief with sources and confidence levels. Save to shared/sherlock/ as markdown." SCALE MODES: - --test N: Test on N leads (default 5) - --deep N: Full deep dive on top N T1 leads (by Score descending). 15 searches, ~3 min, 300s timeout. 
- --quick N: Quick scan -- MAX 5 searches, concise brief. 120s timeout. - --synth N: No web searches, analyze existing data only. 60s timeout. - --track A|B|both: Which CSV to process AFTER EACH LEAD: 1. Read brief from Mac Mini (~/.openclaw-sherlock/workspace/shared/sherlock/) matching by company slug then name slug 2. Copy brief to local ~/Downloads/sherlock-briefs/ 3. Extract summary fields via regex: ICP_Fit_Score, Email_Opening, Outreach_Angle, Plateau_Score, Pain_Signal 4. Write back to CSV. Auto-save progress every 10 leads. TIMEOUT RECOVERY: If a lead times out, send a follow-up message to the same session: "Continue and finish the brief. Save to shared/sherlock/ as markdown." PREREQUISITES: OpenClaw v2026.3.2+ on Mac Mini (npm update -g openclaw). Sherlock profile at ~/.openclaw-sherlock/. Gateway running. RECOMMENDED BATCH ORDER: python3 -u sherlock_batch.py --deep 50 --track B # highest-value leads first python3 -u sherlock_batch.py --deep 50 --track A python3 -u sherlock_batch.py --quick 150 --track B python3 -u sherlock_batch.py --quick 150 --track A Or wrap in a shell script and run overnight: nohup bash sherlock-full-run.sh > ~/Downloads/sherlock-full-run.log 2>&1 &
Sherlock independently investigates each lead -- corrects wrong titles, discovers pivot stories, reads podcast transcripts, finds revenue data not in any database. Each brief has 10-15 cited sources. Skips leads that already have a Sherlock_Brief value in the CSV (use --no-skip to force re-research).
Output: Markdown intelligence briefs on Mac Mini + local ~/Downloads/sherlock-briefs/. CSV columns added: Sherlock_Brief, Email_Opening, Outreach_Angle, ICP_Fit_Score, Plateau_Score, Pain_Signal. T1 leads are now fully researched -- ready for campaign build (Phase 7).
Phase 7: Campaign Build
Steps 13-14 -- Build campaign CSVs + LinkedIn upload

Sending Infrastructure: Ready to Go

6 warmed domains active since January 2026, already generating replies. 18 sending accounts across 6 domains = 540 emails/day max capacity. Monitor spam scores weekly in Instantly -- pull back volume on any domain that dips below 95%.

Campaign math (18 sending accounts x 30/day = 540 emails/day max)

T1: ~100-150 at 15-25% = 15-37 replies. T2: ~350-400 at 5-10% = 17-40 replies. T3: ~400-500 at 2-5% = 8-25 replies. Total: 40-100 conversations. Ramp: Week 1 at 10/day/account (180/day), Week 2 at 20/day/account (360/day), Week 3+ at 30/day/account (540/day). Each lead gets a 3-email sequence over 7 days = actual runtime is ~3-4 weeks per batch.
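The send-capacity arithmetic above is easy to sanity-check in code (a sketch using this playbook's numbers; the function names are illustrative):

```python
ACCOUNTS = 18  # 6 domains x 3 sending accounts each

def weekly_ramp(per_account_caps):
    """Total emails/day across all accounts for each ramp week."""
    return [ACCOUNTS * cap for cap in per_account_caps]

def batch_runtime_weeks(total_leads, daily_capacity, sequence_days=7):
    """Rough weeks to clear a batch: fill time plus the trailing sequence window."""
    fill_days = total_leads / daily_capacity
    return (fill_days + sequence_days) / 7
```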

Step 13 COMPLETE (Mar 13, 2026) -- 10 Email Campaigns (3,829 Leads) + 4 LinkedIn Campaigns (1,575 Leads)

All 10 email campaigns ACTIVE in Instantly and 4 LinkedIn campaigns in HeyReach. T1 was split into Personalized (leads with Sherlock/Haiku hooks) and Non-Personalized (generic template) for both SaaS and Funded segments. Claude Haiku generated 69 personalized hooks from Sherlock dossiers; rule-based fallback covered 383 more. ASCII sanitized, names/companies cleaned of LinkedIn artifacts, cross-file email dedup applied. 18 whitelisted sending accounts distributed across 10 campaigns. Files in ~/Downloads/Jamie-Lead-Gen/10-Instantly-Campaigns/final/

13 Build Campaign CSVs
Instantly (6 Active Domains)HeyReach

Take verified, tiered leads from Phase 5 and Sherlock dossiers from Phase 6. Run through the rebuild_csvs.py pipeline which: (1) generates personalized hooks via Claude Haiku from dossier data, (2) applies rule-based fallback hooks for leads without dossiers, (3) sanitizes all content to ASCII-only, (4) cleans LinkedIn artifacts from names and companies, (5) deduplicates emails across all files, and (6) outputs both Instantly email CSVs and HeyReach LinkedIn CSVs.

Personalization pipeline (T1 only):

1. Claude Haiku + Sherlock Dossiers (69 leads): generate_hooks.py sends each dossier's executive summary to Claude Haiku, returns subject_hook + personalization_line pairs saved to dossier_hooks.json.

2. Rule-Based Pattern Matching (383 leads): Regex patterns extract hooks from funding rounds, industry, company stage, and title keywords for leads without dossiers.

3. No Hook / Generic Template (370 leads): Email uses generic Approach B/C template -- no subject_hook or personalization_line columns populated.
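The rule-based fallback in step 2 can be sketched as ordered pattern checks over the CSV fields. The field names and exact rules here are illustrative -- the production rebuild scripts are not reproduced:

```python
def fallback_hook(lead: dict):
    """Return a grounded personalization line, or None for the generic template."""
    if lead.get("last_funding_type"):
        return f"Congrats on the recent {lead['last_funding_type']}."
    if lead.get("hiring_active") == "yes":
        return "Noticed you're scaling the team."
    if lead.get("headcount", 0) >= 20:
        return f"Building {lead['company']} to {lead['headcount']} people is no small feat."
    return None  # lead falls through to the generic Approach B/C template
```

The ordering matters: funding beats hiring beats headcount because the more specific the signal, the better the line reads.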

Data cleaning pipeline (all tiers):

1. ASCII sanitization: Em dashes to hyphens, smart quotes to straight quotes, accented characters to ASCII equivalents, emojis stripped. Non-ASCII in subject lines triggers spam filters.

2. Name cleaning: Strips LinkedIn profile emojis, country flags, event tags ("[Speaker]", "(Hiring)"), parenthetical aliases.

3. Company cleaning: Removes taglines after dashes/pipes, truncates descriptions over 50 chars at word boundary.

4. Garbage hook filter: 25+ rules catch LinkedIn artifacts ("Name's Post"), dossier note fragments, truncated text, hashtag spam, URL-only content. Failed hooks are removed so the lead falls back to generic template.

5. Cross-file email dedup: T1 emails processed first; any T1 email appearing in T2/T3 files is removed from T2/T3 (lead keeps highest tier placement).
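Steps 1 and 2 of the cleaning pipeline can be sketched in a few lines (a minimal version; the production script's 25+ garbage-hook rules are not reproduced here):

```python
import re
import unicodedata

# Explicit swaps for characters NFKD won't fold cleanly
REPLACEMENTS = {"\u2014": "-", "\u2013": "-", "\u2018": "'", "\u2019": "'",
                "\u201c": '"', "\u201d": '"'}

def to_ascii(text: str) -> str:
    """Em dashes to hyphens, smart quotes straightened, accents folded, emoji dropped."""
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    # NFKD decomposition folds accented chars to their ASCII base letter;
    # anything still non-ASCII (emoji, flags) is stripped
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()

def clean_name(name: str) -> str:
    """Strip LinkedIn artifacts: parentheticals, bracket tags, stray symbols."""
    name = re.sub(r"[\(\[].*?[\)\]]", "", name)  # "(Hiring)", "[Speaker]"
    name = to_ascii(name)
    return re.sub(r"\s+", " ", name).strip()
```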

Email campaign CSVs (Instantly) -- 3,829 leads across 10 campaigns:

Campaign File | Tier | Leads | Personalization
Jamie_Recently_Funded_T1_Personalized.csv | T1 | 100 | Sherlock dossier + Claude Haiku hooks (subject_hook + personalization_line)
Jamie_SaaS_SeriesA_T1_Personalized.csv | T1 | 142 | Sherlock dossier + Claude Haiku hooks (subject_hook + personalization_line)
Jamie_SaaS_SeriesA_T1.csv | T1 | 415 | Rule-based hooks + generic (no personalization_line)
Jamie_Recently_Funded_T1.csv | T1 | 163 | Rule-based funding hooks (no personalization_line)
Jamie_SaaS_SeriesA_T2.csv | T2 | 653 | companyName only
Jamie_Recently_Funded_T2.csv | T2 | 1,079 | companyName only
Jamie_Tech_Founders_T2.csv | T2 | 370 | companyName only
Jamie_SaaS_SeriesA_T3.csv | T3 | 102 | companyName only
Jamie_Recently_Funded_T3.csv | T3 | 282 | companyName only
Jamie_Tech_Founders_T3.csv | T3 | 523 | companyName only
TOTAL (Email) | -- | 3,829 | 10 campaigns, 18 whitelisted accounts, cross-file deduped

LinkedIn campaign CSVs (HeyReach) -- 1,575 leads across 4 campaigns:

Campaign File | HeyReach Campaign | Leads | Notes
Jamie_LinkedIn_Only_T1_Personalized.csv | Jamie - T1 Personalized - Mar 2026 | 73 | Sherlock dossier leads, personalized connection messages
Jamie_LinkedIn_Only_T1.csv | Jamie - T1 LinkedIn - Mar 2026 | 294 | Multi-channel with email (connection request fires first)
Jamie_LinkedIn_Only_T2.csv | Jamie - T2 LinkedIn - Mar 2026 | 1,197 | LinkedIn-only outreach, blank connection note
Jamie_LinkedIn_Only_T3.csv | Jamie - T3 LinkedIn - Mar 2026 | 11 | Low volume, LinkedIn-only
TOTAL (LinkedIn) | -- | 1,575 | 4 campaigns, merged by tier (not segment)

Email CSV columns (camelCase for Instantly auto-mapping):

T1: email, firstName, lastName, companyName, title, linkedin_url, subject_hook, personalization_line, tier

T2/T3: email, firstName, lastName, companyName, title, linkedin_url, tier

LinkedIn CSV columns:

first_name, last_name, company_name, title, linkedin_url, personalization_line

Only 6 columns -- all enrichment columns stripped. HeyReach needs the LinkedIn URL to match profiles. Name/company/title map to HeyReach built-ins ({FIRST_NAME}, {COMPANY}, {POSITION}). The personalization_line column must be mapped as a custom variable during upload -- it becomes {PERSONALIZATION_LINE} in the message editor.
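Stripping an enriched lead down to the six HeyReach columns is a one-line mapping per row. This sketch assumes the enriched dict uses the camelCase email-CSV column names listed above:

```python
HEYREACH_COLUMNS = ["first_name", "last_name", "company_name", "title",
                    "linkedin_url", "personalization_line"]

def to_heyreach_row(lead: dict) -> dict:
    """Map camelCase Instantly columns onto the snake_case HeyReach schema."""
    mapping = {"first_name": "firstName", "last_name": "lastName",
               "company_name": "companyName", "title": "title",
               "linkedin_url": "linkedin_url",
               "personalization_line": "personalization_line"}
    # Everything not in the mapping (email, tier, scores) is dropped
    return {col: lead.get(src, "") for col, src in mapping.items()}
```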

Why 10 email + 4 LinkedIn campaigns: SaaS Series B merged into Series A (same template). Tech Founders have no T1 (promoted to T2). T1 split into Personalized and Non-Personalized variants for both the SaaS and Funded segments, which adds two email campaigns. LinkedIn campaigns are merged by tier, not segment. This avoids tiny campaigns with <30 leads that can't A/B test effectively.
Claude Code Prompt: rebuild all campaign CSVs (if re-running with new data)
[Drag in verified CSV files from ~/Downloads/Jamie-Lead-Gen/5-Verify/ and dossiers from 6a-Sherlock-Dossiers/] Run the full rebuild pipeline: 1. Generate hooks: python3 generate_hooks.py (reads dossiers, calls Claude Haiku, outputs dossier_hooks.json) 2. Rebuild CSVs: python3 rebuild_csvs.py (reads verified CSVs + dossier_hooks.json, outputs 13 final CSVs) The rebuild script handles: - Hook matching (dossier_hooks.json first, then rule-based fallback) - ASCII sanitization (em dashes, smart quotes, accented chars, emojis) - Name cleaning (LinkedIn artifacts, flags, event tags) - Company cleaning (taglines, long descriptions) - Garbage hook filter (25+ rules for bad personalization) - Cross-file email dedup (T1 processed first, dupes removed from T2/T3) - LinkedIn CSV generation (6 columns only, snake_case) Output: ~/Downloads/Jamie-Lead-Gen/10-Instantly-Campaigns/final/ Print: file count, leads per file, total, hooks matched.
Scripts: generate_hooks.py (Claude Haiku hook generation), rebuild_csvs.py (full CSV pipeline). Run from ~/Downloads/Jamie-Lead-Gen/10-Instantly-Campaigns/
14 Upload to Instantly + HeyReach + Airtable
InstantlyHeyReachAirtable

Upload Order

1. Instantly (Email): Upload the 10 email CSVs. Create one campaign per CSV file. Map columns automatically (camelCase names match Instantly fields). Set sequences per the Campaign Playbook -- Approach A for T1 (uses {{subject_hook}} + {{personalization_line}}), Approach B/C for T2/T3.

2. HeyReach (LinkedIn): Upload the 4 LinkedIn CSVs. Create one campaign per CSV. T1 campaigns are multi-channel (LinkedIn connection request fires first, email starts 2-3 days later). T2 campaigns are LinkedIn-only. Max 20 connection requests/day. All templates in Campaign Playbook.

3. Airtable (Master Database): Push all leads to Airtable at the same time as Instantly/HeyReach uploads. Airtable serves as the master lead database for tracking status, replies, and pipeline progression across both channels. Include all columns from the email CSVs plus a channel field (email, linkedin, or both).

LinkedIn CSV Format (HeyReach)

HeyReach CSVs use only 6 columns in snake_case. Custom CSV columns become template variables (e.g. personalization_line becomes {PERSONALIZATION_LINE}). HeyReach also has built-in variables from LinkedIn: {FIRST_NAME}, {COMPANY}, {POSITION}.

first_name, last_name, company_name, title, linkedin_url, personalization_line

All enrichment columns (email, tier, ICP scores, etc.) are stripped -- HeyReach only needs the LinkedIn URL to match profiles and the name/company for templates.

Final Lead Counts

Channel | T1 | T2 | T3 | Total
Instantly (Email) | 820 | 2,102 | 907 | 3,829
HeyReach (LinkedIn) | 367 | 1,197 | 11 | 1,575
Multi-channel coordination: T1 leads appear in BOTH Instantly and HeyReach. LinkedIn connection request sends first (Day 0), email sequence starts Day 2-3. This creates a "surround sound" effect -- they see you on LinkedIn then get the email. HeyReach and Instantly don't sync automatically, so track reply status in Airtable to avoid double-touching responders.
Next: Deploy Campaigns
Email templates, LinkedIn scripts, spam rules, send volumes, A/B tests, campaign ops

CSVs are built. The next step is configuring and launching campaigns in Instantly and HeyReach. The Campaign Playbook is the single source of truth for:

Email templates -- T1 personalized (subject_hook + personalization_line), T2/T3 generic, follow-up sequences

LinkedIn scripts -- blank connection request, follow-up messages after accept

Spam-safe rules -- no links in Email 1, under 100 words, under 50 char subjects, ASCII only

Campaign operations -- 30/account/day hard cap, 6-campaign structure, account allocation, ramp schedule

Email + LinkedIn coordination -- parallel pacing, weekly send plan, channel offset timing

Subject line A/B tests -- personalized subject_hook vs generic variants, split test framework

Open Campaign Playbook →

Channel Partner A: VC Partners & Associates
~200 leads -- VCs have stuck portfolio companies
Why they fit Jamie (with a caveat)

VCs have portfolio companies that are stuck. They need advisors to recommend. But: founders DISTRUST VC recommendations. Jamie needs to build independent credibility first -- podcast appearances, case studies, social proof. VCs are a medium-term play, not immediate.

Outreach Approach
Email Template

Subject: Resource for portfolio companies that have plateaued

Hi {{first_name}},

Quick question, do any of your portfolio companies feel stuck? Growing but not as fast as the model predicted?

I work with venture-backed SaaS founders specifically on this plateau problem. 2-4 week intensive focused on finding the 1-2 strategic blind spots holding them back.

If you ever need an outside advisor to recommend, happy to share my approach.

Channel Partner B: Fractional CMOs, CROs & COOs
~150 leads -- they see the plateau from inside
Why they fit Jamie

Fractional executives working inside SaaS companies see the plateau daily. They handle execution, Jamie handles strategy. Cross-referral opportunity -- zero competition.

Outreach Approach
LinkedIn (Primary)Email
LinkedIn Connection Note

Hi {{first_name}}, I noticed we work with similar companies (venture-backed SaaS, post-PMF). I focus on strategy (why growth stalled), sounds like you focus on {{their_specialty}}. Our clients often need both. Open to a referral relationship?

Channel Partner C: EOS Implementers & Business Coaches
~150 leads -- when process isn't enough, they need strategy
Why they fit Jamie

EOS gives companies process and structure. But sometimes the issue is strategic -- wrong market, wrong positioning, wrong GTM. Jamie is the complement: strategy on top of structure.

Outreach Approach
Email (Primary)
Email Template

Subject: When EOS isn't enough, the strategy layer

Hi {{first_name}},

I keep meeting founders who've implemented EOS but are still stuck. The process is great, but sometimes the issue is strategic. Wrong market, wrong positioning, wrong go-to-market.

That's where I come in. I work with venture-backed SaaS founders on exactly this. Would love to be a referral option for your clients who need strategy, not just structure.

Efficiency Audit: Credit & Tool Usage
Maximize output per dollar spent
Credit Budget Estimate
Tool | Credits Used | Cost | What You Get
Sales Navigator | Included in subscription | ~$99/mo | Unlimited searches + filter access
HeyReach (scraping) | Included in plan | Existing plan | Unlimited Sales Nav URL imports + CSV downloads
Expandi | ~2,000-3,000 lookups | Existing plan | ~1,000-1,500 personal emails found
Prospeo | ~800-1,200 lookups | ~$29/mo (1K credits) | ~300-500 additional emails
Apollo | ~400-600 lookups | ~$109/mo (5K credits incl. 2.5K add-on) | ~100-200 additional emails
Exa | ~200-300 searches | Existing plan | ~30-80 emails + T1 research data
MillionVerifier | ~1,500-2,000 verifications | ~$15-20 one-time | ~1,000+ OK-status emails
Crunchbase Pro | Search + spot-check | ~$49/mo | 1,000 per export, 2,000 exports/mo. Tighten filters to stay under 1K.
FireCrawl | ~100-150 pages | Existing plan | T1 company data

Total incremental cost: ~$100-140 for this entire 1,000-lead build. Most tools are already on existing plans.

Where Credits Are Wasted (and How This Pipeline Prevents It)

Biggest credit waste: Enriching leads that don't fit. Old approach: scrape 8K leads, enrich all of them, then qualify. That burns 8K Expandi credits on leads where 70% get rejected.

This pipeline's fix: Qualify BEFORE enriching (Step 3 before Step 4). Only ~2-3K qualified leads hit Expandi. Saves 5,000+ wasted lookups.

Second biggest waste: Researching leads with dead emails. Old approach: build dossiers, then verify. This pipeline verifies BEFORE tiering (Step 8 before Step 9). No dossier time wasted on bounced emails.

Third waste: Using expensive tools when cheap ones work. Waterfall order matters: Expandi (included) catches 40-60%, then Prospeo (cheap), then Apollo (moderate), then Exa (most expensive per lead). Reverse this order and you spend 3x more.
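The cheap-first waterfall reduces to an ordered loop that stops at the first hit. This sketch uses stub lookup functions; the real calls go to each tool's API:

```python
def waterfall_enrich(lead: dict, tools) -> dict:
    """tools: [(name, lookup_fn)] ordered cheapest-first. Stops on first email found."""
    for name, lookup in tools:
        email = lookup(lead)
        if email:
            # No further (more expensive) tools are charged for this lead
            return {"email": email, "source": name}
    return {"email": None, "source": None}
```

Reversing the tool order changes nothing about coverage -- only about which tool gets billed for the leads every tool could have found.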

File Organization
All pipeline data in ~/Downloads/Jamie-Lead-Gen/
Folder Structure (organized Mar 7, 2026)
Folder | Contents | Key Files
1-Raw-Scrapes/ | HeyReach exports + Sales Nav cleaned CSVs | 5 files
2-Crunchbase-Sources/ | CB company exports, domain lists, Prospeo founder lookups | 13 files
3-Dedup-Clean/ | Deduped master files (original + updated parts) | 5 files
4-Enrich/ | Enriched leads -- CB + SalesNav + combined master | 4 files
5-Verify/ | MillionVerifier OK-only files (campaign-ready emails) | SalesNav_Founders_Verified_OK.csv (1,816), CB_Founders_Verified_OK.csv (1,845)
6-Uncontacted-Ready/ | Uncontacted leads deduped against existing contacts | 2 files
6a-Sherlock-Dossiers/ | 95 deep-dive markdown intelligence briefs from Sherlock | 95 .md files (copied from Mac Mini)
7-Scripts/ | Python scripts, enrichment logs, strategy docs | t1_enrichment.py, filter_jamie_leads.py
8-Old-Tests/ | Early test batches, ICP filtering, rejected leads | 13 files (archival)
9-Airtable-Exports/ | Mastersheet database dumps from Airtable | 2 files
10-Instantly-Campaigns/ | Final campaign CSVs for Instantly + HeyReach | 10 email CSVs (3,829 leads) + 4 LinkedIn CSVs (1,575) + rebuild_csvs.py + generate_hooks.py + push_to_campaigns.py
Complete Execution Checklist
Every step with tools, inputs, and outputs
Full Pipeline
Step | Action | Tool | Input | Output
0 | VERIFY DOMAIN HEALTH | Instantly | 6 active domains | Confirm all scoring 95%+ deliverability
1 | Build 9-12 Sales Nav sub-searches (each under 2,500) | Sales Navigator | Filters from tables above | Search URLs
2 | Scrape each sub-search URL | HeyReach | 7 Sales Nav URLs | 7 CSVs (~8.8K raw, heavy overlap)
3A | Dedup + qualify (remove bad titles/industries) | Claude Code + Google Sheets | Raw CSVs merged | ~2-3K unique qualified leads
3B | Source funded companies from Crunchbase | DataScraper + Crunchbase + Sheets | ICP filters from Step 3B | Funded company CSV with revenue/headcount data
4 | Find personal emails | Expandi | Qualified CSV | ~1,000-1,500 with emails
5 | Fill Expandi gaps | Prospeo | Expandi misses | +200-400 emails
6 | Fill Prospeo gaps | Apollo | Prospeo misses | +100-200 emails
7 | Last-resort search | Exa | Apollo misses | +30-80 emails
7B | Route no-email leads to LinkedIn | HeyReach | No-email CSV | 200-500 LinkedIn-only campaigns
8 | Verify all emails | MillionVerifier | All enriched merged | ~1,000+ OK emails
9 | Tier leads T1/T2/T3 | Claude Code | Verified CSV | 678 T1 / 2,102 T2 / 907 T3
10 | Scrape T1 websites | FireCrawl | T1 CSV | Company data + hiring signals
11 | Company intel search | Exa | T1 CSV | News, podcasts, hooks (629 enriched)
12 | Deep research (top T1) | Sherlock (OpenClaw) | FireCrawl + Exa | 95 deep dossiers (stopped early -- overkill for podcast invite)
13 | Build campaign CSVs + deploy to Instantly | Claude Code + Python + Instantly API | Verified + tiered CSVs + dossiers + Claude Haiku | 10 Instantly campaigns (3,829 leads) + 4 HeyReach campaigns (1,575 LinkedIn). T1 split into Personalized + Non-Personalized for both SaaS and Funded. 18 whitelisted accounts distributed.
14 | Upload LinkedIn CSVs to HeyReach + push all leads to Airtable | HeyReach + Airtable | LinkedIn CSVs + email CSVs | LinkedIn connection sequences (blank note) + Airtable master database
Continue to Campaign Playbook →
Pipeline Audit - What to Fix for the Next Client
Findings from the Jamie pipeline. Every mistake documented so it never happens again.

The 4 Biggest Wastes

  1. Sherlock research depth was too high. 7.5 min/lead x 95 leads = 12 hours of Mac Mini compute. Full intelligence briefs when all we needed for T1 was a one-line podcast hook. Sherlock is valuable (it surfaced the 15 T0 VIPs), but a lighter research pass (~3 min/lead) would cut runtime to ~5 hours and still find T0 candidates + hooks.
  2. Apollo burned 1,000 credits in 40 minutes. No guards existed. Guards were added after the fact. Next time: guards are pre-built into the pipeline script, not bolt-on fixes.
  3. 23% of found emails were invalid. 1,184 emails discovered by enrichment tools got rejected by MillionVerifier. That's wasted reveal credits. A domain-level pre-check would catch disposable/parked domains before spending credits.
  4. 5 remediation scripts were written post-launch. fix_airtable_records.py, repush_missing_to_airtable.py, clean_personalization.py, push_remaining.py, clean_and_manifest.py. Each one exists because a validation wasn't run before pushing. The validation code exists - it just wasn't enforced.
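The domain-level pre-check suggested in item 3 could be sketched as a cheap filter run before any reveal or verification credits are spent. The blocklist here is illustrative; a production version would also do an MX-record lookup:

```python
import re

# Illustrative disposable-domain blocklist -- a real one would be much larger
DISPOSABLE = {"mailinator.com", "guerrillamail.com", "tempmail.com"}

def domain_precheck(email: str) -> bool:
    """True if the address is worth sending to a paid verifier."""
    m = re.fullmatch(r"[^@\s]+@([^@\s]+\.[a-z]{2,})", email.lower())
    if not m:
        return False  # malformed address, skip without spending a credit
    return m.group(1) not in DISPOSABLE
```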
Funnel: Where Leads Dropped
Stage | In | Out | Drop | What Was Wasted
Raw scrape (both tracks) | ~8,800 | 6,180 | 30% | Expected overlap from sub-search design
Title cleanup (post-scrape) | 6,180 | ~5,225 | 15% | Minor cleanup - Sales Nav filters did the heavy qualification
Enrichment (3 tools) | ~5,225 | 5,119 emails | 2% | 1,061 leads with no email found anywhere
MillionVerifier | 5,119 | 3,935 | 23% | 1,184 bad emails = wasted enrichment credits
ICP qualification | 3,935 | 3,633 | 8% | Wrong-fit leads that made it through title filter
Sherlock (T1 only) | 95 run | 69 usable hooks + 15 T0 VIPs | 27% | Research depth too high - full briefs when lighter pass would suffice
Raw to campaign-ready | ~8,800 | 3,829 | 57% | Main leaks: bad emails (23%), Sherlock depth, remediation script overhead
Ordering Mistakes

| What Happened | What Should Happen | Impact |
| --- | --- | --- |
| Sherlock full-depth dossiers (7.5 min/lead x 95) | Lighter research pass (~3 min/lead) that still finds T0 candidates + hooks | ~7 hours saved (5h vs 12h) |
| No manual T1 review before Sherlock | 15-min manual T1 review to flag obvious non-fits before spending compute | Could cut Sherlock run by 20-30% |
| Cross-track dedup (Sales Nav vs Crunchbase) at Step 9 | Cross-track dedup BEFORE enrichment | Unknown overlap enriched twice |
| Waterfall: Expandi (43.6%) then Apollo (60.6%) | Test Apollo first - higher hit rate on this ICP | May reduce total enrichment calls needed |
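The waterfall-reordering claim is easy to sanity-check with arithmetic: every lead hits tool 1, and only tool-1 misses flow to tool 2. This assumes hit rates are independent of ordering, which is a simplification - the observed 43.6%/60.6% rates came from Expandi running first.

```python
def expected_calls(n_leads: int, first_hit: float) -> float:
    """Total reveal calls for a two-tool waterfall: all leads hit tool 1,
    and only tool-1 misses go to tool 2."""
    return n_leads + n_leads * (1 - first_hit)

n = 6180  # post-dedup lead count from Phase 2
expandi_first = expected_calls(n, 0.436)  # ~9,666 total calls
apollo_first = expected_calls(n, 0.606)   # ~8,615 total calls
```

Under that assumption, putting the higher-hit-rate tool first saves roughly a thousand calls on this list size - which is the "may reduce total enrichment calls" row made concrete.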
Over-Engineered
  • T2 vs T3 campaigns use identical templates. The only difference is account count and priority. For a podcast invite, collapse T2+T3 into one "Volume" campaign per segment. Fewer campaigns = less management overhead.
  • FireCrawl ran on all 634 T1 leads. Generated 10 columns (hiring signals, blog freshness, team size). Only the personalization hook was used downstream. Run FireCrawl on T0 candidates only (15-30 leads).
  • 3 separate CSV-building scripts. build_campaigns.py had wrong column names, so rebuild_csvs.py was written to fix it, then add_website_to_csvs.py added more columns. One consolidated script with validation gates would have prevented all three.
  • T0 HTML dossier pages for 15 leads. Each got a full deployed web page. For leads where Jordan is doing the outreach himself, a markdown brief is sufficient - save the HTML pages for client-facing deliverables.
Should Be Automated (Wasn't)
  • Sending account whitelist check. push_to_campaigns.py auto-distributes accounts but doesn't filter against the whitelist. 2 non-whitelisted accounts got assigned. Build the whitelist into the script.
  • Workspace daily cap verification. Instantly has NO automatic cap. The script should compute sum(daily_limits) and warn if it exceeds total_accounts * 30.
  • Cross-file email dedup as a gate. The check exists as a function but wasn't enforced before push. Make it a blocking validation - script refuses to push if dupes exist.
  • Forwarded message date in Email 1B. The "Sent:" date must be day-before-launch. Currently requires manual edit. Auto-inject with datetime.now() - timedelta(days=1).
  • Non-responder LinkedIn follow-up. After 10-14 days, pull non-responders from Instantly and upload to HeyReach for LinkedIn touch. Designed but never built.
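The first four items above collapse into one blocking pre-push validator. This is a minimal sketch, not the real push_to_campaigns.py - the whitelist contents, field names, and lead/account shapes are illustrative assumptions.

```python
from datetime import datetime, timedelta

WHITELIST = {"outreach1@tryunstuckyourbusiness.com"}  # illustrative approved senders
PER_ACCOUNT_CAP = 30  # Instantly enforces no workspace cap itself

def validate_before_push(accounts: list[dict], leads: list[dict]) -> None:
    """All three checks are blocking gates: any failure raises and nothing is pushed."""
    # Gate 1: every assigned sending account must be whitelisted
    bad = [a["email"] for a in accounts if a["email"] not in WHITELIST]
    if bad:
        raise ValueError(f"Non-whitelisted accounts assigned: {bad}")
    # Gate 2: workspace daily cap check
    total = sum(a["daily_limit"] for a in accounts)
    if total > len(accounts) * PER_ACCOUNT_CAP:
        raise ValueError(f"Daily send total {total} exceeds {len(accounts) * PER_ACCOUNT_CAP}")
    # Gate 3: cross-file email dedup as a hard stop, not a warning
    emails = [l["email"] for l in leads]
    if len(emails) != len(set(emails)):
        raise ValueError("Duplicate emails across campaign files - refusing to push")

def forwarded_sent_date() -> str:
    """Auto-inject the day-before-launch 'Sent:' date for Email 1B."""
    return (datetime.now() - timedelta(days=1)).strftime("%A, %B %d, %Y")
```

Wiring this in as the first call in the push script - raise on failure, push only on a clean pass - is what turns the existing validation code from "exists" into "enforced."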

v2 Pipeline - Streamlined for Next Client (8 Steps)

Same output quality. Half the steps. No remediation scripts.

v2: The Streamlined Pipeline

| Step | Action | Tool | Key Change from v1 |
| --- | --- | --- | --- |
| 1 | ICP + Segment Design: Sales Nav sub-searches + Crunchbase filters. Explicit exclusion review before scraping. | Sales Nav + Crunchbase | Add pre-scrape exclusion checklist (Financial Services, CXO, etc.) |
| 2 | Scrape (parallel tracks): Sales Nav via HeyReach + Crunchbase via DataScraper. Cross-track dedup immediately. | HeyReach + DataScraper | Cross-track dedup BEFORE enrichment, not after |
| 3 | Title Cleanup + Cross-Track Dedup: quick title filter (remove non-founders that slipped through Sales Nav). Dedup Sales Nav vs Crunchbase leads before enrichment. | Claude Code + Python | Cross-track dedup moved earlier. Sales Nav filters handle main qualification. |
| 4 | Waterfall Enrichment: Prospeo (or Apollo) first, then backfill. Domain pre-check before reveals. Hard credit caps enforced in script. | Prospeo + Apollo | Reorder based on hit-rate data. Domain pre-check catches bad domains before credits are spent. |
| 5 | Verify: MillionVerifier on all found emails. Track which enrichment source had the highest invalid rate. | MillionVerifier | Add per-source quality metrics to inform future waterfall ordering |
| 6 | Tier + Manual T1 Review: score into T0/T1/Volume. 15-minute manual T1 review is a hard gate before Step 7. | Claude Code | Collapse T2+T3 into "Volume." Manual review gate is non-negotiable. |
| 7 | Sherlock (lighter depth) + Exa: Sherlock runs on all T1 at reduced depth (~3 min/lead instead of 7.5), surfacing T0 VIP candidates and producing hooks. Exa backfills podcast/press data for leads Sherlock missed. Manual T1 review first. | Sherlock + Exa | Same coverage, half the runtime: ~5 hours instead of 12. Lighter research pass still finds T0s and hooks. |
| 8 | Build + Validate + Deploy (one script): CSV build, cross-file dedup, whitelist audit, workspace cap check, push to Instantly + HeyReach + Airtable. All validations are blocking gates. | Python + Instantly API + HeyReach API | One script replaces 3 build scripts + 5 fix scripts. Validations are gates, not afterthoughts. |
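The per-source quality metric in Step 5 is a small tally: group MillionVerifier verdicts by which enrichment tool produced the email, so the next waterfall can be ordered by observed invalid rate. A hedged sketch follows - the record shape and the "ok" verdict string are assumptions about how results are stored, not the tools' actual output formats.

```python
from collections import Counter

def invalid_rate_by_source(results: list[dict]) -> dict[str, float]:
    """Fraction of each enrichment source's emails that failed verification."""
    total, invalid = Counter(), Counter()
    for r in results:  # r = {"source": tool name, "verdict": verifier result}
        total[r["source"]] += 1
        if r["verdict"] != "ok":
            invalid[r["source"]] += 1
    return {s: invalid[s] / total[s] for s in total}

sample = [
    {"source": "expandi", "verdict": "ok"},
    {"source": "expandi", "verdict": "invalid"},
    {"source": "apollo", "verdict": "ok"},
    {"source": "apollo", "verdict": "ok"},
]
rates = invalid_rate_by_source(sample)  # {"expandi": 0.5, "apollo": 0.0}
```

Feeding these rates back into Step 4's ordering closes the loop: the waterfall order stops being a guess and becomes a per-ICP measurement.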

Estimated time savings: v1 took ~5 days end-to-end with fixes. v2 targets 2-3 days with zero remediation scripts. The pipeline script for Step 8 should be built once and reused across all clients - only the ICP filters and templates change.