Weekly cost-per-lead trends by newsletter and agency. Use filters to isolate time periods.
CPL by Week
Jake's original Mar 1–12 scoring: Excellent / Good / Medium / Bad. Target = Excellent + Good only.
Quality Breakdown by Newsletter
Target Rate Summary
| Newsletter | Agency | Scored | Target (E+G) | Target Rate | Advantage |
|---|---|---|---|---|---|
| All (blended) | TFM | 206 | 69 | 33.5% | 1.6x vs GL |
| All (blended) | GL | 145 | 30 | 20.7% | |
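The overall cards above are straight arithmetic on the scored counts. A minimal sketch reproducing them:

```python
# Sketch: reproduce the overall target-rate summary from the scored counts.
# Counts come from the table above; round() matches the dashboard's display.

def target_rate(target: int, scored: int) -> float:
    """Share of scored subscribers rated Excellent or Good, as a percent."""
    return round(100 * target / scored, 1)

tfm = target_rate(69, 206)        # TFM: 69 of 206 scored -> 33.5
gl = target_rate(30, 145)         # GL: 30 of 145 scored -> 20.7
advantage = round(tfm / gl, 1)    # -> 1.6
print(tfm, gl, advantage)
```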
Scoring Definitions (Mar 1–12 system)
- Excellent: Senior ICP role at verified health system
- Good: 1–2 criteria met, or all 3 at a low level. Residents count.
- Medium: Not ICP, but OK on the list
- Bad: Did not fit any criteria
- Target: Excellent + Good only
Jake's March methodology: binary Good/Bad with a separate ProvOrg (provider org email) category for unscored subs.
IW Quality (since 3/1)
CW Quality (since 3/12)
Good Rate Comparison
| Newsletter | Agency | Good | Bad | ProvOrg | Scored (G+B) | Good Rate | Generous (incl ProvOrg) |
|---|---|---|---|---|---|---|---|
| IW | TFM | 41 | 14 | 23 | 55 | 74.5% | 82.1% |
| IW | GL | 42 | 6 | 22 | 48 | 87.5% | 91.4% |
| CW | TFM | 59 | 30 | 14 | 89 | 66.3% | 70.9% |
| CW | GL | 47 | 10 | 23 | 57 | 82.5% | 87.5% |
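Both rate columns derive directly from the raw counts. A small sketch recomputing them, where "generous" counts every ProvOrg sub as good and adds it to the denominator:

```python
# Sketch: recompute the Good Rate and Generous columns from the raw counts.

def quality_rates(good: int, bad: int, prov: int) -> tuple[float, float]:
    """(strict good rate, generous rate with ProvOrg counted as good), in %."""
    scored = good + bad
    strict = round(100 * good / scored, 1)
    generous = round(100 * (good + prov) / (scored + prov), 1)
    return strict, generous

rows = [
    # (newsletter, agency, good, bad, provorg) -- from the table above
    ("IW", "TFM", 41, 14, 23),
    ("IW", "GL",  42,  6, 22),
    ("CW", "TFM", 59, 30, 14),
    ("CW", "GL",  47, 10, 23),
]
for nl, agency, good, bad, prov in rows:
    strict, generous = quality_rates(good, bad, prov)
    print(f"{nl} {agency}: {strict}% good, {generous}% generous")
```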
Why This Methodology Advantages GL
- GL collects first-party (1P) data at a lower rate (23.2% vs TFM's 26%).
- More GL subscribers fall into the "email domain only" bucket — they can't be scored as Bad because there's no form data to evaluate.
- TFM's form-based creative captures more scorable data, which underpins its quality advantage but also means fewer of its subs land in the ProvOrg-only category (which is always counted favorably).
- This metric rewards lower 1P data collection. The agency that knows less about its subscribers looks better.
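The incentive problem can be shown with a toy simulation. The numbers below are hypothetical (same true quality and same ProvOrg share for both agencies; only the 1P collection rate differs), and the function name is illustrative, not part of any methodology:

```python
# Sketch of the incentive problem: two agencies deliver the same underlying
# subscriber mix, but the one that collects less form data parks more subs in
# the ProvOrg bucket, where they are always counted as good.

def generous_rate(total: int, p1_rate: float, true_good: float,
                  provorg_share: float) -> float:
    """Generous good rate (%) when unscored ProvOrg subs always count as good."""
    scored = total * p1_rate                     # subs with form data
    good = scored * true_good                    # of those, the truly good ones
    provorg = (total - scored) * provorg_share   # no-1P subs with provider domains
    return 100 * (good + provorg) / (scored + provorg)

# Identical true quality (70%) and ProvOrg share (30%); only 1P rate differs.
high_1p = generous_rate(1000, 0.26, 0.70, 0.30)    # TFM-like collection rate
low_1p = generous_rate(1000, 0.232, 0.70, 0.30)    # GL-like collection rate
print(f"{high_1p:.1f}% vs {low_1p:.1f}%")  # lower 1P collection scores higher
```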
Email domain analysis for subscribers without first-party form data (CW since 3/12). How good are the "unknown" subs?
Domain Quality Distribution — No-1P Subscribers (CW since 3/12)
TFM (n=32)
GL (n=90)
Top Health Systems Found
TFM "Bad" Domain Examples
Community colleges, ISPs, health tech vendors — not healthcare providers.
Key Insight
- GL has more healthcare org email subscribers in absolute terms (larger no-1P pool).
- But TFM converts more subscribers to scorable first-party data: 56.1% vs 43.5%.
- TFM's form-driven approach means its "unknown" pool is smaller, but the known pool is richer and more actionable.
Job title analysis reveals who each agency actually delivers. ICP alignment is what matters, not just "good" vs "bad."
Nurse : Cardiologist Ratio
- Growletter: 12 : 1
- TFM: 1 : 1
GL — CW Top Titles
TFM — CW Top Titles (Good+)
The Nurse Problem (CW)
- Cardiology Week's ICP is cardiologists, cardiac surgeons, and senior HC executives — not general nurses.
- GL's top title is "RN" (53 subs) followed by "registered nurse" (34). Combined nurse titles = 112 subs.
- TFM's top title is "physician" (15) followed by "cardiologist" (14). These are direct ICP matches.
- A "Good/Bad" binary may mark an RN as "Good" (healthcare worker), but an RN at a cardiology newsletter isn't the same as a cardiologist.
Three scoring systems, three different conclusions. The methodology you choose determines the "winner."
Who Wins Under Each Methodology?
| Methodology | Date Range | CW Winner | IW Winner | DHW Winner | Overall |
|---|---|---|---|---|---|
| 4-Tier (E/G/M/B) | Mar 1–12 | TFM 5.5x | TFM 6.9x | Tie ~1.0x | TFM |
| Binary Targeted | Feb 1+ | ? | ? | ? | GL slight (36.6% vs 32.2%) |
| Good/Bad + ProvOrg | March | GL 82.5% | GL 87.5% | N/A | GL |
What Changed Between Methodologies?
Shift 1: "Good" Definition Expanded
4-tier collapsed to binary — Medium got absorbed into Good.
- The 4-tier system had Excellent, Good, Medium, Bad. "Target" = Excellent + Good only.
- Binary scoring collapses Medium (OK on list, not ICP) into the "Good" bucket.
- This inflates both agencies' good rates but disproportionately helps GL, which had more Medium-tier subs.
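The effect of the collapse can be illustrated with hypothetical tier counts (the splits below are invented for illustration, not Jake's actual data):

```python
# Sketch: collapsing Medium into Good raises both agencies' rates, but the
# agency with more Medium-tier subs gains far more. Counts are hypothetical.

def rates(excellent: int, good: int, medium: int, bad: int) -> tuple[float, float]:
    """(4-tier target rate, binary good rate) in %, for one agency's subs."""
    total = excellent + good + medium + bad
    four_tier = round(100 * (excellent + good) / total, 1)        # Target = E + G
    binary = round(100 * (excellent + good + medium) / total, 1)  # Medium absorbed
    return four_tier, binary

tfm_4t, tfm_bin = rates(excellent=20, good=40, medium=10, bad=30)  # few Medium
gl_4t, gl_bin = rates(excellent=5, good=25, medium=40, bad=30)     # many Medium
print(tfm_4t, "->", tfm_bin)   # 60.0 -> 70.0 (+10 pts)
print(gl_4t, "->", gl_bin)     # 30.0 -> 70.0 (+40 pts): the gap vanishes
```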
Shift 2: Date Window Expanded
12-day snapshot became a 7-week lookback.
- The original 4-tier analysis covered Mar 1–12 (12 days). Binary targeted expanded to Feb 1+.
- Larger windows can smooth noise but also mix campaign phases, creative rotations, and budget shifts.
- TFM ramped CW spending in March; earlier Feb data may not reflect current performance.
Shift 3: ProvOrg Email Metric
Provider org email domain used as quality proxy for unscored subs.
- Subscribers without 1P form data get classified by email domain (e.g., @clevelandclinic.org = ProvOrg).
- ProvOrg is always "good" in the generous calculation. Agencies with lower 1P rates have more subs in this bucket.
- GL's 1P rate is 23.2% vs TFM's 26%. Lower data collection = more ProvOrg credit = higher score.
- This rewards the agency that collects less subscriber information.
Recommendation
- Pick one methodology. Agree on what "good" means for each newsletter's ICP.
- Apply it consistently. Same date windows, same scoring criteria, same evaluator.
- Track the trend. Absolute rates matter less than direction. Is quality improving or declining week over week?
- Consider weighting by ICP specificity: a cardiologist for CW is worth more than a generic RN, even if both are "healthcare workers."
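One way to operationalize the last point is a title-weighted quality score. The weights and the example subscriber mixes below are illustrative assumptions, not an agreed rubric; only the title vocabulary echoes the CW analysis above:

```python
# Sketch of ICP-weighted scoring for CW: direct ICP titles count fully,
# adjacent clinical roles partially, generic titles barely. Weights and the
# example mixes are hypothetical.

ICP_WEIGHTS = {  # illustrative weights, not an agreed rubric
    "cardiologist": 1.0,
    "cardiac surgeon": 1.0,
    "chief medical officer": 1.0,
    "physician": 0.7,          # likely relevant, specialty unknown
    "nurse practitioner": 0.4,
    "rn": 0.2,                 # healthcare worker, but not CW's ICP
    "registered nurse": 0.2,
}

def weighted_quality(title_counts: dict[str, int]) -> float:
    """Average ICP weight per subscriber; unrecognized titles get 0."""
    total = sum(title_counts.values())
    score = sum(ICP_WEIGHTS.get(title.lower(), 0.0) * n
                for title, n in title_counts.items())
    return round(score / total, 2)

# Hypothetical subscriber mixes, loosely shaped like the CW title data.
print(weighted_quality({"cardiologist": 14, "physician": 15, "RN": 5}))    # 0.75
print(weighted_quality({"RN": 53, "registered nurse": 34, "cardiologist": 5}))  # 0.24
```

Under a weighting like this, a nurse-heavy list scores far below a cardiologist-heavy one even when both look identical under a Good/Bad binary.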