Every agency operator has had this conversation. You sit down with a client at month three. You walk through the attribution dashboard. You show them that the campaign produced 47 booked jobs, $94K of attributable revenue, a 4.3x return on the investment.
The client nods. Then says: "But most of those customers told me they saw me on Yelp."
The conversation goes downhill from there. You explain that Yelp was probably the last touch but the first touch was your campaign. The client politely says "interesting" and then renews at the same tier instead of expanding. Six months later they don't renew. The work was excellent. The attribution conversation killed it.
This is a structural problem, not a delivery problem. Operators bring sophisticated multi-touch models to clients who experience the world through last-vendor-mentioned. The model is right; the conversation is wrong. Here's how to bridge it.
Why the "I saw them on Yelp" objection is technically correct (and structurally wrong)
When a customer says they "saw the business on Yelp," they're usually telling the truth — about the last touch. Yelp may genuinely be the platform where they decided to call. But the question that matters for attribution isn't "where did the customer remember seeing them?" It's "what set of touchpoints, in what order, produced the booking?"
Most small-business owners haven't thought about this distinction. They categorize customers by the last reported touch, which means they:
- Over-credit obvious last-touch sources (Yelp, Google, direct referrals)
- Under-credit invisible early-touch sources (cold outbound, content, brand awareness)
- Conclude that channels they can't see don't work, even when the data shows otherwise
This isn't malicious. It's the mental model people use to navigate everyday decisions, and it doesn't match how funnels actually work.
The four attribution models that matter
Before you can have the conversation, you need clarity on what you're measuring. Four models cover most agency situations:
1. Last-click attribution
The customer's final touchpoint before conversion gets 100% of the credit. This is the model that matches the client's intuition. It systematically over-credits late-funnel channels and under-credits early-funnel ones, but it's the model the client is already running in their head.
Use case: explaining the obvious. "Yes, Yelp was the last touch on 31 of these 47 conversions."
2. First-click attribution
The customer's first touchpoint gets 100% of the credit. This is the inverse of last-click — it over-credits early-funnel channels.
Use case: showing channels that originated revenue, even if a different channel closed it. "But the cold email campaign was the first touch on 22 of those 31 Yelp conversions."
3. Linear attribution
Every touchpoint in the path gets equal credit. If a customer touched 5 channels before converting, each channel gets 20% credit.
Use case: showing the breadth of the funnel. "On average, conversions in your business touched 3.4 channels before booking. Here's how the credit distributes."
4. Position-based attribution (the working model)
40% credit to first touch, 40% to last touch, 20% distributed across middle touches. This is the model that produces the most defensible per-channel ROI numbers because it acknowledges both that origination matters and that closing matters.
Use case: the actual ROI conversation. "Using the model that splits credit between first and last touch, your cold campaign earns $1.80 of attributed revenue for every dollar invested."
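The four models above can be sketched as small credit-splitting functions. This is an illustrative sketch, not the platform's implementation; channel names and the two-touch 50/50 split are assumptions.

```python
# Sketch of the four attribution models as credit-splitting functions.
# Each takes an ordered touchpoint path and returns {channel: credit share}.

def last_click(path):
    # Final touch gets 100% of the credit.
    return {path[-1]: 1.0}

def first_click(path):
    # First touch gets 100% of the credit.
    return {path[0]: 1.0}

def linear(path):
    # Every touch gets an equal share.
    share = 1.0 / len(path)
    credit = {}
    for ch in path:
        credit[ch] = credit.get(ch, 0.0) + share
    return credit

def position_based(path, first=0.40, last=0.40):
    # 40% to first touch, 40% to last, 20% spread across the middle.
    # Single- and two-touch paths are edge cases; 50/50 here is an assumption.
    if len(path) == 1:
        return {path[0]: 1.0}
    if len(path) == 2:
        return {path[0]: 0.5, path[1]: 0.5}
    credit = {path[0]: first}
    credit[path[-1]] = credit.get(path[-1], 0.0) + last
    middle_share = (1.0 - first - last) / (len(path) - 2)
    for ch in path[1:-1]:
        credit[ch] = credit.get(ch, 0.0) + middle_share
    return credit

path = ["cold_email", "organic_search", "yelp"]
print(position_based(path))  # {'cold_email': 0.4, 'yelp': 0.4, 'organic_search': 0.2}
```

Multiplying each share by the conversion's revenue and summing across conversions yields the per-channel attributed revenue that the monthly report presents.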
The conversation framework
When the client says "but I saw them on Yelp," don't argue with the model. Use a layered explanation that meets them where they are.
Step 1: Validate. "You're right — Yelp was the last touch on most of these. We see that in the data."
Step 2: Reveal the path. "Here's what the customer actually did before they got to Yelp. The average customer touched 3.4 channels before booking. The first touch was usually [your channel]."
Step 3: Show the cohort. "If we look at customers who never saw your cold email campaign — the control group — they reached Yelp at one-tenth the volume. The campaign isn't competing with Yelp; it's feeding Yelp."
Step 4: Close with the model that accounts for both. "Using a model that gives credit to both the channel that originated the customer and the channel that closed them, here's the per-channel ROI. Yelp is producing $X. Your cold campaign is producing $Y. Both numbers go down if either channel is removed."
This conversation works because it never invalidates the client's experience. They DID see those customers come from Yelp. The data confirms it. But the data also reveals what they couldn't see directly — that the customers got to Yelp because of an earlier touch.
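Step 3's control-group comparison is a simple cohort computation. The sketch below is hypothetical; the channel names and journey format are assumptions, not the platform's data model.

```python
# Hypothetical sketch of the Step 3 cohort check: how often do customers
# reach Yelp with, versus without, an earlier cold-email touch?

def yelp_reach_rate(journeys, require_cold_email):
    """journeys: list of channel paths (each a list of channel names)."""
    cohort = [j for j in journeys
              if ("cold_email" in j) == require_cold_email]
    if not cohort:
        return 0.0
    reached = sum(1 for j in cohort if "yelp" in j)
    return reached / len(cohort)

journeys = [
    ["cold_email", "organic_search", "yelp"],
    ["cold_email", "yelp"],
    ["organic_search"],
    ["organic_search", "yelp"],
    ["direct"],
]
treated = yelp_reach_rate(journeys, require_cold_email=True)
control = yelp_reach_rate(journeys, require_cold_email=False)
```

If the treated cohort's reach rate is a multiple of the control's, you have the "the campaign is feeding Yelp" number for the conversation.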
What a working attribution implementation requires
Three pieces of infrastructure:
1. A first-party pixel on every client property
Every page of the client's website fires a pixel that captures: timestamp, page URL, referrer, UTM parameters, anonymized session ID, and a hashed identifier when the visitor enters their email or phone (e.g., on a contact form). The pixel data lives in the agency's analytics layer, not in a third-party tool the client could lose access to.
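A minimal sketch of the event payload such a pixel might emit, with PII hashed before it leaves the browser. Field names here are assumptions for illustration, not the platform's actual schema.

```python
# Illustrative first-party pixel event payload. The contact hash is
# computed from normalized email/phone so no raw PII is stored.
import hashlib
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class PixelEvent:
    session_id: str                      # anonymized first-party session ID
    ts: float                            # unix timestamp
    page_url: str
    referrer: str
    utm_source: Optional[str] = None
    utm_campaign: Optional[str] = None
    contact_hash: Optional[str] = None   # set once the visitor submits email/phone

def hash_contact(raw: str) -> str:
    # Normalize (trim, lowercase) before hashing so "Owner@Example.com "
    # and "owner@example.com" resolve to the same identifier.
    return hashlib.sha256(raw.strip().lower().encode()).hexdigest()

event = PixelEvent(
    session_id="sess_8f3a",
    ts=time.time(),
    page_url="https://client.example/contact",
    referrer="https://www.yelp.com/",
    contact_hash=hash_contact("Owner@Example.com"),
)
```

The normalization step matters: without it, trivial casing differences would split one customer into two identities downstream.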
Without first-party data, you're stuck with whatever last-touch attribution the client's existing tools provide — usually inadequate for a real ROI conversation. The AcquireOS platform handles this with a privacy-respecting pixel that's GDPR/CCPA-aware (covered in the compliance frameworks post).
2. Cross-channel identity stitching
When a single customer touches your client's site from three different devices over six weeks, those sessions need to stitch into one identity. The stitching happens through hashed email/phone matches at conversion time — every time a customer enters their email or phone, all prior anonymous sessions from the same device are linked to that identity.
Without identity stitching, every multi-touch journey looks like multiple disconnected sessions, and the attribution falls apart.
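The stitching logic can be sketched as a two-pass grouping: first group sessions by device, then assign every session on a device to the contact hash that device eventually converted under. This is a simplified sketch under assumed data structures, not the platform's implementation.

```python
# Minimal identity-stitching sketch: anonymous sessions are claimed by
# the hashed contact that later converted on the same device.
from collections import defaultdict

def stitch(sessions):
    """sessions: list of dicts with 'session_id', 'device_id',
    and 'contact_hash' (None until the visitor converts)."""
    # Pass 1: group sessions by device.
    by_device = defaultdict(list)
    for s in sessions:
        by_device[s["device_id"]].append(s)
    # Pass 2: find which contact hash "owns" each device.
    device_owner = {}
    for s in sessions:
        if s["contact_hash"]:
            device_owner[s["device_id"]] = s["contact_hash"]
    # Pass 3: assign every session on an owned device to that identity.
    identities = defaultdict(list)
    for device, owner in device_owner.items():
        for s in by_device[device]:
            identities[owner].append(s["session_id"])
    return dict(identities)
```

Sessions on devices that never converted stay anonymous, which is the honest behavior: they simply can't be attributed to a known customer yet.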
3. Conversion event capture
Every conversion event (form submission, phone call, in-person booking, signed quote) has to be captured with full context. The hardest ones are in-person and phone bookings — most agencies stop at form submissions because those are easy. The clients who care about ROI care about the harder events too.
Phone-call conversion capture requires a dynamic phone number that swaps based on the visitor's session, so every call ties back to the originating session. Walk-in conversion is harder still — usually captured via a manual "how did you hear about us" field at point of sale, with the data fed back into the attribution pipeline.
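Dynamic number insertion amounts to leasing numbers from a small pool to active sessions, then resolving an inbound call back to the session that was shown that number. The sketch below is illustrative; pool size, TTL, and the API shape are assumptions.

```python
# Hedged sketch of dynamic number insertion (DNI): tracking numbers are
# leased to sessions for a TTL, so inbound calls resolve to sessions.
import time

class NumberPool:
    def __init__(self, numbers, ttl_seconds=1800):
        self.free = list(numbers)
        self.leases = {}            # number -> (session_id, expiry)
        self.ttl = ttl_seconds

    def assign(self, session_id, now=None):
        now = time.time() if now is None else now
        # Reclaim expired leases before assigning.
        for num, (_sid, exp) in list(self.leases.items()):
            if exp <= now:
                del self.leases[num]
                self.free.append(num)
        if not self.free:
            return None             # pool exhausted: fall back to the static number
        num = self.free.pop()
        self.leases[num] = (session_id, now + self.ttl)
        return num

    def resolve(self, number, now=None):
        now = time.time() if now is None else now
        lease = self.leases.get(number)
        if lease and lease[1] > now:
            return lease[0]         # the session that was shown this number
        return None
```

The pool-exhaustion fallback is the practical constraint: the pool has to be sized to peak concurrent sessions, or some calls degrade to untracked.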
The reporting cadence
Four layers of attribution reporting, ordered by sophistication:
Daily (operator-side, internal): the raw funnel — sessions, conversion events, channel attribution by all four models. Operator monitors for anomalies, channel drift, and pipeline health.
Weekly (operator-to-client, casual): a single number per channel, plus the conversion count and the trend. Five lines of summary. The client doesn't need detail; they need a heartbeat.
Monthly (operator-to-client, formal): the full attribution view across models. Side-by-side comparison of last-click, first-click, linear, and position-based. The conversation framework above is run from this report.
Quarterly (operator-to-client, strategic): the cohort analysis. What did clients who converted last quarter look like at the channel level? What does that suggest about budget allocation for the next quarter? This is where the attribution work shifts from reporting to strategy.
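The weekly layer is deliberately minimal: one number per channel, the conversion count, and the trend. A hypothetical sketch of generating that summary — the line format and input shape are assumptions, not the report template's actual spec.

```python
# Illustrative weekly summary generator: one line per channel with
# attributed revenue, conversion count, and week-over-week trend.

def weekly_summary(this_week, last_week):
    """Each arg: {channel: {"revenue": float, "conversions": int}}."""
    lines = []
    for ch, cur in sorted(this_week.items(), key=lambda kv: -kv[1]["revenue"]):
        prev = last_week.get(ch, {"revenue": 0.0})
        delta = cur["revenue"] - prev["revenue"]
        arrow = "up" if delta > 0 else "down" if delta < 0 else "flat"
        lines.append(f"{ch}: ${cur['revenue']:,.0f} attributed, "
                     f"{cur['conversions']} conversions, {arrow} ${abs(delta):,.0f} WoW")
    return "\n".join(lines)
```

Five lines, no model comparison, no caveats: the detail waits for the monthly report.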
What kills attribution conversations
Five mistakes that make the ROI conversation fall apart:
Showing all four models on one slide. Pick one or two. Four side-by-side looks like math the client can't follow.
Not naming the model. "Here's the ROI" is meaningless without "using position-based attribution."
Comparing year-over-year on incomparable cohorts. The customer mix shifts. Don't compare Q1 2025 to Q1 2026 unless the customer mix is genuinely comparable.
Hiding the soft channels. Brand awareness, content, organic search — all hard to attribute, all real. Don't pretend they don't exist; show them with a note about attribution confidence.
Treating the client's intuition as the enemy. Their intuition is correct about what they observed. The attribution model fills in what they couldn't observe. Don't argue with the observation.
How AcquireOS handles attribution
The platform ships with first-party pixel deployment, identity stitching, conversion event capture, and the four-model attribution view as defaults. The client portal exposes the position-based attribution by default — the model most operators want their clients to see — with last-click available as a secondary view to validate the client's intuition.
The conversation framework above is also baked into the monthly client report template. Every report ends with the layered explanation: "Last touch shows X. First touch shows Y. The blended model accounts for both, and produces this ROI." The framing converts the attribution conversation from a debate into a presentation.
The summary
- The "I saw them on Yelp" objection is technically true and structurally wrong
- Four attribution models matter; position-based is the defensible default
- The conversation framework: validate, reveal path, show cohort, close with the layered model
- Working attribution requires first-party pixel, identity stitching, and full conversion event capture
- Don't argue with the client's intuition; layer the data on top of it
If your monthly attribution conversations regularly turn adversarial, the model probably isn't the problem — the conversation framework is. Run the four-step layered framework on your next monthly review and the dynamic shifts within one meeting.
For a walk-through of the platform's attribution stack and the client-portal view, book a call.