
What Last Year Clarified About Paid Media

By Danni Winter
January 20, 2026

It’s that season again, and paid media predictions are everywhere.

New platforms, new formats, new algorithms, all promising to change everything.

What’s harder to find is an honest look back at what actually changed once teams had to live with the results.

Last year didn’t just introduce new tools. It exposed which assumptions no longer hold. Attribution got weaker. Automation expanded. Familiar metrics stopped telling the full story. And yet, teams still had to plan, spend, optimize and defend decisions.

This blog isn’t about forecasting what’s next. We asked our paid media team to step back and look at what last year actually clarified once the work was done. As they compared notes across accounts, channels and roles, the same lessons kept surfacing. Not trends to chase or predictions to debate, just realities that held up under pressure.

Paid Media Marketing Lessons That Stuck

“Automation worked best when efficiency had clear human guardrails.”

Danni Winter, Director of Integrated Planning and Paid Media

One of the biggest shifts last year was how we worked with algorithms. The focus moved away from telling platforms who to target and toward helping them understand what actually performs.

That meant loosening rigid audience structures and combining segments to create better learning environments. By simplifying setups, we gave algorithms clearer feedback on creative, messaging and placements. In many cases, performance improved not because targeting got more precise, but because the system had more room to learn patterns.

At the same time, we saw the limits of efficiency-driven automation. Left unchecked, platforms often pursued the cheapest conversion at the expense of brand context or suitability. The lesson was not to resist automation, but to define it. Strong results came from pairing machine learning with clear human guardrails, such as tighter standards around brand safety, exclusions and placement quality.

Last year showed that AI performs best when people decide what success should look like.

“Top-of-funnel media proved its worth.”

Davis Duong, Associate Director of Performance Marketing

Last year made it clear that top-of-funnel (TOFU) paid media does its real work well before a conversion ever happens.

Sustained investment in upper-funnel efforts played a meaningful role in driving long-term event demand in several cases. That included flighting out-of-home placements, running paid social campaigns optimized for awareness and engagement rather than conversion, and extending offline activities into digital channels. Judged through a conversion-first lens, these initiatives would have looked inefficient. In reality, they built awareness at scale, drove qualified engagement and created audience signals that made lower-funnel campaigns more effective over time.

Those early signals mattered. Engagement patterns and repeated exposure helped downstream tactics convert more efficiently, even if the impact was delayed. When TOFU is evaluated only on immediate return, it is easy to cut programs before they have time to work.

Traffic volume alone is not proof of success, but neither is short-term efficiency in isolation. Last year reinforced the importance of treating TOFU as a demand and signal engine. When that investment is intentional and patient, the entire funnel benefits.

“Attribution got messy.”

Cyndee Hopkins, Senior Performance Marketing Manager

Last year forced a hard truth into the open, and it showed up quickly in how difficult attribution became to defend. Clean, complete measurement stopped being a realistic goal. The teams that performed best were the ones that accepted that early and adjusted how they made decisions instead of waiting for clarity that never came.

The change wasn’t subtle. Measurement weakened across platforms at the same time. GA4 added technical complexity without giving most teams clearer answers, while ad platforms leaned further into aggressive modeling as observable data declined. The gap between platform-reported conversions and GA4 totals widened. Assisted conversions became harder to see, especially for upper-funnel activity. In some cases, channel-level ROAS fell even as total revenue held steady or grew — making performance look worse on paper than it was in reality.

Our team responded by reframing how they used the data. GA4 revenue became a proxy for sourced demand. Platform attribution was treated as influenced revenue, with the truth usually landing somewhere in between. Decision-making shifted toward blended performance trends, directional signals like frequency and time to convert, and pressure testing results against CRM data or downstream revenue.

That shift had real budget implications. Channels that could not clearly explain their role in the funnel lost trust and spend. Reporting alone stopped being enough to support investment decisions. Clear interpretation, context, and judgment became just as important as the numbers themselves when it came time to allocate budget.

“If you rely on final conversions alone, automation can’t learn fast enough.”

Joe Hopper, Senior Performance Marketing Manager

As automation accelerated last year, intent signals became a critical input for performance.

With Google continuing to expand automated bidding and broad match strategies, algorithms needed better guidance, especially for B2B accounts with longer sales cycles or lower conversion volume. Relying only on final conversions slowed learning and limited scale.

Across several accounts, we saw clear improvement when we introduced secondary conversions tied to meaningful engagement earlier in the funnel. Signals like viewing multiple pages, signing up for webinars or emails, engaging with deeper content or completing self-guided tours helped platforms identify higher-quality users sooner.

Using those actions to inform bidding and optimization created stronger feedback loops. Learning became more efficient, overall conversion volume increased and traffic quality improved well before primary conversions fully ramped.

As platforms automate more of the mechanics, strategy shifts upstream. Teaching the algorithm what real intent looks like is no longer optional. It is foundational to sustainable performance.

Themes We Heard Across the Team

When we stepped back from the individual perspectives, clear patterns emerged. Across roles and channels, teams ran into the same constraints: less certainty, fewer clean signals and more automation. The strongest responses were not tactical overhauls, but shifts in how decisions were made and which signals were trusted.

Data-informed meant more than data-driven.

Every contributor described working with less definitive data than before. Attribution weakened, modeled conversions increased and upper-funnel impact became harder to isolate. As Cyndee noted, the teams that performed best stopped waiting for perfect answers and leaned into experience, trend direction and informed tradeoffs.

Signals became as important as outcomes.

Conversions still mattered, but they were no longer enough on their own. Davis pointed to TOFU activity as a source of demand and downstream signals, while Joe showed how secondary intent actions improved learning. Better inputs, not just final outcomes, drove stronger performance.

Automation increased the need for human definition.

More automation did not simplify decision-making. It raised the bar for clarity. Danni emphasized defining brand boundaries and testing environments, while Joe focused on teaching platforms what meaningful intent looks like. Algorithms performed best when priorities were clearly set, not left to efficiency alone.

Upper-funnel investment became harder to prove but harder to ignore.

TOFU showed up across every perspective. Davis saw it shaping audience quality and future demand. Cyndee saw its impact become less visible in attribution. Even without clean credit, upper-funnel activity continued to influence total performance.

Success shifted from channel metrics to blended performance.

Teams moved away from judging channels in isolation. Instead, they looked at how media worked together to drive demand, learning and revenue over time. This required clearer narratives, longer evaluation windows and comfort with imperfect data.

These themes all point to the same conclusion: the work didn’t get simpler, but the rules for making good decisions became clearer.

How Teams Put These Lessons to Work

In practice, these shifts changed how teams planned, optimized and evaluated paid media. The teams that adapted best made a small set of deliberate operating changes that showed up consistently across accounts.

  1. Set expectations for TOFU before launch. Define upfront what success looks like for upper-funnel efforts and when results should appear. Evaluate TOFU on audience growth, engagement quality and downstream efficiency rather than short-term conversions. Protect these budgets long enough for signals to mature.
  2. Define and deploy secondary intent signals. Identify actions that reflect real interest in your business, such as multi-page visits, content engagement, demos or sign-ups. Track them consistently and use them to guide bidding and optimization when primary conversions are limited or delayed (see the sketch after this list).
  3. Simplify account structure to support learning. Consolidate audiences and reduce unnecessary segmentation so platforms can identify patterns faster. Fewer variables improve feedback loops across creative, messaging and placements.
  4. Put firm guardrails around automation. Allow algorithms to optimize within clearly defined boundaries. Lock down brand safety controls, exclusions and placement standards so efficiency does not override suitability or quality.
  5. Shift reporting toward blended performance. Stop judging channels in isolation. Look at total revenue trends, time-to-convert, frequency and assisted behavior alongside platform and GA4 data. Treat platform attribution as influenced and analytics revenue as sourced, then validate against downstream systems.
  6. Explain the role each channel plays. Pair reporting with narrative. Be explicit about how each channel contributes to demand, learning or conversion. Channels that can clearly articulate their purpose earn more trust and more durable budget support.
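To make item 2 concrete, here is a minimal sketch of what firing a secondary intent signal could look like with Google's gtag.js, assuming the tag is already installed on the page. The form selector, event name and send_to conversion ID are placeholder assumptions for illustration, not values from this article or any specific account.

```typescript
// Minimal sketch: reporting a secondary intent signal (a webinar signup)
// so automated bidding gets an earlier feedback loop than final conversions.
// Assumes gtag.js is already loaded; the selector, event name and send_to ID
// below are placeholders.
declare function gtag(...args: unknown[]): void;

const form = document.querySelector<HTMLFormElement>('#webinar-signup-form');

form?.addEventListener('submit', () => {
  // GA4-style event for analytics reporting and audience building
  gtag('event', 'webinar_signup', {
    content_group: 'webinars',
  });

  // Also report it to a Google Ads conversion action (one you would configure
  // as a secondary conversion in the account) so bidding strategies can learn
  // from it while primary conversions ramp.
  gtag('event', 'conversion', {
    send_to: 'AW-XXXXXXXXX/placeholder_label',
    value: 0,
  });
});
```

The specific tag syntax matters less than the principle: the action is tracked consistently and mapped to something the platform can actually optimize toward.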

Navigating Change

If you’re navigating similar challenges, whether it’s defending upper-funnel investment, making automation work within real constraints, or pressure-testing performance without relying on false precision, this is the kind of work we’re already doing with our partners.

If you want a second set of eyes on how these shifts show up in your accounts, or help translate messy signals into confident decisions, let’s talk.
