Apple Ads

The Modern ASA Stack in 2026: What Serious Teams (and Their Advisors) Actually Use

Pablo Cabrera

AI Product Lead

Take fifteen minutes this week and write down five functions on the left side of a page: bid optimization, keyword lifecycle, reporting reconciliation, budget governance, strategic review. On the right, mark which ones your team has covered well, which are partial, which nobody owns.

Most teams I do this exercise with discover the same shape: two functions covered well, two partial, one missing entirely. The missing one is almost always strategic review — but that's a symptom, not the disease.

The disease is that the modern ASA stack in 2026 isn't a shopping list of tools. It's a coverage map. And after years at Phiture watching mobile growth teams from the inside — in-house at fintechs, agencies running portfolios for a dozen apps, consultancies advising on six-figure budgets — I'd say more than half of the serious ASA operations I see are running with holes in that map they can't see.

This piece is about what those five functions are, where the holes typically sit, and why "modern" has almost nothing to do with which tools you pay for.

What "the stack" actually means

When people say "ASA stack," they usually mean a list of tools: Apple's own dashboard, a bid optimizer, maybe an MMP, maybe a spreadsheet or two, maybe a BI tool that surfaces the numbers.

That's not a stack. That's a software inventory.

A stack is a set of functions your operation needs to perform — reliably, on cadence, at the speed the auction moves. The tools are interchangeable. The functions aren't.

Across the ASA operations I've seen at scale, there are five functions that have to be covered. Most teams have one or two covered well, two more covered partially, and one that nobody owns and nobody talks about.

The five:

  1. Bid optimization — deciding what to bid on every keyword, in every campaign, every day.
  2. Keyword discovery and lifecycle management — finding new keywords worth bidding on, retiring dead ones, moving terms between campaign types as they mature.
  3. Reporting and attribution reconciliation — making the numbers in Apple's dashboard match the numbers in your MMP, and explaining the gap when they don't.
  4. Budget governance — making sure spend stays inside the right targets at the right altitude (campaign, ad group, portfolio, market).
  5. Strategic review — pulling back to ask whether the structure, the targets, and the spend allocation still make sense given what the business is doing now.

A modern stack covers all five. A typical stack covers two or three.
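
If it helps to make the exercise concrete, here is the same coverage map written down as data rather than pen and paper. This is a sketch, not a product feature; the function names and statuses are just an example of what your own honest audit might produce.

```python
# The fifteen-minute exercise as a structure you can keep next to the
# quarterly planning doc. The statuses below are an example, not a finding.

coverage_map = {
    "bid_optimization":         "covered",    # software or a reliable process owns it
    "keyword_lifecycle":        "partial",    # reviewed when someone has time
    "reporting_reconciliation": "covered",
    "budget_governance":        "partial",
    "strategic_review":         "uncovered",  # nobody owns it
}

gaps = [fn for fn, status in coverage_map.items() if status != "covered"]
print("Functions running on hope instead of process:", gaps)
```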

The five functions, walked through

Let me go through each one: what manual looks like, what good looks like in 2026, and where the breaking points sit.

1. Bid optimization

This is the most measurable function, and it's where the gap between "manual" and "modern" is widest.

What manual looks like: rules in a spreadsheet. "If CPA > $X, lower bid by 10%." "If installs > Y, raise bid by 5%." A human reviews the numbers on Monday, adjusts the rules, and the rules execute until next Monday.
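
To make that concrete, here is roughly what those spreadsheet rules look like once written down as code. It's a sketch for illustration only; the thresholds, field names, and data shape are invented, and in practice the numbers come from an Apple Search Ads export or API pull.

```python
# A minimal sketch of the weekly rule pass a spreadsheet encodes.
# Thresholds and field names are hypothetical, for illustration.

TARGET_CPA = 4.00             # example target, in account currency
MIN_BID, MAX_BID = 0.10, 8.00

def weekly_rule_pass(keywords):
    """Apply the same static rules to every keyword, once a week."""
    for kw in keywords:       # each kw: {"bid": float, "spend": float, "installs": int}
        cpa = kw["spend"] / kw["installs"] if kw["installs"] else float("inf")
        if cpa > TARGET_CPA:
            kw["bid"] *= 0.90              # "if CPA > $X, lower bid by 10%"
        elif kw["installs"] >= 20:
            kw["bid"] *= 1.05              # "if installs > Y, raise bid by 5%"
        kw["bid"] = round(min(max(kw["bid"], MIN_BID), MAX_BID), 2)
    return keywords
```

Nothing in that loop changes between Mondays, no matter what the auction did on Tuesday. That's the ceiling of the approach, not a bug in it.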

What good looks like in 2026: intelligent software adjusting bids continuously, learning from the auction's response to each adjustment, and balancing many keywords against a shared budget constraint. Not "AI" as a marketing term — software that actually updates its policy based on the outcomes it observed yesterday.
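
For contrast, here is a toy version of what "updates its policy based on the outcomes it observed yesterday" means. No vendor's system looks exactly like this — real ones model the auction's response and run far more often — so treat the step sizes and the budget-scaling step as invented purely to show the shape of the feedback loop.

```python
# A toy daily feedback loop: the size and direction of each adjustment depend
# on yesterday's observed outcome, and all keywords share one budget constraint.
# Step sizes and field names are illustrative, not anyone's production values.

def daily_adaptive_pass(keywords, target_cpa, daily_budget):
    for kw in keywords:       # each kw: yesterday's {"bid", "spend", "installs"}
        cpa = kw["spend"] / kw["installs"] if kw["installs"] else None
        if cpa is None:
            kw["bid"] *= 0.98                        # no signal yet: decay gently
        else:
            miss = (target_cpa - cpa) / target_cpa
            step = max(-0.3, min(0.3, 0.25 * miss))  # move toward target, capped per day
            kw["bid"] *= 1 + step

    # Shared constraint: if yesterday overshot the daily budget,
    # scale every bid down together rather than pausing keywords one by one.
    total_spend = sum(kw["spend"] for kw in keywords)
    if total_spend > daily_budget:
        for kw in keywords:
            kw["bid"] *= daily_budget / total_spend
    return keywords
```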

Where manual breaks: around 500 keywords, manual bid management starts losing money. Around 1,000, it's actively bleeding. Around 2,000, you're not managing the auction — you're hoping. I've seen this threshold play out repeatedly. The math isn't subtle. A team of three good UA managers can hold the line on a few hundred keywords. They cannot hold the line on two thousand, no matter how good their spreadsheet is.

The receipts on this are public. Pinger ran a controlled test against a rule-based bidding tool: same keywords, same budgets, six weeks. On the side running adaptive bidding, CPA dropped 23 percent and installs grew 31 percent against the control group. The most striking subset was the brand campaign: Brand Campaign B saw CPA fall 43.6 percent and acquisitions grow 44.3 percent. Brand keywords are the ones teams audit least often, because surface metrics make them look fine. They were not fine.

2. Keyword discovery and lifecycle management

What manual looks like: someone exports search term reports every two weeks, eyeballs them, copies new candidates into a sheet, debates whether to add them, and slowly trickles them into Discovery campaigns. Negatives are added when something obviously bad shows up. Dead keywords stay in campaigns for months.

What good looks like in 2026: a clear lifecycle policy. Terms come in through Discovery, get promoted to Exact based on performance thresholds, get demoted or paused when they stop converting, and have a defined retirement criterion. Whether software runs this or a human runs it matters less than whether the policy exists and is followed.
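
Written down, a lifecycle policy can be as small as this. The thresholds are placeholders I've invented for illustration, not recommendations; the point is that promotion, demotion, and retirement are explicit criteria rather than judgment calls made whenever someone happens to look at the report.

```python
# A minimal sketch of an explicit keyword lifecycle policy.
# All thresholds are placeholders, for illustration only.

def lifecycle_decision(term):
    """term: {"installs": int, "taps": int, "cpa": float, "days_live": int}"""
    if term["installs"] >= 10 and term["cpa"] <= 4.00:
        return "promote_to_exact"       # proven in Discovery: graduate it
    if term["days_live"] >= 60 and term["installs"] == 0:
        return "retire"                 # defined retirement criterion
    if term["taps"] >= 200 and term["installs"] == 0:
        return "add_as_negative"        # spends without ever converting
    return "keep_in_discovery"          # not enough signal yet: wait
```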

Where manual breaks: when the search term backlog grows faster than the team can review it. For an account with healthy Discovery campaigns, this happens fast. By month four, you have hundreds of new candidate terms sitting unreviewed. The cost of that backlog isn't visible on any dashboard, but it's real.

3. Reporting and attribution reconciliation

What manual looks like: Apple's dashboard says one number, the MMP says another, and someone spends Friday afternoon reconciling the two for the Monday meeting. The reconciliation usually involves a Sheets file that nobody outside the UA team understands.

What good looks like in 2026: a clear answer to which source you trust for which decision, with a documented logic for the gap. Apple's view-through versus tap-through, the MMP's attribution window, the postback timing — all of this should be a known shape, not a weekly mystery to solve.
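
One way to turn that documented logic into something a machine checks: write the expected discrepancy down as a tolerance and flag only the campaigns that fall outside it. The field names and the 25 percent tolerance below are invented for illustration; your own expected gap depends on your MMP, your attribution windows, and your share of view-through.

```python
# Sketch: reconcile ASA dashboard installs against MMP installs per campaign,
# and surface only the rows whose gap falls outside the documented tolerance.
# Numbers and field names are illustrative.

EXPECTED_GAP = 0.25   # documented, expected discrepancy (view-through, windows, postback timing)

def reconcile(rows):
    """rows: [{"campaign": str, "asa_installs": int, "mmp_installs": int}]"""
    flagged = []
    for r in rows:
        if r["asa_installs"] == 0:
            continue
        gap = (r["asa_installs"] - r["mmp_installs"]) / r["asa_installs"]
        if abs(gap) > EXPECTED_GAP:
            flagged.append((r["campaign"], round(gap, 2)))
    return flagged    # only these rows need a human's Friday afternoon
```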

Where manual breaks: when the team has to explain ASA performance to anyone outside the UA function. The CFO doesn't want to hear "Apple says this, our MMP says that." They want a number, defensibly produced. If the reporting function isn't running reliably, the conversation with finance becomes a recurring tax on the team's credibility.

4. Budget governance

What manual looks like: monthly caps set in Apple, reviewed in the Monday meeting, with someone manually pausing campaigns that pace too hot. Reactive, not anticipatory.

What good looks like in 2026: budget governance that runs at the right altitude. Daily caps at the campaign level. Weekly review at the ad group level. Monthly review at the portfolio level. Quarterly review of the allocation across markets. Each altitude has its own cadence and its own decision-maker.
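
One way to keep that from living only in people's heads is to write the altitudes down as configuration. The cadences mirror the paragraph above; the owners and controls are examples of how a team might assign them, not a prescription.

```python
# Governance altitudes as explicit configuration rather than tribal knowledge.
# Owners and controls are example assignments, for illustration.

GOVERNANCE = {
    "campaign":  {"cadence": "daily",     "control": "spend cap",                 "owner": "bidding software"},
    "ad_group":  {"cadence": "weekly",    "control": "pacing review",             "owner": "UA manager"},
    "portfolio": {"cadence": "monthly",   "control": "budget allocation",         "owner": "growth lead"},
    "market":    {"cadence": "quarterly", "control": "allocation across markets", "owner": "strategic review"},
}
```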

Where manual breaks: when ASA scales across more than three markets, or when the account has more than five distinct campaign types running in parallel. The number of governance decisions exceeds what a weekly meeting can handle, and things either go on autopilot (overspending) or get over-restricted (underspending). Both are expensive in different ways.

5. Strategic review

This is the function most teams don't realize they're missing.

What manual looks like: the strategic review happens implicitly, in the corners of weekly meetings, when someone notices that something has drifted. It rarely happens on cadence. It often happens too late.

What good looks like in 2026: a scheduled quarterly pull-back. Not a tactical review — a structural one. Are the campaign types we set up two years ago still the right shape? Is the market mix still right given what the business is doing? Are the cost targets we set in January still the right targets in May? Strategic review is what keeps the tactical work pointed at something that matters.

Where it breaks: strategic review doesn't break because teams forget to do it. It breaks because day-to-day execution swallows the time. If the first four functions are running on manual, there's no slack left for the fifth. The fix usually isn't "schedule the review"; it's reducing the operational load on the team so the review has somewhere to live.

What good looks like vs what manual looks like — at a glance

  • Bid optimization. Manual: weekly rule tweaks in a spreadsheet. Good: continuous, adaptive bid adjustment against a shared budget constraint.
  • Keyword lifecycle. Manual: biweekly search term exports and ad hoc additions. Good: a defined promote/demote/retire policy, followed on cadence.
  • Reporting reconciliation. Manual: Friday-afternoon reconciliation in a Sheets file. Good: a documented answer to which source is trusted for which decision, and why the gap is the size it is.
  • Budget governance. Manual: monthly caps and reactive pausing. Good: caps and reviews at defined altitudes, each with its own cadence and decision-maker.
  • Strategic review. Manual: implicit, late, in the corners of weekly meetings. Good: a scheduled quarterly structural review.

Three operating models, same five functions

The point of framing the stack as five functions rather than a list of tools is that the functions don't change based on who runs the account. The operator changes. The work doesn't.

There are three legitimate operating models I see at scale:

In-house team builds the stack themselves. The UA team owns all five functions. They buy software for the parts where software wins (bid optimization, mostly), and they staff the rest. This works when the team has the depth for it — usually three or more people, with at least one who can think about ASA structurally, not just tactically.

Agency-managed. A specialist agency runs the account end to end. They bring their own stack and their own people. The five functions get covered by their team, on their cadence, with their tools. This works when the company doesn't want to build internal mobile growth capacity but wants the work done well.

Consultancy plus intelligent infrastructure. A strategic advisor — Phiture's model, and others — handles strategic review and shapes the keyword lifecycle policy and budget governance approach. Intelligent software handles bid optimization at a depth no human team can match. The in-house team owns day-to-day execution and reporting. The advisor is in the room for the quarterly review.

I have a bias to declare here — Catchbase is the software piece of that third model, and Phiture is the consultancy. But the model exists regardless of who you buy it from. The pattern works because it gives each piece of the work to the operator best suited to do it: strategic judgment to humans with cross-account experience, continuous bid adjustment to software, day-to-day execution to the in-house team that owns the account.

The point isn't which of the three models is "best." It's that all three cover the five functions, just with different operators. A modern stack is one where all five are covered. A typical stack is one where two or three are.

Where manual still makes sense

I should be honest about where this argument has limits.

Manual ASA management is fine — sometimes better than software — when:

  • The account is genuinely small. Under 200 active keywords, a sharp UA manager with a spreadsheet outperforms most tools. The complexity isn't there yet for adaptive bidding to add value.
  • The account is in transition. Mid-restructure, mid-migration, mid-rebrand — software needs stable signal to learn, and a chaotic structure produces unstable signal. Wait for the structure to settle.
  • The account has unusual constraints that don't fit standard optimization. Some highly regulated verticals, some apps with extreme seasonality, some accounts where business rules override pure performance optimization. Software handles these poorly without serious customization.

Outside those cases, manual bid optimization is a tax the team pays without realizing it. The team's time is too valuable to spend executing what software does better.

What this means for your stack this quarter

The exercise from the top of this piece is the work. Five functions on the left, your honest coverage on the right. Spend fifteen minutes on it before the next quarterly planning conversation.

Most teams I do this with discover the same pattern: bid optimization and reporting get most of the attention. Keyword lifecycle and budget governance get attention when something breaks. Strategic review hasn't happened on cadence in six months.

That gap list is your priority. Not "should we buy a new tool?" — but "which of these functions is currently running on hope instead of process?"

The modern ASA stack isn't a shopping list. It's a coverage map. The teams that do this work well in 2026 won't be the ones with the most software. They'll be the ones whose coverage map doesn't have holes.
