When to adopt new Google Ads features: a continuous testing strategy for success

Apr 20, 2025

Learn why running single tests on new Google Ads features fails and discover the continuous testing approach that helps you identify exactly when new technologies start outperforming manual methods.


The critical mass problem with new Google technologies

When it comes to any new technology that Google releases, like Search Max, there is no fixed point in time (if any) after its release when it starts performing well.

At some point, enough advertisers have adopted the technology to create critical mass, and using this data Google is able to make the product exponentially better. This is another reason (besides, obviously, increasing Google's revenue) why Google's sales reps get targets and are incentivized to push advertisers to adopt these new products: to achieve that critical mass.

The smart bidding evolution

When smart bidding became available to the general audience in 2016, it initially underperformed, often losing in head-to-head A/B tests. Smart bidding was the challenger to manual bidding, and advertisers were understandably sceptical. Over time, Google collected more data and improved the product, and it began to perform better in experiments. This feedback loop of data collection and improvement kept making the product exponentially better. Now smart bidding, given sufficient data, outperforms manual bidding in 90% of experiments and has been the status quo for a long time. That said, I've recently seen some thoughtful and interesting examples of manual CPC winning again, and so manual CPC is becoming the challenger to smart bidding.

Continuous testing approach

Because there is no fixed point in time when a new technology beats the status quo, and because we do not know when the product has reached critical mass, the idea is to have a continuous layer in your Google Ads setup with the sole focus of testing these new technologies, but with a low experimental budget. Running these smaller tests and experiments back to back over time allows you to identify exactly when, or even if, the challenger (the new technology) starts to outperform. You keep it low risk until it starts to outperform, and then scale it towards a higher evergreen budget.
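To make the loop concrete, here is a minimal sketch in Python. The names, numbers, and `ExperimentResult` structure are all hypothetical; this is not the Google Ads API, just the core logic of keeping a challenger on a small test budget until it wins, then promoting it.

```python
# Minimal sketch of the continuous testing layer (hypothetical data,
# not the Google Ads API): keep the challenger on a small test budget
# until it beats the control, then scale it to the evergreen budget.

from dataclasses import dataclass

@dataclass
class ExperimentResult:
    name: str
    control_roas: float      # return on ad spend of the status quo
    challenger_roas: float   # return on ad spend of the new technology

def next_step(result: ExperimentResult, test_budget: float,
              evergreen_budget: float) -> tuple[str, float]:
    """Decide what to do after one experiment in the testing layer."""
    if result.challenger_roas > result.control_roas:
        # Challenger wins: promote it to the evergreen budget.
        return ("scale challenger", evergreen_budget)
    # Challenger lost: keep it in the low-risk layer and re-test later.
    return ("re-test challenger", test_budget)

# Example run with made-up numbers.
result = ExperimentResult("search_max_q2", control_roas=4.1, challenger_roas=3.6)
action, budget = next_step(result, test_budget=50.0, evergreen_budget=500.0)
print(action, budget)  # -> re-test challenger 50.0
```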

Why most advertisers get it wrong

I think this is where most advertisers take the wrong direction: running one test on a new technology and then drawing a conclusion for a longer time period about whether it should be adopted or not. A lot can change, even within a couple of months. Therefore, the next step after any experiment loss for a new Google Ads technology should be another experiment, either directly after or perhaps three months later, depending on how close the challenger came to winning. This doesn't mean infinite testing; not every new technology will work for every ad account. But it does mean taking a multiple-experiments approach rather than a one-off.
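One way to make that follow-up timing concrete is to re-test sooner the closer the challenger came to winning. The thresholds below are illustrative assumptions of mine, not a rule from Google:

```python
# Hypothetical helper: schedule the follow-up experiment sooner when
# the challenger nearly won, later when it lost by a wide margin.
# All thresholds are illustrative assumptions.

def days_until_next_test(control_roas: float, challenger_roas: float) -> int:
    """Return a suggested wait (in days) before re-testing the challenger."""
    gap = (control_roas - challenger_roas) / control_roas  # relative loss margin
    if gap <= 0:
        return 0    # challenger already won; no waiting, move to scaling
    if gap < 0.05:
        return 14   # very close loss: re-test almost immediately
    if gap < 0.15:
        return 45   # moderate loss: give Google time to collect more data
    return 90       # wide loss: wait roughly a quarter before trying again

print(days_until_next_test(4.0, 3.9))  # close loss -> 14
print(days_until_next_test(4.0, 3.0))  # wide loss  -> 90
```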

Framework for new technology adoption

Advertiser adoption is key for Google to improve these products. At some point, enough advertisers have adopted the technology to create critical mass, and using this data Google is able to make the product exponentially better. For example, smart bidding initially underperformed in all the head-to-head A/B tests I ran when it was released in 2016, but it now outperforms manual bidding in 90% of cases thanks to Google collecting more data and improving the product.

The difficult question to answer is: when should you adopt? I suggest keeping a continuous testing layer in your Google Ads setup, with a low experimental budget, from the moment a new technology is released. This approach helps identify when, and if, a new technology starts to outperform the current one.

The Rule of Four

Avoid making decisions based on a single test; instead, run multiple experiments over time in this layer. The idea is to pinpoint when the new tech starts working better. Also, some technologies are better than others, and not every new technology will beat your control setup. You can use the rule of four: run a maximum of four experiments over a longer time period (e.g. a year) to determine whether the technology should be adopted or not.
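Expressed as code, the rule of four might look like the sketch below. Adopting on the first win and dropping after four losses is my reading of the rule, using the same hypothetical setup as the earlier examples:

```python
# Sketch of the rule of four (my interpretation): run at most four
# experiments in a year; adopt on the first win, drop the technology
# for now if it loses all four.

def rule_of_four(challenger_wins: list[bool]) -> str:
    """Decide adoption from a sequence of experiment outcomes (max 4)."""
    for i, won in enumerate(challenger_wins[:4], start=1):
        if won:
            return f"adopt after experiment {i}: scale to evergreen budget"
    if len(challenger_wins) >= 4:
        return "drop for now: four losses in a year, re-evaluate next year"
    return "keep testing: budget remains in the low-risk layer"

print(rule_of_four([False, False]))                # keep testing
print(rule_of_four([False, False, True]))          # adopt after experiment 3
print(rule_of_four([False, False, False, False]))  # drop for now
```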

This strategy minimizes risk and ensures you're ready to scale when a new technology proves its worth, at exactly the point it starts working for you (if it ever does). Pinpointing, in a low-risk, low-budget situation, exactly when a new technology works better for your own account, and then scaling it rapidly, should allow you to be more effective over time.