The internet and public protection: when it comes to advertising, we're already on it

“Out of control”, “wild west”, “ungoverned space”. Words often used to describe the internet, a place where some say the usual rules don’t apply.  

The digital age brought new markets for small businesses and more choice for consumers. But a dizzying pace of technological change, misuse of data and the darker corners of the net have prompted a collective anxiety. There’s growing concern about how we protect people, particularly more vulnerable people, from harmful content.  Witness the Government’s proposals for a new Internet Regulator to deal with Online Harms, as well as its ongoing Online Advertising Review.

While it’s true that some parts of the internet are unregulated, advertising isn’t one of them. The ASA has regulated online ads since the days of dial-up and floppy disks. And they’re a big driver of our workload: 90% of the 10,850 ads we had amended or removed in 2018 appeared online in whole or in part.

What do we mean when we say “online advertising”? Most of the online ads we take action against are in non-paid-for space – that is, ads placed by companies, organisations or sole traders on their own websites, social media spaces, apps or advergames. Last year these ads generated around 3.5 times as many complaints as ads in paid-for space online. A good example is influencer advertising, where ads aren’t labelled properly or are otherwise irresponsible.

The focus on online advertising, in all its guises, reflects a dramatic shift in media consumption habits. Almost all of us are spending much more time online. Businesses have followed us, and online ad spend has exploded in recent years: figures from the Internet Advertising Bureau show that UK digital ad spend reached £13.44bn in 2018, 60% of total UK ad spend. As recently as 2010 it was just 29%. It’s unsurprising that this pace of change has fuelled concerns about online content causing harm, particularly to more vulnerable people.

Take children, for example. Our rules require businesses in sensitive categories – think alcoholic drinks, gambling, and foods and drinks high in fat, salt and sugar (HFSS) – to use available data, tools and filters to ensure their ads are targeted at appropriate audiences. Platforms offer advertisers increasingly sophisticated tools to target their ads at the users most likely to buy their products. Our rules take advantage of that technology: if brands can micro-target the best potential customers, they should be able to use the same tools to filter out users who shouldn’t see the ad. In a recent case, we banned an in-app gambling ad because it was seen by a seven-year-old child on a shared device owned by their parent.

This strict approach responds to the reality that parents and guardians can’t be expected to look over their children’s shoulders checking what they’re seeing on their devices. And it sends a clear signal to businesses, ad agencies, platforms and networks that being irresponsible, or even just careless, has consequences.

More and more of our work in this area is proactive. Because most paid-for online ads are targeted at us based on our previous browsing, we’ve worked with a technology partner to build ‘child avatars’ – programmes that simulate the online profiles of children of different ages – enabling us to see the web through children’s eyes. We send those avatars to websites and video sharing platforms, then capture, analyse and, if necessary, ban the ads served to them. This has already led to swift action to stop age-restricted ads for gambling and HFSS foods and drinks from appearing on children’s websites and YouTube channels.
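We haven’t published how the avatars are built, but the basic idea can be sketched in a few lines of code. Everything below – the profiles, the stubbed ad-serving function and the breach check – is a hypothetical illustration of the approach, not the actual system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an avatar-based ad sweep. The profiles, the stubbed
# ad server and the breach check are illustrative assumptions only.

AGE_RESTRICTED = {"gambling", "alcohol", "hfss"}  # barred from child audiences

@dataclass
class Avatar:
    """A simulated browsing profile for a child of a given age."""
    age: int
    interests: list
    ads_seen: list = field(default_factory=list)

def serve_ads(site, avatar):
    # Toy stand-in for an ad exchange. A real sweep would drive a headless
    # browser under the avatar's cookie/interest profile and record the ads
    # actually rendered on the page.
    inventory = {
        "kids-games.example": [
            {"brand": "ToyCo", "category": "toys"},
            {"brand": "BetFast", "category": "gambling"},  # wrongly targeted
        ],
    }
    return inventory.get(site, [])

def sweep(avatars, sites):
    """Visit each site as each avatar; flag age-restricted ads served to children."""
    breaches = []
    for avatar in avatars:
        for site in sites:
            for ad in serve_ads(site, avatar):
                avatar.ads_seen.append(ad)
                if avatar.age < 18 and ad["category"] in AGE_RESTRICTED:
                    breaches.append((site, avatar.age, ad))
    return breaches

if __name__ == "__main__":
    avatars = [Avatar(age=7, interests=["cartoons", "games"])]
    for site, age, ad in sweep(avatars, ["kids-games.example"]):
        print(f"{ad['category']} ad from {ad['brand']} "
              f"served to a {age}-year-old profile on {site}")
```

The point of the sketch is simply that the same targeting signals advertisers use to find customers can be replayed to test who actually receives an ad.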

We’re going to repeat these sweeps regularly and take action against any brands that break the rules. And this is just the start. We’re exploring how other technologies, including machine learning and data-driven monitoring of chatter on social media, can help us spot problems and better protect the public.

We don’t work alone. The tech giants, Facebook and Google in particular, play an important role as enforcement partners, taking down irresponsible ads we’ve ruled against. Google also suspends the Google ad campaigns of problem businesses that refuse to play ball. Ultimately, several statutory regulators act as legal backstops to deal with the small number of advertisers who refuse to comply with the rules.

Government and observers are right when they raise concerns about the darker sides of internet content. But the lesson from our work to date is that the very technologies that can contribute to online harms are key to providing solutions.


Guy Parker became Chief Executive of the ASA, the UK regulator of ads in all media, in 2009. Responsible for executing the ASA’s strategy to make UK ads responsible, he oversees all functions of the ASA system.