Why we need to be smarter when it comes to risk

By David Price, Managing Director at The Grove Media

Advertisers need to find smarter solutions to harmful content and online risk, or they will miss out on monetisable content.

Right now, there is seemingly much to concern advertisers when it comes to online risk.

The issue of harmful content is front and centre once again: the Online Safety Bill has been kicked into touch, hateful content around trans rights and other LGBTQ+ issues is on the rise, and the war in Ukraine has focused attention on many media and marketing decisions, particularly associations with distressing content.

Problems surrounding harmful, hateful and fake content have persisted for several years, and the industry has rightly been taking action to deal with them through stricter content and platform scrutiny, greater use of technology, and lobbying for better regulation of online content.

But you only have to read some of the recent commentary around the Online Safety Bill to see just how difficult an issue this is to tackle — as soon as measures are proposed to protect one set of people, another is up in arms about the potentially negative consequences of well-intended actions.

There are no simple solutions to what is a very broad and complex problem. But as we move forward in our efforts to tackle this situation, can we be a little smarter and more nuanced in our approach to risk? I think there are a few important things we can do as an industry in this area.

Maintain audience focus while shifting suitability strategy

Firstly, we need to maintain a clear focus on audience targeting. This may seem blindingly obvious, but I’m not sure that we are always as focused and aware as we should be.

In these times of economic uncertainty and crunches on talent, we are all under a lot of pressure, but we need to maintain a ‘conscious’ approach to media planning when it comes to audiences and platforms.

Using sources such as Ads For News can help with focusing on trusted platforms. And maintaining control and continual scrutiny of programmatic is also essential in this area — we are still seeing examples highlighted in the tabloid press of brands inadvertently juxtaposed with inappropriate content.

We should move more towards a strategy of brand suitability. There is evidence to suggest that our baseline of brand safety has progressively become outdated as digital consumption has increased.

While many brands will have clear red lines when it comes to content they don’t want to be associated with, it is arguably time for brands and their agencies to re-examine their safety parameters for a digital-first world.

This means building out from a brand safety approach (often blanket avoidance of anything deemed inappropriate) to a suitability strategy that allows for a more considered approach. This is inevitably more time-consuming, but technology is available to help. More on that in a moment.

Is it time to reassess our attitude to ‘hard news’?

At the recent Festival of News, organised by newspaper trade body Newsworks, a panel session on news journalism argued that while brand safety remains vital, ‘hard news is good news’.

You might suggest that they would say that, wouldn’t they, but the facts do speak for themselves.

The newspaper industry has some of the most trusted brands in media, it is subject to strict regulation, and hard news content — by its very nature — consistently delivers higher engagement rates than some other news brand content.

Again, it’s down to suitability, but having broad brush policies on news content makes little sense in today’s world.

And related to the issue of hard news is the issue of ‘blocklists’ or ‘blacklists’ as they are also commonly known.

Understandably, the industry has traditionally used lists of words as a means of identifying harmful or inappropriate content.

In the absence of any other methods of filtration or selection this made a lot of sense, but now blocklists seem an outdated and crude approach to a far more nuanced issue.

Blocklists cause too many words to be misinterpreted or taken out of context, meaning that great content goes under-monetised.

There are many examples of this but take the word ‘shoot’ – on the one hand it’s an indicator of crime-related content, but it’s also a commonly used word in highly popular sport content.

Blocklists need to be constantly checked and updated or maybe not used at all.
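To illustrate why crude keyword blocklists misfire, here is a toy sketch (not any vendor's actual method, and far simpler than real semantic-analysis tools such as Grapeshot): a naive filter blocks every page containing a listed word, while even a crude context check can tell a sport story from a crime story.

```python
import re

# Illustrative only: a toy blocklist, not any vendor's real word list.
BLOCKLIST = {"shoot", "attack", "bomb"}

def naive_blocked(text: str) -> bool:
    """Flag a page if any blocklisted word appears, regardless of context."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

# A crude context check: only flag a blocklisted word when crime-related
# terms also appear on the page. Real semantic tools are far more nuanced.
CRIME_CONTEXT = {"police", "suspect", "victim", "weapon"}

def context_blocked(text: str) -> bool:
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST) and bool(words & CRIME_CONTEXT)

sport = "United need a striker who can shoot from distance."
crime = "Police say the suspect threatened to shoot a victim."

# The naive filter blocks both pages, demonetising the sport content;
# the context-aware check blocks only the crime story.
```

The point is not that a simple co-occurrence rule is the answer, but that even minimal context awareness outperforms blanket word-matching.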

Use tech to make ‘bespoke whitelists’

This is where technology comes in. Rather than use crude blocklists or brand safety bans, we need to use tech more smartly to help us make better and more considered decisions.

All reputable programmatic platforms offer brand safety measures, either proprietary solutions or via partnership with third party specialists such as Oracle, through Grapeshot, and DoubleVerify.

There are increasingly smart tech solutions that can perform rapid semantic and context analysis of content, such as Grapeshot, which covers web, in-app, video and display inventory.

In addition to tech solutions, agencies and advertisers are increasingly using ‘whitelists’ – listings of sites within a network or platform that they’re happy to appear on.

Many programmatic sellers take this approach and will create bespoke whitelists for brands.
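A whitelist (allowlist) approach is straightforward to sketch. The snippet below is a minimal illustration with a hypothetical bid-request shape and made-up domains; real programmatic platforms work with OpenRTB requests and apply many more checks.

```python
# Hypothetical bespoke allowlist for one brand; domains are invented.
BRAND_ALLOWLIST = {
    "example-news.co.uk",   # a trusted, regulated news brand
    "example-sport.com",    # a popular sport site
}

def should_bid(bid_request: dict) -> bool:
    """Bid only when the inventory's domain is on the brand's allowlist."""
    domain = bid_request.get("site", {}).get("domain", "").lower()
    return domain in BRAND_ALLOWLIST
```

The trade-off is the mirror image of blocklists: an allowlist guarantees suitability at the cost of reach, which is why bespoke lists are usually built per brand rather than applied as a blanket policy.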

Finally, we must maintain our focus on the smaller websites.

The Online Safety Bill has been stopped in its tracks for now, and a lot of its proposals and wording are set to be reviewed.

One area that needs greater focus is what’s termed ‘smaller websites’. The focus is rightly on the biggest platforms, such as Facebook and Twitter, but allowing the smaller sites not to be ‘in scope’ for some of the more stringent rules, while understandable, is problematic.

Some of the worst content is on smaller sites and they can be feeders for the bigger social platforms. Again, this is a vast and complex area, and in the immediate future we will need to rely on tech solutions to help filter out content from smaller sites.

Nobody is under any illusion that the issue of harmful content and online risk is an easy one to crack. It’s arguably something that will never be completely resolved to everybody’s satisfaction.

It will remain an ongoing issue that we will need to work together to tackle – to continually reduce the problem and to find ever smarter solutions to make life safer for everyone.

This article was originally published by The Media Leader.