
Addressing the challenge of online harms

In our recent report, Transforming Government: Six key digital transformation recommendations for a modern government, we set out the major focus areas for central government as it takes on economic recovery after Covid-19 and seeks to seize the benefits offered by digital technology. In this extract from the report, we look at the hugely important challenge of tackling online harms.

The what, why and how of keeping people safe online

With the growing scope and influence of the digital sphere on our lives, there is increasing awareness of the need to protect people from online harms. This has been a major focus for government since the release of the 2019 Online Harms White Paper, which proposed a new system of accountability for technology companies.

Since then, a consultation phase has drawn over 2,400 responses to the White Paper, Ofcom has been appointed as the new online harms regulator, and legislation is being drafted for introduction in the next few years.

What are online harms?

Safeguarding against online harm is a complex issue as it must of course be balanced against freedom of expression. The government response to the Online Harms White Paper, published in December 2020, acknowledges that online freedoms must be respected, with upcoming legislation intended to “protect users’ rights online, including freedom of expression and the need to maintain a vibrant and diverse public square”.

But what do we mean when we talk about protecting users from harms in this context?

Any work around online safety aims to protect individuals from physical and emotional harm as they use web-based products or services. These harms vary from the recognisably illegal, such as child sexual abuse material or terrorist activity, to those which can be illegal depending on context or on behaviour over time, such as grooming.

In addition, protection is also required from activity which is legal but harmful, such as cyberbullying and misinformation. These harms can often be more difficult to challenge because they are more pervasive. In a growing digital culture, and particularly during the pandemic, they have also proliferated as more people spend greater amounts of time online.

These different kinds of harms show that online safety challenges are extremely varied and difficult to navigate. There is a common thread, however: these harms result from people interacting with each other in the digital sphere.

Time for new legislation

In the wake of increasing levels of hate speech online, the rise of misinformation, and scandals such as Cambridge Analytica's harvesting of Facebook data, it has become increasingly apparent that stronger measures must be taken to protect individuals in the digital world. Voluntary measures, pursued by government up until 2017, are no longer enough to hold companies to account for the content on their platforms.

Indeed, the aim of new legislation (as expressed in the White Paper response) is to require companies “to explicitly state what content and behaviour is acceptable on their sites and then for platforms to enforce this consistently”. This clearly targets content publishers and social media sites: anywhere with user-generated content or which provides a means for people to interact with each other.

Under the upcoming legislation, these companies will have to understand and mitigate the risk of harm to users visiting their platforms; put reporting systems in place to notify the police of any incidents; inform Ofcom about the levels of harm on their sites and what they are doing about them; and provide ways for users to make complaints. There will be sanctions for companies that fail to comply.
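To make those obligations concrete, the sketch below imagines the bones of such a reporting system in Python. It is a minimal, hypothetical illustration: the report categories, escalation rule, and function names are our own assumptions, not anything specified in the legislation.

```python
# Minimal sketch of a user-reporting flow a platform might build.
# All names (categories, escalation rules) are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

# Categories a user could report content under (hypothetical taxonomy).
REPORT_CATEGORIES = {"illegal_content", "harassment", "misinformation", "other"}

@dataclass
class HarmReport:
    content_id: str
    category: str
    description: str
    report_id: str = field(default_factory=lambda: uuid4().hex)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def submit_report(content_id: str, category: str, description: str) -> HarmReport:
    """Record a user complaint and route it for review or escalation."""
    if category not in REPORT_CATEGORIES:
        raise ValueError(f"unknown report category: {category}")
    report = HarmReport(content_id, category, description)
    if category == "illegal_content":
        escalate_to_authorities(report)   # e.g. notify police, per the duty above
    else:
        queue_for_moderation(report)      # internal trust-and-safety review
    return report

def escalate_to_authorities(report: HarmReport) -> None:
    print(f"[escalate] {report.report_id}: {report.category} on {report.content_id}")

def queue_for_moderation(report: HarmReport) -> None:
    print(f"[moderate] {report.report_id}: {report.category} on {report.content_id}")

if __name__ == "__main__":
    submit_report("post-123", "harassment", "Targeted abuse in comments")
```

In a real service, these stubs would connect to case-management and law-enforcement reporting channels, and the same records could feed the transparency reporting Ofcom will expect.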


The regulatory framework

  • Independent regulator: Ofcom will implement and enforce the new regulatory framework
  • A duty of care: companies will have a statutory duty of care to protect their users, with the regulator setting out what they need to do to fulfil this
  • Understand and mitigate the risks: companies must understand and mitigate the risk of online harms occurring to users on their platforms
  • Transparency: companies will have to submit annual reports to the regulator if asked
  • Protecting freedom of expression: regulation will help companies to balance the risk of online harms against protecting users' rights to freedom of expression
  • User complaints: companies will be required to have a robust complaints procedure in place

A digital regulator for the digital age

As highlighted in other areas of our Transforming Government Report, the traditional divide between policy and delivery also affects the approach to online harms regulation. Currently, it is not clear to what extent those delivering the regulation are considering the need for specialist Digital, Data and Technology (DDaT) capability to design robust, user-focused, end-to-end processes.

Rather than handing Ofcom a policy document, the Department for Digital, Culture, Media and Sport (DCMS), as the policy owner, should instead seek to work collaboratively with digital delivery teams to design and build a robust, flexible service that is able to hit the ground running.

As it attempts to establish itself as a digital regulator, now is the perfect time for Ofcom itself to embrace digital transformation and operate in a digital way. As online harms constantly change and evolve, so must the regulator; for this to happen, it must first become an agile, digitally-enabled organisation.

Safety tech

Another important feature of the online harms landscape is safety technology. The sector is thriving in the UK, with British companies holding 25 per cent of the global market share.

According to Jess McBeath, online safety consultant and non-executive member of Ofcom's Advisory Committee for Scotland, what is currently found under the umbrella of safety technology is broad and varied.

It encompasses everything from organisations working to combat known forms of illegal online activity, such as child sexual abuse, to parental control apps that support digital parenting, better ways to manage data privacy, and digital products that carry out age verification.

“Technology can combat online harms in lots of different ways, and for different stakeholders,” she says. “From an end user perspective, if you look at something like parental control apps, these have mushroomed in scope to include things like filtering, privacy, and digital downtime. From an infrastructure perspective, we’re seeing attempts to navigate thorny challenges such as encryption vs child protection. However, I feel we need to set safety technologies within context. Much is made of the need for ‘tech companies to do something’ about online harms. But technology alone rarely solves human problems.”

These human problems include the reasons why people seek to harm others online in the first place. Technology can stop people from searching for illegal content, for example, but it doesn't address their motivation for carrying out this activity. A key focus therefore needs to be on understanding the full scope of the problem, embedding positive values and behaviour through technology design, and creating more holistic solutions.
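As a simple illustration of that kind of technical block, the hypothetical Python sketch below filters search queries against a blocklist. Everything in it, from the placeholder terms to the support message, is an assumption for illustration, and it shows exactly why such measures address the search rather than the searcher.

```python
# Deliberately simplified sketch of blocklist-based search filtering.
# The placeholder terms and messaging are illustrative; real systems use
# curated lists and far more robust matching that handles misspellings
# and deliberate obfuscation.
BLOCKED_TERMS = {"example_blocked_term_1", "example_blocked_term_2"}

def filter_search(query: str) -> str | None:
    """Return the query if it is allowed, or None if it should be blocked."""
    tokens = set(query.lower().split())
    if tokens & BLOCKED_TERMS:
        # Rather than returning results, surface deterrence and support
        # messaging: a technical block, not a change in motivation.
        print("This search is unavailable. Support services are listed at <support page>.")
        return None
    return query
```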

“We now have a better understanding that technology design influences behaviour, with persuasive design, nudges etc.” says McBeath. “I would like to see this included in ‘safety tech’ as well. Prompting a user about how much time they’ve spent in an app, for example, is swimming against the tide when there is persuasive design keeping them on screen in the first place.”

What are the consequences?

Part of the government response to tackling online harms is for the regulator to promote the safety tech sector and to empower users to manage their own safety online. The principle of 'safety by design' is an important aspect of the online safety ecosystem, and government will create a framework to help businesses design digital products responsibly from the outset.

Organisations can also be encouraged to carry out the emerging practice of consequence scanning – where greater attention is paid to the consequences of a digital product, whether intended or unintended, positive or negative, for all stakeholders both now and in the future.

Embedding this process in a formalised way within a multidisciplinary product team will help organisations to consider the impact of what they are building. It is an important step in tackling online harms, and representative of a more responsible approach than that taken by many technology companies in the past.
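One lightweight way to formalise this, sketched below in Python, is to record each identified consequence as a structured entry that the team reviews alongside its product backlog. The fields and examples are our own assumptions about what such a record might capture, not a published schema for the practice.

```python
# Hypothetical structure a product team might use to record the output
# of a consequence-scanning session. Fields and examples are illustrative.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    INTENDED = "intended"
    UNINTENDED = "unintended"

class Effect(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"

@dataclass
class Consequence:
    description: str               # what could happen
    stakeholder: str               # who it affects
    kind: Kind                     # intended or unintended
    effect: Effect                 # positive or negative
    mitigation: str | None = None  # action the team commits to, if any

consequence_log = [
    Consequence("Screen-time prompts reduce late-night use", "young users",
                Kind.INTENDED, Effect.POSITIVE),
    Consequence("Content filtering over-blocks support resources", "at-risk users",
                Kind.UNINTENDED, Effect.NEGATIVE,
                mitigation="Allow-list vetted helpline and charity sites"),
]
```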

“Safety tech is extremely important, now more than ever, because it bridges the gap between existing technologies and their real world impact, which is long overdue,” McBeath notes. “But all tech should be tech for good – if you can’t articulate the benefit of your technology product in human terms, then why are you creating it?”

Download the full report (PDF, 5.4 MB)

Authors

Natalie Taylor
Managing Director, Public Sector
Foundry4

Natalie has fifteen years' experience in digital transformation, including key roles in the transformation of UK government, NHS.UK, GDS, and london.gov.uk.

Sarah Finch
Research and Insights Manager
Foundry4

Sarah is renowned for her ability to communicate complex concepts with clarity. She plays a central role in managing the insights programme at Foundry4.
