
Analysing the Online Safety Bill – what does it mean for the public?

Recently, the Online Safety Bill passed through the House of Commons and is now ready to be made law in the United Kingdom. The much-debated bill is designed to provide protections for children and adults browsing online, and it places greater pressure on social media companies to assume responsibility for their users’ safety on their platforms. In this blog, we will delve into the intricacies of the Online Safety Bill, examining its key provisions and the implications for the general public.

Understanding the Objectives of the Online Safety Bill

The Online Safety Bill has been a subject of significant attention in the United Kingdom, especially given the increasing concerns about online safety for both children and adults. The bill was introduced to address the growing challenges posed by harmful content online, including hate speech, defamation, cyberbullying, and disinformation. The government’s manifesto commitment was to make the UK the safest place in the world to be online while defending free expression.

The primary objectives of the Online Safety Bill are twofold. First, it aims to protect vulnerable populations, particularly children, from harmful content and interactions online and on social media. Second, it seeks to hold social media companies accountable for the safety of their users. These platforms are required to take robust action against online terrorist activities and other forms of harmful content.

Several high-profile cases have underscored the urgency of implementing such a bill. For instance, the tragic case of Molly Russell, a 14-year-old who took her own life after viewing self-harm and suicide content on Instagram, highlighted the need for stricter regulations. Additionally, the proliferation of hate speech and disinformation during political events like the Brexit referendum has raised questions about the role of social media platforms in moderating content.

One of the most nuanced aspects of the Online Safety Bill is its attempt to balance the need for online safety with the preservation of freedom of expression. The bill requires companies to take down content, including terrorist content, when they become aware of it on their sites. However, it also emphasizes that the legislation does not impose any general obligation to monitor or filter content, thereby protecting the right to freedom of expression.

The Most Important New Offences Created by the Bill

The Online Safety Bill introduces a series of new offences aimed at curbing harmful online activities. One of the most significant is the “harmful communications offence,” which is designed to penalize those who send or post communications that are likely to cause serious emotional distress to the recipient. This offence is a step forward in tackling online harassment and cyberbullying, which have been rampant issues on social media platforms.

Another noteworthy offence is the “false communications offence,” which criminalizes the act of sending or posting false information with the intent to cause non-trivial emotional, psychological, or physical harm. This is particularly relevant in the era of ‘fake news’ and misinformation campaigns that can have real-world consequences.

The Online Safety Bill also introduces a series of new responsibilities for social media companies, marking a significant departure from previous regulations. One of the key clauses states:

“A duty to ensure, by the use of age verification or age estimation (or both), that children are not normally able to encounter content that is regulated provider pornographic content in relation to the service.” (Page 19 of the Online Safety Bill)

This clause imposes a stringent requirement on social media companies to implement age verification mechanisms. It aims to protect minors from encountering inappropriate content, thereby elevating the standard of care required from these platforms.

Another noteworthy section mentions:

“A duty to summarise in the terms of service the findings of the most recent illegal content risk assessment of a service (including as to levels of risk and as to nature, and severity, of potential harm to individuals).” (Page 2 of the Online Safety Bill)

This duty mandates transparency in how social media companies assess and communicate risks associated with illegal content. It serves as a mechanism for holding these companies accountable for the content they host.

Furthermore, the Bill specifies:

“In determining what is proportionate for the purposes of subsection (2), the size and capacity of the provider of a service, in particular, is relevant.” (Page 6 of the Online Safety Bill)

This clause introduces a nuanced approach, taking into account the size and capacity of the service provider when determining what actions are proportionate for ensuring online safety. It suggests that larger platforms may be held to a higher standard.

Prior to the Online Safety Bill, the legal framework for dealing with online offences was fragmented and often ineffective. For example, the Malicious Communications Act 1988 was criticized for its vague terms and low criminality threshold. The new offences introduced by the Online Safety Bill are more specific and carry heavier penalties, thus providing a more robust legal mechanism to combat online harms.

Several social media companies have already started adapting their policies to align with the new offences outlined in the bill. For instance, Twitter has updated its rules to explicitly prohibit harmful and false communications, while Facebook has implemented algorithms to detect and remove threatening content.


In summary, the Online Safety Bill represents a watershed moment in the regulation of social media platforms in the United Kingdom. It introduces a comprehensive framework that places unprecedented responsibilities on these companies, far exceeding the scope of previous regulations. From stringent age verification mechanisms to explicit accountability for Child Sexual Exploitation and Abuse (CSEA) content, the bill leaves no stone unturned in its quest to make the online world safer for both children and adults.

The bill also mandates transparency through children’s risk assessments and an accessible complaints procedure. These clauses not only make the platforms more accountable but also empower users by giving them the information and tools they need to protect themselves and their children.

However, it is crucial to note that these new responsibilities come with their own set of challenges and criticisms. The bill has sparked debates on how it may impact freedom of expression and the practical difficulties companies may face in implementing these changes. Yet, these challenges do not diminish the bill’s significance as a pioneering piece of legislation aimed at redefining online safety standards.

As we move forward, it will be interesting to observe how social media companies adapt to these new requirements and what impact this will have on user experience and safety. One thing is clear: the Online Safety Bill has set a new benchmark, and it is now up to social media companies to rise to the occasion.

Contact Our Online Defamation Lawyers Today 

If you require expert advice on matters of Defamation or online harm, please contact Shubha Nath at Nath Solicitors on 020 8138 9373 or get in touch with the firm online.


    Copyright. Nath Solicitors Limited. Registered in England and Wales. Company Number: 08724944. VAT number: 207490711. Office Located at: 35 Berkeley Square, London, W1J 5BF. Nath Solicitors Limited is authorised and regulated by the Solicitors Regulatory Authority. Registration number 608014. Terms Of Use. Privacy Policy. Cookies Policy. Complaints Procedure.