Nandini Roy Choudhury, writer
Brief news
- The UK has implemented its Online Safety Act, requiring tech companies to combat illegal content and face fines for non-compliance, with Ofcom overseeing the regulations.
- Companies must complete risk assessments by March 16, 2025, and enhance moderation and reporting mechanisms to address illegal harms.
- Ofcom can impose significant fines and restrict services for violations, while future codes will include advanced measures like AI to tackle illegal content.
Detailed news
LONDON — The United Kingdom formally brought its comprehensive online safety law into force on Monday, paving the way for stricter oversight of harmful content on the internet and the possibility of hefty fines for internet companies such as Meta, Google, and TikTok.
Ofcom, the British regulator for the media and telecommunications industries, has published its first-edition codes of practice and guidance for technology companies. These documents set out the steps companies should take to tackle illegal harms such as terrorism, hate crime, fraud, and child sexual exploitation on their platforms.
The measures are the first set of duties imposed by the regulator under the Online Safety Act, a sweeping law that requires technology platforms to do more to counter illegal content online.
The Online Safety Act places specific “duties of care” on these technology companies, making them responsible for harmful content posted and shared on their platforms.
Although the act became law in October 2023, it had not yet fully taken effect. Monday’s milestone effectively marks the point at which the safety duties become legally enforceable.
Following Ofcom’s announcement, technology platforms have until March 16, 2025 to complete risk assessments for illegal harms, effectively giving them three months to bring their services into compliance with the rules.
According to Ofcom, once that date has passed, platforms must begin implementing measures to address the risks of illegal harms, including improved moderation, easier reporting, and built-in safety testing.
Melanie Dawes, chief executive of Ofcom, said in a statement on Monday: “We will be closely monitoring the industry to ensure that companies meet the stringent safety standards that have been established for them in accordance with our initial codes and guidance. Additional requirements will be implemented promptly in the first half of the following year.”
The potential for hefty fines and suspensions of service
Ofcom has the authority to impose fines of up to ten percent of a company’s global annual revenue if it finds that the company has breached the rules set out in the Online Safety Act.
Individual senior managers could face jail time for repeated breaches. In the most serious cases, Ofcom can seek a court order to block access to a service in the United Kingdom or to cut off its access to payment providers or advertisers.
Ofcom came under pressure to strengthen the rules following the far-right riots in the United Kingdom earlier this year, which were fuelled in part by disinformation spread on social media.
According to Ofcom, the duties will apply to social media companies, search engines, messaging, gaming, and dating apps, as well as to websites hosting pornographic content and file-sharing services.
Under the first edition of the codes, reporting and complaints functions must be easier to find and use. High-risk platforms will also be required to deploy a technology known as hash-matching to identify and remove child sexual abuse material (CSAM).
Hash-matching technology converts known CSAM images from police databases into unique digital fingerprints, known as “hashes.” Uploaded content is then hashed and compared against that database, allowing a platform’s automated filtering systems to recognise and remove matching material.
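To illustrate the general idea, here is a minimal, hypothetical sketch in Python. It assumes a simple cryptographic hash (SHA-256), which only matches byte-identical files; real systems such as perceptual-hashing tools tolerate resizing and re-encoding, and the set of known fingerprints below is an invented stand-in, not any real police database.

```python
# Illustrative sketch of hash-matching, not a production filter.
# Assumption: known prohibited images are represented by SHA-256 digests;
# real deployments typically use perceptual hashes supplied via trusted lists.
import hashlib

# Hypothetical in-memory set of fingerprints ("hashes") of known images.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint (hash) of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint and should be
    removed by the platform's automated filtering."""
    return fingerprint(file_bytes) in KNOWN_HASHES

if __name__ == "__main__":
    upload = b"test"  # stand-in for an uploaded image's raw bytes
    print(should_block(upload))  # True: the SHA-256 of b"test" is in the demo set
```

The design point is that the platform never needs to store or inspect the original prohibited images; it only compares fingerprints of uploads against a list of fingerprints supplied by authorities.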
Ofcom emphasized that the codes released on Monday were only the first set, and that it plans to consult on further codes in spring 2025. Those could include measures to block accounts found to have shared CSAM and the use of artificial intelligence to tackle illegal harms.
“Ofcom’s illegal content codes are a material step change in online safety,” said British Technology Minister Peter Kyle in a statement on Monday. “This means that beginning in March, platforms will be required to proactively take down terrorist material, child and intimate image abuse, and a host of other illegal content.” This will bridge the gap between the laws that protect us in the offline world and the laws that protect us in the online world.
“If platforms fail to step up, the regulator has my support to use its full powers, including issuing fines and asking the courts to block access to sites,” Kyle continued. “All of these options are available to the regulator.”