The U.K. government is not ruling out further strengthening its existing online safety rules with an Australian-style ban on social media for kids under 16, technology secretary Peter Kyle has said.
Back in the summer, the government warned it may toughen laws for tech platforms in the wake of riots that were perceived to have been fueled by online disinformation following a knife attack that killed three young girls.
Since then it’s emerged that some of the people prosecuted for rioting were minors — amping up concerns about social media’s influence on impressionable, developing minds.
Speaking to BBC Radio 4’s Today program on Wednesday, Kyle was asked whether the government would ban social media for kids under 16. He responded by saying, “Everything is on the table with me.”
Kyle was being interviewed as the Department for Science, Innovation and Technology (DSIT) set out its priorities for enforcement of the Online Safety Act (OSA), which parliament passed last year.
The OSA targets a grab bag of online harms, from cyberbullying and hate speech to intimate image abuse, scam ads, and animal cruelty, with U.K. legislators saying they want to make the country the safest place in the world to go online. The strongest driver, though, has been child safeguarding, with lawmakers responding to concerns that kids are accessing harmful and inappropriate content online.
DSIT’s Statement of Strategic Priorities continues this theme by putting child safety at the top of the list.
Strategic Priorities for online safety
Here are DSIT’s five priorities for the OSA in full:
1. Safety by design: Embed safety by design to deliver safe online experiences for all users but especially children, tackle violence against women and girls, and work towards ensuring that there are no safe havens for illegal content and activity, including fraud, child sexual exploitation and abuse, and illegal disinformation.
2. Transparency and accountability: Ensure industry transparency and accountability from platforms to deliver online safety outcomes, promoting increased trust and expanding the evidence-base to provide safer experiences for users.
3. Agile regulation: Deliver an agile approach to regulation, ensuring the framework is robust in monitoring and tackling emerging harms — such as AI generated content.
4. Inclusivity and resilience: Create an inclusive, informed and vibrant digital world which is resilient to potential harms, including disinformation.
5. Technology and innovation: Foster the innovation of online safety technologies to improve the safety of users and drive growth.
The mention of “illegal disinformation” is interesting since the last government removed clauses in the bill that had focused on this area over freedom-of-speech concerns. But in the wake of the summer riots, the government said it would review OSA powers and could seek to strengthen them in light of social media use during the disorder.
“It is essential that we learn from these events and hold platforms to account for their part in securing the UK online information environment and safeguarding the UK from future crises,” the government wrote.
In Wednesday’s full draft statement, it also had this to say on online mis/disinformation:
A particular area of focus for the government is the vast amount of misinformation and disinformation that can be encountered by users online. Platforms should have robust policies and tools in place to minimise this content where it relates to their duties under the Act. Countering misinformation and disinformation is challenging for services, given the need to preserve legitimate debate and free speech online. However, the growing presence of disinformation poses a unique threat to our democratic processes and to societal cohesion in the UK and must be robustly countered. Services should also remain live to emerging information threats, with the flexibility to quickly and robustly respond, and minimise the damaging effects on users, particularly vulnerable groups.
DSIT’s intervention will steer how Ofcom enforces the law by requiring it to report back on the government’s priorities.
For over a year, Ofcom, the regulator tasked with overseeing internet platforms’ and services’ compliance with the OSA, has been preparing to implement the law by consulting on and producing detailed guidance in areas such as age verification technology.
Enforcement of the regime is finally expected to start from next spring — when Ofcom will actively take up powers that could lead to fines of up to 10% of global annual turnover for tech firms that fail to meet the law’s duty of care.
“There are more powers which are being released to Ofcom. I just want to make sure that Ofcom knows that government expects them to be used … in a way that keeps moving forward,” Kyle also told the BBC.
“For example, age verification is one that’s coming into force from January. … I want to make sure that tech companies know that if they don’t take very seriously the need to keep young people protected from certain activity online, certain websites online, that there will be assertive response to it.”
On kids and social media, Kyle said the government wants to “look at the evidence,” pointing to the simultaneous launch of a “feasibility study” that he said would “look at the areas where evidence is lacking.”
Per DSIT, this study will “explore the effects of smartphone and social media use on children, to help bolster research and strengthen the evidence needed to build a safer online world.”
The government said a 2019 review by the U.K. chief medical officer found that the evidence base on the links between social media and smartphone use and children’s mental health was “insufficient to provide strong conclusions.”
“There are assumptions about the impact [social media] has on children and young people, but there is no firm, peer reviewed evidence,” Kyle told the BBC, suggesting that any U.K. ban on kids’ use of social media must be evidence-led.
During the interview with the BBC’s Emma Barnett, Kyle was also pressed on what the government has done to tackle gaps that he had previously suggested the online safety law contained. He responded by flagging a change it has enacted that requires platforms to be more proactive about tackling intimate image abuse.
Tackling intimate image abuse
In September, DSIT announced it was making the sharing of intimate images without consent a “priority offence” under the OSA — requiring social media and other in-scope platforms and services to clamp down on the abusive practice or risk big fines.
“The move effectively bumped up the severity of the intimate image abuse sharing offence within the Online Safety Act, so platforms have to be proactive in removing the content and prevent it from appearing in the first place,” DSIT spokesman Glen Mcalpine confirmed.
In further remarks to the BBC, Kyle said the change has meant social media companies must use algorithms to prevent intimate images from being uploaded in the first place.
“They had to proactively demonstrate to our regulator Ofcom that the algorithms would prevent that material going on in the first place. And if an image did appear online they needed to be taken down as fast as reasonably could be expected after being alerted,” he said, warning of “heavy fines” for noncompliance.
“It’s one area where you can see that harm is being prevented, rather than actually getting out into society and then us dealing with it afterwards — which is what was happening before,” he added. “Now, thousands and thousands of women are now protected — prevented from having the degradation, the humiliation, and sometimes being pushed towards suicidal thoughts because of that one power that I enacted.”