Monday, November 25, 2024

Age assurance tech among key UK govt focuses against social media harms

The UK government is taking a tougher stance on protecting children online, threatening tech platforms with stricter regulation unless they introduce measures that keep kids away from harmful content. Developers of age assurance technology should play a part in that, according to the UK’s Technology Secretary Peter Kyle.

The Online Safety Act should be implemented “as quickly and effectively as possible,” according to a new document released last Wednesday by the UK Department for Science, Innovation and Technology. The Draft Statement of Strategic Priorities outlines the government’s key focus areas for online safety.

The new document is part of the Technology Secretary’s drive to ensure tech companies comply with the Online Safety Act, introduced last year. The regulation requires companies to keep children away from harmful and age-inappropriate content, including pornography, violence, hate, bullying, and content promoting suicide, self-harm or eating disorders.

Since the introduction of the new regulation, however, concerns have been brewing that the Act may not live up to its expectations. Kyle, who was appointed head of the department by the new Labour government in July, is hoping to disprove this.

Enforcement of the regulation falls on the UK’s communications regulator Ofcom, which has the power to fine tech firms up to 18 million pounds (US$22.6 million) or 10 percent of their annual global turnover for non-compliance. The Secretary told the media last week that he wants the agency to take a stricter stance toward policing social media companies that have been operating in a “gray area.”

“Some of these companies are spending more on R&D than the British state is in total. So don’t tell me you can’t throw some resources together and have a conversation about how things like age verification can be made more robust, and that safety can’t be built in,” Kyle said in an interview with The Telegraph.

Ofcom’s tasks include publishing Codes of Practice and providing guidance on how companies can comply with their duties. The first edition of the Illegal Harms Codes of Practice and the illegal content risk assessment guidance is expected to be made public in December 2024.

Age assurance tech added to key focuses

The Draft Statement of Strategic Priorities places effective age assurance among its key focuses.

“Services should take advantage of the technologies that are already available to identify child users and ensure that they cannot access harmful content on their services,” says the draft. “Age assurance should be deployed consistently, effectively and fairly to users from all backgrounds and age ranges.”

The document also notes that the government has been supporting the development of third-party solutions for tackling online harm through funding grants, hackathons and innovation challenges. This will help Ofcom outline ambitious recommendations for online services to adopt such technologies.

“The UK safety tech sector has an important role to play by developing innovative solutions to support platforms, improve online safety outcomes and enable agile regulation,” says the document.

The strategy also outlines focuses such as safety by design to prevent online harm, increasing the transparency and accountability of online platforms, maintaining regulatory agility to keep pace with changing technology and behavior, and building an inclusive and resilient online society.

Stricter regulation if tech firms don’t comply

According to the Technology Secretary, tech firms have been claiming that age verification technology cannot yet provide the highly effective age checks required to enforce restrictions on social media use by children under the age of 13. However, companies already have technology that can understand people’s personalities, which means they can determine a person’s age “with some precision,” he notes.

Kyle explained that the government is reluctant to introduce new regulations before it sees the effects of the Online Safety Act. But he also promised the country would not hesitate to introduce stricter measures to keep children safe online, including legislation similar to Australia’s. Last week, that country introduced a bill in parliament that aims to ban social media for children under 16.

For now, the Online Safety Act is proving effective, at least for some tech firms, he adds.

Instagram has been rolling out new safety features, including verifying users’ age by using Yoti’s facial age estimation or by uploading an identity document. Online gaming platform Roblox also announced this month that it plans to limit children under 13 from messaging other users. The decision was welcomed by Ofcom.
