Saturday, December 21, 2024

TikTok to slash ‘hundreds’ of jobs as it shifts towards AI: UK staff receive email about ‘difficult’ cuts

TikTok is axing ‘several hundred’ jobs in the UK and Malaysia as part of a drive for more artificial intelligence in its content moderation.

Around 125 people have been told they might be made redundant, according to the Communication Workers Union.

TikTok employs about 500 people in its UK moderation division, and an internal email to staff seen by MailOnline warned of ‘difficult’ decisions.

The email blamed ‘fluctuating volumes, growing complexity, and a wider range of harmful content and bad actors’.

As a result, the company said its ‘standards for content moderation have risen’ and it requires more ‘advanced technology’ to tackle this.

On Friday, it was announced that around 500 employees at TikTok’s Malaysian branch would lose their jobs as well.

The employees in Malaysia, most of whom were involved in the firm’s content moderation operations, were informed of their dismissal by email late Wednesday. 

TikTok confirmed the layoffs and said that several hundred employees were expected to be impacted globally as part of a wider plan to improve its moderation operations.

The social media company employs a mix of automated detection and human moderators to review content posted on the site.

Malaysia reported a sharp increase in harmful social media content earlier this year and urged firms, including TikTok, to step up monitoring on their platforms.

A TikTok spokesperson told MailOnline: ‘We’re making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation.

‘We expect to invest $2bn globally in trust and safety.’

The spokesperson added that the proportion of violating videos removed by automated processes is up from 62 per cent last year, and that 98 per cent of those videos were removed by TikTok before a user report.

The internal letter seen by MailOnline read: ‘I am writing to share some important but difficult news about our Trust and Safety Regional Operations teams.

‘In recent years, our industry has faced increasing demands on our moderation efforts due to fluctuating volumes, growing complexity, and a wider range of harmful content and bad actors. Concurrently, the standards for content moderation have risen with technological advancements, and we remain steadfast in prioritising user safety.

‘To meet these evolving challenges, we are proposing to make some changes to our in-house moderation teams. The proposed changes would enable us to better utilise our multifaceted operating approaches. 

‘Firstly, it would allow us to further leverage advanced technology for greater accuracy, consistency, and scalability. Secondly, it would enhance our collaboration with our third-party partners, to manage fluctuating volumes and optimise resource planning. 

‘Last but not least, it would enable our Trust and Safety professionals to focus on more nuanced, complex and higher priority work moving forward.

‘Earlier today, we informed some team members whose roles are potentially impacted, and that consultation will be carried out in alignment with local legal requirements.

‘I understand that news like this can create uncertainty, and we are fully committed to supporting everyone during this challenging time. I want to take a moment to express my deepest gratitude to our teams for their dedication, hard work, and commitment throughout this journey.

‘I believe this restructuring would strengthen our regional operations, making us more efficient, effective, and resilient. It would enable us to take on more challenging work while continuing our mission to provide a safe environment where users can inspire creativity and bring joy.’

Earlier this year, TikTok began labelling content created using artificial intelligence when it’s uploaded from outside its own platform in an attempt to combat misinformation.

The company said in a statement: ‘AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated.

‘Labeling helps make that context clear—which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year.’

TikTok’s shift in policy is part of a broader attempt in the technology industry to provide more safeguards for AI usage. 

In February Meta announced that it was working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.

Users on Facebook and Instagram would see labels on AI-generated images.

Google said last year that AI labels are coming to YouTube and its other platforms.
