TikTok removes 80 million under-age accounts per year, committee told

Representatives from Meta, TikTok and X were told by TDs that ‘social media is a cesspit’. Photo: PA

By Cillian Sherlock, PA

TikTok removes 80 million accounts of under-age users every year, an Oireachtas committee has been told.

The revelation came during a Children’s Committee hearing on child protection in the context of artificial intelligence.


Representatives from Meta, TikTok, and X (formerly Twitter) were told by TDs that “social media is a cesspit” and their companies were not doing enough to protect children.

One of the issues discussed at committee was age verification of users on apps to protect children.

Meta’s head of public policy in Ireland, Dualta O Broin, suggested that concerns over age verification could be addressed at app store level, taking the burden off individual apps – particularly newer companies that see rapid rises in user numbers.

“That would be a step forward,” he said. “It would be a resolution of the age verification question. We would still have huge responsibilities to ensure that all of these users are then placed into an age-appropriate experience.”

He said other options included having verification carried out by telecommunications companies or at device level.

The social media giant, which owns Facebook, WhatsApp and Instagram, said it dismantled 27 abusive networks and banned almost half a million accounts for child safety violations between 2020 and 2022.

Fine Gael senator Mary Seery Kearney raised concern about social media platforms’ “deliberate manipulation” of users and resultant “behaviour modification”.

She said the companies at the committee had a business model based on the capture of attention, adding that smartphones should be banned for young people.

Ms Seery Kearney said she wanted to see more time limits on app use, adding: “Social media needs to come with a mental health warning.”

TikTok’s public policy lead for child safety, Chloe Setter, said she “totally appreciates” the senator’s concerns, but added there is no agreement among experts on what amount of time is considered “good”.

 

She said TikTok offers take-a-break reminders, usage limits and age-based cut-offs for push alerts.

Meta’s director of safety policy, David Miles, told the politicians their concerns were justified and the company was working with safety experts.

He said the industry had seen a dramatic rise in the youth demographic and that “things need to change”.

Echoing recent comments from Tánaiste Micheál Martin, Fianna Fáil TD Jennifer Murnane O’Connor said the impact of social media on children is “the new public health crisis of our time”.

She said there would soon be funding for schools to support the banning of smartphones during class time.

Susan Moss, head of public policy at TikTok, replied: “I agree with you. Schools are a place for education. They’re not a place for smartphones and the internet.”

More generally, she said TikTok would invest two billion euro in trust and safety in 2024.

The committee was told that more than two million people in Ireland use the platform every month.

Ms Moss said TikTok “meticulously monitors” child sexual abuse material.

 

She said all content on the platform undergoes some form of moderation, including by automated systems, to detect harmful material.

Ms Setter said: “To give you a sense of the effort we’re putting in, we remove on average 20 million suspected under-age accounts every quarter globally.”

The platform is designed for people aged 13 and over.

Fianna Fáil senator Erin McGreehan told the companies: “Social media is a cesspit and X is the worst.”

Her party colleague, senator Malcolm Byrne, said it was a “serious problem” that the social media company had more than halved its number of human moderators under Elon Musk’s ownership, down from 5,500.

He said young people told him the content on X was “far more gratuitous, far more violent, and far more sexual” than other platforms.

He added: “You see far more trolls and bots and misinformation and disinformation on your platform and your AI model is not picking it up.

“And… not picking it up for the wider population, not picking it up for children and teenagers, is really dangerous.”

Claire Dile, X’s director for government affairs in Europe, said the company could make more effort to detect and remove harmful content as quickly as possible.

She said the company had launched a new moderation centre and had begun rehiring moderators as part of a range of enforcement measures that also include AI processes.

She also said fighting child sexual exploitation material, including AI-generated images, is the company’s “number one priority”.
