Concerns raised after social media giants approve anti-LGBTQ+ adverts

Global Witness submitted ads that used extreme and violent language to three social media companies, including Facebook owner Meta, for approval. Photo: PA

By Gráinne Ní Aodha, PA

Concerns have been raised about the processes used by social media giants to block advertisements containing hateful language towards the LGBTQ+ community.

NGO Global Witness submitted ads that used extreme and violent language to three social media companies for approval.


Ten ads were submitted to Facebook, TikTok and Google as part of the group’s investigation.

Both YouTube, which is owned by Google, and TikTok approved all 10 ads, while Facebook rejected two.

Global Witness removed all the ads after they had been approved and before they were published.

The social media companies said that their content-screening processes are constantly evolving and that they have multiple steps in place to monitor and remove online content.


It is also possible for ads to be removed after they go live, as social media companies have reporting mechanisms that can trigger further scrutiny.


A spokesperson for Facebook owners Meta said: “Hate speech has no place on our platforms, and these types of ads should not be approved.

“That said, these ads never went live, and our ads review process has several layers of analysis and detection, both before and after an ad goes live.

“We continue to improve how we detect violating ads and behaviour and make changes based on trends in the ads ecosystem.”

A spokesperson for TikTok said: “Hate has no place on TikTok. Our advertising policies, alongside our community guidelines, prohibit ad content that contains hate speech or hateful behaviour.


“Ad content passes through multiple levels of verification before receiving approval and we remove violative content. We regularly review and improve our enforcement strategies.”

The concern comes as Minister for Media Catherine Martin signed ministerial orders on Wednesday to establish the media regulator Coimisiun na Mean, which it is hoped will reduce harmful content online.

The Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media said in a statement to the PA news agency that the establishment of Coimisiun na Mean and the appointment of an online safety commissioner will put more pressure on social media companies to reduce hateful content.


The online safety commissioner, along with the other commissioners and the chair of the commission, is expected to be formally appointed on March 15th, when the Coimisiun is due to be established.

“Coimisiun na Mean will have a range of powers to monitor and enforce compliance with online safety codes,” the department said.

“For example, if a service is suspected to be non-compliant, An Coimisiun can appoint authorised officers to investigate and this may lead to the imposition of a financial sanction of up to €20 million or 10% of turnover.”

The Online Safety and Media Regulation (OSMR) Act provides the legal basis for the online safety commissioner to establish individual complaints schemes for online platforms.

This would allow individuals to submit complaints about the availability of suspected harmful online content.

The department said “it is not envisaged” that an individual complaints scheme would be established until systemic regulation, through online safety codes, has been allowed to “bed-in”.

No timeline has been given on how long this will take.

“The role of the commissioner will be to develop and enforce a regulatory framework for online safety for certain online services which host user-generated content,” it said.

“A key feature of the regulatory framework for online safety is the power of the online safety commissioner to create and apply obligations through binding online safety codes.

“These codes will require designated online services to take measures to tackle the availability of defined categories of harmful online content and can regulate commercial communications (advertising, sponsorship) made available on those services.

“These categories of harmful online content include online content linked to 42 existing offences, including those under the Harassment, Harmful Communications and Related Offences Act 2020 and the Prohibition of Incitement to Hatred Act 1989.”
