Nov 25, 2020

SASB to update internet content moderation standards in view of harmful content and privacy concerns

Mental health impact on moderators one of four key considerations under review

The Sustainability Accounting Standards Board (SASB) expects to have its revised standard for internet content moderation ready by the end of 2021, a complex project in a murky age of hate speech, election misinformation and anti-vax conspiracy theories.

SASB released its taxonomy, the foundation for its standard setting, in early November. It was the first step in a two-stage project that will incorporate input from investors, companies and subject matter experts.

‘The content moderation project is about getting our heads around: what are the exact social issues we’re talking about? How could we potentially categorize them?’ says Greg Waters, an analyst in SASB’s technology and communications sector. ‘And, from there, in what industries are we seeing evidence that these externalities or social issues might apply? Because once we have that framework, we can start to look more specifically at standard setting.’

While content moderation is generally viewed as Twitter and other social media platforms policing illegal content or misinformation, SASB is taking a broad approach. The board will also look at systems used to determine what internet users see on a platform – algorithms that recommend products to buy from online retailers, for example – and it will consider cloud services and internet service providers that make decisions involving content moderation or governance.

Moderation is not just within the purview of Facebook, YouTube, Snapchat and their social media competitors. PayPal reportedly bans far-right figures from using its platform to raise money, GoDaddy de-platformed neo-Nazi site The Daily Stormer and WordPress pulled down a site operated by white supremacist group Vanguard America, according to The Washington Post. While GoDaddy and WordPress didn’t respond to IR Magazine’s requests for comment, PayPal says its policy doesn’t allow its services to be used for activities that promote hate, violence or racial intolerance.

‘We base our reviews of accounts on these parameters, taking action when we deem that individuals or organizations have violated this policy,’ PayPal says in a statement. It declined to specify which former or current customers it has banned, citing confidentiality obligations.

The cost of doing business

Content moderation is often a secretive undertaking among social media operators, which have been pressed by governments, NGOs and platform users to be more open about their strategies and enforcement measures. TikTok, for example, published its first transparency report only in December 2019 and, in an update this year, revealed that moderators removed more than 104.5 mn videos globally in the first half of 2020 for violating its community guidelines or terms of service.

Content moderation is also a financially material issue for tech companies, not only because of regulatory risk and changes to intermediary liability law globally but also because of the huge cost involved in policing content. Facebook alone employs more than 15,000 moderators, part of a broader team of 35,000 people who focus on safety and security, and spends at least $3 bn every year on moderation.

‘We’re still evaluating the financial materiality piece; that’s the next part of SASB’s research project,’ Waters says. ‘But what I would say is that we are seeing strong signals that this is a financially material business issue [for] social platforms… If you think about it really broadly, this is important in terms of engaging and retaining your users and advertisers, as you have to have a platform that doesn’t have harmful content on it, and one that’s also free of more mundane stuff like spam.’

Health and safety

So far, SASB has broken down its framework into four categories for study and consultation in 2021:

  • Harmful content that may include everything from illegal activity such as posting child pornography to misinformation about coronavirus
  • Freedom of expression, including how and whether social media providers are limiting users’ free speech and where to draw the line
  • Privacy, security and the potential trade-offs between data security and allowing law enforcement access to user data
  • Worker health and safety, particularly with regard to staff whose job it is to moderate some of the worst content on the internet, and the mental health impact on that workforce.

SASB’s aim is to move the ambiguous issue of content moderation into a defined set of issues, business impacts and management practices. But it will also examine the human impact.

‘[Content moderation] is a really tough job and it is a necessary one right now,’ Waters says. ‘Those people are doing really important work protecting the rest of the population from having to see that stuff. But there are clearly impacts in terms of the workforces.’

Facebook backlash

The secret life of Facebook’s moderators spilled out into the open last week with the release of a letter, signed by more than 200 moderators, demanding hazard pay for those returning to the office, improved healthcare and mental health support. The letter, addressed to CEO Mark Zuckerberg and executives at Facebook contractors Accenture and CPL, notes that an Accenture content moderator in Texas earns $18 an hour whereas Zuckerberg’s wealth is estimated at more than $100 bn.

In response, Facebook tells IR Magazine that it prioritizes the health and safety of content reviewers: ‘While we believe in having an open internal dialogue, these discussions need to be honest. The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic.’

Facebook adds that moderators have access to healthcare and confidential wellbeing resources from their first day of employment, and that the company exceeds health guidance on keeping facilities safe for in-office work. For now, Facebook’s content moderation will continue to be conducted by people rather than artificial intelligence (AI).

‘While our investments in AI are helping us detect and remove violating content to keep people safe – and we will continue to make [those investments] – we still depend on people to review and train our technology on the most sensitive and high-priority materials,’ a Facebook company spokesperson says. ‘That’s exactly why we are taking these steps to help keep our content moderators safe, in and out of the office.’

It is that type of input that Waters will focus on next year as SASB speaks to social media giants and invites public comments before finalizing its revised standards.

Caroline Byrne

Caroline Byrne is an Irish-Canadian journalist who started her career in London at the Associated Press and Bloomberg. Over the last decade, she has worked both as a freelancer and staff journalist for various organizations including Euromoney, The...

Reporter