Social media grapples with live audio moderation - GulfToday

Elizabeth Culliford, Reuters

The explosive growth of Clubhouse, an audio-based social network buoyed by appearances from tech celebrities like Elon Musk and Mark Zuckerberg, has drawn scrutiny over how the app will handle problematic content, from hate speech to harassment and misinformation.

Moderating real-time discussion is a challenge for a crop of platforms using live voice chat, from video game-centric services like Discord to Twitter Inc's new live-audio feature Spaces. Facebook is also reportedly experimenting with an offering of its own.

“Audio presents a fundamentally different set of challenges for moderation than text-based communication. It’s more ephemeral and it’s harder to research and action,” said Discord’s chief legal officer, Clint Smith, in an interview.

Tools to detect problematic audio content lag behind those used to identify text, and transcribing and examining recorded voice chats is a more cumbersome process for both people and machines. The absence of extra clues, like the visual signals of video or accompanying text comments, can make moderation more challenging still.

“Most of what you have in terms of the tools of content moderation are really built around text,” said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.

Not all companies make or keep voice recordings to investigate reports of rule violations. Twitter keeps Spaces audio for 30 days, or longer if there is an incident. Clubhouse says it deletes its recording if a live session ends without an immediate user report, and Discord does not record at all. Instead, Discord, which has faced pressure to curb toxic content like harassment and white supremacist material in text and voice chats, gives users controls to mute or block people and relies on them to flag problematic audio.

Such community models can be empowering for users but may be abused and are subject to biases. Clubhouse, which has similarly introduced user controls, has drawn scrutiny over whether actions like blocking, which can prevent users from joining certain rooms, can be employed to harass or exclude people.

The challenges of moderating live audio are set against the broader, global battle over content moderation on big social media platforms, which are criticized for their power and opacity, and have drawn complaints from both the right and left as either too restrictive or dangerously permissive.

Online platforms have also long struggled to curb harmful or graphic live content on their sites. In 2020, a live video of a suicide on Facebook Inc spread across multiple sites. In 2019, a shooting at a German synagogue was live-streamed on Amazon.com Inc-owned gaming site Twitch.

Last Sunday, during the company's public town hall, Clubhouse co-founder Paul Davison presented a vision for how the currently invite-only app would play a bigger role in people's lives, hosting everything from political rallies to company all-hands meetings. Rooms, currently capped at 8,000 people, would scale "up to infinity" and participants could make money from "tips" paid by the audience.

The San Francisco-based company's latest round of financing in January valued it at $1 billion, according to a source familiar with the matter. The funding was led by Andreessen Horowitz, a leading Silicon Valley venture capital firm.

Asked how Clubhouse was working to detect dangerous content as the service expanded, Davison said the tiny startup has been staffing up its trust and safety team to handle issues in multiple languages and quickly investigate incidents. The app, which says it has 10 million weekly active users, has a full-time staff that only recently reached double digits. A spokeswoman said it uses both in-house reviewers and third-party services to moderate content and has engaged advisors on the issue, but would not comment on review or detection methods.

Twitter, which has long faced criticism over its handling of abuse, is currently testing Spaces with 1,000 users.
