Elizabeth Culliford, Reuters
The explosive growth of Clubhouse, an audio-based social network buoyed by appearances from tech celebrities like Elon Musk and Mark Zuckerberg, has drawn scrutiny over how the app will handle problematic content, from hate speech to harassment and misinformation. Moderating real-time discussion is a challenge for a crop of platforms using live voice chat, from video game-centric services like Discord to Twitter Inc’s new live-audio feature Spaces. Facebook is also reportedly dabbling with an offering.
“Audio presents a fundamentally different set of challenges for moderation than text-based communication. It’s more ephemeral and it’s harder to research and action,” said Discord’s chief legal officer, Clint Smith, in an interview.
Tools to detect problematic audio content lag behind those used to identify text, and transcribing and examining recorded voice chats is a more cumbersome process for both people and machines. The absence of extra clues, like the visual signals of video or accompanying text comments, can also make moderation more challenging.
“Most of what you have in terms of the tools of content moderation are really built around text,” said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.
Not all companies make or keep voice recordings to investigate reports of rule violations. While Twitter keeps Spaces audio for 30 days, or longer if there is an incident, Clubhouse says it deletes its recordings if a live session ends without an immediate user report, and Discord does not record at all. Instead, Discord, which has faced pressure to curb toxic content like harassment and white supremacist material in text and voice chats, gives users controls to mute or block people and relies on them to flag problematic audio.
Such community models can be empowering for users but may be easily abused and subject to biases. Clubhouse, which has similarly introduced user controls, has drawn scrutiny over whether actions like blocking, which can prevent users from joining certain rooms, can be employed to harass or exclude users.
The challenges of moderating live audio are set against the broader, global battle over content moderation on big social media platforms, which are criticized for their power and opacity and have drawn complaints from both the right and the left as either too restrictive or dangerously permissive.

Online platforms have also long struggled to curb harmful or graphic live content on their sites. In 2020, a live video of a suicide on Facebook Inc spread across multiple sites. In 2019, a shooting at a German synagogue was live-streamed on Amazon.com Inc-owned gaming site Twitch.

Last Sunday, during the company's public town hall, Clubhouse co-founder Paul Davison presented a vision for how the currently invite-only app would play a bigger role in people's lives, hosting everything from political rallies to company all-hands meetings. Rooms, currently capped at 8,000 people, would scale "up to infinity," and participants could make money from "tips" paid by the audience.
The San Francisco-based company’s latest round of financing in January valued it at $1 billion, according to a source familiar with the matter. The funding was led by Andreessen Horowitz, a leading Silicon Valley venture capital firm. Asked how Clubhouse was working to detect dangerous content as the service expanded, Davison said the tiny startup had been staffing up its trust and safety team to handle issues in multiple languages and to investigate incidents quickly. The app, which says it has 10 million weekly active users, has a full-time staff that only recently reached double digits. A spokeswoman said it uses both in-house reviewers and third-party services to moderate content and has engaged advisors on the issue, but she would not comment on review or detection methods.
Twitter, which has long faced criticism over its ability to curb abuse, is currently testing Spaces with 1,000 users.