
Decentralised Moderation for Interoperable Social Networks: A Conversation-based Approach for Pleroma and the Fediverse

Vibhor Agarwal, Aravindh Raman, Nishanth Sastry, Ahmed M. Abdelmoniem, Gareth Tyson, Ignacio Castro

The International AAAI Conference on Web and Social Media (ICWSM), 2024.

Abstract

The recent development of decentralised and interoperable social networks (such as the "fediverse") creates new challenges for content moderators. This is because millions of posts generated on one server can easily "spread" to another, even if the recipient server has very different moderation policies. An obvious solution would be to leverage moderation tools to automatically tag (and filter) posts that contravene moderation policies, e.g. related to toxic speech. Recent work has exploited the conversational context of a post to improve this automatic tagging, e.g. using the replies to a post to help classify if it contains toxic speech. This has shown particular potential in environments with large training sets that contain complete conversations. This, however, creates challenges in a decentralised context, as a single conversation may be fragmented across multiple servers. Thus, each server only has a partial view of an entire conversation, because conversations are often federated across servers in a non-synchronised fashion. To address this, we propose a decentralised conversation-aware content moderation approach suitable for the fediverse. Our approach employs a graph deep learning model (GraphNLI), trained locally on each server, that combines post and conversational information captured through random walks to detect toxicity. We evaluate our approach with data from Pleroma, a major decentralised and interoperable micro-blogging network containing 2 million conversations. Our model effectively detects toxicity on larger instances, trained exclusively on their local post information (0.8837 macro-F1). Yet, we show that this approach does not perform well on smaller instances that lack sufficient local training data. Thus, in cases where a server contains insufficient data, we strategically retrieve information (posts or model parameters) from other servers to reconstruct larger conversations and improve results. With this, we show that we can attain a macro-F1 of 0.8826. Our approach has considerable scope to improve moderation in decentralised and interoperable social networks such as Pleroma or Mastodon.
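
The conversation-aware component builds on GraphNLI, which samples a post's surrounding context with root-seeking random walks over the reply tree. The sketch below is a minimal illustration of that sampling step, not the repository's actual implementation; the `parents`/`children` maps, the `p_up` bias, and the walk length are hypothetical choices for the example.

```python
import random

def root_seeking_walk(parents, children, start, length=4, p_up=0.75):
    """Sample conversational context for `start` via a root-biased walk.

    parents  : dict mapping post_id -> parent post_id (None for the root)
    children : dict mapping post_id -> list of reply post ids
    p_up     : probability of stepping to the parent at each hop; the
               bias towards the root pulls in ancestor context.
    Returns the list of visited post ids, beginning with `start`.
    """
    walk = [start]
    node = start
    for _ in range(length):
        parent = parents.get(node)
        kids = children.get(node, [])
        if parent is not None and (random.random() < p_up or not kids):
            node = parent               # step towards the conversation root
        elif kids:
            node = random.choice(kids)  # occasionally explore replies instead
        else:
            break                       # isolated post: nowhere to walk
        walk.append(node)
    return walk

# Example: a four-post thread root -> a -> b -> c
parents = {"root": None, "a": "root", "b": "a", "c": "b"}
children = {"root": ["a"], "a": ["b"], "b": ["c"], "c": []}
print(root_seeking_walk(parents, children, "c"))  # e.g. ['c', 'b', 'a', 'root']
```

The posts visited by such walks would then be embedded and aggregated with the target post before toxicity classification; see the paper for the actual walk weighting and aggregation.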
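
For instances with too little local data, the abstract mentions retrieving model parameters from other servers. One plausible way to combine such parameters is a FedAvg-style weighted average. The sketch below assumes PyTorch state dicts with identical keys and shapes; it is illustrative only and not the repository's actual federation protocol.

```python
import torch

def average_parameters(state_dicts, weights=None):
    """FedAvg-style combination of parameters fetched from peer instances.

    state_dicts : list of torch state_dicts with identical keys and shapes
    weights     : optional per-instance weights, e.g. local post counts;
                  defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0] * len(state_dicts)
    total = float(sum(weights))
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)) / total
        for key in state_dicts[0]
    }

# A small instance would load the averaged weights instead of training
# from scratch on insufficient local data, e.g.:
# model.load_state_dict(average_parameters([peer_a, peer_b], [n_a, n_b]))
```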
