
Enforcement Guidelines do not address spam (problem of easy/quick disruption vs involved response) #1159

Open
TimidRobot opened this issue Dec 12, 2022 · 1 comment

@TimidRobot

Problem

Regarding version 2.1 of the Contributor Covenant, I don't think the Enforcement Guidelines address the issue of spam. Moderators need to be able to address spam without becoming overwhelmed.

If it is easy to spam, then the cost of disruption is far less than the cost of enforcing the guidelines (progressing through 1. Correction, 2. Warning, 3. Temporary Ban, and 4. Permanent Ban). This is especially true given the tracking and documentation required to progress through the steps.

However, if a moderator responds to spam by immediately jumping to 4. Permanent Ban, there is a risk that they have incorrectly identified the content as spam (or that they're abusing a "spam" designation to avoid accountability, etc.). Unfortunately, this is exacerbated by poor moderation tools (e.g., software that does not allow a message to be sent alongside an enforcement action).

Potential Solutions

  • Explicitly address spam in Enforcement Guidelines
  • Note that the resources invested in enforcement should be, at a minimum, proportional to the resources invested by the community member facing potential enforcement action
    • (a convention participant has invested far more resources than a bot-created account)
  • Explicitly address disruption in Our Standards
  • Explicitly note relationship to potential Terms of Service (optional example of unacceptable behavior for communities with a terms of service?)
@musingvirtual

In my experience this is absolutely a legitimate problem. When I was moderating a large community and enforcing the CoC, we had to make exceptions to the strict process for this issue because the amount of labor involved was so large.

My standard: go directly to a ban when you're dealing with a fairly obvious bot that has no other interactions with the community, provided you're able to send a message along the lines of "let us know here if we've made a mistake." Use a streamlined process when you're dealing with someone who isn't an obvious bot; the process in that case can be streamlined with stock messages, etc.

I think we can certainly write this kind of thing into the guidelines as part of CC3. I'm also interested in addressing disruption and platform manipulation as you mentioned, because those tend to be less likely to come from bots and more likely to come from human brigades.

@EthicalSource EthicalSource deleted a comment from trevpeace Mar 5, 2024