
Explicitly stated ban of explicit content, or introduction of explicit accounts #50

Open
nielsk opened this issue Jun 1, 2013 · 26 comments

Comments


nielsk commented Jun 1, 2013

In the past we have seen several accounts popping up that used hardcore porn images for avatars and backgrounds, or even posted such images, before the accounts got banned.

Since app.net allows sign-ups for minors, such content should be outright banned in the ToS. The same goes for depictions of extreme violence in avatars, backgrounds, or pictures that get embedded via OEmbed (and thus into the stream views of several clients).

The problem is not only the minors: this content can suddenly make viewing the global stream, or services like appneticus.com that show the last 100 users, NSFW or problematic when one reads app.net while children are present.

The banning or suspension of those accounts should also happen faster, especially once a ban on such content is part of the ToS.

An alternative I'd like to propose, and which I would prefer, is the introduction of "explicit accounts" on app.net. My idea is the following:
There should be explicit and safe accounts. And users can set their stream to explicit or safe. If set to safe, accounts set to explicit are filtered out.

My suggestion is that "explicit" becomes an annotation on a post that gets set by the API depending on the account setting.
In addition, "safe" users shouldn't see the avatar and background image of an explicit account, but only the default avatar and background, plus a mention somewhere that the account is explicit. That's especially useful for "safe" users who get followed by explicit accounts.

When a user signs up, he gets the option to set the account to explicit (and of course can do so later in the settings).
Safe accounts see only safe content by default; people looking at app.net who are not logged in also see only safe content; and explicit accounts see both by default. Of course, a safe account should be able to opt in to seeing explicit content as well.

If a user gets reported for posting explicit content, the account should be set to explicit and cannot be set back to safe by the user. The user's past posts should be set to explicit, too. But the user should have the option to appeal. If the account gets set back to safe, only the explicit posts should stay marked explicit, while all other past posts get set to safe again.

If a user who didn't get reported decides to set her or his own account from explicit to safe, the user can do so. Since previous posts carry the explicit annotation, they will still be filtered out for safe users. (You could maybe show something like "explicit post" without the content, just like deleted posts are shown by some clients as deleted.)
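The filtering described above could be sketched roughly like this (a minimal sketch: the annotation type `net.app.core.explicit` is hypothetical, app.net defines no such core annotation, and the post dicts only mimic the API's JSON shape):

```python
# Sketch of the proposed client-side filtering. The annotation type
# "net.app.core.explicit" is hypothetical; it stands in for whatever
# annotation the API would attach based on the poster's account setting.

def is_explicit(post):
    """Return True if the post carries the (hypothetical) explicit annotation."""
    return any(
        ann.get("type") == "net.app.core.explicit"
        for ann in post.get("annotations", [])
    )

def visible_posts(posts, viewer_mode):
    """Filter a stream for a viewer whose setting is 'safe' or 'explicit'."""
    if viewer_mode == "explicit":
        return list(posts)  # explicit viewers see everything
    return [p for p in posts if not is_explicit(p)]

# Toy posts shaped loosely like app.net API responses:
posts = [
    {"text": "hello", "annotations": []},
    {"text": "nsfw", "annotations": [{"type": "net.app.core.explicit", "value": {}}]},
]
```

Because past posts keep their annotation, a force-set account's history stays filtered for safe viewers even after the account itself is switched back, exactly as described above.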

The second way would be my preference, with an addendum to the ToS about explicit accounts, their meaning, and the forced setting. The reasons I prefer it are the following:

  1. My intuition is that accounts that plan to be explicit anyway will set their account to explicit most of the time. That way their content gets filtered out by default for users who only want to see safe content.
  2. Right now those accounts get banned, probably because there are enough people reporting those accounts under the ToS that they are disturbing the quality of the service for them. But there might also be users who want to see the explicit content. Most of the users are adults and should be able to see the content, if they want to.
  3. I don't like banning if it is not necessary. And I expect those accounts to pop up again and again and need banning, and it will take half a day or a whole day, depending on the time zone the account signed up in, until each is banned. Given assumption 1, those accounts won't turn up in "safe" streams anyway.
@kenleyneufeld

I support moving in this direction. It seems better to set a policy that allows a wide array of interests without banning, while also protecting certain categories of users.


cgiffard commented Jun 2, 2013

Hrm, what about people who are fine with consuming NSFW content, but don't produce any of their own? Having to flag their account as 'explicit' so they can see 'explicit' accounts might create an 'explicit-only ghetto' where people who publish (or have published) explicit content are wholesale unable to talk to those who don't, or it'll at least be a frustration - unless I'm misunderstanding where you're going with this.

I actually fully support this kind of flagging, but I feel like the flag should be able to be applied on a per-post level (perhaps 99.99% of my content is safe for kids, but there's just that one thing I wanted to share that happened to have drug references/sex/violence in it...) without having to toggle my account type all the time.

Here's how I see this working from a user perspective:

There should be three account tiers:

  1. I never publish explicit material
  2. I sometimes publish explicit material
  3. I often publish explicit material

If 2. is selected, the user is prompted to flag posts (maybe a check box, initially unchecked, below the post entry text box) as explicit on a post-by-post basis. If 3. is selected, posts are assumed explicit, and if 1. is selected, posts are assumed safe.
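A minimal sketch of that defaulting logic (the tier names 'never'/'sometimes'/'often' are illustrative, not from any actual API):

```python
# Sketch of the three-tier defaulting described above. Tier names are
# illustrative placeholders, not real app.net settings.

def post_is_explicit(account_tier, per_post_flag=None):
    """Resolve a post's explicit flag from the account tier.

    per_post_flag is only consulted for 'sometimes' accounts, where the
    client shows an initially unchecked checkbox below the post box.
    """
    if account_tier == "never":
        return False   # tier 1: posts assumed safe
    if account_tier == "often":
        return True    # tier 3: posts assumed explicit
    # tier 2 ('sometimes'): fall back to the per-post checkbox
    return bool(per_post_flag)
```

The point of the middle tier is that the account setting only picks the default; the poster still decides per post.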


MacLemon commented Jun 2, 2013

Who gets to decide what is “explicit”, and under which jurisdiction or morality? What may offend one person may be totally fine for others. (Being offended by something is a person's own choice, not a technical property.) What may be legal in one country may be illegal in another. Which law gets to be applied? The viewer's or the poster's?
What is not-safe-for-work heavily depends on your work environment. The same content may be totally fine for your job or someone else's, or it may just not be - no matter whether you work in IT, software development, porn, at a law firm, or anywhere else.

It is not the task of ADN to raise children or to act as a substitute for parental guidance. If you don't want certain content (to be specified by each individual on their own) to show up while somebody is watching you, then don't access that platform in that situation.

Regarding three account tiers of “explicitness”: Ask yourself the questions I stated above. Will your choice hold up against everybody else's view of your content? Are you to be held liable if it doesn't?

ADN is just a medium. It shall not be the medium's choice to censor the messages it transports.


lomifeh commented Jun 2, 2013

I think anything like this needs to be very carefully thought out. No one wants ADN becoming a content manager or filter. I'd say let people post what they want; if they violate the ToS, let consequences occur.


cgiffard commented Jun 2, 2013

I don't think anybody is suggesting making ADN an arbiter of content 'morality', or whatever.

Whether a post is explicit or not is exclusively the decision of the person who posts it.

Secondly, I'm not after this because I'm offended easily. I'm interested in this because I am offensive, often. And I shouldn't have to risk account suspension because I accidentally pissed off some prude with nothing better to do - I should be able to ensure they never see my post in the first place.


cgiffard commented Jun 2, 2013

BTW this isn't censorship. This is opt-in content classification.

Vimeo does it quite effectively: https://vimeo.com/help/faq/content_ratings


nielsk commented Jun 3, 2013

@MacLemon The current ToS state: "Use the Service in any manner that could interfere with, disrupt, negatively affect or inhibit other users from fully enjoying the Service or that could damage, disable, overburden or impair the functioning of the Service;"

Which means that essentially enough reports from users about a certain user should get him banned, because the user "negatively affects other users from fully enjoying the service".
Each account that had a hardcore porn image as avatar and background got banned (which, by the way, after some research I did, is everywhere either banned completely or at least not allowed to be shown to minors), and my guess is that the reasoning is exactly that sentence. That's why I want a clearer definition of what is allowed and what is not (apparently posting hardcore porn is not) - or, better imho, an environment where people who do not want to see such content at all don't get to see it.
I can actively choose not to go to a porn site. But part of the app.net experience is to use global, or sites that show me the latest 100 users. When I know that hardcore porn pictures will be posted there, those parts are not usable anymore for me (and apparently for other users, too, judging from recent discussions on app.net about the aforementioned accounts).


nielsk commented Jun 3, 2013

@MacLemon btw., as @cgiffard mentioned: this is not censorship. It would be censorship if app.net checked your posts after you've written them, but before they are posted, to decide whether they should go to global or not.
I'm asking for a system where people can classify their posts, to improve the user experience for everyone and reduce the banning of accounts.


neonichu commented Jun 3, 2013

In my opinion, ADN should stay out of regulating content except for spam and stuff that's illegal under whatever jurisdiction applies to them.

Self-classification sounds like a good thing until you think about the people who don't classify their stuff correctly. At that point, we are back to square one, because someone at ADN has to classify the content for them, or you get at least some "unsafe" content back in global.


cgiffard commented Jun 3, 2013

@neonichu Sure, most people will leave their content unclassified. Which should be an option, and that's totally fine. But people who want to post explicit content should be afforded the option to post it and tag it as such, and avoid being banned from the service for being too offensive.

Because of the way people consume the global feed (which isn't necessarily a bad thing) we need a way to simultaneously satisfy their requirements for a relatively safe feed (normal caveats apply, of course) and to enable people to post more explicit material.


neonichu commented Jun 3, 2013

@cgiffard maybe it could be turned upside down: instead of banning, accounts can be marked as "unsafe" by ADN staff. By default, "unsafe" accounts are invisible to users. Let users opt-in to also see those accounts.


cgiffard commented Jun 3, 2013

Well, it's a ToS violation to post explicit content anyway. So you're giving people a way to do it, and relaxing the ToS to say "well if you tag it like this we'll let you post it".


cgiffard commented Jun 3, 2013

@neonichu I think on some level, that makes sense. There are people who are all about the explicit content, and that's fine for them. However, once in a blue moon, I might want to post something explicit. Under the 'unsafe' model, I can't - because my account will be flagged and then nobody will be able to see any of the other totally safe things I've been saying (which is most of them.)


neonichu commented Jun 3, 2013

@cgiffard could be augmented by giving users a way to self-classify individual posts as "unsafe", too. If they don't do it properly, mark their whole account as "unsafe".


cgiffard commented Jun 3, 2013

@neonichu Isn't that what I've proposed?


nielsk commented Jun 3, 2013

@neonichu it is part of my suggestion: don't ban accounts that are wrongly labeled, but mark them as explicit and let them appeal, etc.

@cgiffard where in the ToS does it say that explicit content is not allowed? I can't find that anywhere.

And yes a core annotation for explicit content would be great (also part of my suggestion). If a "safe" user (an explicit user gets it attached all the time anyways) could set it on a per-post basis that would be really good.


neonichu commented Jun 3, 2013

@cgiffard It's a merge of your proposal and mine :).


cgiffard commented Jun 3, 2013

@nielsk Sorry, that was a bad assumption on my part. Either way, there must have been some rationale for banning the pornographic accounts, or they were removed for debatably capricious reasons.


neonichu commented Jun 3, 2013

@nielsk It does. Sorry, I missed that paragraph in your proposal.


cgiffard commented Jun 3, 2013

I think you should be able to voluntarily set your account as unsafe, or appeal to be set back to a "safe" content type should you have been moved there by an ADN staffer.


nielsk commented Jun 3, 2013

@cgiffard that's what I proposed ;)


cgiffard commented Jun 3, 2013

Also the setting for what content you want to consume should be independent from the flag on your account. A safe user who never posts explicit content should be able to follow users who do.

@tollerkerl

I like the idea of "safe streams". The default setup after signing up should be that the new user can only view the safe stream and, if he or she likes, can change that setting to "show me explicit content, too".


nielsk commented Jun 3, 2013

@cgiffard also part of what I proposed ;)


cgiffard commented Jun 3, 2013

Just as a point of comparison, ADN already has account classification settings: https://account.app.net/settings/

You can classify your account as a human, feed, or bot.

(Not saying that solves this issue - just interesting to note the parallels.)


cgiffard commented Jun 3, 2013

@nielsk Are we on the same page? I can't find mention of that in the issue description...? :S

This is my concern:

> If set to safe, accounts set to explicit are filtered out.

I think that should be a separate setting - I shouldn't have to indicate to everybody (severely restricting my own available pool of interactions) that my account produces explicit content in order to see the explicit content of others.


Edit: Sorry, totally missed that! I think you need to make the distinction between an initial setting that is derived from another setting, and the two settings actually being the same. :)
