sorv

sample of randomized voters


Why sorv would make it easier for content creators to accept criticism

The problem with criticism as it is usually delivered is that it's perfectly rational for a content creator to believe there are reasons it might not be valid. In particular, consider these two cases:

  • Suppose that one person gives you a piece of criticism. It's possible that this is just that one person's opinion, and that most other people in your target audience would not share it.
  • Suppose instead that a group is evaluating your work, one person voices a piece of criticism out loud, and everyone else in the group subsequently agrees. It's possible that the others would not have independently reached the same conclusion, but that the first speaker influenced everybody else (either consciously -- "That's my boss, I'd better agree with her" -- or subconsciously, if the first speaker had a particularly persuasive face and voice).

However, if you show the content to a random sample of your target audience, who evaluate the content without communicating with each other, and most people in the sample independently come up with the same criticism, then most of the target audience will probably share the same opinion.
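To make that intuition concrete, here is a minimal Python sketch (the function name and the illustrative numbers are mine, not part of any sorv design): if only a small minority of the audience actually holds a criticism, the chance that most of a 20-person independent sample raises it anyway is vanishingly small, so broad agreement in the sample is strong evidence about the audience as a whole.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least
    k of n independent reviewers voice an opinion that a fraction p
    of the whole audience holds."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If only 30% of the audience shares a criticism, seeing 15 or more
# of 20 independent reviewers raise it is extremely unlikely:
print(prob_at_least(15, 20, 0.3))  # well under 1 in 1,000
```

The independence assumption is doing the work here: this calculation is only valid because the reviewers do not communicate, which is exactly the condition the sorv setup enforces.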

It might be naively optimistic to think that even in these circumstances, most content creators would graciously accept criticism. Still, the best you can do is present evidence that a representative random sample of their target audience looked at the content independently and that most of them formed the same opinion. A defensive content creator could still respond by saying:

  • that the random sample doesn't count because "You only sampled 20 people; there are still 10,000 other people in the category that I'm trying to reach." (Occasionally you see comments like this in the wild, from people who don't understand why random sampling is predictive: the margin of error depends almost entirely on the size of the sample, not the size of the population.)
  • that the content is "ahead of its time", or that the target audience is otherwise not in a position to appreciate it. This could be true or false; we have no way of testing whether future generations might appreciate the content more. But it does mean that if the creator's goal was to make content that appeals to the target audience today, then they failed.
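The sampling objection in the first bullet can be checked directly. Below is a small Python sketch using the standard normal-approximation margin of error with the finite-population correction (the function name and the 95% z-value are illustrative assumptions, not anything from the sorv proposal): whether the audience is 10,000 people or 10 million barely changes what a 20-person sample tells you.

```python
import math

def margin_of_error(sample_size, population_size, p=0.5, z=1.96):
    """Approximate 95% margin of error for an observed proportion p,
    with the finite-population correction applied."""
    se = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * se * fpc

# The population size barely changes the answer:
print(round(margin_of_error(20, 10_000), 3))      # 20 out of 10,000
print(round(margin_of_error(20, 10_000_000), 3))  # 20 out of 10 million
```

A sample of 20 still has a wide margin, but the point stands: growing the sample shrinks the margin, while growing the population does essentially nothing to it.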

This argument applies to the case where someone submits content to be rated on quality, but the same reasoning applies if the user submits a "fact-check" rebuttal to be voted on by other fact-checkers, or a terms-of-service violation report to be voted on by other users who handle abuse reports -- it's presumably easier to accept rejection when multiple people independently vote that your submission is invalid.

One difference is that when submitting a fact-check or a terms-of-service report, if the other users vote it down, the original user can't use the "great art ahead of its time" excuse. Great art may indeed be ahead of its time, but there is no such thing as a fact-check or an abuse report that's ahead of its time -- fact-checks are based on the facts as they are understood today, and abuse reports on the rules as they would be understood by the average person.