sorv

sample of randomized voters


With sorv, government wouldn't need special privileges to report rules violations

In 2024, Mark Zuckerberg revealed that the Biden administration had "pressured" Facebook to censor certain content, particularly content deemed to be misinformation about COVID-19.

It had always been Facebook's policy, of course, that they had the right to remove content that violated their Terms of Service, and (in more recent years) to apply fact-check labels to content that they determined to be misinformation; and any user had the right to report content in either of these categories. What was apparently controversial was that Facebook had been fast-tracking requests from Biden officials.

Whether this is defensible or not is probably a matter of opinion. Suppose a piece of content is obviously deserving of a fact-check label (e.g. a page claiming that the COVID virus itself is a complete hoax), and a Biden official reports it to Facebook and gets a "fact-check: false" label applied to it. Is that good, because it is false and the label got applied accurately? Or bad, because government officials got special privileges? Under their existing system, Facebook doesn't have the resources to give that kind of fast-track reporting privileges to everybody. So, should they give them to nobody at all? Or, is it OK to give these privileges to a small subset of users (government officials) who are probably more educated than average, but might abuse the privilege? Difficult questions.

However, all of this becomes moot if the social media site uses sorv for abuse reports (and for fact-checks), because all user-submitted reports will be adjudicated with a quick turnaround, whether submitted by the government or not.

Recall how abuse reports would work using sorv:

  1. Some subset of users on the site opt in as "jurors", to adjudicate abuse reports.
  2. When a user (not necessarily one of the opt-in "jurors" in step 1) submits an abuse report, it gets reviewed by a small random sample of users (say, 10 users) from the "jury pool" who are currently online, and who look at the content along with the rule that it was alleged to have violated.
  3. If at least some threshold number of jurors (say, 7 out of 10) agree that the content violated the rule, then the report is treated as "valid" and the content is removed.

Depending on the implementation, there may be further options (e.g. the user whose content is removed could appeal, and then the appeal would be reviewed by a different random subset of jurors).
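
To make the flow concrete, here is a minimal sketch in Python of how a site might wire those three steps together. The names (`AbuseReport`, `adjudicate`, the `is_online` and `ask_vote` callbacks) are illustrative assumptions, and the 10-juror panel and 7-vote threshold are just the example numbers from the list above, not parameters any particular site has committed to.

```python
import random
from dataclasses import dataclass
from typing import Callable

# Example parameters from the article: a 10-juror panel, 7 of whom must agree.
SAMPLE_SIZE = 10
THRESHOLD = 7

@dataclass
class AbuseReport:
    content_id: str
    alleged_rule: str  # the rule the content is alleged to have violated

def adjudicate(
    report: AbuseReport,
    jury_pool: list[str],                          # users who opted in as jurors (step 1)
    is_online: Callable[[str], bool],              # is this juror on the site right now?
    ask_vote: Callable[[str, AbuseReport], bool],  # show content + rule, return juror's vote
) -> bool:
    """Return True if the report is upheld and the content should be removed."""
    # Step 2: draw a small random sample from the jurors who are currently online.
    online = [j for j in jury_pool if is_online(j)]
    if len(online) < SAMPLE_SIZE:
        raise RuntimeError("not enough online jurors to form a panel")
    panel = random.sample(online, SAMPLE_SIZE)

    # Step 3: collect votes and apply the threshold.
    votes = sum(1 for juror in panel if ask_vote(juror, report))
    return votes >= THRESHOLD
```

An appeal, in this sketch, could simply be another call to `adjudicate` with the jurors from the first panel excluded from the pool, so that a different random subset reviews it.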

The key point is that in both cases (abuse reports and fact-checks), because the system pushes the report to jurors who are currently online, and who can review it simultaneously, the report can be adjudicated in a few minutes, sometimes in a few seconds. (If an image is flagged as hardcore porn, each juror can look at it and conclude in less than 5 seconds, "Yep, that breaks the rules", [CLICK], and after the votes are collected, the entire process could be completed in less than a minute. Sometimes, adjudicating an abuse report might take longer -- you might have to read several paragraphs to determine if a post qualifies as "Holocaust denial", for example -- but the turnaround would still be minutes, not hours or days.)