sorv: sample of randomized voters
With sorv, government wouldn't need special privileges to report rules violations

In 2024, Mark Zuckerberg revealed that the Biden administration had "pressured" Facebook to censor certain content, particularly content deemed to be misinformation about COVID-19. It had always been Facebook's policy, of course, that they had the right to remove content that violated their Terms of Service, and (in more recent years) to apply fact-check labels to content that they determined to be misinformation; and any user had the right to report content in either of these categories. What was apparently controversial was that Facebook had been fast-tracking requests from Biden officials.

Whether this is defensible is probably a matter of opinion. Suppose a piece of content obviously deserves a fact-check label (e.g. a page claiming that the COVID virus itself is a complete hoax), and a Biden official reports it to Facebook and gets a "fact-check: false" label applied to it. Is that good, because the content is false and the label was applied accurately? Or bad, because government officials got special privileges? Under their existing system, Facebook doesn't have the resources to extend that kind of fast-track reporting privilege to everybody. So should they give it to nobody at all? Or is it OK to give these privileges to a small subset of users (government officials) who are probably more educated than average, but who might abuse the privilege? Difficult questions.

However, all of this becomes moot if the social media site uses sorv for abuse reports (and for fact-checks), because all user-submitted reports would be adjudicated with a quick turnaround, whether submitted by the government or not. Recall how abuse reports would work using sorv:
The key point is that in both cases, because the system pushes the abuse report to jurors who are currently online and who can review it simultaneously, an abuse report can be adjudicated in a few minutes, sometimes in a few seconds. (If an image is flagged as hardcore porn, each juror can look at it and conclude in under five seconds, "Yep, that breaks the rules," [CLICK], and once the votes are collected, the entire process could be completed in less than a minute. Sometimes adjudicating an abuse report would take longer -- you might have to read several paragraphs to determine whether a post qualifies as "Holocaust denial," for example -- but the turnaround would still be minutes, not hours or days.)
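The adjudication flow described above -- push the report to a random sample of currently-online users, collect their votes, and apply the majority's verdict -- can be sketched in a few lines. This is a minimal illustration, not the actual sorv implementation; the function and parameter names (`adjudicate_report`, `sample_size`, `threshold`) are hypothetical, and a real system would collect votes asynchronously from live users rather than from callables.

```python
import random

def adjudicate_report(online_users, report, sample_size=10, threshold=0.5):
    """Sketch of sorv adjudication: push a flagged item to a random
    sample of currently-online users (the 'jury') and tally their votes.

    Each juror is modeled as a callable that takes the report and
    returns True ('this breaks the rules') or False. Returns the
    jury's verdict: True if the share of yes-votes exceeds threshold.
    """
    # Draw jurors at random, without replacement, from whoever is online.
    jurors = random.sample(online_users, min(sample_size, len(online_users)))
    votes = [juror(report) for juror in jurors]
    return sum(votes) / len(votes) > threshold

# Simulated jury pool: 8 of 10 online users would vote "violates rules".
pool = [lambda r: True] * 8 + [lambda r: False] * 2
verdict = adjudicate_report(pool, report="flagged image")
print(verdict)  # True: 80% yes-votes exceeds the 50% threshold
```

Because every juror reviews the report at the same time, the wall-clock turnaround is bounded by the slowest single review plus the tally, which is why an obvious violation can be resolved in under a minute.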