sorv (sample of randomized voters)
A scalable, fair and transparent system for arbitrating user-submitted "fact-checks" would be:
From this point on, I'll discuss this hypothetical fact-check system as it would apply to Twitter/X specifically, because: (a) Twitter has already set the precedent of letting users (rather than company employees) submit Community Notes and vote on whether they are valid, but (b) existing evidence shows that this is not effective. With the changes described above, however, it could be. First, the existing countermeasures against misinformation do not work. Any user can simply post a reply to something that is obviously false or misleading; I don't know of any systematic studies of this, but anyone who has tried it against a high-profile account has probably seen anecdotal evidence that it's ineffective:
Since replying to misinformation tweets does not work by itself, what about submissions to Community Notes? I applied for the Community Notes program in October 2023 but never received a response (other users have reported similar experiences). After some searching, I found an article by someone who had been admitted as a Community Notes volunteer (who is remaining anonymous for the purposes of this write-up), so I messaged them and asked if they could submit Community Notes for three tweets that I had found:
None of the proposed Community Notes resulted in any action being taken. The volunteer told me that if a submitted note doesn't happen to get any attention, it "just sits there" without getting any more votes. This echoes the experience of Slate editor Nitish Pahwa as a Community Notes volunteer -- he reported that the most common response to the Notes he submitted was not that they were rejected, but that they got no engagement at all from other volunteers. This, in turn, mirrors the "pile-on lottery" effect of social media generally, including the example I go to most often: I post a lot of silly jokes on Reddit, and once in a while one of them blows up to 50,000 upvotes, but most of them "just sit there" because they don't gain "traction", and it has nothing to do with the quality of the post. And it's easy to see why the same fallacies that lead people to believe that social media "works" meritocratically would also lead them to believe that Community Notes works meritocratically:
Solution: Peer-ratio-enforced fact-checking

But these problems (in particular, the problem of a fact-check "just sitting there" with nobody acting on it) go away if you enforce a rule that every user has to participate in 10 votes on other fact-checks for every fact-check that they submit of their own. As long as the user base is relatively stable (or growing), this ensures that there will be enough people available to vote on the fact-check. (Whether people actually will vote on the fact-check depends on the implementation. If all newly submitted fact-checks simply go into a pool of "unadjudicated" fact-checks, and users are free to browse that list and choose the ones they want to vote on, then users may ignore the more boring-looking fact-checks in favor of the ones that look more interesting. My recommendation would be to require voters to handle submitted fact-checks in First-In-First-Out fashion -- you are automatically assigned to vote on the oldest submitted fact-check that doesn't have enough votes yet. Voting on incorrect or badly written fact-checks is part of the job of being a volunteer; that way, the people submitting those fact-checks receive feedback that they have room for improvement.)

Another advantage of peer-ratio-enforced fact-checking is that you can "spend" your fact-checks on any post or comment that you want, even a low-traffic post from a user with very few followers. By contrast, in the existing system, suppose for the sake of argument that you're a Community Notes volunteer who has identified a post from a low-profile user that is clearly wrong. You'd like to flag it, but you feel that the time of the other volunteers is a precious and limited resource, and you wouldn't want to "waste" it on a low-profile post that won't do a lot of harm anyway.
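The ratio rule and FIFO assignment could be sketched roughly like this in Python. Everything here is hypothetical -- the class and method names, and the design choice that each submitted fact-check itself waits for 10 votes, which is what keeps the 10-to-1 ratio in balance:

```python
from collections import deque

VOTES_PER_SUBMISSION = 10  # each submission is "paid for" with 10 votes on others' checks

class FactCheckPool:
    """Hypothetical sketch of a peer-ratio-enforced fact-check pool."""

    def __init__(self):
        self.pending = deque()   # FIFO queue of (text, votes_still_needed)
        self.votes_cast = {}     # user -> votes cast on other people's fact-checks
        self.submissions = {}    # user -> fact-checks submitted so far

    def can_submit(self, user):
        # One submission is earned per 10 votes cast on other fact-checks.
        earned = self.votes_cast.get(user, 0) // VOTES_PER_SUBMISSION
        return earned > self.submissions.get(user, 0)

    def submit(self, user, text):
        if not self.can_submit(user):
            raise PermissionError("vote on more fact-checks first")
        self.submissions[user] = self.submissions.get(user, 0) + 1
        # Each fact-check waits for 10 votes, matching the 10-to-1 ratio.
        self.pending.append((text, VOTES_PER_SUBMISSION))

    def assign_vote(self, user):
        # FIFO: the voter is handed the oldest fact-check still awaiting votes,
        # so no submission "just sits there" while flashier ones get attention.
        text, votes_left = self.pending[0]
        self.votes_cast[user] = self.votes_cast.get(user, 0) + 1
        if votes_left <= 1:
            self.pending.popleft()   # fully adjudicated
        else:
            self.pending[0] = (text, votes_left - 1)
        return text
```

Because every submission enqueues exactly as many voting slots as the submitter had to fill, the pending queue can't grow faster than the volunteer labor available to drain it, so long as the user base holds steady.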
A peer-ratio-enforced fact-check system mitigates that dilemma: by participating in 10 votes on other people's fact-checks, you have "earned" the right to submit a fact-check of your own, and you can "spend" it however you want. (That still generates labor for other people, of course -- they have to read your fact-check and vote on it -- but you can feel less guilty because you've traded your labor for theirs.)

Further, you could implement a system that provides real-time online fact-checking -- when you submit a fact-check, the system pings users who are online at that moment and requires (or strongly encourages) them to adjudicate the fact-check within the next five minutes. While this sounds at first like a costly imposition on other users, you can "earn" the right to this service by providing it to others, and then "spend" it when you want to. In other words: you can log in and mark yourself as available for real-time fact-checks. During that time, if the system pings you and asks you to vote on a fact-check, you have to complete it within five minutes. After you have done that 10 times, you have earned the right to one "real-time fact-check" of your own. Then, when you submit a real-time fact-check, the system pings 10 other users who submit their votes within five minutes, and you have your result within five minutes as well. I maintain this would be the first social media fact-checking system that meets all of these criteria:
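The real-time credit scheme could be sketched the same way. Again, everything here is hypothetical (class names, the bookkeeping, and the assumption that only votes returned within the deadline count toward credit); it illustrates the earn-then-spend mechanic, nothing more:

```python
import random

REALTIME_VOTERS = 10        # voters pinged per real-time fact-check
DEADLINE_SECONDS = 5 * 60   # each pinged voter must answer within five minutes

class RealtimePool:
    """Hypothetical sketch of the real-time fact-check credit scheme."""

    def __init__(self):
        self.available = set()   # users currently marked available for pings
        self.on_time_votes = {}  # user -> real-time votes completed on time
        self.credits = {}        # user -> real-time fact-checks earned

    def mark_available(self, user):
        self.available.add(user)

    def record_vote(self, user, seconds_taken):
        # Only votes returned within the deadline count toward credit.
        if seconds_taken <= DEADLINE_SECONDS:
            self.on_time_votes[user] = self.on_time_votes.get(user, 0) + 1
            if self.on_time_votes[user] % REALTIME_VOTERS == 0:
                # Ten on-time real-time votes earn one real-time fact-check.
                self.credits[user] = self.credits.get(user, 0) + 1

    def submit_realtime(self, user):
        if self.credits.get(user, 0) < 1:
            raise PermissionError("earn credit by answering pings first")
        self.credits[user] -= 1
        # Ping 10 currently-available users; each now owes a vote in 5 minutes.
        return random.sample(sorted(self.available), REALTIME_VOTERS)
```

The 10-for-1 exchange rate makes the arithmetic close: every real-time fact-check consumes 10 prompt votes, and those are exactly what its submitter had to supply to earn it.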
ADDENDUM: A way to test "scalable fact-checking" in an offsite system: submit fact-checks to both the real Twitter and the offsite system at the same time, and see which system produces better results -- all without changing a single line of code in Twitter itself.
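The comparison could be scored with a small tally along these lines. The channel names and the definition of "acted" are placeholders -- "acted" might mean a Community Note was shown, or the fact-check was upheld, within some time window:

```python
class ParallelTrial:
    """Hypothetical tally of whether each channel (real Twitter vs. the
    offsite system) ended up acting on a given misleading post."""

    def __init__(self):
        self.outcomes = {}   # post_id -> {channel_name: acted_or_not}

    def record(self, post_id, channel, acted):
        self.outcomes.setdefault(post_id, {})[channel] = acted

    def action_rate(self, channel):
        # Fraction of recorded posts on which this channel took action.
        results = [o[channel] for o in self.outcomes.values() if channel in o]
        return sum(results) / len(results) if results else 0.0
```

Since the same posts are fact-checked through both channels at the same time, the two action rates are directly comparable, with no changes to Twitter required.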