sorv

sample of randomized voters


  • Understanding the "pile-on lottery"
  • Advantages of the sorv system
  • How to implement scalable fact-checking
  • Why transparency at the algorithm level is not enough
  • Using sorv to fight social-media-induced depression
  • With sorv, government wouldn't need special privileges to report rules violations

The "sorv project" is intended to persuade social media sites to implement some version of the sorv algorithm for (1) rating the quality of new content, (2) adjudicating fact-checks, and/or (3) handling abuse reports.
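The core mechanic behind all three use cases can be sketched in a few lines of Python. This is an illustrative sketch only, not any site's actual implementation; the function and parameter names here are hypothetical. The key properties are that voters are drawn at random (so nobody can self-select into a "pile-on"), each voter rates the item without seeing other ratings, and the final score is the average of the sample's ratings:

```python
import random
import statistics

def sorv_score(item_id, eligible_voters, get_rating, sample_size=10, rng=None):
    """Score an item by polling a random sample of voters independently.

    Hypothetical sketch of the sorv idea: a random sample (no volunteering,
    no pile-ons), independent ratings (no one sees anyone else's vote),
    and a final score that is simply the mean of the sampled ratings.
    """
    rng = rng or random.Random()
    # Draw the panel at random from all eligible voters.
    sample = rng.sample(eligible_voters, min(sample_size, len(eligible_voters)))
    # Each sampled voter rates the item independently.
    ratings = [get_rating(voter, item_id) for voter in sample]
    return statistics.mean(ratings)
```

The same skeleton covers all three use cases by changing what the rating means: a quality score for new content, an "is this fact-check correct?" verdict, or an "is this abuse report valid?" verdict.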

The plan is to make this happen by (1) persuading individual people that it is a good idea; (2) getting them to persuade other people that it's a good idea; and (3) through some combination of (1) and (2), eventually persuading people at the major social media companies that this is a good idea.

Change at social media sites can happen in three ways:

  1. Persuading existing social media sites to implement the changes.
  2. Injecting the idea into the zeitgeist to the point where the next generation of social media sites will incorporate these changes.
  3. As an absolute last resort, attempting to build a new social media site implementing these changes. (We already have a page making the case that attempting to "go and build it" is a bad idea. In sum, getting a new social media site to "take off" requires getting extremely lucky -- otherwise people would be launching successful ones every week -- and since a new site probably won't take off (that's what "extremely lucky" means), its failure would be read as a referendum on the merits of the sorv algorithm itself.)

It is also possible to run a parallel experiment to test how well the sorv algorithm would work on a given social media site, without changing a line of code on the social media site itself. (See, for example, a recommendation for how to run an off-site experiment using sorv for fact-checking on Twitter, then compare the results to see whether it scores better than the existing Community Notes system.)

This is a not-for-profit project (but with no 501(c)(3) or similar status since we have no financial "assets" to speak of). I (Bennett Haselton) run the project for now. My role consists mostly of talking to people about why the sorv algorithm would work and hoping that they agree. (My experience has been that either people get it right away -- you can see the glint in their eyes as they realize this would be a game-changer in terms of making processes fair and meritocratic -- or they don't.)

I do this because I believe that sorv is the simplest way to give people the most fair experience possible when trying to make a positive contribution to the world. In particular:

  • If you have a favorite eggs benedict recipe and you want to share it with other people on YouTube, you should be able to film the recipe, publish it, and be assured that if your content is useful and other people are looking for it, they'll be able to find it -- and you might even get positive comments from people who appreciate your contribution. You shouldn't have to wait for a lucky break from the algorithm, rely on an existing high-profile user to boost your content, or feel like you have to spend weeks or months submitting content for no reason except that the algorithm favors "high-volume contributors".

  • If you see someone posting something online that violates the website's Terms of Service in a way that is truly harmful (violent threats, racial hatred, etc.), you can report that content and have a reasonable expectation that the site will handle it correctly. (I do not believe that "hateful" content should be illegal, and in the United States, it is protected under the First Amendment, but private companies should be able to remove such content if they want to. Usually when hate speech doesn't get removed by private companies in response to a complaint, it's not "out of respect for the First Amendment", it's because they can't scale up to handle the complaints properly.)

  • If you see a high-profile user post something that you think is factually incorrect, you should be able to submit a fact-check, and have confidence that the fact-check will be adjudicated reasonably fairly -- without giving any advantage to the high-profile user just because they have more followers.

And if people decide that a sorv algorithm produces fair outcomes in these cases, then greater awareness might lead to more "sorv-like" processes being implemented in other situations off of social media. If your school runs an essay contest, the organizers should intuitively understand that the fairest way to run it is to have multiple judges rate each essay independently of one another, with each essay's final score being the average of its judges' ratings.
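The essay-contest version of the idea is the same mechanic in an offline setting, and can be sketched the same way (again an illustrative sketch with hypothetical names, not a prescribed implementation): each essay gets a randomly drawn panel of judges, each judge rates independently, and the final score is the panel's average.

```python
import random
from statistics import mean

def contest_scores(essays, judges, rate, judges_per_essay=3, rng=None):
    """Score each essay as the mean of independent ratings from a
    randomly assigned panel of judges (a sorv-like process offline)."""
    rng = rng or random.Random()
    results = {}
    for essay in essays:
        # Random assignment: no judge chooses which essays to score.
        panel = rng.sample(judges, judges_per_essay)
        # Independent ratings, averaged into one final score.
        results[essay] = mean(rate(judge, essay) for judge in panel)
    return results
```

Random panel assignment and independent rating are doing the fairness work here: no single judge's taste, and no coordination among judges, can dominate an essay's final score.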

We all grew up hearing that "life's not fair." But that's not an argument for why we can't help make some processes more fair.