sorv

sample of randomized voters

Using sorv to fight social-media-induced depression

The sorv algorithm was not explicitly intended to solve the problem of social media and depression. However, there are multiple ways in which sorv could address problems that contribute to social media depression. These solutions can be grouped into three high-level categories:

  • Fairness (that is, likes and views in proportion to quality, instead of rewarding people who get lucky or game the system).
  • Encouraging people to get off the phone (for example, prioritizing "how-to" directions based on whether people actually complete them successfully, instead of showing a series of eye-candy high-speed recipe videos that the average user never actually attempts).
  • De-prioritizing quasi-sexual content where the people in the videos display no skill except showing skin on camera.

Some of these changes will be harder to persuade app makers to go along with than others. For example, if you reward posts and videos based on "merit" instead of letting people game the system, users perceive an increase in quality -- which results in more usage, which app makers should be happy about. On the other hand, if you want app makers to reward content that encourages people to log off -- for example, by showing them a recipe video that they can actually follow in their kitchen, instead of a series of high-gloss recipe "videos" designed just to keep the user scrolling -- then the app maker might consider this to be against their interests, if they would rather keep the user glued to their phone scrolling through more recipe videos!

To meet app makers' objection that the changes do not serve their interests, one option would be to try to persuade them that "practical" content is a good long-term investment -- if the "recipe videos" you publish are just made to look glossy but the steps don't actually work, users will eventually ignore you, but if you publish recipes that actually do work, that builds long-term user loyalty. Or for kids in particular, parents might require that kids learn something practical from all that scrolling. ("For every 15 minutes you spend scrolling on your phone, you have to find at least one recipe that you want to make. And then you can teach it to us, and we'll make it together as a family and -- quit rolling your eyes young lady, it's going to be FUN!") And that means that in order for kids to keep their phone privileges, app makers have to provide recipes that actually work. Finally, to the general question of how to persuade app makers to make changes they don't want to make, lawmakers might decide that social media depression has reached such a crisis point that -- rightly or wrongly -- they more or less order app makers to adopt a merit-based system (which has other benefits, such as combating misinformation).

For now, we table the discussion of how to incentivize app makers to actually make these changes, and focus on whether the changes would be beneficial. Below are some common arguments for why social media causes depression (in young people in particular), and the ways in which a merit-based algorithm like sorv could partially solve them.

  • Boosting of stupid content. In a non-merit-based system, a person's Facebook/Twitter/etc. feed can easily be filled with content that is either explicit "engagement bait" (blatantly stupid content posted to bait people into interacting with it to call it false, which just fools the algorithm into displaying it more prominently), or simply amateurishly written content that was boosted by a glitch in the algorithm. It's plausible that this contributes to depression for several reasons:

    1. Readers see the low-quality posts and are saddened to think that people's intelligence has degraded to the point that this is what the public actually likes.

    2. People see lies being promoted on social media, and even if they know the claims are false, it's impossible to stop them from propagating, and depressing to think of all the people who will be fooled. (This is not the same as the "obviously stupid content" problem. A lie can be a lie without being "obviously stupid" -- and thus it's not an indictment of human nature that people are taken in by it -- but it's still frustrating for a reader who knows that it's wrong.) Even if there are replies calling out the lie, those replies often have impression counts less than 1% of the original lie's.

    3. People who have tried creating their own content, unless they got lucky and it went viral, are depressed that they are getting outscored 10,000-to-1 by low-quality dreck.

    A merit-based sorv-type voting system would partially solve all of these problems. If the "stupid content" is obviously stupid, then it won't get enough votes in the initial random sample to get promoted to everyone else. And if the stupid content is non-obviously stupid -- that is, if it's based on a subtle lie that can be fact-checked -- then you can submit a fact-check and have it adjudicated by other users. (And there's a good chance someone would have shot it down with a fact-check already before you even see it.)
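
    To make the mechanism concrete, here is a minimal sketch in Python. The names, sample size, and threshold are made up for illustration -- sorv itself doesn't prescribe exact numbers. The idea is simply that a new post is shown to a small random sample of users first, and only promoted to the wider audience if the sample approves.

        import random

        SAMPLE_SIZE = 20          # how many randomly chosen voters see the post first
        PROMOTE_THRESHOLD = 0.6   # fraction of "worth showing" votes needed to promote

        def initial_sample(user_ids, size=SAMPLE_SIZE):
            """Pick a random sample of voters to evaluate a new post."""
            return random.sample(user_ids, min(size, len(user_ids)))

        def should_promote(votes):
            """votes: list of booleans ("worth showing to others?") from the sample.
            Promote only if enough of the random sample approved."""
            if not votes:
                return False
            return sum(votes) / len(votes) >= PROMOTE_THRESHOLD

        # Obvious engagement bait gets voted down by the sample and never reaches
        # the wider audience, no matter how much angry "engagement" it would have drawn.
        print(should_promote([False, False, True, False, False]))   # False -> not boosted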

  • Envy of users getting 1M+ views for good content. Distinct from the problem of users being depressed by bad content garnering millions of views, there is the potential depressive effect of seeing good content that is nonetheless getting views that would seem wildly out of proportion to the quality. A user could view a group's dance video with 3 million views and figure -- correctly! -- that even if they uploaded their best video of their own dance troupe, they probably couldn't come close to hitting 3 million views.

    And they would be right, but that's because of the "pile-on lottery effect" enabled by social media, not because the 3 million views were a reflection of the troupe's skill. I post a lot of silly jokes and observations on Reddit, and most of them get between 0 and 5 upvotes, while once in a while one of them will get 50,000; nobody thinks that the ones in the latter category are "10,000 times better". It's simply an attribute of existing systems that the "pile-on effect" produces highly unequal and highly random results.

    Using the sorv algorithm means that "bad content" hardly gets any views at all, but it also means that "good content" divides up views more evenly, roughly in proportion to its quality (as measured by user votes/ratings). In a merit-based system, you and your friends' dance moves might not get as many views as the video that inspired you, but you're more likely to get within striking distance (maybe their video gets 30,000 views to your 10,000).
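
    As a rough sketch of that "more even split" (the numbers are invented, not from any real platform): instead of a pile-on loop where the current leader absorbs nearly all impressions, a merit-based allocator can divide a pool of impressions roughly in proportion to each post's sample rating.

        def allocate_views(sample_ratings, total_views):
            """sample_ratings: post_id -> average rating from that post's random sample.
            Split the available impressions in proportion to rating,
            rather than winner-take-all."""
            total_rating = sum(sample_ratings.values())
            if total_rating == 0:
                return {post: 0 for post in sample_ratings}
            return {post: int(total_views * rating / total_rating)
                    for post, rating in sample_ratings.items()}

        # Two dance videos of similar quality land within striking distance of
        # each other, instead of 3,000,000 views vs. 300.
        print(allocate_views({"their_troupe": 4.5, "your_troupe": 4.0}, 40_000))
        # -> roughly 21,000 vs. 19,000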

  • "Doomscrolling." A person could come to believe that civilization or even the human species is in danger of being wiped out by global warming, racism, homophobia, or war.

    The reason a merit-based algorithm can mitigate this is that most of the worst predictions are probably wrong. See Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker for a book-length argument making this point, but in short: things used to be much worse, they're getting better, and there's no reason to think that the upward trajectory will stop. And this means that excessively pessimistic posts can be corrected by a merit-based system that adds fact-checks or context to a misleading post.

    If a post says that the vast majority of scientists believe that global warming is real and caused by humans, that should survive in a robust fact-checking system -- it's true. On the other hand, if a post argues that the situation is irreversible and hopeless, that can be rebutted with a fact-check -- the majority of scientists also believe that climate change can be dealt with through feasible actions, and the rational conclusion is: don't give up, just stay informed.

    (A factually accurate picture of things might even inspire a person to get offline and do something, with the caveat that most local actions do not measurably impact an issue like global warming.)

  • Seeing unfair content takedown decisions. Everyone on social media has seen friends posting about how a completely innocuous picture or post got removed for "violating our community standards". While calling this a "cause of depression" would be a stretch, it does reinforce a feeling of unfairness (especially if it feels like the removals are targeting a particular group, like "Republicans" or "Palestine supporters"). And, of course, it stings even worse when it happens to you.

    The solution would be a merit-based system like sorv for adjudicating complaints -- which means that if a particular piece of content is removed, a note can be attached saying that (for example) 8 out of 10 reviewers believed that it violated rule XYZ. This means less content would be unjustly removed in the first place, and if it was, the users who voted for its removal could add comments explaining their reasoning. (So you might still not agree with the decision, but at least it wouldn't be a blatantly stupid decision made by a faceless "algorithm".)
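
    As a sketch of what that adjudication step could look like (the panel size, the threshold, and the get_vote() helper are all hypothetical, not an existing API):

        import random
        from dataclasses import dataclass

        @dataclass
        class TakedownDecision:
            removed: bool
            note: str                  # shown publicly next to the removal
            reviewer_comments: list    # optional explanations from the panel

        def adjudicate_report(post_id, rule_id, reviewer_pool, panel_size=10,
                              removal_threshold=0.7):
            """Ask a random panel of reviewers whether the post violates the rule.
            get_vote() stands in for however the platform collects each reviewer's
            verdict and optional comment; it is not a real API."""
            panel = random.sample(reviewer_pool, panel_size)
            votes, comments = [], []
            for reviewer in panel:
                violates, comment = get_vote(reviewer, post_id, rule_id)
                votes.append(violates)
                if comment:
                    comments.append(comment)
            yes = sum(votes)
            removed = yes / panel_size >= removal_threshold
            note = f"{yes} out of {panel_size} reviewers found a violation of rule {rule_id}"
            return TakedownDecision(removed, note, comments)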

    (Note that this would not solve the problem if the user is frustrated because they believe the rule itself is unfair. If exposed female nipples violate Instagram's rules, and a post is removed because 8 out of 10 reviewers agreed that a nipple was visible, then the user may feel that's a stupid rule, and I'd agree, but a merit-based adjudication doesn't solve the stupid-rule problem.)

  • Watching pictures/videos of glamorous locations with no feasible plan to get there. A typical Instagram feed might contain videos of a gorgeous sunset in Fiji or a deserted beach in Greece. That may inspire envy (and hence depression) in the viewer, but even if it doesn't, in virtually all cases the user just enjoys the view and keeps scrolling. This may not be "depressing" in the short term, but it keeps the user from getting off the phone, and as long as content creators are rewarded only for "engagement" -- views and likes -- there is no reason for content creators to create anything else.

    A solution would be a merit-based system for "location" submissions, but where "merit" is defined as the usefulness of the post for the purpose of going out and enjoying it. Specifically: (a) the default would be to show users "location" posts close to where they live; and (b) people can rate/upvote "location" posts only after they've actually visited the location and confirmed that the information in the post was accurate and useful.

    This is a "high-cost voting" scenario -- to evaluate the "merit" of a location suggestion, you have to get there, do the activity (a nice hike can take all day), possibly pay money, etc. We would not normally expect people to go out of their way to evaluate a location or an outdoor activity just to submit a rating; rather, we hope people would submit ratings afterwards for activities that they were already going to do anyway. This is what people already do, after all, when they leave reviews for parks and hikes on sites like Washington Trails Association. The difference is that in a merit-based rating system, a gorgeous Instagram-style reel could show up in the user's video feed as a "hook", but it would be paired with practical directions (how to get there, what to bring, etc.) and if the user only submits a rating after visiting the spot, then the videos which get the highest ratings will be the ones that give the user useful advice on how to get off their phone and actually do the thing, rather than showing a series of waterfalls around the world that most people will never visit.

    And all of this is just taking into account "experiences" that are already there in fixed supply (e.g. rivers and mountains). Once a merit-based system is in place, people also have incentives to create their own events and experiences, even something as simple as a recurring jam session in the park.

  • How-to directions that are more about dazzling the user than giving practical advice. These are the pseudo-"instructional" videos on Instagram/TikTok feeds that are more about providing video content than giving the user something to do, similar to the reels that show users a series of waterfalls that they're never going to travel to -- the system rewards creators who create content that keeps people glued to their phone, instead of content that the user can practically act on.

    And similarly, a solution would be to let users rate instructions based on successfully completing them, rather than just liking and sharing a video because it looks cool. This is another "high-cost voting" scenario (you have to buy ingredients or equipment, and spend minutes or hours building a project or carrying out the how-to task). We could hope that users might beta-test instructions and leave their ratings just for fun; if not enough people volunteer as beta testers, then how-to authors can explicitly offer to pay people to beta-test and rate their instructions. (Why would an author pay anybody to beta-test their directions? In a merit-based system, if your directions are rated highly enough by the beta testers, and if that causes your how-to directions to be featured prominently on the site, then under an ad-revenue-sharing program with the site, you could make back more in ad-share revenue than you spent paying the beta testers.)
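
    The beta-tester economics can be sketched with back-of-the-envelope numbers (every figure below is invented; real ad-share rates vary):

        testers = 20
        payment_per_tester = 5.00              # dollars paid per beta test + rating
        cost = testers * payment_per_tester    # $100 up front

        extra_views_if_featured = 200_000      # hypothetical boost from ranking highly
        ad_share_per_1000_views = 1.50         # hypothetical creator cut per 1,000 views
        expected_revenue = extra_views_if_featured / 1000 * ad_share_per_1000_views

        print(cost, expected_revenue)          # 100.0 vs. 300.0 -> worth it in this scenario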

    In a merit-based system, where "merit" is defined as "people have confirmed that the directions actually work", then users can take highly-rated recipe videos and other how-to directions and try to put them into practice with some confidence that the directions are actually doable. A merit-based system would also give users greater incentive to create and share their own how-to videos if they wanted to.

  • Sexualized content being rewarded above legitimate talent. As of this writing in 2024, Mark Zuckerberg is under fire for Instagram allowing parents to monetize pictures of their young daughters in swimsuits. This is an extreme example, but even casual social media users can see how both adults and minors (usually female) can get tens of thousands of views by modeling revealing clothing without showing any discernible skill.

    This would be depressing to anyone, but especially for young girls themselves who might think, "No video of me playing the violin is ever going to get as many views as those girls playing beach volleyball" (which, under the current system, would probably be correct), or even, "Maybe the way for me to get views is to start my own 'modeling' channel!"

    The trouble is that a merit-based solution -- if you define "merit" as "appealing to users" -- does not solve this problem. If the "merit" of a video is measured by (for example) the percentage of people who watch a video all the way through, then videos of women in revealing clothing are still going to do well, and a system that blindly sorts videos according to this "merit" score will push those videos to the top.

    A merit-based system can prevent the "pile-on lottery effect", but that only means that instead of 1 girl getting 1 million views for her "modeling" video, you'd have 20 girls each getting 50,000 views for their "modeling" videos. But as long as these videos are still getting far more views than a video of a talented kid playing the violin, it's still going to be obvious to everyone what's going on.

    The most straightforward solution is to boost videos based on public votes from verified users, or from users who have a large number of followers of their own (and hence, probably a reputation to protect), even if they are not verified. (Some sites like Twitter and Facebook show the list of users who have "liked" a post or video; other sites like Reddit do not.) A video of a girl sitting at the beach in a bathing suit might get viewed in full by a high percentage of users who see it, but if none of the verified users in that sample are willing to publicly admit to "liking" it, then the site should not boost it to other users.
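
    A sketch of that filter (the follower threshold and the field names are invented for illustration):

        FOLLOWER_THRESHOLD = 10_000   # rough proxy for "has a reputation to protect"

        def counts_toward_boost(vote):
            """vote: dict like {"verified": bool, "followers": int, "public": bool}.
            Only public votes from verified or high-follower accounts count toward
            boosting a video; raw watch-through percentage is ignored."""
            if not vote["public"]:
                return False
            return vote["verified"] or vote["followers"] >= FOLLOWER_THRESHOLD

        def should_boost(votes, min_public_endorsements=5):
            """Boost only if enough reputation-bearing users publicly endorsed it."""
            return sum(counts_toward_boost(v) for v in votes) >= min_public_endorsements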

Collectively, this is a strong argument for implementing sorv or some other merit-based algorithm for dealing with the problems listed here. There is virtually no chance that adopting a merit-based system would make anything worse, and a good chance that a merit-based system would make things better.