Brainstorming a better YouTube recommendation algorithm

This year, the public narrative around Facebook has shifted – the company feels on the defensive in lots of ways. I think it deserves to be – with billions of users, it is long past time for it to spend its energy on reducing harm rather than on more growth.

There’s a bit less talk about YouTube (owned by Google) and the problems with its recommendation algorithm.

The problems

Here are an article and a video which show the span of problems – from causing political radicalisation in every direction to creating vast farms of weird, abusive videos targeted at children:

  1. YouTube, The Great Radicalizer by the excellent Zeynep Tufekci.
  2. The nightmare videos of children’s YouTube by the great James Bridle.

This is causing pain for content creators too. For example, a board games reviewer I like called Actualol gives some idea of the mental health issues caused by the algorithm in Why Actualol Went Quiet. You can find other examples – ask YouTubers what they think of the recommendation algorithm.

Why it happens

I wrote a Quora answer last year on How does YouTube’s recommendation algorithm work? If you’re technical, definitely read the full paper from Google.

I find two basic problems with the algorithm:

  1. Populist. It first uses a crude criterion to find videos watched by people similar to you. This means it is pre-filtering for the popular. This is the opposite of what has made Google search a success – there, some users dig dozens of pages deep into search results, and Google uses that signal to slowly increase the rank of good sites. (We watched it find PDFTables and rank it highly by exactly those means.)
  2. Short-term. After getting those few hundred candidates, it uses a genuinely smart neural network, with thousands of factors fed in, to rank the videos by how long you’re likely to watch them. The criterion itself is extremely simplistic – the idea that the best videos for you to see are the ones you’ll watch the most. Naturally, things that appal or deceive, or that make you unhappy in the long term, will bubble to the top. (A toy sketch of this two-stage shape follows this list.)
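
To make that two-stage shape concrete, here is a minimal toy sketch in Python. The embeddings, the linear ranker and all the numbers are invented for illustration – the real system is the deep network described in the Google paper above – but the structure (nearest-neighbour candidate generation, then ranking by predicted watch time) is the same:

```python
import numpy as np

# Stage 1: candidate generation (the "populist" step). Videos and users
# live in a shared embedding space; candidates are just the videos
# nearest to the user, which pre-filters for the already-popular.
rng = np.random.default_rng(0)
NUM_VIDEOS, DIM = 10_000, 32
video_embeddings = rng.normal(size=(NUM_VIDEOS, DIM))

def generate_candidates(user_embedding, k=300):
    """Crude nearest-neighbour lookup by dot-product similarity."""
    scores = video_embeddings @ user_embedding
    return np.argsort(scores)[-k:]  # top-k most similar videos

# Stage 2: ranking (the "short-term" step). A learned model scores each
# candidate; here a stand-in linear model. The key point is the
# objective: predicted watch time, nothing longer-term.
ranking_weights = rng.normal(size=DIM)

def predicted_watch_time(video_id, user_embedding):
    features = video_embeddings[video_id] * user_embedding  # toy features
    return float(features @ ranking_weights)

def recommend(user_embedding, n=20):
    candidates = generate_candidates(user_embedding)
    ranked = sorted(candidates,
                    key=lambda v: predicted_watch_time(v, user_embedding),
                    reverse=True)
    return ranked[:n]

print(recommend(rng.normal(size=DIM))[:5])
```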

Other ideas

It seems that even at a basic level, DeepMind (the division of Google’s parent company which made the algorithm, and whose offices are but a mile from where I sit now in London) could come up with a better criterion for the algorithm.

As well as a better criterion, Google would really have to want to use it – YouTube is vast in both size and speed of change, making it hard to run whatever algorithms the company wants over the corpus. That feels like an excuse though – they manage similar-scale work in search. It feels possible; it just needs the budget allocated, which in turn needs pressure from us.

At Newspeak House the other evening, the topic drifted to ideas for improving the recommendation algorithm. We were just coming up with criteria to train it – I’m sure the boffins at DeepMind can come up with better ideas, and more interesting technical implementations of them. Here are two of ours:

  • Feel happiest in six months’ time. It could explicitly ask you – do you feel happy, does using YouTube make you feel good or bad? – and train the algorithm on whether that signal improves months later. As well as improving videos, this would be good for the long-term brand of YouTube. (A hypothetical sketch of such a training label follows this list.)
  • Become higher value to advertisers in the next year. This is interesting, as it sounds like it could make more money for Google. It would naturally tend to push aspirational videos – or videos that lead to aspirational videos – onto people, so they are more likely to get promoted, get a better job, and want to buy more expensive things that are advertised on YouTube. It’s not clear this would be good overall, but it would be interesting.
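
To make the first idea concrete, here is a hypothetical sketch of how a training label might blend immediate watch time with a delayed “do you feel good about YouTube?” survey answered months later. Every name and weight here is invented; the point is only that the ranker would no longer be trained on pure watch time:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WatchEvent:
    watch_seconds: float              # immediate engagement signal
    happiness_delta: Optional[float]  # survey answer ~6 months later,
                                      # in [-1, 1]; None until it arrives

def training_label(event: WatchEvent, happiness_weight: float = 0.7) -> float:
    """Blend short-term engagement with the delayed well-being signal.

    Hypothetical: the weighting and normalisation are invented.
    """
    engagement = min(event.watch_seconds / 600.0, 1.0)  # cap at 10 minutes
    if event.happiness_delta is None:
        return engagement  # fall back until the delayed feedback exists
    return ((1 - happiness_weight) * engagement
            + happiness_weight * event.happiness_delta)

print(training_label(WatchEvent(watch_seconds=480, happiness_delta=0.4)))
```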

Some more radical ideas, going beyond just tweaking the training criteria:

  • Get rid of automated recommendations. Instead, they could be curated, perhaps by the person who makes each video. For a more AI-based version of this, something like the Spotify model, based on crowd-curation of playlists, could help. We used to watch TV stations curated by amazing people like David Attenborough (controller of BBC2 in the 1960s). Could YouTube help me do that in a more modern way? If things get bad enough, we could regulate to simply ban robot recommendations outright, and see what innovation human curation leads to.
  • An agent on my own phone, which I own and tweak to make recommendations. This is perhaps too demanding on the user, though it feels like the way things are going. The work done with TensorFlow Lite to get the clipboard AI running on Android phones shows that, with the right engineering, this kind of solution is possible. The end game is like the film Her. If the customisation is too hard, I could follow someone else’s meta-ruleset – maybe a famous brain training coach’s. (A hypothetical sketch of such an agent follows this list.)
  • Split up YouTube. Create competition. Right now I have no choice of recommendation algorithm. All the videos are on YouTube, so I have to go there. I suppose I could learn Chinese and move to China to find a different system, but that’s the limit of my choice. How to split up or regulate the new generation of big tech companies isn’t clear yet – but we could find a smart way to do it well.
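
Here is a hypothetical sketch of the phone-agent idea: a small, user-owned rule set that re-ranks whatever feed the platform supplies, entirely on the device. The rules and the example feed are invented:

```python
from typing import Callable, Dict, List

Video = Dict[str, object]        # e.g. {"title": ..., "channel": ..., "tags": [...]}
Rule = Callable[[Video], float]  # each rule returns a score adjustment

# User-owned, user-editable rules – or someone else's meta-ruleset,
# as suggested above. All three rules are invented examples.
my_rules: List[Rule] = [
    lambda v: 2.0 if "documentary" in v.get("tags", []) else 0.0,
    lambda v: -5.0 if v.get("channel") == "ChannelIDislike" else 0.0,
    lambda v: -1.0 if "reaction" in str(v.get("title", "")).lower() else 0.0,
]

def rerank(platform_feed: List[Video], rules: List[Rule]) -> List[Video]:
    """Re-rank the platform's candidates locally, using only rules I control."""
    return sorted(platform_feed,
                  key=lambda video: sum(rule(video) for rule in rules),
                  reverse=True)

feed = [
    {"title": "REACTION to drama", "channel": "ChannelIDislike", "tags": []},
    {"title": "Blue Planet clip", "channel": "BBC", "tags": ["documentary"]},
]
for video in rerank(feed, my_rules):
    print(video["title"])
```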

What do you think – how would you improve YouTube’s recommendation algorithm? What would a smarter criterion be?

3 thoughts on “Brainstorming a better YouTube recommendation algorithm”

  1. Fascinating stuff from Facebook:

    “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.

    “This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.”

    https://m.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/

    Would love to see that on YouTube. Weigh in a negative penalty in the recommendation algorithm for content near the borderline of what they ban.

    This has lots of implications… but it’s probably better than the status quo.
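
    To illustrate the shape Zuckerberg describes: engagement naturally rises as content approaches the policy line, so the penalty has to grow faster, leaving net distribution declining near the line. The curves below are invented; only the shape matters.

    ```python
    def natural_engagement(borderline: float) -> float:
        """Engagement rises as content nears the policy line.
        borderline is in [0, 1], where 1.0 sits right on the line."""
        return 1.0 + borderline

    def penalised_distribution(borderline: float) -> float:
        """Subtract a penalty that grows faster than engagement, so
        distribution declines as content gets more sensational."""
        penalty = 2.0 * borderline  # invented: steeper than the engagement rise
        return max(natural_engagement(borderline) - penalty, 0.0)

    for b in (0.0, 0.5, 0.9, 1.0):
        print(f"borderline={b:.1f}  distribution={penalised_distribution(b):.2f}")
    ```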

  2. Some great stuff about YouTube and the spread of the Flat Earth conspiracy.

    A write-up from a Flat Earth conference describing the general attitude and dominance of YouTube there:
    https://www.thedailybeast.com/inside-the-flat-earth-conference-where-the-worlds-oldest-conspiracy-theory-is-hot-again?ref=scroll

    A journalistic investigation of how YouTube spreads it:
    https://www.rawstory.com/2018/11/flat-earth-conference-attendees-explain-brainwashed-youtube-infowars/

    A tweet thread by one of the algorithm’s designers:
    https://twitter.com/gchaslot/status/1064527592428986368
