YouTube has instituted many changes over the past year to limit the problematic videos it recommends to viewers. A new study suggests the repairs have a way to go.
Software nonprofit Mozilla Foundation found that YouTube’s powerful recommendation engine continues to direct viewers to videos that participants said contained false claims and sexualized content, with the platform’s algorithms suggesting 71% of the videos that participants found objectionable.
The study highlights the continuing challenge Alphabet Inc. subsidiary YouTube faces as it tries to police the user-generated content that turned it into the world’s leading video service. It is emblematic of the struggle roiling platforms from Facebook Inc. to Twitter Inc., which soared to prominence by encouraging people to share information but which now face regulatory and social pressure to police divisive, misleading and dangerous content without censoring diverse points of view.
For YouTube, it also shows gaps in its efforts to steer users to videos that should be of interest based on their viewing patterns.