
Meanwhile, YouTube touts its transparency efforts, saying in 2019 that it “launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation,” which resulted in “a 70 percent average drop in watch time of this content coming from nonsubscribed recommendations in the United States.” But with no way for outsiders to verify these statistics, users get no real transparency.

Just as polluters greenwash their products by bedecking their packaging with green imagery, major tech platforms are opting for style, not substance.

Platforms like Facebook, YouTube and TikTok have strong incentives to withhold more complete forms of transparency. More and more internet platforms rely on A.I. systems to recommend and curate content, and it’s clear that these systems can have negative consequences, like misinforming voters, radicalizing the vulnerable and polarizing large portions of the country. Mozilla’s YouTube research bears this out. And we’re not alone: the Anti-Defamation League, The Washington Post, The New York Times and The Wall Street Journal have reached similar conclusions.

The dark side of these A.I. systems may be harmful to users, but the systems are a gold mine for platforms: rabbit holes and outrageous content keep users watching, and thus consuming advertising. By letting researchers and lawmakers poke around in the systems, the companies would be starting down the path toward regulation and public pressure for more trustworthy, but potentially less lucrative, A.I. They would also be opening themselves up to fierce criticism, because the problem most likely goes deeper than we know; the investigations so far have been based on limited data sets.

As tech companies master fake transparency, regulators and civil society at large must not fall for it. We need to call out style masquerading as substance. And then we need to go one step further. We need to outline what real transparency looks like, and demand it.

What does real transparency look like? First, it should apply to the parts of the internet ecosystem that most affect consumers, like A.I.-powered ads and recommendations. In the case of political advertising, platforms should meet researchers’ baseline requests by providing searchable, easy-to-navigate databases containing all relevant information about each ad. In the case of recommendation algorithms, platforms should share crucial data, like which videos are being recommended and why, and build recommendation simulation tools for researchers.

Transparency must also be designed to benefit everyday users, not just researchers. People should be able to easily identify why specific content is being recommended to them or who paid for that political ad in their feed.
