Why using Facebook and YouTube should require a media literacy test

Just like driving requires an exam, social media users should be required to take a 15-minute media literacy course, followed by a quiz, before using their platform of choice.


BY MARK SULLIVAN
We don’t let people begin operating motor vehicles until they’ve taken driver’s education and then a test for a very good reason: Vehicles are dangerous to drivers, passengers, and pedestrians. Social networks and the misleading and harmful content they circulate are dangerous for society too, so some amount of media literacy education—and a test—should be a condition of using them.

Social media companies like Facebook and Twitter would surely object to such an idea, calling it onerous and extreme. But they willfully misunderstand the magnitude of the threat that misinformation poses to democratic societies.

The Capitol riot gave us a glimpse of the kind of America misinformation helped create—and illustrated why it is so dangerous. On January 6, the nation witnessed an unprecedented attack on our seat of government that resulted in seven deaths and lawmakers fearing for their lives. The rioters who caused this mayhem planned their march on the Capitol on social networks, including in Facebook Groups, and were stirred to violent action by months of disinformation and conspiracy theories about the presidential election, which they believed had been “stolen” from Donald Trump.

While the big social networks have made significant investments in countering misinformation, removing all of it or even most of it may be impossible. That’s why it’s time to shift the focus from efforts to curb misinformation and its spread to giving people tools to recognize and reject it.

Media literacy should certainly be taught in schools, but this type of training should also be made available at the place where people actually encounter misinformation—on social networks. Large social networks that distribute news and information should require users to take a short media literacy course, and then a quiz, before logging in. The social networks, if necessary, should be compelled to do this by force of law.

MODERATION IS HARD
So far we’ve relied on the big social networks to protect their users from misinformation. They use AI to locate and then delete, label, or reduce the spread of misleading content. The law even protects social networks from being sued over the content moderation decisions they make.

But relying on social networks to control misinformation clearly isn’t enough.

First of all, the tech companies that run social networks often have a financial incentive to let misinformation remain. The content-serving algorithms they use favor hyper-partisan and often half-true or untrue content because it consistently gets the most engagement in the form of likes, shares, and comments by users. It creates ad views. It’s good for business.

Second, large social networks are being forced into an endless process of expanding censorship as propagandists and conspiracy theory believers find more ways to spread false content. Facebook and other companies (like Parler) have learned that taking a purist approach to free speech—i.e., allowing any speech that isn’t illegal under U.S. law—isn’t practical in digital spaces. Censorship of some kinds of content is responsible and good. In its latest capitulation, Facebook announced Monday it will bar any posts of debunked theories about vaccines (including ones for COVID-19), such as the claim that they cause autism. But it’s impossible for even well-meaning censors to keep up with the endless ingenuity of disinformation’s purveyors.

There are logistical and technical reasons for that. Facebook relies on 15,000 (mostly contract) content moderators to police the posts of its 2.7 billion users worldwide. It is increasingly turning to AI models to find and moderate harmful or false posts, but the company itself admits that these AI models can’t comprehend some types of harmful speech, such as that embedded in memes or videos.

That’s why it may be better to help consumers of social content detect and reject misinformation, and refrain from spreading it.

“I have recommended that the platforms do media literacy training directly, on their sites,” says disinformation and content moderation researcher Paul Barrett, deputy director of the New York University (NYU) Stern Center for Business and Human Rights. “There’s also the question of should there be a media literacy button on the site, staring you in the face, so that a user can access media literacy data at any time.”

(...)

See the full article at: https://www.fastcompany.com/90602689/facebook-youtube-media-literacy

