Our Kremlin Watch Program has just published a new report reviewing the state of play of the policy debate on online platforms such as Facebook and Google.
Here are our recommendations:
- Examine the use of algorithms by online platforms in order to reveal potential errors and biases; understand to what extent the algorithms represent a conscious editorial choice and how this should affect platforms’ liability.
- Provide guidelines on the editorial and take-down practices of online platforms. Ensure these practices are transparent and consistent with freedom-of-speech principles and human rights law. Establish dedicated bodies to oversee and report on platforms’ conduct.
- Properly apply existing legislation on platforms, notably from the realms of copyright, audiovisual, and competition law.
- When proposing legislation about hate speech or fake news, develop definitions for these terms that are as specific as possible.
- Ensure that platforms install appropriate redress mechanisms that allow users to complain if their content has been unjustly removed.
- Be transparent about editorial practices and report them, especially when it comes to taking down content.
- Continue partnering with journalists and fact checkers.
- Graphically differentiate news content from other types of posts.
- Publicly proclaim your intention to support media literacy and your trust in high-quality journalism.
- Fund media literacy classes, particularly in those parts of the world that have recently democratized and whose media markets lack an established tradition (e.g. Central and Eastern Europe).
Civil society and the private sector:
- Push online platforms toward being transparent about their editorial practices.
- Promote a discourse that treats spreading fake news and hate speech as socially unacceptable, much as unhealthy eating has been stigmatized.
Article for the Atlantic Council: