Russia has announced that it is imposing restrictions on Twitter over the platform's failure to remove illegal content. The Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor) announced on March 10 that it will slow down the speed of Twitter.
The regulator said it was taking the step to protect Russian citizens from content encouraging minors to commit suicide, as well as from drug-related content and child pornography. It also said it could block the service entirely if Twitter does not respond as required. In a statement on its website, Roskomnadzor said speeds would be reduced on all mobile devices and on 50% of non-mobile devices, such as desktop computers.
The communications watchdog said that between 2017 and March 2021 it had asked Twitter more than 28,000 times to remove posts and links. It added that other social media platforms had been more cooperative than Twitter in taking down content that encourages minors to commit suicide.
A Twitter spokesperson said, “We have a zero-tolerance policy regarding child sexual exploitation, it is against the Twitter Rules to promote, glorify or encourage suicide and self-harm, and we do not allow the use of Twitter for any unlawful behavior or to further illegal activities, including the buying and selling of drugs.” The spokesperson added, “We remain committed to advocating for the Open Internet around the world and are deeply concerned by increased attempts to block and throttle online public conversation.”
Russia isn’t the first country to take action against Twitter—India and Turkey also threatened jail time for platform executives. Matt Navarra, a social media consultant, said the “threat of restricting, blocking or banning social media platforms appears to be a growing trend for countries notorious for harsher, less democratic regimes.”
Social media giants have been in a constant battle to keep inappropriate content off their platforms. YouTube, Facebook, Twitter, and TikTok all use a combination of human moderators and software to police the content shared on their platforms, yet none has been entirely successful at content moderation. The Christchurch shooter, for example, live-streamed his mass murder on Facebook, and the footage spread to other platforms.