Technology companies are coming under renewed pressure, this time from the European Commission (EC), over illegal hate speech and terrorist content online.
According to the EC, despite a recent increase in the amount of terrorist propaganda and xenophobic and racist speech online, technology and social media companies have not been acting quickly or proactively enough to detect it, remove it, or stop it from appearing on their platforms.
Code of Conduct Not Enough
Back in May this year, the EC agreed a code of conduct with Facebook, Twitter, YouTube and Microsoft that included a series of commitments to combat the spread of illegal hate speech online in Europe. The code was aimed at guiding the activities of the signatory companies as well as sharing best practices with other internet companies, platforms and social media operators. As part of the code, tech / social media companies agreed to train staff to remove illegal hate-speech material within 24 hours of being alerted to it.
It appears, however, that many European governments feel that more needs to be done, more quickly, and that tougher standards, backed by legislative measures, may prove to be a more effective motivator for tech companies.
The EC has, therefore, decided to introduce guidelines and standards for technology companies, backed by the threat of legislation, to ensure that faster and more effective action is taken to stop unwanted material being shared online. The standards include investing in automatic detection technologies, and developing automated tools that can prevent the re-posting of content that has already been removed.
Also included in the standards will be the need for tech companies to set up dedicated points of contact that will allow the authorities to flag up offending content, and the introduction of (as yet not specified) fixed deadlines for tech companies to remove content that could cause serious harm e.g. incitement to terrorist acts.
The standards also call for tech companies to publish transparency reports that will show the number and nature of notices they have received about illegal content.
Following Calls From PM
This latest pressure from the EC follows calls from UK Prime Minister Theresa May, who said in a speech to the United Nations last month that technology companies must go "further and faster" in removing extremist content, and that tech / social media companies should be aiming to remove hate / terror content from their platforms within two hours or less.
Banning Groups Helps
Recent research by the Georgia Institute of Technology using the Reddit platform showed that banning hate groups from a social media platform not only causes them to abandon that platform (rather than just going elsewhere within it), but also reduces the levels of hate speech among any group members who stay.
What Does This Mean For Your Business?
The business world works best when customers, investors and other stakeholders have confidence in companies, brands, products and services. Those businesses that supply platforms for, or enable the sharing / distribution of, free speech of any kind e.g. social media and web companies, have a common (and commercial) duty to provide a safe online environment for their users e.g. by removing hate speech promptly, and by making their part of the online environment particularly safe for children, young people, and the vulnerable.
Surprisingly, given the level of technological expertise and investment in large social media platforms e.g. Facebook and Twitter, these companies have always struggled to moderate their platforms effectively. Many commentators recognise the importance of free speech, and acknowledge that social media companies need to strike a balance which enables them to remove unwanted content without being seen to censor and curb free speech (characteristics of authorities and governments), and thereby damage the value of their brands.
It is sadly the case that, in the business world, it often takes the threat of legislation to force businesses to act effectively on standards and guidelines, so this move by the EC may help speed things along.
Online hate speech / hate crimes and bullying are now being widely challenged e.g. Google, GoDaddy, and Cloudflare's decision to stop serving the neo-Nazi site The Daily Stormer, and the UK Crown Prosecution Service's move to treat online hate crime as seriously as offences carried out face to face, with tougher penalties and sentences for online abuse on social media platforms.
Anything that contributes to a safer online environment can only really benefit businesses as well as society. Businesses and organisations of all kinds can also help the common purpose of minimising online hate crime through education of their staff / pupils / customers / users / stakeholders about their own policies for the treatment of those discovered to be using hate speech e.g. at work (online).
We, as individual members of society, can all play our part in making the online environment safe for all by challenging and reporting hate speech where we find it. The stance of open rights / free speech organisations such as the ORG is important, but so is ensuring that the Internet is a safe place for all.