The European Union has asked tech companies to continue reporting on their efforts to combat the spread of vaccine disinformation on their platforms for another six months.
“The continuation of the monitoring programme is necessary as the vaccination campaigns throughout the EU are proceeding with a steady and increasing pace, and the upcoming months will be decisive to reach a high level of vaccination in Member States. It is key that in this important period vaccine hesitancy is not fuelled by harmful disinformation,” the Commission writes today.
As participants in the bloc’s (non-binding) Code of Practice on Disinformation, Facebook, Google, Microsoft, TikTok, and Twitter have agreed to submit monthly reports, though they will move to a bi-monthly reporting schedule going forward.
The Commission said the tech giants have demonstrated that they cannot police “dangerous lies” on their own, and it continues to express dissatisfaction with the quality and granularity of the data platforms voluntarily provide about how they are combating online disinformation more generally.
“These reports show how important it is to be able to effectively monitor the measures put in place by the platforms to reduce disinformation,” said the EU’s VP for values and transparency, Věra Jourová, in a statement. “We decided to extend this programme, because the amount of dangerous lies continues to flood our information space and because it will inform the creation of the new generation Code against disinformation. We need a robust monitoring programme, and clearer indicators to measure impact of actions taken by platforms. They simply cannot police themselves alone.”
Last month, the Commission revealed plans to strengthen the voluntary Code, stating that it wants additional participants to join up, particularly from the adtech sector, to help demonetize harmful nonsense.
The Code of Practice effort began in 2018, when concerns about the impact of “fake news” on democratic processes and public discussion were at an all-time high following significant political disinformation scandals. However, the COVID-19 public health disaster heightened public awareness about harmful disinformation spreading online, bringing it into clearer focus for politicians.
EU lawmakers remain undecided about putting regional oversight of online disinformation on a legal footing, preferring a voluntary (and, in the Commission’s words, “co-regulatory”) approach that encourages platforms to act on potentially damaging but not illegal content, for example by giving users ways to report problems and appeal takedowns, without the prospect of direct legal repercussions if they fail to keep their promises.
It will, however, have a new lever in the form of the Digital Services Act (DSA) to increase pressure on platforms. The regulation, proposed at the end of last year, will set out rules for how platforms must handle illegal content.
Nevertheless, commissioners have indicated that platforms that actively participate in the EU’s disinformation Code are likely to be viewed favorably by the regulators who will be overseeing DSA compliance.
In a separate statement today, Thierry Breton, the EU’s Internal Market Commissioner, stated that the combination of the DSA and the strengthened Code will open “a new chapter in countering disinformation in the EU.”
“At this crucial phase of the vaccination campaign, I expect platforms to step up their efforts and deliver the strengthened Code of Practice as soon as possible, in line with our Guidance,” he added.
Disinformation is a difficult area for regulators to address, since the value of online content is highly subjective, and any centralized order to remove information, no matter how foolish or absurd the content in question may be, risks accusations of censorship.
The removal of COVID-19-related disinformation is clearly less contentious, given the evident public health consequences (such as from anti-vaccination messaging or the sale of defective PPE).
The Commission appears most interested in promoting speech-friendly measures taken by platforms, such as vaccine-positive messaging and the surfacing of authoritative sources of information. Its press release notes, for example, that Facebook launched vaccine profile picture frames to encourage people to get vaccinated, and that Twitter introduced prompts on users’ home timelines during World Immunisation Week in 16 countries, hosting vaccine-related conversations that received 5 million impressions.
There is also more detail on actual removals carried out in the two companies’ April reports.
Facebook, for example, claims to have taken down 47,000 pieces of content in the EU for violating its COVID-19 and vaccine disinformation policies, a modest drop from the previous month, according to the Commission.
In April, Twitter reported challenging 2,779 accounts, suspending 260, and removing 5,091 pieces of content on the COVID-19 disinformation topic globally.
Meanwhile, Google reported taking action against 10,549 AdSense URLs, which the Commission describes as a “significant increase” over March (+1,378).
But is that increase good or bad? More removals of questionable COVID-19 ads could indicate improved enforcement by Google, or a significant expansion of the COVID-19 disinformation problem on its ad network.
The difficulty for regulators attempting to tread a fine line on online disinformation is quantifying any of these internet giants’ efforts — and actually understanding their efficacy or impact — without defined reporting standards and complete access to platform data.
That would require regulation, rather than selective self-reporting.