Is the Net a different medium, in terms of free speech standards - and limits?
Frank La Rue argues not. In his view, the established global standards for the legitimate limitation of certain kinds of speech apply equally to the online world.
So, basically, speech is free unless it violates other rights (such as the rights to dignity and safety). And any limitation of offensive speech has to serve a legitimate purpose, be stated in law (and be non-arbitrary), be appealable, and carry sanctions proportionate to the particular violation.
This is the general gist of a report that La Rue has presented to the UN Human Rights Council.
Nevertheless, there is acknowledgement in his report of some differences between the digital and analogue universes:
Defamation: “because of the ability of the individual concerned to exercise his/her right of reply instantly to restore the harm caused, the types of sanctions that are applied to offline defamation may be unnecessary or disproportionate,” says La Rue.

Child pornography: “the availability of software filters that parents and school authorities can use renders action by the Government such as blocking less necessary and difficult to justify,” he argues.

Broadcast licensing: the “limited frequencies” rationale for licensing broadcasters “cannot be justified in the case of the Internet”, which has “an unlimited number of points of entry and an essentially unlimited number of users”.
Implicit here is that a lighter-touch limitation regime may be appropriate in regard to these particular aspects of freedom of speech online. Against this backdrop, La Rue proposes two further points:
No one should be held liable for content on the Internet of which they are not the author.

State censorship measures should never be delegated to private entities, meaning that intermediaries should only limit (or disclose) information through a court order or “a competent body which is independent of any political, commercial, or other unwarranted influences”.
One can see the logic as to why, for example, ISPs should not become censors or self-censors. But what the report underplays is the role of legitimate self-regulation.
According to the report, “to avoid infringing on the right to freedom of expression and the right to privacy of Internet users, the Special Rapporteur recommends intermediaries to: only implement restrictions to these rights after judicial intervention.” This suggests that unless the state intervenes with (legitimate) limitations, it might as well be a free-for-all out there.
But between these two extremes, there is already the common practice of self-regulation by private entities. This may operate via individual terms-of-service conditions, and/or through an industry body that responds to complaints under a wider code of conduct. In both these cases, limitations - such as taking down fraudulent or offensive content - happen independently of the law, as a form of corporate social responsibility.
Relevant to how they do this is the Rapporteur’s view that limitations imposed by private bodies should:

Be transparent about the measures taken;

Forewarn users before implementing restrictive measures;

Impact only the particular content involved;

Be subject to an independent appeal process.
All this is important background for assessing a real-world instance of the common practice of online newspapers moderating user comments.
In South Africa, during the xenophobic violence of 2008, the moderators of the Mail & Guardian’s Thoughtleader site let through a number of comments that were unlawful in terms of the international standards limiting hate speech. Among their reasons were:
There was such a torrent of content that they missed some comments, and no users reported abuse.

Some comments were let through in the knowledge that they would be rebutted by anti-xenophobes in other comments, as part of the debate on the site.
This case suggests that, in the real world, there are arguments additional to Frank La Rue’s for why speech in an online environment should attract only light-touch limitations.
Indeed, consider that censoring these comments would also have kept the site’s users ignorant of offensive views, and that the xenophobes would likely have found an echo-chamber online outlet elsewhere. Instead of the issue being debated, it would then have become concentrated amongst extremists shielded from challenge. Society would have been the loser.
So, here’s the point: the info-plenitude, pluralism and interactivity of the Net make some old-style limitations of speech not only hard to enforce, but even inappropriate given the nature of the medium.
This points us to a lesser role for state-enforced limitations and, at the same time, a greater part to be played by private entities: online publishers, ISPs, search engines and other “intermediaries”. All of these groups, however, need systems that align with international standards on the criteria for legitimate limitations.
And where the state does have a major role is in protecting the public through education, rather than focusing exclusively on restricting online content. In this regard, the Rapporteur calls for internet literacy to be taught at school, adding: "Training can also help individuals learn how to protect themselves against harmful content, and explain the potential consequences of revealing private information on the Internet”.
If we have to live with a free and fairly chaotic internet, with the limited limitation of offensive content left largely to private bodies to operate, then let’s build the capacity of audiences to debate, and to discern value in, the never-ending digital flood of news and views.
All this is what I'm saying in a presentation on 4 August to a conference on African Constitutionalism and the Media, convened by the Konrad Adenauer Stiftung and the University of Pretoria.
Attachment: KAS conference Berger.ppt (405 KB)