Who’s protecting your online reputation?

Defamation laws have protected the reputations of individuals for centuries. But those laws have yet to fully adapt to the changing way we publish and consume information online and the increased harm caused by online defamation.

School of Accounting and Commercial Law Associate Professor Susan Corbett says that, while it is accepted that the writer and publisher of defamatory material, online or otherwise, are liable for defamation, the responsibility of search engine providers remains unknown territory.

In a recent research paper, Susan proposes that search engine providers should be liable, but says a valid defence exists if they can prove they tried to remove defamatory material from search results or limit its reach.

Search engine providers have argued that links to websites and their corresponding snippets are determined by algorithms in response to a user’s search. Because search engines rely on automated processes, providers say they are not ‘publishers’ of defamatory material.

Overseas, this defence has received a mixed response from courts, while the High Court of New Zealand has warned the issue requires further consideration.

Susan says the element of control should be the primary determining principle of whether a search engine provider is liable for defamation.

“An online intermediary that is able to remove postings from its platform must do so if it is aware of potentially defamatory material. Should it choose not to remove the material, it will be liable for publishing.”

She argues against search engine providers’ main line of defence by pointing out there are technologies available that enable them to moderate content.

“Behind every computer algorithm is a computer programmer who wrote it. Algorithms, therefore, are not objective.

“Although it might not be possible to moderate every decision made by an algorithm, algorithms could be designed to recognise content such as hate speech and ridicule. Once websites with such content are flagged, a human moderator could then intervene.”

She recognises these moderating technologies aren’t a perfect solution. “Although the search process would be slowed, it may at least offer some degree of protection to that most valuable of our properties—reputation.”