As part of a series on the rising global phenomenon of online harassment, the Guardian commissioned research into the 70m comments left on its site since 2006 and discovered that of the 10 most abused writers, eight are women and the two men are black.
Comments allow readers to respond to an article instantly, asking questions, pointing out mistakes, offering new leads. At their best, comment threads are thoughtful, enlightening and funny: online communities where readers interact with journalists and with each other in ways that enrich the Guardian's journalism.
But at their worst, they are something else entirely.
The Guardian was not the only news site to turn comments on, nor has it been the only one to find that some of what is written below the line is crude, bigoted or simply vile. On all news sites where comments appear, things are too often said to journalists and other readers that would be unimaginable face to face; the Guardian is no exception.
New research into our own comment threads provides the first quantitative evidence for what female journalists have long suspected: that articles written by women attract more abuse and dismissive trolling than those written by men, regardless of what the article is about.
Although the majority of our regular opinion writers are white men, we found that those who experienced the highest levels of abuse and dismissive trolling were not. The 10 regular writers who got the most abuse were eight women (four white and four non-white) and two black men. Two of the women and one of the men were gay. And of the eight women in the top 10, one was Muslim and one Jewish.
And the 10 regular writers who got the least abuse? All men.
How should digital news organisations respond to this? Some say it is simple: "Don't read the comments" or, better still, switch them off altogether. And many have done just that, disabling their comment threads for good because they became too troublesome to bother with.
But in so many cases journalism is enriched by responses from its readers. So why disable all comments when only a minority of them are a problem?
At the Guardian, we felt it was high time to examine the problem rather than turn away.
We decided to treat the 70m comments that have been left on the Guardian, and in particular the comments that have been blocked by our moderators, as a huge data set to be explored rather than a problem to be brushed under the carpet.
This is what we discovered.
We focused on gender in this research partly because we wanted to test the hypothesis that women experience more abuse than men. But both writers and moderators observe that ethnic and religious minorities, and LGBT people, also appear to experience a disproportionate amount of abuse.
On the Guardian, commenters are asked to abide by our community standards, which aim to keep the conversation respectful and constructive; comments that fall foul of those standards are blocked. The Guardian's moderators don't block comments simply because they don't agree with them.
The Guardian also blocks comments for legal reasons, but this makes up a very small proportion of blocked comments. Spam is not blocked (ie replaced by a standard moderator's message) but deleted, and is not included in our findings; neither are replies to blocked comments, which are themselves automatically deleted.
The vast majority of blocked comments, therefore, were blocked because they were considered abusive to some degree, or were otherwise disruptive to the conversation (they were off-topic, for example). For the purposes of this research, we therefore used blocked comments as an indicator of abuse and disruptive behaviour. Even allowing for human error, the large number of comments in this data set gave us confidence in the results.
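The idea of using blocked comments as a proxy measure can be sketched in a few lines of code. This is a hypothetical illustration, not the Guardian's actual analysis pipeline: the record format and field names are invented, and a real study would also control for article topic and comment volume.

```python
from collections import defaultdict

# Hypothetical moderation records: (author, was_blocked) pairs.
# In the real data set, each of the 70m comments would carry a
# moderation outcome like this.
comments = [
    ("writer_a", True), ("writer_a", False),
    ("writer_a", False), ("writer_a", False),
    ("writer_b", False), ("writer_b", False),
]

def blocked_rate_by_author(records):
    """Return each author's share of comments that moderators blocked,
    used here as a rough proxy for the level of abuse they attract."""
    totals = defaultdict(int)
    blocked = defaultdict(int)
    for author, was_blocked in records:
        totals[author] += 1
        if was_blocked:
            blocked[author] += 1
    return {author: blocked[author] / totals[author] for author in totals}

rates = blocked_rate_by_author(comments)
# writer_a: 1 blocked of 4 -> 0.25; writer_b: 0 blocked of 2 -> 0.0
```

Comparing these per-author rates across groups of writers (by gender, ethnicity and so on) is, in essence, how a blocked-comment count becomes a quantitative claim about who attracts the most abuse.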
But what do we mean by abuse and disruptive behaviour?
At its most extreme, online abuse takes the form of threats to kill, rape or mutilate. Thankfully, such abuse was extremely rare on the Guardian, and when it did appear it was immediately blocked and the commenter banned.
Less extreme abuse (demeaning and insulting speech targeted at the writer of the article or at another commenter) is much more common on all online news sites, and it formed a significant proportion of the comments that were blocked on the Guardian's site, too.
Here are some examples: a female journalist reports on a demonstration outside an abortion clinic, and a reader responds, "You are so ugly that if you got pregnant I would drive you to the abortion clinic myself"; a British Muslim writes about her experiences of Islamophobia and is told to "marry an ISIS fighter and then see how you like that!"; a black correspondent is called "a racist who hates white people" when he reports the news that another black American has been shot by the police. We wouldn't tolerate such insults offline, and at the Guardian we don't tolerate them online either.
The Guardian also blocked ad hominem attacks (on both readers and journalists): comments such as "You are so unintelligent", "Call yourself a journalist?" or "Do you get paid for writing this?" are facile and add nothing of value to the debate.
Dismissive trolling was blocked too: comments such as "Calm down, dear", which taunted or otherwise dismissed the author or other readers rather than engaging with the piece itself.
We know that abuse online isn't always targeted at individuals. Hate speech as defined by law was rarely seen on Guardian comment threads (and when it did appear it was blocked and the commenter banned). But xenophobia, racism, sexism and homophobia are seen regularly. Take, for example, some of the comments left below an article on the mass drownings of migrant men, women and children in the Mediterranean: "These people contribute nothing to the countries they enter"; "The more corpses floating in the sea, the better"; "LET THEM ALL DROWN!" At the Guardian, comments like these are considered abusive and were blocked from appearing on the site.
The Guardian also blocked comments that would otherwise disrupt or derail the debate: whataboutery of various kinds, or remarks that are clearly off-topic. While not abusive in themselves, such comments make a constructive debate impossible, and show a lack of respect to the journalist and to other commenters in the thread.
Sometimes moderation decisions are easy; at other times it can be difficult to know where to draw the line. All are based on the Guardian's community standards, not moderators' personal tastes and opinions.
At the Guardian, readers and journalists can report abusive or off-topic comments, and moderators will quickly block them if they breach the community standards. Moderation minimises the damage done by abuse that is posted on the site.
But for journalists, abuse is rarely confined to the site on which their work appears, and on some sites and social media platforms it can be very hard to get abusive posts removed. So for them, the abuse they receive below an article they have written is not experienced in isolation: each snarky comment, each spiteful tweet, is (as Zoe Quinn once put it) simply one snowflake in an avalanche.
And avalanches happen easily online. Anonymity disinhibits people, making some of them more likely to be abusive. Mobs can form quickly: once one abusive comment is posted, others often pile in, competing to see who can be the most cruel. This abuse can move across platforms at great speed, from Twitter to Facebook to blogposts, and it can be viewed on multiple devices: the desktop at work, the mobile phone at home. To the person targeted, it can feel as if the perpetrator is everywhere: at home, in the office, on the bus, in the street.
People who find themselves abused online are often told to ignore it: it's only words; it isn't real life. But in extreme cases, that distinction breaks down completely, such as when a person is doxed or SWATed, when nude photos of the person are posted without consent, or when a stalker assumes the person's identity on an online dating site and a string of all-too-real men appear at their door expecting sex. As one woman who had that experience said: "Virtual reality can become reality, and it ruins your life."
But in addition to the psychological and professional damage online abuse and harassment can cause to individuals, there are social harms, too. Recent research by the Pew Centre found that not only had 40% of adults experienced harassment online, but 73% had witnessed others being harassed. This must surely have a chilling effect, silencing people who might otherwise contribute to public debates, especially women, LGBT people and people from racial or religious minorities, who see others like themselves being racially and sexually abused.
Is that the kind of culture we want to live in?
Is that the web we want?
Even five years ago, online abuse and harassment were dismissed as no big deal. That is no longer true. There is widespread public concern, and growing support for anti-harassment proposals. But no one is pretending that this is an easy problem to fix: not on the Guardian's comment threads, where most commenters are respectful and where there is already a high level of moderation, and certainly not on the web as a whole, where there are sometimes no safeguards at all.
The Guardian is committed to tackling the problem. This research is part of that: an attempt to be open, and to share publicly what has been discovered. We hope to do more research to dig deeper into the problem, and to discover not only what causes online conversations to go awry, but also what media organisations can do to make those conversations better, and more inclusive.
The Guardian has already taken the decision to reduce the number of places where comments are opened on stories relating to certain particularly contentious subjects, such as migration and race. This allows moderators to keep a closer watch on conversations that we know are more likely to attract abuse.
However, unlike many news sites, the Guardian has no plans to close comments altogether. For the most part, Guardian readers enrich the journalism. Only 2% of comments are blocked (a further 2% are deleted because they are spam or replies to blocked comments); the majority are respectful and many are wonderful. A good comment thread is a joy to read, and more common than the "don't read the comments" detractors believe.
As Prof Danielle Keats Citron argues in her book, Hate Crimes in Cyberspace, abusive behaviour is neither normal nor inevitable. Where it exists, it is a cultural problem that, collectively, we must try to solve using all the means at our disposal: technological and social.
Which is where you come in. We want to hear from Guardian readers: when it comes to providing a space where everyone feels able to participate, what is the Guardian doing right, and how could we improve? Please take a moment to tell us here.