News

We must learn more about who is making the Internet unsafe

Avast Foundation supports students at The Oxford Internet Institute to explore ways to mitigate the damage of trolling behaviour.

As we launched the initial results of our Troll Free Future research, which profiles those engaged in abusive and damaging behaviour online, we also announced that we will be supporting students at the Oxford Internet Institute (OII) to explore ways to mitigate the damage of trolling behaviour and to develop preventative methodology. To better understand both the research and the longer-term projects we will be working on together, we spoke with Anna George, a Social Data Science doctoral student who uses computational approaches to study online communication and behaviour. Her research focuses on the narrative transmission of harmful communities such as hate groups, and she is leading extensive research into the motivation behind trolling behaviour.

What is the significance of this research?

These findings provide a fresh look at who is abusing others online and why. While a lot of the research I have done examines how the behaviour manifests, this research tries to draw out the motivation behind why people are harmful online. Without understanding that, it's very hard to begin to address how to prevent the harm they do and make the online world a safer place.

Because it's hard to get trolls to admit to being trolls simply by asking, this Avast Foundation survey was designed to gain information, without being confrontational, on why people are spreading abuse online. Its significance lies in being able to tap deeper into the motives of those perpetrating online harms and the differences by age group and gender.

My research takes a different methodological approach to studying the same problem. I identify trolls by tracing the harms that are being done online and who is doing them. Survey data depend on the honesty of participants, which can be inaccurate because we like to see ourselves as good people and don't always admit to the things we've done wrong. By contrast, my work looks directly at the harmful messages that trolls are posting online.

I share this goal with others, including the Avast Foundation: to make the internet a safe place. And in order to do that, we have to look at those who are making it an unsafe place.

What does it tell us about the scale or severity of the trolling problem?

What the survey does tell us about is differences in offensive behaviour online by age group and gender. It highlights that younger age groups are more likely than older age groups to admit to saying something intentionally offensive to people online: two thirds of UK 16-23 year olds have engaged in offensive online behaviour. Looking at gender, men are far more likely to engage in this behaviour than women.

"It makes me wonder how much of this is because society and online companies have been poor at policing and setting acceptable norms around online behaviours."  

What stood out most to you in terms of the behaviour/attitudes of those responsible?

What stood out to me most is that even though most people said they weren't more aggressive online than offline, about a quarter of respondents admitted to being more aggressive online than offline, and two thirds said they think that people are more aggressive online than in person. To me this says that perhaps people think they are allowed to be aggressive online, and it makes me wonder how much of this is because society and online companies have been poor at policing and setting acceptable norms around online behaviours.

What do you think drives this behaviour? 

What we know from previous research is that trolls do this for a variety of reasons. Those who do it for political reasons seek to attack marginalised communities in order to silence and intimidate them. Others engage in this behaviour for personal entertainment; they are deliberately provocative in order to inspire conflict and to derive validation from acting as groups against others. This survey showed that sometimes people are driven by anger or by “jumping on the bandwagon” to join others who are being harmful online.

What is interesting about the survey is the result showing how many younger people admit to seeing people they know, even friends, as “fair game” to be harmful to online, and how matter-of-fact they are about being offensive to friends, for instance based on their appearance. This behaviour essentially amounts to bullying, which is a quite different motivation.

What can we do about the problem as a society, as parents, as teachers etc?

I think it needs to be multi-pronged, and part of that is recognising that there are different reasons for the abuse: sometimes it is coordinated and organised in a forum, and sometimes it is spontaneously inspired by anger. There are situations that lead people who otherwise wouldn't to engage in online hate, so if we understand those situations we can try to prevent it. But if the abuse is more coordinated and thought-through, that takes a different approach; for the latter, the platforms are best suited to respond to the coordinated attack.

I don’t think one single intervention can solve the problem - it takes multiple interventions from platforms, police, and policy makers. For instance, I would say there need to be processes in place for victims of abuse to seek legal recourse, just as they can in real life when making complaints about abuse or harassment.

Parents and teachers play their role as well, but I would place less of the focus on them. For instance, they can have a role through education, similar to what is done around bullying, in talking about online hate and how it isn't OK.

“The main response should not be for victims to have to safeguard themselves.” 

What can victims of this behaviour do to protect themselves from its impact?

The main response should not be for victims to have to safeguard themselves. 

What we are trying to do is stop the behaviour from happening and focus on those who inflict the harm. But what we have learnt is that simply being online can make someone a target, and we can't just tell people to stay offline because being online is so well integrated into society. There do, however, need to be more easily available and accessible support responses.

As the issue intensifies, we need to ask ourselves if social media users and platform owners are positioned to deal with this, how we can support victims, and how online abuse can be controlled. Action must be taken to make sure we have an internet that is safe for all.

Avast Foundation is now part of Giving@Gen

On September 12, 2022, Avast merged with NortonLifeLock, Inc., and a successor company, Gen, was launched. Gen is a global company powering Digital Freedom through consumer brands including Norton, Avast, LifeLock, Avira, AVG, ReputationDefender, and CCleaner. Gen’s vision is to power Digital Freedom by protecting consumers and giving them control of their digital lives. Gen’s philanthropy and corporate responsibility program, Giving@Gen, is a big part of that mission, and will draw on the legacy of the Avast Foundation and NortonLifeLock Cares programs.

To learn more about Giving@Gen, please visit Gen’s corporate responsibility website.

Learn more