What is the problem with the social networks we inhabit online?
It’s this: the spaces where so many of us communicate, collaborate, and stay in touch with friends are also riddled with harassment. This ranges from extensive, organised campaigns to more personal attacks directed at people of colour and female public figures. And we know that these can escalate into something far more violent and menacing.
We could not have imagined that the likes of Facebook, Twitter and Instagram would become such a dominant part of our lives, and so quickly. Facebook, in particular, has a bigger ‘population’ than any country to have ever existed – and it grew far faster than any imperialistic nation-state we have known.
But this rapid growth was accompanied by a proliferation of hoaxes, misinformation and disinformation designed purposefully to mislead. These interconnected problems are the inevitable by-products of a flawed system.
The creators of our social networks did not imagine the myriad ways in which we would use them or the extent to which we would come to rely on them. When it was first launched, Twitter didn’t allow users to reply to tweets or to engage in conversations. The main functionality was simply posting. Facebook wasn’t created to become the biggest social network in the world – it was designed to help college students keep in touch.
To understand the challenges facing social networks, try comparing them to cities. What happens when a city which was designed for 50,000 inhabitants becomes a home for millions? And alongside the unanticipated scale is the sheer speed of growth – Facebook went from creation to one billion users in less than a decade. If that kind of growth happened where you live, can you imagine how the infrastructure around you might have fared?
The social networks that most of us recognise – Facebook, Instagram, and Twitter – allow (and encourage) us to combine the personal and professional. We often use our real names in these spaces (in fact, for a period of time, Facebook required users to use their real names), and we have come to rely on such platforms for our professional and social connections, our business interests, and for sustaining many a friendship or relationship.
However, there are other social networks in the internet’s more anonymous spaces. There are digital spaces that use pseudonyms, and focus on meme-style content being posted with threaded replies; places like 4chan, 8chan, and Reddit. Whether you engage with these platforms or not, they exist on the same internet you use, and their discussions and activities can shift frictionlessly from one platform to another.
We see this with political movements being planned and coordinated in one space, then proliferating beyond. The internet is an ecosystem. Much like a city, something which takes place in an adjacent neighbourhood can end up affecting your own. An electrical shortage or gas leak can have an impact on surrounding areas, and even on the entire city. If one road is blocked, traffic spills out into the streets around it.
I started studying political movements online in 2012, analysing the role that Twitter played in how information was shared during the Tahrir Square protests. In 2014, my research shifted to online harassment, focusing on Gamergate – a large-scale harassment campaign targeting women in games. It was discussed and planned across 4chan, 8chan and Reddit, and then initiated and amplified on Twitter and Facebook. Gamergate had some of the hallmarks of a protest campaign – planned coordination amongst users, a rallying cry (for Gamergate, a hashtag), and content shared across many social networks. The difference is that Gamergate was harassing innocent people, as opposed to fighting political injustice. And what my research revealed was that the very same infrastructure designed for good was being deliberately and systematically subverted.
4chan is an important example of harm by design. For years, the ethnographer Whitney Phillips and I have been discussing how humour can be weaponised as a means of disguising harassment in digital spaces. We can’t see or hear someone when they type something online, so much of the intent behind a conversation is obscured. Phillips wrote ‘This Is Why We Can’t Have Nice Things’, which highlights how the design of early 4chan, with its offensive humour and rhetoric, laid the cultural foundations for the contemporary culture of harassment and trolling in so many online spaces.
4chan’s design as a social network centres on anonymous content being posted and replied to; the more extreme and offensive the humour, the more it’s replied to and quoted elsewhere. In this way, so-called ‘memes’ are born and circulated. Phillips describes this as the ‘myopic gaze of LOLs’, where ‘channers’ would take ‘pieces of a story, and laugh at those pieces, and not engage with the full, political context...There’s a ton of behaviour that ends up being harmful that doesn’t intend to be harmful.’ This kind of myopia is a large part of the design of 4chan’s social infrastructure. Is a rape threat a joke, for example? Is meme-ing a terrorist attack a joke? It is difficult to draw this line from a policy standpoint if you’re designing a social network, but it’s perhaps even more important to ask a deeper question: what is the impact of this violent rhetoric when it spreads as a ‘humorous’ meme?
Which brings us to today. What is happening right now within our internet? Social networks can exist as many things: tools, platforms, weapons, amplifiers. Our social networks allow for activism as well as harassment; if a system allows for coordination of a movement like #MeToo, it also allows for the coordination of #Gamergate. Can we protect one kind of activity whilst curtailing the other? Should we? Misinformation, protest, and harassment campaigns use social networks in similar ways for good or for ill because of how constrained social networks are by design: from the technical infrastructure to the policy framework to the social culture. These networks are about one thing: posting and responding to content, at scale. Their core design hasn’t changed in years.
Remember that the platforms we use were originally designed for small, specific, and incredibly mono-cultural communities. While we’re considering the origins of the social web, it’s important to point out just how homogenous the internet was in the early 1990s, and even the early 2000s. It was a predominantly white and male space, with users who had deep technical knowledge. It wasn’t designed for everyone.
Much like our cities, social networks possess layers of infrastructure which shape how we use them: content policies, code frameworks, the social bonds which people organically create in their own communities. And there are some things missing: privacy tools and filters, robust mitigation or moderation of content and action. Their absence leads to a social environment which can easily facilitate organised harassment – sustained campaigns that are targeted and contextual. Information travels so easily across social networks because they exist to make connections through content. Viral sharing on social networks is a consequence of a design decision.
And there comes a point when these decisions need to be revisited wholesale. Tinkering is no longer enough – eventually, the infrastructure may need to be rebuilt. If we care about accessibility, we need a city with wider streets, sidewalks, wheelchair ramps, and elevators. And in the same way, the infrastructure which currently underpins our social networks may need to reflect a new set of priorities. What would social networks look like if user safety, data privacy, ethics – or, actually, human care – were the main focus? A gas supplier or rail network is not allowed to put profit ahead of customer safety. What if similar notions of human care were considered non-negotiable in the design of social networks? What would Facebook, Twitter, or Instagram look like then?
It’s becoming clear that social networks should be designed for user safety and user agency. Algorithmic intervention or filtering – ‘coding out toxicity’ – won’t solve online harassment. After all, tech companies cannot by themselves fix the bad or messy aspects of human behaviour. And perhaps we shouldn’t give that much control over our activities to social networks anyway. What we need is a delicate, intentional, designed balancing act – creating enough openness for users to make their own decisions, offset by a sufficient focus on privacy and user augmentation. Design needs to give users the proper tools to mitigate harassment when they encounter it.
by Caroline Sinders
About the writer
Caroline Sinders is a machine learning designer/user researcher, artist, and digital anthropologist. She has been examining the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital, conversational spaces.
For Anyone//Anywhere: The web at 30, the British Council is proud to be collaborating with the Barbican on a series of essays from leading international writers and thinkers whose work explores the impact of technology on our lives. Find out more about the Barbican’s Life Rewired season, which throughout 2019 explores what it means to be human when technology is changing everything.