Update, April 2023: please see our Open Letter on the Online Safety Bill.
Update, April 2024: follow-up piece about influencing the Online Safety Bill, and the new Criminal Justice Bill.
Content warning: This blog post discusses self-harm and suicide, including methods, wounds, and deaths.
Everyone deserves to feel safe online. People should have choices about what content they view, and companies should be responsible and accountable for the content that is allowed to circulate on their platforms. This is the primary basis of the Online Safety Bill, a piece of legislation that is currently being negotiated within the House of Commons. The bill deals with various issues – from revenge porn and death threats, to far-right propaganda and cyberbullying. More recently, the government has pledged to extend the bill to criminalise content that is seen to ‘encourage’ self-harm – largely based on concerns around the impact of self-harm related content on young people.
In this article we hope to give readers an overview of the history and issues relating to this new crime of ‘encouragement’. We write as people with experience of self-harm, researchers in the field of self-harm and suicide studies, and as Directors and facilitators at Make Space, a user-led collective supporting people with experience of self-harm.
People viewing self-harm content online should be protected and supported in that experience, and those posting content should have access to appropriate and affirming care. We do not believe the bill will achieve either of these things. Instead, the bill is a worrying legal development with the potential not only to criminalise distress but also to make content aimed at peer support or harm reduction illegal.
We agree that there are conversations to be had about self-harm content online, but criminalising distress is not the answer. If legislators and campaigners are concerned about the safety of people with experience of self-harm, they should not make their experiences illegal. Instead, they would be better off ensuring there is safe and affirming care around self-harm, as well as holding social media platforms to account when they prioritise profit and engagement over the safety and consent of their users.
History of the bill and inclusion of self-harm
The Online Safety Bill includes a section on suicide related content – specifically, extending offences included in Section 2 of the Suicide Act 1961 to the online landscape. Section 2 of the Suicide Act makes people criminally liable for another’s death if they are found to have intentionally encouraged or assisted that person’s suicide or attempted suicide. The maximum penalty for this offence is 14 years’ imprisonment. The Online Safety Bill would make it illegal to encourage suicide online and would also place obligations on platforms and regulators to identify, remove, and otherwise restrict the circulation of this content.
The inclusion of suicide related content was welcomed by various national mental health charities and campaigners. However, many people raised concerns that the bill did not go far enough and suggested that it must also include self-harm related content. Parliament has since pledged to do so – motivated both by recommendations from the Law Commission on the ‘glorification’ of self-harm online and by recent inquest reports following the deaths of young people who engaged with self-harm content online and later died by suicide.
We have not yet seen the specific wording of this new crime, but it is expected it will be very similar to the crime of encouragement in the Suicide Act. This would make it illegal for people to share online content seen to ‘encourage’ or ‘assist’ self-harm and create legal avenues for platforms to face consequences if they do not curb, restrict, or otherwise ‘protect’ people from it.
Encouragement or algorithm?
We hope that the bill may help to curb the negative aspects of algorithms that social media sites rely on to achieve and maintain high levels of user engagement. These algorithms are designed to keep us online, sending us more content we may like, and tailored adverts compelling us to buy things. When it comes to hobbies or interests, algorithms are creepy and capitalistic – and dubious in terms of consent – but they are generally quite banal. But not everyone searches for videos of cats being ridiculous or for one-pot pasta recipes. Sometimes people search for content on suicide, self-harm, and distress more broadly. In these latter instances, the work of the algorithm is not just creepy – it is dangerous.
While a lot of posts on self-harm come with content warnings from the platform itself, and are sometimes removed for breaching community guidelines, they are generally easy to access. The algorithm then does the same thing with self-harm content as it does with dog videos – it curates a person’s feed in a way that makes it almost impossible to get away from. We do not see anything inherently wrong with searching for content on self-harm or creating it. It is, however, non-consensual and dangerous for people to end up in cycles of self-harm related content consumption promoted by algorithms. People can find themselves caught in endless webs of content and, in turn, become victims of social media’s desire for profit over the safety, wellbeing, or consent of its users.
Concerns about the bill
The Online Safety Bill extends its focus significantly beyond algorithms or the responsibility of platforms. It is here that we begin to take issue with the bill. In particular we are concerned with the ambiguity of what it means to ‘encourage’ self-harm, and the potential for this to criminalise people who are simply trying to share their own experiences. We think that several key aspects of online self-harm content, related to malicious intent, peer support, and the nature of sharing one’s own experiences, have been under-explored and inadequately considered in the bill’s drafting.
Malicious content
In some instances, trolls comment on people’s posts encouraging them to kill or hurt themselves. They may maliciously suggest methods or non-consensually send people self-harm or suicide-related content. While we don’t believe that the criminal ‘justice’ system is very good at accountability or healing, we agree this kind of action is wrong and cruel. But not everyone posting self-harm related content does so maliciously. Some are trying to help people; some want to be part of something. Others aren’t trying to influence other people at all – they are just sharing their lives online, which includes their self-harm. In their response to the Law Commission’s consultation on online harms, the Samaritans recommended the offence should require ‘malicious intent… be demonstrated’ to ensure that people who post about their own experiences online are not criminalised. However, no such limit has been included within the proposed wording of the bill. Instead, we are left with the word ‘encouraging’ – one that is broad and worryingly vague. While many groups raise concerns about overreach of the bill, they do little to undermine (and sometimes explicitly endorse) the idea that some self-harm related content is or ought to be illegal (e.g. this article from the Samaritans). Far from rejecting or placing firm boundaries on the criminalisation of distress, both the bill and those critiquing it leave a vagueness we fear will only ever be interpreted to the detriment of vulnerable individuals.
In public discourse around the bill, the idea of criminalising the ‘encouragement’ of self-harm has become synonymous with curbing ‘abhorrent trolls‘ – as Michelle Donelan, the Culture Secretary, put it. But the high-profile cases that sparked the inclusion of this new crime did not involve the bullying or direct incitement to violence that has become central to public discussion of the bill. Rather, these people were victims of algorithms which ‘push’ or ‘promote’ content that an individual has previously searched for and engaged with. The content they found and were fed by algorithms, while upsetting, was not deliberately posted to ‘encourage’ others to self-harm. In many cases, it seems that the young people in question were not maliciously targeted, trolled, or bullied. Instead, they had encountered other people sharing their experiences of self-harm (often hundreds, or thousands, of instances of it) and had either been repeatedly ‘fed’ such content by a profit-driven algorithm or had chosen to continue engaging with this material. With a huge lack of support around self-harm available outside of online communities (as we discuss below), it is no wonder that these instances left young people with nowhere to turn.
Excluding peer support
It is unlikely – or at least we hope it is unlikely – that peer support will be deliberately included in the crime of encouragement. We hope this not least because it would be absurd, but also because we are not the only people to raise these concerns. In the Law Commission’s report – where the idea of including the new crime arose – many high-profile groups argued that peer support around self-harm should be excluded from the law’s remit. Such groups include Mind, the Samaritans, and the Association of Police and Crime Commissioners. They name specific resources aimed at supporting people that could be misinterpreted as encouragement. In particular, they name Self-Injury Support’s resource on Harm Minimisation, which links to a harm-reduction resource from the National Self-Harm Network offering information on physiology in relation to self-harm. These resources give people information and tools to help them make choices about their self-harm, including how to self-harm in safer ways. This peer support is invaluable, and should never become a crime.
More worrying is the shakiness of the line between what constitutes ‘legitimate’ and ‘illegitimate’ peer support. People find various forms of support helpful. For some, it might be resources on cessation or harm reduction. For others, simply knowing that other people are self-harming too is helpful. Our own viewing of self-harm related content, at various points in our lives, was not about gaining support or inspiration so much as it was about feeling part of something in what is otherwise an extremely isolating experience. What was different about our viewing of this content, though, was that we could do so free from the influence or ‘push’ of an algorithm designed to feed us more of it.
Furthermore, the Law Commission report details how the national mental health charity Mind supports the bill but calls for careful exemptions around ‘legitimate and valuable’ peer support. Also in the report, the Samaritans’ CEO makes a similar distinction between ‘safe, supportive spaces online’ and self-harm content that is ‘clearly harmful’. These comments distinguish between forms of peer support deemed acceptable and ‘helpful’ and those which are not – yet we are far from certain that such a distinction is straightforward to establish. The Samaritans list examples of peer support that may not be helpful – such as sharing information on methods of self-harm or on concealing evidence and injuries, and content that potentially serves to ‘inculcate the belief that self-harm is an effective coping strategy’. We find such a list worrying. Through our work we have heard from many people who have genuinely found self-harm to be a helpful tool in their lives, sometimes even serving to prevent thoughts of or acts toward suicide. Many live with shame about their wounds/scars, and face cruel and violent reactions when their self-harm is ‘discovered’ – shame and cruelty that often go on to cause more self-harm. Having information about how to cover scars, or how to maintain boundaries around who does and does not know about your self-harm, is therefore effective peer support. It seems that what makes this kind of ‘informal’ peer support on social media illegitimate is simply the absence of professionalised monitoring or oversight.
Moreover, it is worth considering how such distinctions frame ‘harm’ as enacted solely through the presence of self-harm content online. This leaves unnoticed the reasons why someone might seek support online in the first place, including the shocking dearth of in-person support and care with regards to self-harm. It disconnects people’s desire to engage with online content from a social context in which self-harm is highly stigmatised, in which healthcare is often difficult or impossible to access, in which waiting lists for treatment stretch to over a year, and in which responses to self-harm (even, or often, from healthcare professionals) are frequently cruel and punitive. In such a context, what is the political impact of placing the blame for the deaths of young people on internet content created by people who self-harm? How does such a framing allow the current government’s complete failure – to provide adequate support and care to young people, and to adequately fund the overstretched and overburdened NHS and education sector – to slip under the radar? It is easier, by far, to simply criminalise the spaces where people go to share their despair than to act to mitigate or respond appropriately to that distress.
Sharing or glorifying: How we frame our own experiences
Beyond questions of peer support, the general fear around self-harm related content is that it serves to ‘glamourise’ or ‘glorify’ self-harm. The discourse of ‘glorification’ also features in the Law Commission’s report, in part because of its pre-existing use in legal frameworks around the glorification of harmful content (e.g. around terrorism and extremism). Similarly, the Samaritans’ submission discussed ‘graphic, glamourising images’ of self-harm shared online. There is, perhaps, a nuanced discussion to be had about trends in self-harm content online, about how self-harm, suicide, and self-destruction are aestheticised, and about the complex ethics of sharing experiences and feelings of intense, overwhelming distress. Yet this conversation must surely be kept separate from the criminal justice system – unless we also intend to criminalise art and literature’s long history of communicating pain and distress in aesthetic ways. The aestheticisation, glamourisation, and even glorification of pain, suffering, and self-destruction has a long and complex cultural history. To what purpose do we criminalise, demonise, or even criticise the many internet users (often young people) who aestheticise their self-harm without any malicious or cruel intent?
These posts are not designed to encourage people to self-harm. Instead, they are indicative of a broader trend of people sharing their distress and suffering online. Sharing your suffering is not, and should not be, a crime. Some consultees to the Law Commission’s report argued there should be a defence of vulnerability – namely, that people should be absolved of responsibility for the crime of ‘encouragement’ if they are a vulnerable person sharing their own experiences. No such exception is currently included in the Bill. Moreover, there is a long and complicated history of who gets to count as ‘vulnerable’. Vulnerability, sensitivity, and genuine need are less likely to be acknowledged in, or granted to, people from minoritised groups. The law’s determination of who counts as vulnerable is, over and over, steeped in racist, homophobic, transphobic, ableist, nationalist, and classist (to name a few) ideas of robustness, personhood, and deservedness of care – often with fatal consequences.
Furthermore, there is a long and complicated history – and present – of making people criminally liable for their suffering. Existing public statements on the bill fail to acknowledge that self-harm is already often criminalised, and that people who self-harm are punished rather than offered care. Consider, for example, the highly criticised Serenity Integrated Mentoring (SIM) program led by the High-Intensity Network, which began in London in 2018 and has since been rolled out across various NHS Trusts in the UK. As the Stop SIM Coalition highlighted, the program advocated for police involvement in the care of people in frequent contact with emergency services for their mental health, and for colder or even punitive responses to help-seeking by people who are in crisis and have often self-harmed. Under the SIM program, ‘repeat offenders’ can find themselves landed with fines or even threats of arrest if they reappear at A&E. This is cruel – as is the treatment of many people with experience of self-harm revealed by recent undercover investigations into inhumane conditions in psychiatric inpatient units. These attitudes and practices, from individuals and systems intended to provide ‘care’, are not unfamiliar to those of us with experience of self-harm. An empty reassurance that the Bill does not intend to criminalise those who self-harm would be of no use to us – the experiences of people who self-harm provide ample evidence that if further criminalisation is made possible, it will occur, regardless of the Bill’s intent.
‘Incidental’ content
Finally, it is important to note that many in the user-led movement have raised concerns about their content being removed or criminalised simply because they happen to have self-harm scars or wounds. For example, someone might share a picture of themselves enjoying a meal with friends in which their self-harm scars are visible. This is not criminal activity. In these instances, people are not trying to talk about self-harm at all; they are just existing.
Misapplication of the law in this regard is particularly likely because content moderation already operates under similar principles. That is to say, there are widespread reports, within communities of people who self-harm, of platforms such as Instagram removing or adding warnings to pictures in which users do not discuss or depict self-harm, but are simply present, living their lives, in bodies bearing marks of previous self-harm. This can be – and has been – hurtful. What does it mean that individuals are told their very existence is dangerous to see?
We agree people should have choices about the content they consume, including content that references/depicts self-harm only incidentally, but this requires careful consideration. Any decision about how such content might be meaningfully limited – without hurting, shaming, or wronging people going about their everyday lives – must be developed in consultation with the people whose content is most likely to be reported/withdrawn.
Conclusion
We agree that people should feel safe online and that platforms have a responsibility to curb dangerous content. But when it comes to the proposed changes to the Online Safety Bill in relation to self-harm, we fear this is not what the legislation will achieve. We are concerned the bill has the potential to criminalise peer support and, more worryingly, people who are self-harming. Distress is not a crime; creating avenues to punish people in crisis is cruel and inhumane.
What would it mean for campaigners and organisations – instead of criminalising people in distress – to fund, resource, and support genuinely helpful care around self-harm? What would it mean for social media platforms to meaningfully engage with people who self-harm on restrictions or safety measures that might make the internet feel like a safer place? What if platforms were held accountable for the harms done by their algorithms, which promote sales and user engagement over the safety and consent of users? There are certainly questions to be asked about the value, appropriateness, and potential harms of self-harm content online, but these are not for parliament or the judiciary to decide arbitrarily.
We argue that the concerns outlined above have gone unaddressed by the bill because of the failure to involve people with experience of self-harm thoroughly (or at all) in its design. In our lives and in our work at Make Space, we call every day for greater nuance, care, and generosity in conversations around self-harm. If major mental health charities and members of the UK Parliament cannot achieve this, they have no business speaking about self-harm.