Closed petition Make verified ID a requirement for opening a social media account.

Make it a legal requirement, when opening a new social media account, to provide a verified form of ID. Where the account belongs to a person under the age of 18, verify the account with the ID of a parent/guardian, to prevent anonymised harmful activity and provide traceability if an offence occurs.

More details

My son Harvey is disabled. He is also the kind and gentle son of a person regularly in the public eye. The Online Harms Bill doesn’t go far enough in making online abuse a specific criminal offence and doing what ‘Harvey’s Law’ intended. Making the law work requires the removal of anonymity, to ensure that users cannot cause harm by using online platforms to abuse others. Where an offence has taken place, offenders ought to be easily identified, reported to the police and punished. We have experienced the worst kind of abuse towards my disabled son and want to make sure that no one can hide behind their crime.

This petition is closed. All petitions run for 6 months.

696,985 signatures



Parliament will consider this for a debate

Parliament considers all petitions that get more than 100,000 signatures for a debate

Waiting for 219 days for a debate date

Government responded

This response was given on 5 May 2021

The Online Safety legislation will address anonymous harmful activity. User ID verification for social media could disproportionately impact vulnerable users and interfere with freedom of expression.

Read the response in full

The government recognises concerns linked to anonymity online, which can sometimes be exploited by bad actors seeking to engage in harmful activity. However, restricting all users’ right to anonymity, by introducing compulsory user verification for social media, could disproportionately impact users who rely on anonymity to protect their identity. These users include young people exploring their gender or sexual identity, whistleblowers, journalists’ sources and victims of abuse. Introducing a new legal requirement, whereby only verified users can access social media, would force these users to disclose their identity and increase the risk of harm to their personal safety.

Furthermore, users without ID, or users who are reliant on ID from family members, would experience a serious restriction of their online experience, freedom of expression and rights. Research from the Electoral Commission suggests that there are 3.5 million people in the UK who do not currently have access to a valid photo ID.

The online safety regulatory framework will have significant measures in place to tackle illegal and legal but harmful anonymous abuse. Services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content, including criminal anonymous abuse. Major platforms will also need to set out clearly what legal anonymous content is acceptable on their platform and stick to it. The government will set out priority categories of legal but harmful material in secondary legislation.

Users will also be better able to report harmful content, and expect to receive an appropriate response from the company. This may include, for example, the removal of harmful content, or sanctions against offending users. Compliance with the online safety framework will be enforced by Ofcom, who will have a suite of powers to use against companies who fail to fulfil the duty of care. These include fines on companies - of up to £18m or 10% of annual global turnover - and business disruption measures. The Online Safety Bill, which will give effect to the regulatory framework outlined in the full government response, will be ready this year.

Protecting children is at the heart of our plans to transform the online experience for people in the UK and the strongest protections in this framework will be for children. All companies in scope will be required to assess whether children are likely to access their services, and if so, provide additional protections for them. They will be required to assess the nature and level of risk of their service specifically for children, identify and implement proportionate mitigations to protect children, and monitor these for effectiveness. We expect companies to use age assurance or age verification technologies to prevent children from accessing services which pose the highest risk of harm and to provide children with an age appropriate experience when using their service.

The police already have a range of legal powers to identify individuals who attempt to use anonymity to escape sanctions for online harms, where the activity is illegal. The government is also working with law enforcement to review whether the current powers are sufficient to tackle illegal anonymous abuse online. The outcome of that work will inform the government’s future position in relation to illegal anonymous online abuse.

The Government has also asked the Law Commission to review existing legislation on abusive and harmful communications. The Law Commission has consulted on proposed reforms and a final report is expected in the summer. We will carefully consider using the online harms legislation to bring the Law Commission’s final recommendations into law, where it is necessary and appropriate to do so.

Anonymity underpins people’s fundamental right to express themselves and access information online in a liberal democracy. Introducing a new legal requirement for user verification on social media would unfairly restrict this right and force vulnerable users to disclose their identity. The Online Safety legislation will address harmful anonymised activities online and introduce robust measures to improve the safety of all users online.

Department for Digital, Culture, Media and Sport

This is a revised response. The Petitions Committee requested a response which more directly addressed the request of the petition. You can find the original response towards the bottom of the petition page: https://petition.parliament.uk/petitions/575833

Other parliamentary business

MPs to debate Online Anonymity and Anonymous Abuse

MPs will debate Online Anonymity and Anonymous Abuse on Wednesday 24 March in the main House of Commons Chamber. The subject of the debate has been determined by the Backbench Business Committee.

This will be a general debate. General debates allow MPs to debate important issues; however, they do not end in a vote, nor can they change the law.

The debate will start at around 12.30pm, following Prime Minister's Questions.

Watch here this Wednesday: https://www.parliamentlive.tv/Event/Index/4ee11fcd-f567-4897-8c0b-f04de0be4caa

You'll be able to read a transcript of the debate a few hours after it happens: https://hansard.parliament.uk/commons/2021-03-24

Find out more about how Parliamentary debates work: https://www.parliament.uk/about/how/business/debates/

Find out more about the Backbench Business Committee: https://committees.parliament.uk/committee/202/backbench-business-committee/

This debate is in addition to any debate the Petitions Committee schedules on this petition. We’ll message you to let you know as soon as the Committee schedules a debate on this petition.

Relevant work by the Petitions Committee

The Petitions Committee has been looking at how to tackle online abuse, following a number of petitions calling for action on the issue.

You can find out more about the Committee's work on online abuse, and read transcripts of its evidence, here: https://committees.parliament.uk/work/307/tackling-online-abuse/

Original Government response

Being anonymous online does not give anyone the right to abuse others. The Government’s online harms legislation will address harmful online abuse, including when the perpetrator is anonymous.

We are taking steps through the online harms regulatory framework to address abuse and other harmful behaviour online, whether committed anonymously or not.

In December 2020, we published the full government response to the Online Harms White Paper consultation, which sets out new expectations on companies to keep their users safe online. Social media, websites, apps and other services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal abuse on their services, including illegal anonymous abuse and other illegal harmful content, such as terrorist material or child sexual abuse imagery.

Major platforms will also need to set out clearly what legal content is acceptable on their platform and stick to it. This will include stating whether they will tolerate abuse that doesn’t meet a criminal threshold, whether anonymous or not.

The framework will deliver a higher level of protection for children online, with companies needing to protect children from inappropriate content and harmful activity. Companies will need to prove children are not accessing their service, or they will need to conduct a child safety risk assessment and provide safety measures for child users. These include protecting children from inappropriate and harmful content like pornography, and abusive behaviour such as trolling and pile-on abuse.

Companies will also have a duty to ensure they have effective and accessible reporting and redress mechanisms. These will need to allow users to report abuse, including anonymous abuse. Appropriate responses from the company might include removal of harmful content, sanctions against offending users, or changing processes and policies to better protect users. Users will also be able to report concerns to the regulator as part of its research and horizon-scanning activity.

Compliance with the online harms framework will be enforced by Ofcom, who will have a suite of powers to use against companies who fail to fulfil the duty of care. These include fines on companies - of up to £18m or 10% of annual global turnover - and business disruption measures. The Online Safety Bill, which will give effect to the regulatory framework outlined in the full government response, will be ready this year.

We are also considering the criminal law and its ability to deal with harmful communications online. The Government has asked the Law Commission to review existing legislation on abusive and harmful communications, including anonymous abuse. The Law Commission has consulted on proposed reforms and a final report is expected in the summer. We will carefully consider using the online harms legislation to bring the Law Commission’s final recommendations into law, where it is necessary and appropriate to do so.

The police already have a range of legal powers to identify individuals who attempt to use anonymity to escape sanctions for online abuse, where the activity is illegal. Police reporting shows that in 2017/18, 96% of attempts by public authorities to identify the anonymous user of a social media account, email address or telephone resulted in successful identification of the suspect in their investigation. The Investigatory Powers Act allows police to acquire communications data, such as an email address and the location of the device from which illegal anonymous abuse is sent, and use it as evidence in court. The government is working with law enforcement to review whether the current powers are sufficient to tackle illegal anonymous abuse online. The outcome of that work will inform the government’s future position in relation to illegal anonymous online abuse.

It is important to recognise the benefits of anonymity, as well as the challenges. Anonymity underpins people's fundamental right to express themselves and access information online in a liberal democracy. This is particularly important for those who sadly have reason to fear the consequences of speaking freely, such as young people exploring their gender or sexual identity, whistleblowers, journalists' sources, victims of abuse or modern slavery and political dissidents. A blanket ban on anonymity would wash away these important benefits and would be unlikely to stop online abuse.

Department for Digital, Culture, Media and Sport

This response was given on 24 March 2021. The Petitions Committee then requested a revised response, that more directly addressed the request of the petition.

Government announces plans to tackle online abuse

On Tuesday 11 May, the Government announced its plans for new laws to tackle harmful content online, as part of the Queen's Speech. The Government's plans for new internet laws are intended to protect children online and tackle some of the worst abuses on social media, including racist hate crimes. These new laws are contained in a new Online Safety Bill, which will be considered by Parliament in due course.

Read more about the Government's plans here: https://www.gov.uk/government/news/landmark-laws-to-keep-children-safe-stop-racial-hate-and-protect-democracy-online-published

Read the draft Online Safety Bill and explanatory notes here: https://www.gov.uk/government/publications/draft-online-safety-bill

Read the Queen's Speech background briefing notes for more information on the Government's proposed Bills: https://www.gov.uk/government/publications/queens-speech-2021-background-briefing-notes

What is the Queen's Speech?

The Queen's Speech is the speech that the Queen reads out in the House of Lords Chamber on the occasion of the State Opening of Parliament.

It's written by the Government and sets out the programme of Bills - new laws, and changes to existing laws - that the Government intends to put forward in this new Parliamentary session. A session of Parliament usually lasts around one year.

Once the Government puts forward a Bill in Parliament, Parliament then debates the Government's proposal and decides whether to adopt the changes to the law set out in the Bill.