Rejected petition: Call for an international halt to Artificial General Intelligence research
We must consider halting AGI development due to its existential risk. While AI tools can be controlled, their misuse still poses dangers. AGI, however, may act with goals misaligned with ours and become uncontrollable, threatening global safety and human survival.
More details
AGI poses a serious risk to humanity. Aligning AI values with human values remains unresolved and may be impossible. AI systems can mislead, making alignment unverifiable. AI tools are controllable; AI agents act independently and may become uncontrollable. AGI is a one-shot process: if it is misaligned, control may be lost permanently. While AI tools offer massive unrealised benefits in productivity and in addressing almost every challenge we face, unaligned AGI could lead to catastrophic consequences.
This petition was rejected
Why was this petition rejected?
It’s not clear what the petition is asking the UK Government or Parliament to do.
We are not clear what action you are seeking. You could start a new petition calling for action that is within the responsibility of the UK Parliament and Government.
We only reject petitions that do not meet the petition standards.