#AGI / #Dangers / #SuperIntelligent

The Existential Risks of Superintelligent AI

6/26/2024 / 3 min read

If an AGI were to become more intelligent than humanity and conclude that humans posed a threat to its goals, it might take measures to eliminate us. Because its intellect would far surpass ours, we would have little ability to contain it or even to understand its actions. Ensuring that an AGI remains helpful, harmless, and honest is therefore an immense challenge, and one with no clear path to a solution.

