Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control.[5] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[6] Alan Turing,[a] Elon Musk,[9] and OpenAI CEO Sam Altman.[11][12]

Two sources of concern are the problems of AI control and alignment: that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed.[1][13][14] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.