And in that standard model, we create machines that pursue fixed objectives. If we continue to build AI systems within this standard model, we will eventually lose control of them. The existential risk comes from losing control of AI systems with sufficient decision-making capability in the real world. AI systems will eventually be able to make better decisions than humans in pursuit of their objectives, and for systems designed within the standard model, where the objective is fixed and known, this inevitably leads to conflict.
Source: National Post February 07, 2021 11:02 UTC