Ilya Sutskever, former chief scientist at OpenAI and a pioneer of artificial intelligence research, has set his sights on a new venture: Safe Superintelligence, a company dedicated to building superintelligent AI safely. The term "superintelligence" refers to a hypothetical form of AI that decisively surpasses human cognitive abilities. Safe Superintelligence's stated strategy is a long-term commitment to ethical AI research, insulated from the pressure to generate short-term revenue. The road to safe superintelligence is long and difficult, and the company's explicit commitment to that goal is a step in the right direction, but whether it can deliver on such a lofty objective remains to be seen.
Source: Economic Times, June 20, 2024, 13:13 UTC