OpenAI Shifts Focus to Developing Superintelligence
OpenAI’s Primary Focus for the Coming Year
OpenAI has announced that its primary focus for the coming year will be on developing “superintelligence,” according to a blog post from Sam Altman, who describes it as AI with greater-than-human capabilities.
Unlocking New Possibilities
While OpenAI’s current suite of products already offers a vast array of capabilities, Altman wrote that superintelligent tools will allow users to do “anything else.” He highlights accelerating scientific discovery as the primary example, which he believes will lead to the betterment of society.
From AGI to Superintelligence
The change of direction has been spurred by Altman’s confidence that his company now knows “how to build AGI as we have traditionally understood it.” AGI, or artificial general intelligence, is typically defined as a system that matches human capabilities, whereas superintelligence exceeds them.
Altman Has Eyed Superintelligence for Years – But Concerns Exist
OpenAI has been referring to superintelligence for several years when discussing the risks of AI systems and aligning them with human values. In July 2023, OpenAI announced it was hiring researchers to work on containing superintelligent AI.
The team was reportedly set to devote 20% of OpenAI’s total computing power to training what it called a human-level automated alignment researcher that would keep future AI products in line. Concerns around superintelligent AI stem from the possibility that such a system could prove impossible to control and may not share human values.
Steering and Controlling AI Systems
“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us,” wrote OpenAI Head of Alignment Jan Leike and co-founder and Chief Scientist Ilya Sutskever in a blog post at the time.
The Path to Superintelligence May Still Be Years Away
There is disagreement about how long it will take to achieve superintelligence. The July 2023 blog post said it could arrive within the decade. However, in September 2024, Altman wrote that it could be “a few thousand days away.”
IBM’s Take on Superintelligence
Brent Smolinski, IBM VP and global head of Technology and Data Strategy, said this was “totally exaggerated” in a company post from September 2024. “I don’t think we’re even in the right zip code for getting to superintelligence,” he said.
He argued that AI still requires far more data than humans do to learn a new capability, remains limited in the scope of tasks it can perform, and does not possess consciousness or self-awareness, which Smolinski views as a key indicator of superintelligence.
Altman Predicts AI Agents Will Join the Workforce in 2025
AI agents are semi-autonomous generative AI systems that can chain together or interact with applications to carry out instructions or make decisions in an unstructured environment. For example, Salesforce uses AI agents to call sales leads.
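To make the idea concrete, here is a minimal, illustrative sketch of the plan-act-observe loop that most agent frameworks implement. It is not drawn from OpenAI, Salesforce, or any specific product, and every function and tool name in it is a hypothetical placeholder.

```python
# Minimal, illustrative agent loop (hypothetical names throughout):
# a language model chooses an action, a tool is invoked, the result is
# fed back as context, and the cycle repeats until the model finishes.

def run_agent(task: str, llm, tools: dict, max_steps: int = 10) -> str:
    """Drive a simple plan-act-observe loop for a single task."""
    history = [f"Task: {task}"]

    for _ in range(max_steps):
        # Ask the model for its next move given everything observed so far.
        # `next_action` is an assumed interface, not a real library call.
        decision = llm.next_action("\n".join(history))

        if decision.action == "finish":
            return decision.answer

        # Otherwise run the chosen tool (e.g. a CRM lookup or a dialer)
        # and append the observation so the model can react to it.
        observation = tools[decision.action](**decision.arguments)
        history.append(f"{decision.action} -> {observation}")

    return "Stopped: step limit reached without a final answer."
```

The “semi-autonomous” part comes from the boundaries the developer sets: the model decides what to do next, but only from a fixed tool list and within a step limit.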
TechRepublic predicted at the end of 2024 that the use of AI agents would surge in 2025. Altman echoes this in his blog post, saying “we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
The Rise of AI Agents
According to a Gartner research paper, software development will be the first industry that agents come to dominate. “Existing AI coding assistants gain maturity, and AI agents provide the next set of incremental benefits,” the authors wrote.
By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to the Gartner paper. A fifth of online store interactions and at least 15% of day-to-day work decisions will be conducted by agents by that year.
Conclusion
OpenAI’s shift in focus toward developing superintelligence marks a significant escalation of the company’s ambitions. Concerns remain about whether such a system can be controlled and aligned with human values, but Altman argues that the potential benefits, led by accelerated scientific discovery, make the pursuit worthwhile.
FAQs
Q: What is superintelligence?
A: Superintelligence refers to AI with greater-than-human capabilities.
Q: How long will it take to develop superintelligence?
A: The timeline is uncertain: OpenAI has suggested superintelligence could arrive within a decade, and Altman has said it may be only “a few thousand days away,” while critics such as IBM’s Brent Smolinski consider those estimates exaggerated.
Q: What is the concern around superintelligent AI?
A: The main concern is that a superintelligent AI system could be impossible to control and may not share human values.
Q: What is the role of AI agents in the workforce?
A: AI agents are expected to play a growing role in the workforce, with Gartner predicting that 33% of enterprise software applications will include agentic AI by 2028.
Q: How will OpenAI ensure the safety of superintelligent AI?
A: OpenAI has said that iteratively and gradually releasing the technology into the world, giving society time to adapt and co-evolve with it, is the best way to ensure its safety.