OpenAI CEO Sam Altman recently addressed concerns regarding the next iteration of the company's Generative Pre-trained Transformer (GPT) platform, GPT-5. In a discussion held at the Massachusetts Institute of Technology (MIT), Altman confirmed that OpenAI is not currently training GPT-5, debunking previous speculation.
The conversation at MIT was prompted by an open letter, signed by prominent tech figures such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, urging AI companies to pause the development of powerful AI models due to the potential risks they pose to society and humanity. Altman, however, argued that the letter lacked technical nuance and offered no clear guidance on where in AI development a pause should apply.
Rather than moving on to GPT-5, Altman emphasized that OpenAI is focused on improving GPT-4 and addressing the safety issues associated with that model. He noted that OpenAI is also pursuing other projects built on top of GPT-4, each of which raises its own safety concerns that need to be considered and resolved.
The debate around AI safety has been gaining traction in recent years, with experts divided over the potential risks AI systems pose. Some believe AI could present an existential threat to humanity, while others see it as a more manageable concern. The open letter has further ignited this debate, and Altman’s statements highlight the complexities involved in discussing AI safety.
A key challenge in AI safety discussions is the difficulty of quantifying and tracking progress in AI development. The use of version numbers, as seen with GPT-4 and the rumored GPT-5, can lead to misunderstandings about the capabilities and improvements of AI systems. This "version number fallacy" assumes that a higher number signals a more advanced and more capable system, but that is not always the case.
Rather than relying on version numbers to gauge progress, experts should focus on what AI systems can actually do and how those capabilities might evolve. The industry is continually refining and optimizing existing models, and understanding their capabilities and limitations is crucial for addressing safety concerns.
In conclusion, Altman's confirmation that OpenAI is not actively training GPT-5 may offer some reassurance to those concerned about AI safety. However, the ongoing work to enhance GPT-4's capabilities, and the development of other ambitious tools across the industry, underscores the need for continued discussion of AI safety.