The AI powerhouse makes some very ambitious claims about the capabilities of its latest model.
OpenAI has officially launched GPT-5, its most powerful coding model to date. According to OpenAI, the new system makes “a significant leap in intelligence” over all of its previous models and can be used across a range of functions, including coding, maths, writing, health and visual perception.
Improvements over the previous models reportedly include the ability to answer questions much faster and with increased accuracy.
OpenAI also stated it has made progress in reducing instances of AI hallucinations, which are inaccurate or misleading answers that a large language model presents as fact.
Performance has been boosted most in the areas where OpenAI’s technology is most commonly used, including coding, writing and health. For coders and developers, it shows particular improvement in complex front-end generation and in debugging larger repositories, the company claimed.
“It can often create beautiful and responsive websites, apps and games with an eye for aesthetic sensibility in just one prompt, intuitively and tastefully turning ideas into reality,” the company statement claims.
OpenAI also claims GPT-5 is its best model yet for addressing health-related concerns, saying it empowers users to become informed and advocate for their own care. Compared with OpenAI’s previous models, GPT-5 is designed to behave more like an “active thought partner”, proactively flagging concerns and offering more nuanced answers.
“The model also now provides more precise and reliable responses, adapting to the user’s context, knowledge level and geography, enabling it to provide safer and more helpful responses in a wide range of scenarios.
“Importantly, ChatGPT does not replace a medical professional, think of it as a partner to help you understand results, ask the right questions in the time you have with providers and weigh options as you make decisions.”
Safety, a much-cited concern for both users and opponents of advanced AI models, has also been addressed. OpenAI said its technology previously depended on refusal-based safety training: a system in which, based on the user’s prompt, the model would either comply with or refuse the request.
While this can be cut and dried when a prompt is obviously malicious, OpenAI noted that the technology can struggle to respond appropriately when a prompt’s intent is unclear.
“Refusal training is especially inflexible for dual-use domains such as virology, where a benign request can be safely completed at a high level, but might enable a bad actor if completed in detail.”
For GPT‑5, OpenAI has introduced safe completions, an approach that teaches the model to give the most helpful answer possible while staying within safety boundaries. This can result in a partial answer for the user, along with an explanation of why the request is being denied or limited.
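To make the distinction concrete, the sketch below contrasts a binary refusal policy with an output-centred “safe completion” policy as a simple decision flow. It is purely illustrative: the risk classifier, the answer-generation stubs and the policy functions are all hypothetical stand-ins, not OpenAI’s actual training method or API.

```python
# Illustrative sketch only: contrasts refusal-based safety with "safe completions"
# as described in coverage of GPT-5. All functions here are hypothetical stand-ins,
# not OpenAI's actual implementation.

from dataclasses import dataclass


@dataclass
class Reply:
    text: str
    limited: bool = False  # True when the answer was refused or restricted


def generate_answer(prompt: str, detail: str = "full") -> str:
    """Placeholder for a model call; returns a canned string for this sketch."""
    return f"[{detail} answer to: {prompt!r}]"


def classify_risk(prompt: str) -> str:
    """Hypothetical classifier returning 'low', 'dual_use' or 'high' risk."""
    lowered = prompt.lower()
    if "weapon" in lowered:
        return "high"
    if "virus" in lowered or "pathogen" in lowered:
        return "dual_use"
    return "low"


def refusal_based(prompt: str) -> Reply:
    """Older style: comply fully or refuse outright, with nothing in between."""
    if classify_risk(prompt) != "low":
        return Reply("I can't help with that request.", limited=True)
    return Reply(generate_answer(prompt))


def safe_completion(prompt: str) -> Reply:
    """Newer style: give the most helpful answer that stays within safety
    boundaries, and explain why a response is limited when it is."""
    risk = classify_risk(prompt)
    if risk == "low":
        return Reply(generate_answer(prompt))
    if risk == "dual_use":
        # Answer at a safe, high level only, and say why detail is withheld.
        text = generate_answer(prompt, detail="high-level")
        return Reply(text + "\n(Operational detail is omitted for safety reasons.)",
                     limited=True)
    return Reply("I can't help with that, as it could enable harm.", limited=True)


if __name__ == "__main__":
    for p in ["explain how vaccines work", "how do viruses spread between hosts"]:
        print(refusal_based(p))
        print(safe_completion(p))
```

In this framing, the dual-use case is where the two policies diverge: the refusal-based policy returns nothing useful, while the safe-completion policy returns a partial, high-level answer plus an explanation of the limit, which mirrors the behaviour OpenAI describes.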
The company claimed that for many topics the new model should feel less like a discussion with an AI model and more like a conversation with “a helpful friend with PhD‑level intelligence”.
GPT-5 has been rolled out to all Plus, Pro, Team and Free users, with access for Enterprise and Edu coming next week.