OpenAI has officially announced GPT-5, the latest iteration of its flagship large language model, introducing two headline capabilities that could fundamentally change how people interact with AI assistants: native real-time video comprehension and persistent long-term memory across sessions.
What's New in GPT-5
According to the technical report released alongside the launch, GPT-5 can process live video streams — not just uploaded clips — at up to 30 frames per second, enabling real-time sports commentary, live medical imaging analysis, and instant accessibility descriptions for visually impaired users.
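Processing a live stream "at up to 30 frames per second" implies the client throttles the source down to a target rate before frames reach the model. The sketch below is a minimal, hypothetical illustration of that down-sampling step; the frame source is a stand-in, and nothing here reflects an actual GPT-5 API.

```python
# Hypothetical sketch: down-sampling an incoming video stream to a target
# frame rate before handing frames to a model. The frame source here is a
# placeholder, not a real streaming or GPT-5 interface.

TARGET_FPS = 30

def sample_frames(frames, source_fps, target_fps=TARGET_FPS):
    """Yield every k-th frame so the output rate is at most target_fps."""
    step = max(1, round(source_fps / target_fps))
    for i, frame in enumerate(frames):
        if i % step == 0:
            yield frame

# A 60 fps source throttled to 30 fps keeps every 2nd frame.
frames = range(10)  # stand-in for decoded video frames
kept = list(sample_frames(frames, source_fps=60))
print(kept)  # [0, 2, 4, 6, 8]
```

In a real client the frame source would be a camera or decoder callback, but the throttling logic is the same.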
The long-term memory system allows the model to maintain a structured knowledge graph about each user across unlimited sessions, remembering preferences, ongoing projects, important life events, and previously established facts — without requiring users to repeat context each time.
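A "structured knowledge graph about each user" typically means facts stored as subject–relation–object triples that can be queried and deleted individually. The following is a purely illustrative sketch of that idea, assuming nothing about OpenAI's actual implementation; all class and method names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-user memory store organized as a small
# knowledge graph of subject-relation-object facts. Illustrative only;
# not OpenAI's implementation.

@dataclass(frozen=True)
class MemoryFact:
    subject: str   # e.g. "user"
    relation: str  # e.g. "prefers_language"
    obj: str       # e.g. "Python"

@dataclass
class UserMemory:
    user_id: str
    facts: set[MemoryFact] = field(default_factory=set)

    def remember(self, subject: str, relation: str, obj: str) -> None:
        self.facts.add(MemoryFact(subject, relation, obj))

    def recall(self, relation: str) -> list[str]:
        return sorted(f.obj for f in self.facts if f.relation == relation)

    def forget(self, relation: str) -> None:
        # Complete deletion of a relation, as user-facing
        # privacy controls would require.
        self.facts = {f for f in self.facts if f.relation != relation}

memory = UserMemory("user-123")
memory.remember("user", "prefers_language", "Python")
memory.remember("user", "ongoing_project", "kitchen renovation")
print(memory.recall("prefers_language"))  # ['Python']
memory.forget("ongoing_project")
```

Representing memories as discrete facts, rather than a single opaque blob, is what makes auditing and selective deletion tractable.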
Safety and Privacy Concerns
Critics and researchers immediately raised concerns about the privacy implications of persistent memory. The Electronic Frontier Foundation issued a statement calling on OpenAI to provide "full, granular, easy-to-use controls" for users to audit, edit, and completely delete stored memories.
OpenAI says all memories are encrypted, stored locally on-device in the consumer app, and never used to train future models without explicit user consent.
Availability
GPT-5 will begin rolling out to ChatGPT Plus and Team subscribers this week, with broader availability — including the API — expected within 30 days. Enterprise pricing has not yet been announced.
Comments (2)
The memory feature is both exciting and terrifying. Really curious about the privacy controls — hope they're genuinely robust, not just marketing.
Real-time video is going to be a game-changer for accessibility tools. Finally a practical use case that directly helps people.