OpenAI just took another swing at turning ChatGPT from a clever chatbot into a full-blown software platform. The company is rolling out “apps” that live directly inside your ChatGPT conversations—mini experiences that can render UI, call APIs, run workflows, and basically do stuff without forcing you to tab-hop across the internet like it’s 2011. Think WeChat mini programs meets Slack apps meets a very caffeinated command line.
If you’ve been keeping score, this is the spiritual successor to plugins and a grown-up version of GPTs. Plugins were a chaotic science fair. GPTs were a nice step, but still mostly prompt puppets with bolted-on actions. “Apps in ChatGPT” are OpenAI admitting that the interface for AI needs structure: real components, predictable state, clear permissions, and a way for developers to build once and reach a massive distribution channel. OpenAI’s pitch is simple: your software flows should happen where your questions start—inside the chat.
What shipped, in practical terms
- Developers can build in-chat apps that render interactive elements (forms, tables, visualizations) inline with your messages.
- Apps can call external APIs, chain multi-step tasks, read and write files you share, and maintain persistent state across a session.
- Permissions look more like a modern mobile OS: the model asks for scopes (think “connect to your calendar” or “read this sheet”) with explicit consent gates. (A rough sketch of how this might look follows this list.)
- Distribution happens in ChatGPT across web and mobile, with discovery via search, category pages, and recommendations. Enterprise admins can lock it down with allowlists if they’re not in the mood for chaos.
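To make that concrete, here is a hypothetical sketch in TypeScript of what an in-chat app definition could look like: declared permission scopes, a tool the model can call with structured arguments, and a UI component rendered inline. The shape and every field name here are assumptions for illustration, not OpenAI’s actual SDK surface.

```typescript
// Hypothetical shape of an in-chat app definition. Field names are illustrative.

// A tool the model can call with structured, validated arguments.
interface ToolDefinition {
  name: string;
  description: string;
  // JSON-Schema-style parameter spec so calls are checked, not guessed.
  parameters: Record<string, { type: string; description: string; required?: boolean }>;
}

// A UI component the app can render inline in the conversation.
interface InlineComponent {
  kind: "form" | "table" | "chart";
  id: string;
  title: string;
}

// Permission scopes the user must grant before the app touches their data.
type Scope = "calendar.read" | "calendar.write" | "files.read";

interface ChatAppManifest {
  name: string;
  scopes: Scope[];          // surfaced to the user as explicit consent gates
  tools: ToolDefinition[];  // what the model is allowed to invoke
  components: InlineComponent[];
}

// Example: a trip-planning app that reads the calendar and renders a results table.
const tripPlanner: ChatAppManifest = {
  name: "trip-planner",
  scopes: ["calendar.read"],
  tools: [
    {
      name: "search_flights",
      description: "Search flights for a date range and return structured results.",
      parameters: {
        origin: { type: "string", description: "IATA code, e.g. SFO", required: true },
        destination: { type: "string", description: "IATA code", required: true },
        departDate: { type: "string", description: "ISO 8601 date", required: true },
      },
    },
  ],
  components: [{ kind: "table", id: "flight-results", title: "Flight options" }],
};

console.log(`${tripPlanner.name} requests scopes: ${tripPlanner.scopes.join(", ")}`);
```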
Why this matters
Apps make ChatGPT feel less like a novelty and more like an operating layer. Booking a flight, reconciling a ledger, turning a PRD into tickets, whipping up a data viz, or kicking off a CI job collapses into a single chat thread where the “agent” can actually act. The cognitive switching cost drops, and so does the number of dumb glue steps you’d usually automate with Zapier and duct tape.
This also fixes one of AI’s ugliest UX problems: hallucinations disguised as confidence. Free-form chat is great for ideas; it’s terrible for fidelity. Constraining the interaction with UI components and explicit tool calls gives the model guardrails and the user receipts.
RIP plugins, long live platform
OpenAI killed its first platform attempt (plugins) because the developer story was a mess: brittle prompts, inconsistent renderers, and a discovery surface that felt like rummaging through a junk drawer. GPTs improved control, but developers still had to prompt their way through business logic, which is as reliable as building your backend out of vibes.
Apps are the reset. Less prompt alchemy, more deterministic tool calling. Less “maybe it will click the button,” more “it submitted the form with these params.” If you’re a dev, this is the difference between “hope-driven development” and building software with debuggable states and events.
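As a gut check on what “debuggable states and events” could mean in practice, here is a minimal sketch, with made-up names, of a host executing a structured tool call and recording it as an event you can inspect and replay. None of this is OpenAI’s API; it is just the general pattern.

```typescript
// Minimal sketch of deterministic tool calling: the model emits a named call
// with structured arguments, the host validates and executes it, and the
// result is a logged event. All names are illustrative.

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

interface ToolEvent {
  call: ToolCall;
  ok: boolean;
  result?: unknown;
  error?: string;
  at: string; // ISO timestamp, so the audit trail is reconstructable
}

// The host owns the business logic; the model only chooses which tool to call
// and with which arguments.
const handlers: Record<string, (args: Record<string, unknown>) => unknown> = {
  submit_expense: (args) => {
    if (typeof args.amount !== "number" || typeof args.memo !== "string") {
      throw new Error("invalid arguments for submit_expense");
    }
    // "exp_123" is a placeholder id for the example.
    return { id: "exp_123", amount: args.amount, memo: args.memo, status: "submitted" };
  },
};

function execute(call: ToolCall): ToolEvent {
  const at = new Date().toISOString();
  const handler = handlers[call.tool];
  if (!handler) return { call, ok: false, error: `unknown tool: ${call.tool}`, at };
  try {
    return { call, ok: true, result: handler(call.args), at };
  } catch (e) {
    return { call, ok: false, error: (e as Error).message, at };
  }
}

// "It submitted the form with these params" is now a checkable event, not a vibe.
const event = execute({ tool: "submit_expense", args: { amount: 42.5, memo: "team lunch" } });
console.log(JSON.stringify(event, null, 2));
```

The point is that the unit of trust shifts from a prompt to a typed, logged call.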
Follow the incentives
OpenAI wants to own the interface—and by extension, the distribution. ChatGPT already reaches a nine-figure weekly audience. When you own the front door, you control discovery, monetization, and policy. It’s the App Store playbook with a chat veneer.
For developers, the calculus is straightforward: a fat funnel in exchange for platform risk. Expect a revenue program that starts with usage-based payouts (good luck forecasting) and eventually matures into direct payments, subscriptions, or in-app purchases. Whether OpenAI adopts a tasteful cut or pulls an Apple remains to be seen. Discovery will be the real tax. If your app isn’t in the top rows of a few categories or recommended by the model at the right time, have fun yelling into the void.
Security and governance, the not-boring part
The attack surface here is non-trivial. Prompt injection, data exfiltration through tool calls, and supply-chain shenanigans are table-stakes risks. OpenAI appears to be doing the sane things—scoped permissions, explicit consent prompts, sandboxed execution, audit logs, and enterprise controls like app allowlists and data egress policies. That’s necessary, not sufficient. If these apps can touch calendars, code repos, CRMs, and payment rails, expect red teams to treat this like a buffet. Enterprises will demand guarantees on tenancy, logging, key management, and model isolation before they let this anywhere near their crown jewels.
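Here is a back-of-napkin sketch of what those enterprise controls might reduce to in code: an app allowlist plus a data-egress policy, checked before any tool call leaves the tenant. The policy shape and field names are assumptions, not a real admin API.

```typescript
// Hypothetical workspace policy: which apps may run, which hosts they may call,
// and which scopes are off-limits entirely. Every decision is loggable.

interface EgressPolicy {
  allowedApps: Set<string>;     // only these apps may run in the workspace
  allowedDomains: Set<string>;  // outbound API calls must target these hosts
  blockedScopes: Set<string>;   // scopes no app may request (e.g. payments)
}

interface OutboundRequest {
  app: string;
  scope: string;
  url: string;
}

function permits(policy: EgressPolicy, req: OutboundRequest): { allowed: boolean; reason: string } {
  if (!policy.allowedApps.has(req.app)) return { allowed: false, reason: "app not on allowlist" };
  if (policy.blockedScopes.has(req.scope)) return { allowed: false, reason: `scope ${req.scope} blocked` };
  const host = new URL(req.url).hostname;
  if (!policy.allowedDomains.has(host)) return { allowed: false, reason: `egress to ${host} not allowed` };
  return { allowed: true, reason: "ok" };
}

const workspacePolicy: EgressPolicy = {
  allowedApps: new Set(["trip-planner", "ticket-sync"]),
  allowedDomains: new Set(["api.example-crm.com"]),
  blockedScopes: new Set(["payments.write"]),
};

// An auditor mostly wants this decision, with its reason, written to a log.
console.log(permits(workspacePolicy, {
  app: "trip-planner",
  scope: "calendar.read",
  url: "https://api.example-crm.com/v1/contacts",
}));
```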
The competitive landscape
- Google is threading a similar needle with Gemini’s Extensions and “AI teammates.” Their advantage is deep integration with Workspace; their handicap is shipping anything with conviction.
- Microsoft’s Copilot already runs through Office, Teams, and Windows, with a plugin ecosystem piggybacking on the Graph. That’s a lot of distribution in the places work actually happens.
- Anthropic is playing the trust-and-safety card and moving slower on platform sprawl. Respectable. Less exciting for devs looking for reach.
- And yes, the China analogy is obvious: WeChat mini programs rewired mobile UX. If ChatGPT apps get even a fraction of that traction, a lot of “open a tab” workflows are going to disappear.
What to watch next
- Latency and reliability. If your “app” feels like waiting for dial-up, users bounce.
- Quality controls. A flood of low-effort wrappers will bury good software unless OpenAI enforces actual standards.
- Monetization design. The second OpenAI introduces a 30% rake or buries competitors under house-brand apps, dev trust evaporates.
- Enterprise knobs. SSO, SCIM, DLP, regional hosting, data retention, and airtight auditability will decide whether this jumps from toy to tool.
- Agentic autonomy. The more these apps handle multi-step work on their own, the more valuable—and dangerous—they become. Expect a steady ramp of automation paired with permission gating.
The bigger picture
This is the first credible attempt to make “chat as the universal shell” a real paradigm rather than a demo. If it works, software becomes less about clicking through disparate interfaces and more about telling an agent what outcome you want—then supervising. That’s a massive UX unlock and a serious power consolidation around whoever owns the chat surface.
The flip side is obvious: platform volatility. OpenAI’s product roadmap moves fast and occasionally breaks entire developer abstractions. If you build here, you’ll ship faster and reach more users, but you’re also betting that the rules won’t change mid-quarter. We’ve all seen this movie before. The platform eventually optimizes for itself.
Bottom line: apps inside ChatGPT are the most grown-up thing OpenAI has shipped for developers since tool calling. If OpenAI nails speed, safety, and a sane monetization model, this could be the WeChat moment for the West. If not, we’ll all be back to duct-taping prompts to APIs and pretending that’s a platform. Either way, the browser tab era just got its eviction notice.