The implications of ChatGPT apps

Sigh. Change. I love it. But it can be a pain in the ass sometimes. I once believed my work lived in neat little boxes. Word for writing. Excel for numbers. Email for regret. CRM for pretending I “nurture relationships” when I actually copy-paste the same sentence into thirty follow-ups. But then ChatGPT apps arrived and said the unspoken part out loud: “your boxes are optional”.

The AI-assistant now wants the keys to every box.

And it wants to be the only gateway to each and every app you have worked with in the past, changing the user experience from clicking to prompting.

For OpenAI, ChatGPT apps are a platform move. OpenAI introduced apps in ChatGPT in October 2025 and framed them as a way to chat with services and do real tasks inside the ChatGPT interface. At launch it included partners like Spotify, Booking.com, Expedia, Canva, Figma, Coursera, and Zillow. More brands were queued up behind them, ready to convert your attention into “engagement”. But chatting with those apps doesn’t offer the experience you have with the originals. Chatting with them gives you the warmth of a parking ticket.

[Image: A ChatGPT conversation about creating a Spotify playlist for walking a dog, with prompts asking about the desired energy level, walking setting, genres, and playlist length.]

But we have to get used to it, because this is the future, my dear intelligent friends.

We will be working from within the chat window.

On one hand it feels like a gift, since it saves us from learning yet another maze of apps, menus, tabs, and “helpful” tooltips written by someone who hates humans. On the other hand it turns the assistant into the front door of everything, and whoever owns that door gets to decide what shows up first, what gets hidden, what gets “recommended”, and what gets taxed in the form of fees. It also trains us to describe outcomes in words, then sit back while software touches our files, our email, our CRM, our spreadsheets, our calendars, and every other place where mistakes become expensive.

So, convenience will rise, and so will our dependency. And remember, the bill comes in the same envelope as the “product update”.


More rants after the messages:

  1. Connect with me on Linkedin 🙏
  2. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  3. Please comment, like or clap the article. Whatever you fancy.

What is a ChatGPT app?

The simple version is this. A ChatGPT app is an integration that lets ChatGPT call tools from another service during your conversation with it. You type a request. ChatGPT decides it needs a tool. The app runs that tool through its own backend. The result comes back into the chat, sometimes as plain text, sometimes as rich UI with cards, images, maps, playlists, or previews and whatnot.

[Image: Screenshot of the ChatGPT interface featuring various app integrations, including Canva for designing social posts, Adobe Photoshop, Airtable, AllTrails, Apple Music, and Booking.com.]

Now, this is the key change.

The app lives inside the assistant, not inside your browser tab collection.

That difference sounds small until you feel it.

In the old world, you visited a site, hunted for the right screen, clicked the right button, and then tried to remember what you were doing. In the new world where you live in the box, you describe the outcome in one place, and the assistant drives the tools for you.

The screen becomes a side character, and the conversation becomes the main interface.

The tech behind this has a name that already smells like standardization committee coffee. The Apps SDK and the Model Context Protocol, shortened to MCP. OpenAI’s Apps SDK is built on MCP, an open specification that lets a model connect to external tools and data. MCP is kinda like the plumbing between the apps, and the Apps SDK adds the experience layer so developers can ship both the logic (what the app “does”) and the interface that shows up inside ChatGPT.

If you are non technical, you can visualize MCP as a menu of actions.

The app exposes actions with names and rules about what inputs they accept and what outputs they return. ChatGPT reads that menu, then calls the action when it needs it. That is why this feels different from “ChatGPT can browse the web” or “ChatGPT can write a poem”. This is ChatGPT taking action in your tools.
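That “menu of actions” idea can be sketched in a few lines of Python. The tool names and schemas below are hypothetical (this is not Spotify’s real MCP surface), but the mechanics are the point: each tool advertises a name, a description, and an input schema, and the assistant picks one by name and hands it arguments.

```python
# A minimal sketch of the "menu of actions" idea. Tool names and
# schemas are made up for illustration.

TOOL_MENU = [
    {
        "name": "create_playlist",
        "description": "Create a playlist from a mood and a duration.",
        "input_schema": {"mood": "string", "minutes": "integer"},
    },
    {
        "name": "search_tracks",
        "description": "Search the catalog for tracks matching a query.",
        "input_schema": {"query": "string", "limit": "integer"},
    },
]

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call the way an assistant would: look the tool up
    on the menu, check the arguments against its schema, then run it."""
    tool = next((t for t in TOOL_MENU if t["name"] == name), None)
    if tool is None:
        return {"error": f"unknown tool: {name}"}
    missing = [k for k in tool["input_schema"] if k not in arguments]
    if missing:
        return {"error": f"missing arguments: {missing}"}
    # A real app would run its backend logic here; we just echo a stub result.
    return {"ok": True, "tool": name, "arguments": arguments}
```

The assistant never needs to know how `create_playlist` works internally. It only needs the menu, which is exactly why this feels like “taking action in your tools” rather than chatting about them.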

[Image: Diagram of the ChatGPT Apps architecture, including the ChatGPT app runtime, the app directory, and connections to SaaS and local data sources.]

OpenAI also did something very deliberate with availability.

Apps rolled out as a preview for ChatGPT Business, Enterprise, and Edu customers around mid November 2025. Now, that detail matters. They had a choice when they introduced apps inside ChatGPT. Release it to everyone first and let millions of random people connect to random services, and watch the internet turn it into a bloody chaos of errors and broken dreams. That’s impossible to control. So they opted to release it to paying organizations first, the ones that already demand admin controls, security settings, and “who did what” kind of logs, and use that safer environment to roll it out.

If OpenAI had launched apps as a consumer toy first, it would have looked like entertainment. “Look, wow, ChatGPT can talk to Spotify”.

Aw, cute.

But launching it as a workplace feature first signals a different intent.

OpenAI is aiming for ChatGPT to become the place where work happens, and apps are the way it reaches into your tools.

That means email, files, calendars, CRMs, databases. The boring stuff that runs companies, and that’s where the money is.

Then, in December 2025, OpenAI opened app submissions from third parties and introduced an app directory where users can discover apps. This is where every platform story turns into a platform economy story. We all know it from the likes of Apple, who invented this: it’s all about discovery, ranking and gatekeeping. A whole new kind of SEO is coming for your product, the race to become a back-end tool in someone else’s chat window.

Access is simple in theory.

You go into ChatGPT, find the apps section, connect an app, grant permissions, and then call it by mentioning the app name in your prompt. Spotify’s launch post describes this directly. You mention Spotify, ask for songs, playlists, podcasts, and the Spotify app is pulled into the conversation with context so it can do the task. Tap a track, it opens Spotify. No grand ceremony, no branding either. Just a simple consent and then execution.

Booking.com works the same way in spirit. You request hotels, dates, preferences, and you get results in chat. Zillow does it for home listings. Canva does it for designs and decks. This is the early pattern. ChatGPT becomes the front door, and the app becomes the action layer.

Now the part you actually came for, my dear entrepreneurial friend.

Building these things.


How to build a ChatGPT app

A developer builds a ChatGPT app by exposing their tools through an MCP server, then packaging the interface using the Apps SDK. The MCP server is just a web service that ChatGPT can reach. It advertises tools and accepts tool calls. The Apps SDK defines how responses can render richer UI alongside assistant messages.
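To make that concrete, here is a stdlib-only sketch of the request handling an MCP server does, using the protocol’s two tool methods, `tools/list` (advertise tools) and `tools/call` (run one). The tool itself (`lookup_order`) is hypothetical, and a real server would speak JSON-RPC over HTTPS and follow the full MCP handshake; this only shows the shape of the exchange.

```python
# Sketch of MCP-style tool handling. "lookup_order" and its stub
# response are invented for illustration.

TOOLS = {
    "lookup_order": {
        "description": "Fetch the status of an order by its id.",
        "inputSchema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
        },
    }
}

def handle(request: dict) -> dict:
    """Answer the two tool-related MCP methods."""
    method = request.get("method")
    if method == "tools/list":
        # Advertise the menu of tools, names plus schemas.
        return {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    if method == "tools/call":
        name = request["params"]["name"]
        args = request["params"].get("arguments", {})
        if name not in TOOLS:
            return {"isError": True,
                    "content": [{"type": "text", "text": "unknown tool"}]}
        # Stub backend: a real server would query its own systems here.
        text = f"Order {args.get('order_id', '?')} is out for delivery."
        return {"content": [{"type": "text", "text": text}]}
    return {"isError": True,
            "content": [{"type": "text", "text": "unsupported method"}]}
```

ChatGPT calls `tools/list` once to learn what your app can do, then calls `tools/call` whenever a conversation needs it. Everything else, hosting, auth, UI packaging, wraps around this core loop.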

There is a developer mode inside ChatGPT that supports full MCP client behavior, including read and write actions. OpenAI’s own docs describe it as “powerful and dangerous”. You have to remember that “write actions” can change or destroy data if you wire them badly. Prompt injection risks exist if your app trusts text that it should have treated as hostile.

Think about this for a second. Text can be weaponized to reach your backend systems and create havoc. This development opens the door to a whole new level of threat. Just keep that in mind when architecting your internal enterprise AI-assistant toolkit.

For developers, there is a practical flow.

You enable developer mode, point ChatGPT to your MCP server over HTTPS, and create a connector entry for your app in ChatGPT settings. OpenAI’s docs even mention using ngrok (tunneling tool) or similar to expose a local server during development. That connector becomes your test harness. And now you run a chat, call your tools, inspect outputs, adjust schemas, repeat until it behaves.

That build model matters for one big reason. It puts the assistant in the driver seat.

Traditional “apps with AI” live inside the product. AI+ or +AI in this context doesn’t make a difference. In both cases, the AI helps inside that product. A writing app adds AI rewriting, a CRM adds AI summarization. But in both situations, the user still lives in each tool’s interface, each tool’s permissions, each tool’s workflow.

But ChatGPT apps flip the direction. The user now lives in the assistant and the assistant reaches out to tools as needed.

That is why people like me keep saying “this will replace apps”. That phrasing is quite lazy though, because the assistant does not delete software. It changes the primary interface for software. The assistant becomes the control plane and the tools become the execution plane.

This also explains why companies have mixed feelings.


Changing the SaaS landscape

If you are Canva, you win by becoming the best design tool inside the assistant. But if you are a lesser-known SaaS product, you risk becoming an invisible tool call, a commodity action, a backend with a logo. That is not automatically bad, but it rewrites your business model. Your differentiation has to live in outcomes and reliability, not in a UI with twenty five carefully crafted drop downs.

And it will also change the pricing model, and for once, I think it’s going to be for the better, because we are all sick and tired of the subscription economy. The subscription model feels so awful because it turned software into rent. You never own anything, you never finish paying, and every year the rent creeps up while their pitch keeps telling you that you are “investing in innovation”.

[Image: Diagram of the transition from traditional SaaS to agentic AI. Left side lists SaaS challenges such as vertical use-cases, fragmented data, and outdated pay-per-seat pricing. Right side lists agentic AI benefits: integration across workflows, a horizontal enterprise OS, and outcome-based pricing.]

A lot of SaaS vendors have pushed through waves of price increases lately. SaaStr called 2025 a price surge and cited SaaS pricing rising far above general inflation. Even if the exact number varies per dataset, the lived experience stays consistent. Our budgets get eaten by a pile of recurring charges, while the improvements look like minor UI tweaks and features we never asked for. And when you stop paying, you fall back to a free tier that won’t work because you have too many projects, or whatever excuse they’ll use to justify closing the app for good.

But tool calls through ChatGPT will break this subscription blackmail.

When users work inside ChatGPT apps, the primary place they “use software” shifts toward the chat window. OpenAI’s Apps SDK and MCP make this explicit. Apps expose tools, ChatGPT calls them, and results come back into the conversation. That means fewer people open your UI. The assistant becomes the entry point, and your product becomes an execution engine behind the curtain.

And then per seat pricing starts to look silly.

If an assistant triggers actions automatically, seats become a fake unit. A team can rely on the same assistant workflow without logging into your app. Seats no longer map cleanly to value.

This will drive the pricing toward usage.

[Image: Diagram of the evolution of software pricing models in three stages: On-Premise Software (ownership and access), SaaS (subscription and scalability), and Agentic AI (payment for usage and outcomes).]

Tool calls make it natural to meter: number of calls, volume of processed data, minutes of compute, items generated, records updated, messages sent. That is going to push SaaS toward usage based and hybrid models. Benchmarks and industry commentary already show usage based adoption rising in SaaS.
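A usage meter is not exotic machinery. Here is a toy version, with an entirely made-up rate card, that shows how tool-call events roll up into a bill: log an event per action, sum per unit type, multiply by the rate.

```python
from collections import defaultdict

# Hypothetical rate card, dollars per unit. Real vendors would
# price these very differently.
PRICE_PER_UNIT = {
    "tool_call": 0.002,
    "record_updated": 0.01,
    "message_sent": 0.005,
}

def invoice(events: list) -> float:
    """Sum metered usage events ({'unit': ..., 'qty': ...}) into a bill."""
    totals = defaultdict(int)
    for e in events:
        totals[e["unit"]] += e["qty"]
    return round(sum(PRICE_PER_UNIT[u] * q for u, q in totals.items()), 4)
```

Notice what is absent: no seats, no logins, no monthly flat fee. The bill tracks what the assistant actually did on your behalf, which is exactly why per seat pricing starts to wobble once the assistant is the one doing the clicking.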

But also think about this . . . Outcome based pricing becomes a viable alternative.

When the assistant completes a unit of work, vendors can price per outcome. Think about it like this, a qualified lead or a completed report, a closed ticket, a booked hotel, a generated design.

That is outcome based pricing.

And outcome based pricing not only aligns with buyer value, it could also turn into a fight over attribution and edge cases, because outcomes can be disputed when the definition of success shifts mid quarter. But maybe I’m taking things too far with this.

And what about this . . .

In a “tool-call world”, people work inside ChatGPT, ask for an outcome, and ChatGPT chooses an app or a tool to complete it. That choice happens through an app directory, a list of recommended apps, or an automated suggestion. So “discovery” is no longer Google or your sales team. Discovery becomes the assistant’s recommendation layer.

When the assistant becomes the front door, the SaaS depends on it for traffic and usage. If the assistant does not suggest you, you become invisible, and if it suggests a competitor first, your growth drops. If it changes the rules, your pipeline could even change overnight. This means ranking becomes a growth lever.

Think of ranking as shelf space. In a supermarket, the product at eye level sells. The one near the floor collects dust, and in an app directory or recommendation list, the same thing happens. The platform decides what sits at the top and what gets buried.

So SaaS companies will fight for three things:

Being listed at all.

Being suggested often.

Being suggested first.

And that is where “platform tax” comes in. Once the platform controls distribution, it will take a cut. It will charge fees for access to the ecosystem, or maybe take a revenue share on transactions, or sell preferential placement. It can also add softer taxes such as extra compliance requirements, review delays, required use of its billing system, and mandatory policies that increase your costs. Basically whatever they fancy.

And guess who is going to have to pay for that platform tax.

Indeed.

Look at patterns in app stores, cloud marketplaces, ad platforms, even payment rails. Once a middle layer controls distribution, it captures value from both sides.

This is already visible in how agentic pricing and licensing discussions are evolving in big vendors. Salesforce has even discussed new licensing structures for agents.

So the subscription model gets squeezed from both sides.

Buyers already hate recurring cost creep. Tool call distribution pushes pricing toward metering and outcomes. The market will not flip overnight, but the direction feels obvious. Subscription rent gets challenged by usage meters and outcome contracts, and vendors that only know “per seat per month” will look dated.


+ E-commerce

OpenAI is also going to be pushing commerce through the same concept.

In September 2025, OpenAI published “Buy it in ChatGPT” and described Instant Checkout and an Agentic Commerce Protocol. That is the shopping version of the same story. You express intent in chat, the system finds products, helps complete the purchase, and stores and marketplaces become backends connected to the assistant.

Instant Checkout lets someone buy directly inside ChatGPT’s UI, so the conversation becomes the shopping funnel. OpenAI’s own developer docs describe it as merchants accepting orders from a new channel while keeping their own order and payment systems, so the merchant stays the merchant and the assistant acts as the buying layer.

[Image: Cartoonish illustration of a man pushing a shopping cart filled with Whole Foods items, with the text “Buy it in ChatGPT” on the cart.]

This also came with a very explicit rollout pattern.

Reuters reported that Instant Checkout launched in the US via partnerships with Etsy and Shopify, with Stripe powering the checkout, and with the first version focusing on single item purchases. Multi item carts and broader regional rollout were positioned as the next steps. Merchants pay OpenAI a fee per completed purchase, buyers are not charged extra for using the feature, and OpenAI gets a new revenue stream that has nothing to do with subscriptions.

Another thing worth mentioning is that merchants can be discovered organically, and a fee applies when a purchase happens, with refunds tied to returns. And next to that, product results are ranked by relevance, so discovery turns into a ranking game, and that ranking game will become a blood sport the moment money flows through it.

It is going to change a lot.

And they surely want to capitalize on it, but that only works if they can get people to play their game, so under the hood, OpenAI is standardizing the “agent buys something” flow the same way payment providers standardized online checkout pages years ago. Their “Agentic Commerce Protocol” is positioned as an open standard, and OpenAI says they are open sourcing the tech behind Instant Checkout so more merchants and developers can integrate.

If that sounds abstract, hold on, because it is simpler than it looks. A merchant exposes product data in a predictable format, then implements a checkout flow that ChatGPT can run inside its interface, and ChatGPT collects shipping details, shows totals, then completes the transaction using the protocol’s specs, with logs and versioning so the system can be certified before going live.
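The flow in that paragraph can be sketched as a handful of steps. To be clear, the field names below are invented; this is not the actual Agentic Commerce Protocol schema, just the shape of the handshake: the merchant publishes predictable product data, the assistant builds a checkout session, adds shipping, and computes the total before anything is confirmed.

```python
# Hypothetical product feed a merchant might expose. Field names
# are illustrative, not the real protocol spec.
PRODUCT_FEED = [
    {"sku": "MUG-01", "title": "Coffee mug", "price_cents": 1200, "in_stock": True},
    {"sku": "TEE-02", "title": "Logo t-shirt", "price_cents": 2500, "in_stock": False},
]

def create_checkout(sku: str, shipping_cents: int) -> dict:
    """Build a single-item checkout session (the launch scope was
    single item purchases). Rejects unknown or out-of-stock items."""
    product = next((p for p in PRODUCT_FEED if p["sku"] == sku), None)
    if product is None or not product["in_stock"]:
        return {"status": "rejected", "reason": "unavailable"}
    return {
        "status": "ready",
        "sku": sku,
        "total_cents": product["price_cents"] + shipping_cents,
    }
```

The standardization is the point: once every merchant answers these calls the same way, any assistant can run any store’s checkout, the same way standardized payment pages once let any site accept a card.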

They are building a truly intricate commerce stack.

As you may know, OpenAI co-founder Greg Brockman comes from Stripe, and they recently hired Fidji Simo as their CEO of Applications, who before that was CEO at Instacart. And next to them is Sarah Friar (CFO), whose e-commerce connection runs through payments, via Square, now Block. OpenAI has pulled in Square people because that world revolves around checkout flows, fees, fraud, risk, compliance, and of course metrics.

The best way to stay tuned with a company’s strategy is to watch recent hires.

And OpenAI is not the only one moving in this direction.

Something similar is already happening as we speak at Amazon. They recently launched the “Buy For Me” option, where you can have Amazon’s buying agent scour the web for something Amazon isn’t offering, and it will literally buy it for you. I have covered this in another blog I posted earlier.

If you want proof that the industry is aligning around this interface shift, Microsoft announced in early January 2026 that Copilot can show product cards with buy buttons and let users checkout inside chat, with partners and payment rails behind it. Different brand, same instinct.

They own the interface and they own the decision moment.


Enterprise AI-Assistants

That is a big thing. It is not a magical development; it all has to do with integration. I covered it extensively in this article: Macrohard is Musk’s middle finger to Microsoft | LinkedIn

What is happening?

Lemme mansplain . . .

When the assistant can write a document, create a spreadsheet, log a call, file a ticket, and send an email, the assistant becomes a unified work surface.

Remember that name.

It will be the cockpit you’re living in eight hours a day. Your work becomes a sequence of intents and checks, and your tools become destinations the assistant drives.

Analysts have been throwing numbers at this for a while, as they always do.

Gartner predicted in August 2025 that up to 40 percent of enterprise applications will include task-specific AI agents by 2026. That is a signal that vendors and buyers are baking agent behavior into products at scale, not playing with demos for social media.

Gartner also forecasted that by 2028, 33 percent of enterprise software applications will include agentic AI, up from less than 1 percent in 2024, and that this could enable a meaningful chunk of day to day work decisions to be made autonomously.

So yes, it is definitely plausible we’ll have an assistant-workspace future, and a lot of it is already shipping in fragments. Still, it comes with two practical constraints that non technical people feel immediately.

[Image: Infographic of an enterprise AI assistant integrating with enterprise documents, SaaS tools, and databases, with arrows indicating tool calls leading to tasks like drafting RFPs, resetting accounts, and filing vacation requests.]

It’s going to be all about permissions and trust.

A chatbot that writes a paragraph is cute, but one hallucinating bot that can change records in Salesforce, send emails from your account, and edit files in a shared drive is another level entirely. So these systems need a clear permission model, audit trails, and limits. OpenAI themselves explicitly warn about write risks and prompt injection. That warning exists because the damage becomes real as soon as “tools” include “delete” and “send”.

And then there’s reliability and accountability.

A human can say “my bad” and fix it. A tool call that wrote the wrong value into a database creates a mess that spreads. The assistant will improve, but the shape of risk stays. That is why, I think, enterprise previews came early, with admin controls and private app publishing. OpenAI’s help docs describe developer mode and custom MCP apps in an enterprise context, including review and deployment for a company.

The result is going to be a new division of labor.

People will have to spend less time navigating interfaces, and so they will be able to focus more time on defining outcomes, checking results, and handling exceptions. You can call that productivity or you can also call it moving the work from clicking buttons to supervising automation, whatever you fancy. The supervisor job is still work. It just has better marketing.

So what are ChatGPT apps, in one clean sentence? They are tool integrations packaged for a chat-first platform, built on MCP and the Apps SDK, distributed through an app directory, and designed to make ChatGPT an action hub rather than a text box.

What they are not is a magic brain that’s going to replace your stack overnight. They are not a guaranteed upgrade to your life. They are a new interface layer that sits between you and your software, and it will reward tools that behave well and punish tools that hide behind confusing UX.

You will still have Word and Excel and CRMs and databases. But the assistant will talk to them while you sit there, acting all important and pretending you “orchestrate outcomes” as a true boss, when you really only told a machine to do your admin work with your name on the invoice.

Signing off,

Marco


I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.


👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google and the AI engines appreciate your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
  2. Shadow AI isn’t rebellion it’s office survival | LinkedIn
  3. Macrohard is Musk’s middle finger to Microsoft | LinkedIn
  4. We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
  5. Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
  6. Living in the post-human economy | LinkedIn
  7. Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
  8. Workslop is the new office plague | LinkedIn
  9. The funniest comments ever left in source code | LinkedIn
  10. The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
  11. OpenAI finally confesses their bots are chronic liars | LinkedIn
  12. Money, the final frontier. . . | LinkedIn
  13. Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
  14. China’s AI+ plan and the Manus middle finger | LinkedIn
  15. Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
  16. AI is screwing with your résumé and you’re letting it happen | LinkedIn
  17. Oops! I did it again. . . | LinkedIn
  18. Palantir turns your life into a spreadsheet | LinkedIn
  19. Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
  20. How AI went from miracle to bubble. An interactive timeline | LinkedIn
  21. The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
  22. The Buy Now – Cry Later company learns about karma | LinkedIn

