The end of software as we know it
Inside the software chrysalis, everything dissolves before it transforms.
Our team has been building on Claude for over a year. Since the launch of Claude Code, it has become far more than a coding assistant - it has turned into a new type of computer. We describe objectives in plain language. It reads our files, queries our databases, writes code, generates charts, drafts memos, and returns finished output - all from a terminal. No GUI. No buttons. No application windows.
As SemiAnalysis put it:
“Claude Code is the inflection point for AI Agents and is a glimpse into the future of how AI will function.” - SemiAnalysis, Claude Code is the Inflection Point
We agree. 4% of all GitHub public commits are now authored by Claude Code. On the current trajectory, that number reaches 20%+ by year-end. Something fundamental is shifting - and coding is just the beachhead. What we are experiencing is a canary in the coal mine for a profound reorganization of the software stack. The boundary between application and operating system is dissolving. The interface between human and machine is collapsing from visual to verbal. And the implications extend far beyond software development into every information business.
The Old Era
The technology stack that dominated computing for four decades has four layers: hardware at the base, the operating system above it, applications on top of that, and the graphical interface between application and human.
For forty years, humans adapted to this stack. We learned keyboard shortcuts. We memorized menu hierarchies. We followed workflows that developers prescribed for us, with little room for customization. We created accounts, accepted terms of service, and organized our work around the constraints of each application.
Value concentrated at the application layer because building complex software was expensive and switching costs were high. Once you stored your leads in Salesforce, your documents in Google Drive, your creative assets in Adobe, and your messages in Slack - that data became hard to move. Software companies optimized for this lock-in. It produced a generation of businesses trading at premium multiples on recurring revenue and feature moats.
The human brain was the only integration layer. We were the ones copying data between applications, context-switching between fifteen tabs, manually synthesizing information scattered across silos. The applications didn’t talk to each other. We did the talking for them.
The New Era
The stack is collapsing from four layers to three.
Hardware remains at the base - unchanged, still the physics constraint. Operating system still manages resources but now includes the model runtime. AI and interface merge into a single layer. The AI is simultaneously the reasoning engine and the interface. Natural language in, generative output out.
Applications vanish as a standalone layer. Their logic gets absorbed into the AI layer. Lead scoring, image editing, financial modeling, document formatting, data visualization - all of this becomes commodity inference. The AI doesn’t “use” applications. It replaces the need for them by synthesizing directly from data.
The output is format-agnostic. The same reasoning engine can produce a chart, a piece of code, an audio summary, a video, a formatted document, or a fully functional piece of software. Applications were containers - standardized, rigid, same for everyone. The AI layer is a synthesizer. Think of the difference between IKEA furniture and having a carpenter in your house who builds exactly what you need from raw materials, on demand.
This is what we experience daily. When we ask Claude Code to analyze a dataset, it doesn’t open Excel. It writes a script, runs the analysis, generates the visualization, and returns the result. When we need a report, it doesn’t open Google Docs. It reads the source material, reasons about it, and produces the output in whatever format we need. The application layer is absent.
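The kind of disposable script such a session produces can be sketched as follows. This is a hypothetical illustration - the data, column names, and function are invented, not the output of an actual Claude Code run:

```python
import csv
import io
import statistics

# Hypothetical inline data; in a real session the agent would read a
# file it was pointed at, e.g. a CSV export from a database query.
RAW = """region,revenue
north,1200
south,900
north,1500
east,700
south,1100
"""

def summarize(raw: str) -> dict:
    """Group revenue by region and return per-region totals and the mean."""
    rows = list(csv.DictReader(io.StringIO(raw)))
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["revenue"])
    return {
        "totals": totals,
        "mean_revenue": statistics.mean(float(r["revenue"]) for r in rows),
    }

if __name__ == "__main__":
    report = summarize(RAW)
    for region, total in sorted(report["totals"].items()):
        print(f"{region}: {total:.0f}")
    print(f"mean per sale: {report['mean_revenue']:.0f}")
```

The point is not the script itself - it is that the script is written, executed, and discarded inside one conversational turn, with no spreadsheet application ever involved.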
What It Means in the Big Picture of Information Technology
Zoom out far enough and you see this shift as the fourth inflection point in how humans handle information.
Each leap solved the previous era’s bottleneck but created a new one. Printing solved storage. The internet solved distribution but drowned us in noise. AI solves synthesis - but it has bottlenecks of its own. The most immediate is context: to reason, machines need access to everything.
Current application design is fundamentally at odds with this requirement. Information is organized in vertical silos by function. Google Drive stores documents. Superhuman handles email. Signal carries messages. Calendar manages time. Attio tracks business relationships. Substack publishes writing. Each application is a walled garden with its own login, its own data model, its own way of holding your information hostage. A human can context-switch between fifteen apps and mentally stitch the picture together. An AI needs unified access to reason across all of them.
The most obvious quick fix is connecting these silos through machine-readable protocols. APIs provide standardized doors into each silo - they’ve existed for decades but adoption remains uneven. MCP (the Model Context Protocol) is a newer standard that lets AI models connect to external tools and data sources through a universal adapter format. Together, they are the duct tape holding the transition together. They let AI reach into existing silos without tearing them down.
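The pattern MCP standardizes can be sketched in a few lines: each silo exposes its capabilities as named tools with machine-readable input schemas, and the model calls them through one uniform interface instead of silo-specific client code. This is a simplified illustration of the shape of the idea - the tool name and handler are invented, and the real protocol adds transports, sessions, and capability negotiation:

```python
import json

# Each tool: a name, a JSON-Schema description of its inputs, and a
# handler. The "search_email" tool and its stub handler are invented
# here purely for illustration.
TOOLS = {
    "search_email": {
        "description": "Search the mail silo for messages matching a query.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
        "handler": lambda args: [f"stub result for {args['query']!r}"],
    },
}

def list_tools() -> str:
    """What the model sees: machine-readable descriptions of every door."""
    return json.dumps(
        {name: {k: v for k, v in t.items() if k != "handler"}
         for name, t in TOOLS.items()}
    )

def call_tool(name: str, arguments: dict):
    """Uniform dispatch: the model never touches silo-specific clients."""
    return TOOLS[name]["handler"](arguments)
```

The duct-tape quality is visible in the sketch: the silo stays intact, and the adapter just drills a standardized hole in its wall.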
This works. For now.
What It Means for Software Businesses
The instinct of every software incumbent is to bolt AI onto the existing product. Preserve the interface, preserve the subscription, preserve the revenue model. Adobe added Firefly inside Photoshop. Figma embedded AI design assistants into its canvas. Microsoft launched Copilot across Office 365. Salesforce built Agentforce on top of its CRM.
If the interface collapses into language and output is generated directly by AI, adding AI features to your interface doesn’t save the interface. It’s like adding a GPS to a horse-drawn carriage after cars arrive. It won’t save software as we know it.
Not every software company is equally exposed. The critical variable is whether the moat exists beyond the software layer. Some examples: Uber looks like a software company but is actually a physical marketplace - 5 million drivers, regulatory licenses in 10,000+ cities, real-time liquidity density that took a decade and billions in subsidies to build. The app is the thinnest layer. Making software free makes Uber cheaper to operate, not easier to disrupt. Wolters Kluwer sells regulatory truth with legal standing. AI makes their data more queryable - it doesn’t replace it. Google’s moats sit below the software layer in cloud infrastructure, search index, and proprietary data assets. Salesforce is the exception that proves the rule - it’s moving down the stack with MuleSoft (API infrastructure), Data Cloud (derived data assets), and Agentforce (governed gateway for AI agents accessing enterprise data).
Public markets are pricing this in - aggressively.
The paradox is striking. Earnings are growing. Margins are stable or expanding. Yet stocks are cratering. Markets are repricing the terminal value of software moats. Current earnings are fine - but future earnings are worth less because the moat is dissolving.
Are markets overshooting? Possibly, in some cases. AI adoption won’t happen overnight. In regulated industries - healthcare, accounting, law - institutional inertia is enormous. Employees need retraining. Compliance workflows are sticky. Liability frameworks haven’t adapted. Legacy tech is protected by friction, not innovation. This buys time but the terminal direction is clear - the life cycle ends.
In private markets the repricing hasn’t even started. Most VCs have significant portfolio exposure to enterprise SaaS. The existential threat to interface-centric software isn’t reflected in book values. Adding AI features on top of a traditional SaaS product is cosmetic - it doesn’t address the structural shift. Most VCs didn’t hear the shot. Their LPs are looking at bloated book values that will quietly collapse over the next few years as markdowns catch up with reality.
What’s Next
APIs and MCPs are transitional. They solve the access problem by breaking open data silos, but they create a new one: when one AI provider connects to your email, files, CRM, calendar, and messages, you’ve built a single point of failure. Compromise one integration layer and the attacker gets everything. It’s a security nightmare.
The long-term stack needs to be fundamentally different. Some developments we follow closely: Fully homomorphic encryption allows computation on data that stays encrypted throughout - the AI reasons without ever seeing the plaintext. Zero-knowledge proofs let you verify a computation was done correctly without revealing the inputs. Apple’s Private Cloud Compute extends on-device security into the cloud through stateless, zero-trust enclaves where data is processed and immediately deleted. Local-first architectures keep data on your device and sync peer-to-peer, eliminating the central server entirely. On-device models handle sensitive queries without data ever leaving your hardware. None of these are mature yet but they are laying the plumbing of the next era. Plumbing is harder to replace than applications.
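The homomorphic idea - computing on data without ever decrypting it - can be demonstrated with textbook RSA, which happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This toy is deliberately insecure (tiny key, no padding) and is not full FHE, which supports arbitrary computation; it only makes the core property concrete:

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n  decrypts to  a * b.
# Toy parameters for illustration only - real keys are 2048+ bits,
# and real FHE schemes (CKKS, TFHE, ...) go far beyond multiplication.

p, q = 61, 53            # toy primes
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent (modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# Client encrypts two values; the server multiplies the ciphertexts
# without holding the private key or seeing any plaintext.
c1, c2 = encrypt(7), encrypt(6)
c_product = (c1 * c2) % n
assert decrypt(c_product) == 7 * 6
```

The server in this sketch did useful work on data it could not read - that is the property the next-era stack is being built around.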
Inflection has barely any SaaS exposure and is concentrated in companies that address critical bottlenecks of the emerging stack. We invest thematically and focus strictly on where the new architecture breaks down. Some examples: Trust and verifiability: Anytype (local-first, encrypted collaboration where users own their data), Fabric (custom silicon accelerating encrypted AI workloads), Ubitium (reprogrammable silicon for edge AI workloads). Data bottlenecks: Deep Earth (subsurface mapping / geospatial AI). Connectivity resilience: Hedy (an alternative networking stack that makes connected devices invisible and provides failover in contested environments). Physical-world integration: Ark (robotic fleet control - vertically integrated yet modular), NAD (drone interception network - vertically integrated yet modular), Levtek (modular industrial robotics), Stealth (positioning data through space laser ranging - physical sensor networks addressing data bottlenecks). None of these are pure software plays. None can be replicated by an LLM with API access. When software production costs go to zero, these companies become more valuable - they are antifragile in this era.
If you’re building at the bottlenecks of the new stack, we’d like to talk. And if you disagree with any of the above - we’d like to hear that too.

Great write-up, Alex. Very aligned with how I think about this.
Context is becoming one of the most important assets in any high-performing org — that implies a new category of tools for accumulating, governing, and activating it. When context is an asset, security is table stakes.
Most SaaS is effectively a sales org shipping a database with views and automations. AI solves many of those workflows via "software at inference time." But the agentic world creates new problems — orchestration, reliability, permissions, auditability, provenance — and new companies will solve those. Maybe we swap the S for an A. The business model holds — companies win by solving customers' problems deeply, not by expecting every team to become its own AI engineering shop.
Though there's a spectrum: in tools like Figma or Miro, people want buttons where they were yesterday, tradeoffs intentionally baked in. LLMs build great Teslas, not yet Ferraris.
Love the thesis that the big unlock is eliminating "human as router" — humans stop stitching and start conducting. Obviously love your set of bets.
APIs and MCPs are transitional - yes, agree. The timelines & nature of the transition are still partly unclear to me; the only thing I feel certain of re: "value" right now is that it comes from infrastructure, a foundational connection to the physical world, or distinct distribution/trust/access to a particular group that others cannot break or buy.