The Agentic Web: how AI systems will change websites

Let's share what we know

When Tim Berners-Lee created the World Wide Web, his goal was to foster collaboration between people. Even today, in the AI era, websites are still fundamentally a human construct. You create a website to share what you know, or to reach customers, or as a home base for you or your organization.

The difference today is that people will increasingly deploy AI agents to interact with our websites on their behalf. You might say we’re moving from a read/write web, where we can both browse and create websites, to an agentic web — where we can read and write the web with the help of agents.

As website publishers or operators, it’s up to us to adapt to this new agentic reality. While we will continue to publish our sites for our fellow humans, we have to recognize that how we reach those people will increasingly be through AI intermediaries. So, the way we build and deploy websites has to change accordingly.

Websites as capabilities

The shift we’re undergoing due to AI comes down to this: we’re moving from a world of websites as content, to a world of websites as capabilities.

Instead of just offering up information, websites will increasingly expose actions that AI systems can take. You might say, well isn’t that what APIs (Application Programming Interfaces) are for? In a sense, yes. APIs already expose a site’s capabilities, but they are designed for developers and typically require explicit integration. They define a fixed set of operations that another system can call in a structured way.

What’s changing is that AI systems can use those capabilities dynamically, rather than relying on predefined integrations. An agent can decide in real time which tools to use, how to combine them, and when to call them. This makes those capabilities far more accessible. Also, because AI systems use natural language, anyone can use them — so it’s not just developers who can take advantage of what your website offers.
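
To make the idea concrete, here is a toy sketch (all names invented) of a site exposing two capabilities as self-describing tools. The keyword-matching "selector" stands in for the reasoning an LLM would do when it picks a tool at run time:

```javascript
// Toy sketch: a website's capabilities as self-describing tools.
// Tool names and handlers are invented for illustration.
const tools = [
  {
    name: "search_articles",
    description: "Search the site articles for a query",
    handler: (args) => `results for "${args.query}"`,
  },
  {
    name: "subscribe_newsletter",
    description: "Subscribe an email address to the newsletter",
    handler: (args) => `subscribed ${args.email}`,
  },
];

// Stand-in for the model's reasoning step: score each tool by how many
// significant words its description shares with the user's intent.
function selectTool(intent, available) {
  const words = intent.toLowerCase().split(/\W+/);
  let best = null;
  let bestScore = 0;
  for (const tool of available) {
    const descWords = tool.description.toLowerCase().split(/\W+/);
    const score = descWords.filter((w) => w.length > 3 && words.includes(w)).length;
    if (score > bestScore) {
      best = tool;
      bestScore = score;
    }
  }
  return best;
}
```

An agent runtime does the same thing with an LLM in the loop: read each tool's description, match it against the user's request, then call the chosen handler with structured arguments.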

You as the website owner are still in control of how AI systems — especially agents — can use your site.

To be clear, you as the website owner are still in control of how AI systems — especially agents — can use your site. In fact, that’s one of the benefits of the Agentic Web: there are emerging protocols and standards that give you that control.

In practice, this will increasingly be expressed through identity and permissions. Just as users today log into websites and grant apps access via OAuth, agents will need a way to act on a user’s behalf with clearly defined scopes. That means new patterns for authentication — where an agent can prove who it represents, what it’s allowed to do, and under what conditions.
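
As a rough illustration of what delegated, scoped access could look like, here is a sketch with an invented grant shape; real implementations would build on OAuth and OpenID Connect rather than this ad-hoc check:

```javascript
// Sketch of scope-checked delegated access, loosely modeled on OAuth.
// The grant shape and field names are illustrative, not a real spec.
function authorize(grant, action) {
  if (Date.now() > grant.expiresAt) {
    return { allowed: false, reason: "grant expired" };
  }
  if (!grant.scopes.includes(action)) {
    return { allowed: false, reason: `scope "${action}" not granted` };
  }
  return { allowed: true, onBehalfOf: grant.user };
}

const grant = {
  user: "alice@example.com",    // who the agent represents
  agent: "trip-planner-bot",    // which agent holds the grant
  scopes: ["read:listings", "create:booking"], // what it may do
  expiresAt: Date.now() + 60 * 60 * 1000,      // and for how long
};
```

The three fields mirror the three questions above: who the agent represents, what it is allowed to do, and under what conditions.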

Early work is already underway: from the World Wide Web Consortium’s AI Agent Protocol Community Group, which is exploring agent identity and inter-agent authentication, to emerging extensions of OAuth and OpenID Connect designed for delegated agent access. This is still early, but it will become a foundational layer of the agentic web.

What’s driving the Agentic Web

Let’s look at four of the key driving forces behind the Agentic Web.

1. Websites are exposing capabilities

The Model Context Protocol (MCP) has been the primary way websites have opened up their data, tools and services to AI systems. MCP was created by Anthropic and launched in November 2024; since then it’s become a leading method for AI agents and chatbots to trigger external actions.

While MCP is widely used to connect LLMs to web apps, more specialized protocols and standards are emerging that are better suited to websites running in the browser.

WebMCP, which I have implemented on this very website, is a model where web pages expose MCP tools directly to the browser so that AI assistants running there can interact with them. WebMCP is still early — your users either need a browser extension (MCP-B) or an experimental version of Chrome to make use of it. But, eventually, WebMCP will become a native browser capability; and likely much easier to use than it is now.
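
The WebMCP API surface is still experimental and may change; the sketch below assumes a navigator.modelContext-style registration point (guarded so the page degrades gracefully where it is unsupported), with an invented search_posts tool:

```javascript
// Hedged sketch of a page exposing a tool to in-browser AI assistants.
// The registration surface (navigator.modelContext) is an assumption
// about where the proposal is heading; shipping browsers may differ.
const searchTool = {
  name: "search_posts",
  description: "Search this blog's posts by keyword",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  // The handler is ordinary page JavaScript; here it is a stub.
  async execute({ query }) {
    return { content: [{ type: "text", text: `Posts matching "${query}"` }] };
  },
};

// Register only where the experimental capability actually exists.
if (typeof navigator !== "undefined" && navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool(searchTool);
}
```

The key point is that the tool lives in the page itself: the assistant running in the browser discovers it, reads its schema, and calls it without scraping the UI.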

The most common way websites are exposing new capabilities in 2026 is via a custom chatbot. I built a chatbot called Ask Ricmac for my website. Here’s how it works: instead of searching my site manually, a visitor can ask a question and receive an AI-generated answer based on the articles I’ve written over the years. As with the general purpose chatbots we’ve become familiar with over the past few years, like ChatGPT and Claude, Ask Ricmac is connected to a cloud-based AI system to deliver its answers (in my case, Cloudflare Workers AI). The difference is that the AI is using my website content as a knowledge base, which it accesses via a vector database.
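
The retrieval step behind a chatbot like Ask Ricmac can be sketched as nearest-neighbor search over embedded article chunks. The vectors below are tiny hand-made stand-ins; in production they would come from an embedding model, and the index would live in a vector database:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks closest to the query vector.
function retrieve(queryVec, index, k = 2) {
  return index
    .map((chunk) => ({ ...chunk, score: cosine(queryVec, chunk.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy "vector database": article chunks with hand-made embeddings.
const index = [
  { text: "WebMCP lets pages expose tools to browsers", vector: [0.9, 0.1, 0.0] },
  { text: "RSS history and the early blogosphere",      vector: [0.1, 0.9, 0.2] },
  { text: "Agents calling MCP tools on websites",       vector: [0.8, 0.2, 0.1] },
];

const top = retrieve([1, 0, 0], index);
const contextPrompt = `Answer using only this context:\n${top.map((c) => c.text).join("\n")}`;
```

The retrieved chunks are then prepended to the visitor's question and sent to the model, which is what grounds the answer in the site's own content.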

As these capabilities mature, they won’t just return information — they will increasingly execute transactions. Booking a trip, purchasing a product, subscribing to a service: these are all actions agents can perform on a user’s behalf. That introduces an economic layer to the agentic web, where websites are not just sources of content or tools, but endpoints for value exchange. Designing for this means thinking about payments, pricing, and how your services are consumed programmatically by agents.

2. User interfaces for websites are changing

We’re starting to see early agentic systems emerge — from tools like Claude, which can take actions on a user’s behalf, to more experimental projects like OpenClaw that explore multi-agent control of computers.

As website operators, we want agents like this to visit our websites and gather information or take actions, although of course we'll want to control that access.

The first generation of AI systems interacting with websites involved a lot of ‘dumb’ browsing: headless browsers, driven by automation frameworks like Playwright, autonomously running tasks in the background. I say ‘dumb’ because the AI systems often used brute-force tactics to surface the content they required (web scraping) or to perform a specific task (filling in a form, for example). Needless to say, website owners had little control over this process.

But the way AI systems ‘browse’ your website is becoming smarter, which in turn is changing how we as website operators think about user interfaces. A few different trends are driving this.

Agentic tools are becoming much smarter and more capable of interacting with websites.

Firstly, as explained above, the second generation of AI systems interacting with websites has a lot more guidance from website owners on where to find certain information or capabilities to run. Secondly, AI is getting baked into mainstream browsers like Chrome and Firefox — essentially making agentic functionality a native part of consumer web browsers. Thirdly, agentic tools are becoming much smarter and more capable of interacting with websites; tools like Anthropic’s Claude and OpenAI’s various apps.

As one example, Ghost CMS founder John O’Nolan recently blogged about his experience using Claude’s CLI (command-line interface) as a way to interact with websites. He remarked:

“We’ve spent 10+ years focusing on having a clean, well designed interface for Ghost. It’s something we care a lot about, and spend a lot of time on.

But within about ~1hr of using Ghost via Claude/CLI, it was hard to imagine going back to caveman-clicking around a browser to get something done. Particularly for complex or compound tasks that might require visiting several different areas of the app.”

O’Nolan ended up building a custom CLI tool for Ghost, called ghst.

I mentioned OpenAI’s apps, too — recently, the Wall Street Journal reported that OpenAI is building a desktop ‘superapp’ that will combine ChatGPT, the Codex app and the Atlas browser. This suggests that “browsing” will become even more tied to natural language prompts than it is now.

This shift also changes how users discover websites. Instead of navigating via links or search results, users increasingly rely on AI systems to retrieve and synthesize information for them. That puts more emphasis on how your content is structured, how easily it can be extracted, and whether it is considered a reliable source. In effect, traditional SEO evolves into something closer to “AI retrieval optimization” — where the goal is not just to rank, but to be selected and cited by AI systems.

3. The browser is evolving

Even if traditional web browsers survive, they will undoubtedly have more and more AI baked into them. This means that the browser will effectively become an AI runtime.

A key trend to track here is the rise of on-device AI, or “local AI” — where an AI model, sometimes called a Small Language Model (SLM), runs on your computer or smartphone instead of in the cloud. Because inference happens entirely on your device, your browser can become a full runtime for AI applications.

In my continuing Web AI experiments, I built an “article assistant” that can answer questions about a page using local AI in the browser — with a cloud fallback when needed. If you’re reading this post on my website, you can see this in action under the second paragraph.

From a user point of view, here’s how it works: I downloaded Google’s Gemini Nano model onto my laptop, which I can then use to run AI queries on websites that have enabled this functionality (like on ricmac.org). Now, as with WebMCP, this functionality is currently restricted to experimental versions of Chrome. So on-device AI is still very early in its evolution. But directionally, the trend is clear: increasingly you will run AI systems locally, especially for applications where privacy and speed are paramount.
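
The local-first, cloud-fallback pattern can be sketched like this; createLocalSession and askCloud are invented stand-ins for an on-device API (such as Chrome's experimental Prompt API around Gemini Nano) and a hosted endpoint:

```javascript
// Local-first with cloud fallback. `createLocalSession` and `askCloud`
// are illustrative stand-ins, injected so the pattern is easy to test.
async function answer(question, { createLocalSession, askCloud }) {
  try {
    // Throws (or rejects) when no on-device model is available.
    const session = await createLocalSession();
    return { source: "local", text: await session.prompt(question) };
  } catch {
    // Fall back to a hosted model endpoint.
    return { source: "cloud", text: await askCloud(question) };
  }
}
```

In the browser, createLocalSession would wrap whatever on-device API ships (availability checks included), and askCloud would call your own server, so privacy-sensitive queries stay local whenever the hardware allows it.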

Much of the evolution of the agentic web can be understood as a shift from browsing to tools.

Broadly speaking, there are two ways AI systems interact with websites today. The first is browsing mode — navigating pages, extracting content, and interacting with interfaces much like a human would. The second is tool mode — where the website exposes structured capabilities that an agent can call directly, without needing to parse the UI. Much of the evolution of the agentic web can be understood as a shift from browsing to tools.

4. Developer platforms are adapting

One of the most encouraging things about the AI era is that web technologies have become central to the technology stack. This makes it fundamentally different from other internet revolutions of our recent past, like smartphone apps (which run on non-web mobile environments, largely controlled by Apple and Google) and blockchain (which, despite all the talk of turning into an application platform, still barely uses the web). But the AI revolution is well and truly web-based.

All those vibe-coded apps people are building? They’re typically running as web apps on platforms like Cloudflare, Vercel and Netlify. Indeed, those companies have all experienced explosive growth due to AI applications. Vercel founding CEO Guillermo Rauch recently told Fortune that it has “seen a tremendous acceleration on deployments” and that “fundamentally, we want to become the infrastructure layer of this new generation of software.”

While agents aren’t necessarily web-based, they are typically using a web frontend.

As well as hosting apps, companies like Vercel are increasingly hosting AI agents. While agents aren’t necessarily web-based, they are typically using a web frontend — Vercel’s frontpage for agents highlights the use of Next.js (its open source React framework) to build the “user input” layer for agents. Vercel also notes that agents can connect to “external tools with MCP endpoints.”

All of the big AI companies are building developer platforms — which are either web-based or use web technologies as the foundation of the user interface layer.

One of the first such initiatives was MCP-UI, a 2025 MCP ecosystem project closely aligned with Anthropic and used by Shopify (amongst others). Basically, MCP-UI enables developers to build web UIs for their AI chatbots.

Soon after, OpenAI launched Apps SDK, along with AgentKit and other UI tooling. While these tools cover various development platforms — including the web, iOS and Android — OpenAI’s approach to UI has been similar to MCP-UI, in that it’s fundamentally based on browser technologies.

Towards the end of last year, MCP Apps was announced. It’s a proposed open standard “for interactive user interfaces in the Model Context Protocol” and is supported by both Anthropic and OpenAI. Shortly after, Google launched A2UI, an open source project to help developers build “agentic user interfaces.”

It’s also possible that some services that now exist as apps might become entirely agentic products and not have any human UI at all. Or at least, as a16z VC Andrew Chen put it, “the ui is just a debug layer for humans to peek into what the agents are doing.” That might become the case for some transactional services, such as travel apps. But even then, most of those services will still have a website for branding purposes.

For website operators, credibility is no longer just a brand concern — it becomes a technical property.

As agents take on a greater role in interacting with the web, trust becomes a critical factor. AI systems need to determine which sources are reliable, which actions are safe to execute, and which outputs can be verified. This is driving renewed interest in areas like content provenance, reputation signals, and verifiable data sources. For website operators, it means that credibility is no longer just a brand concern — it becomes a technical property that influences whether agents choose to engage with your site at all.

What the Agentic Web means for different roles

For product teams

The shift to the agentic web means you are no longer designing just for human users — you are designing for agents acting on behalf of users.

That changes the product surface in three key ways.

First, your product needs to expose clear, structured capabilities. It’s no longer enough to have a well-designed interface; you need to make actions legible to machines, whether via APIs, MCP tools, or other emerging standards.

Second, you need to think about agent experience (AX) alongside user experience (UX). How easily can an AI system discover what your product does, understand how to use it, and execute tasks reliably? In many cases, the “interface” your users rely on will not be your UI, but an agent translating intent into actions.

Control over AI systems becomes a core product concern.

Third, control becomes a core product concern. You will need to define what agents are allowed to do, under what conditions, and on whose authority. That includes authentication, permissions, rate limits, and safeguards — all of which will become part of the product surface.
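
A minimal sketch of such guardrails, with an invented policy shape: each agent identity gets a list of permitted actions and a simple request budget:

```javascript
// Sketch of agent-facing guardrails as part of the product surface.
// The policy shape is illustrative; real systems would also verify the
// agent's identity and the user authority it acts under.
function createGate(policy) {
  const used = new Map(); // agentId -> requests made so far
  return function check(agentId, action) {
    const rules = policy[agentId];
    if (!rules) return { ok: false, reason: "unknown agent" };
    if (!rules.actions.includes(action)) return { ok: false, reason: "action not permitted" };
    const count = used.get(agentId) || 0;
    if (count >= rules.limit) return { ok: false, reason: "rate limit exceeded" };
    used.set(agentId, count + 1);
    return { ok: true };
  };
}
```

The point is less the mechanics than the framing: permissions, budgets, and refusal reasons become part of what your product exposes to agents, just as error messages are part of what it exposes to humans.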

In short: products become platforms for both humans and agents, and success will depend on how well you serve both.

For publishers

For publishers, the agentic web represents both a threat and an opportunity.

The threat is clear: fewer users will visit your site directly. Instead, AI systems will increasingly intermediate access to your content — summarizing it, extracting insights, and presenting it elsewhere.

But the opportunity is just as significant. Your website becomes the source of truth that feeds those systems.

That means your focus shifts from page views to:

  • being discoverable by AI systems
  • being trusted as a source, and
  • being structured in ways machines can understand

Practically, this means:

  • ensuring content is accessible (not locked behind barriers)
  • using structured data and clear information architecture
  • thinking about how your content will be retrieved, chunked and cited
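
In practice, "structured data" usually means schema.org vocabulary embedded as JSON-LD in the page. A minimal Article snippet might look like this (the author and date values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The Agentic Web: how AI systems will change websites",
  "author": { "@type": "Person", "name": "Your Name" },
  "datePublished": "2026-01-01",
  "description": "How websites are evolving from content to capabilities."
}
</script>
```

Markup like this gives retrieval systems an unambiguous statement of what the page is, who wrote it, and when, rather than forcing them to infer it from the HTML.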

Now, having said all that, there are currently major concerns about how publishers will be compensated in this new agentic era. How can we earn a living in a world in which fewer fellow humans visit our websites, a problem exacerbated by Google referral traffic drying up?

This is an existential crisis for the online media industry — and by extension the open web.

There is some hope: publisher-friendly companies like Cloudflare are trying to encourage creator compensation from AI vendors, and there are emerging protocols like Really Simple Licensing (RSL) that aim to help publishers license their content. But it's still early days and many publishers are rapidly downsizing (I myself recently lost my journalism job). So this is very much an existential crisis for the online media industry, and indeed the open web itself.

The Web as an open book; image via Tim Berners-Lee.

But back to the positives. The Agentic Web does open up new product possibilities for publishers, from site-specific assistants (like my own “Ask Ricmac” feature) to premium, agent-accessible knowledge services.

Directionally, a publisher’s role in the Agentic Web is starting to move upstream: from destination to data layer. But, to be very clear, it’s critical that the compensation layer adapts accordingly — otherwise where are the incentives for publishers?

For developers

For developers, the agentic web expands the scope of what it means to build for the web. You are no longer just building interfaces and APIs — you are building capabilities that agents can discover, reason about, and use autonomously.

That requires a shift in mindset.

You need to design systems that are machine-readable by default.

First, you need to design systems that are machine-readable by default — not just human-friendly. Clear schemas, predictable outputs, and well-defined actions become critical.

Second, you need to think in terms of orchestration, not just endpoints. Agents will combine multiple tools and services dynamically, so your systems need to behave reliably as part of a broader workflow.

Third, you will increasingly work across a hybrid stack:

  • browser-based AI (local models, Web APIs)
  • cloud-based inference
  • tool protocols like MCP, and
  • traditional web infrastructure

Finally, debugging changes. As Andrew Chen suggested, the UI may become “a debug layer” and developers will need new tools to understand what agents are doing, why they made decisions, and how to guide them.

In short: building for the web now means building for humans, agents, and the interactions between them.

What’s next on the Web

The Agentic Web won’t replace the web, but it will change how it is used.

Websites will continue to exist as places where people publish ideas, build products, and express identity. But increasingly, they will also function as endpoints for agents — systems that retrieve information, execute actions, and mediate how users interact with the web.

The Agentic Web is an evolution of the original Web.

In that sense, this is less a break from the original vision of the web than an evolution of it. Tim Berners-Lee imagined a system for sharing knowledge between people. What’s emerging now is a system where that knowledge can also be accessed, interpreted, and acted upon by machines on our behalf.

The key shift is not from humans to AI, but from interfaces to capabilities — from navigating websites to invoking what they can do.

For anyone building on the web today, the implication is clear: you’re no longer just designing pages or apps. You’re designing how your site will be understood and used by agents.

Feature image: Original WWW graphic by Robert Cailliau.

Consulting

Make your site AI-ready

I help publishers and tech companies adapt to the agentic web — from AI discoverability to on-site assistants and Web AI strategy.

