The Browser Is No Longer Neutral: AI Is Taking the Wheel
This is a column about technology. See my full ethics disclosure here.
Note to reader: This is very much a walk down memory lane and an editorial on where we’re headed.
The internet is, at its core, relatively simple; it’s just a network of computers sharing information, elegant in its original design. A website used to be something like a document: readable and direct. But over the years, we’ve bloated that simplicity with unnecessary interactivity, performance-choking frameworks, and endless layers of engagement tactics masquerading as user experience. The truth is, most websites today are still doing what they were doing ten years ago, just with 600% more overhead: dozens of third-party scripts and autoplaying videos that exist more for ad sales than for the user’s benefit.
The web has shifted from an information-first experience to an attention economy. Sites are no longer designed just to inform; they’re designed to keep you entertained. “Time on site” is no longer just a diagnostic metric; it’s a business goal. We dress it up in design thinking and conversion rate optimization, but at its core, the modern web is increasingly hostile to those seeking a straightforward answer. And now that the internet lives in our pockets, on our wrists, and even in our appliances, that hostility follows us around all day. The novelty of being constantly connected has worn off. Now it’s just noise; it’s distracting, exhausting, and often manipulative.
As more of life has migrated online, from scheduling appointments to checking kids’ school calendars, the stakes have risen, but the bar for an acceptable experience has hit rock bottom. The web is no longer optional; it’s the infrastructure of daily life. Despite its many capabilities, the internet often falls short of its original purpose: helping people find answers and communicate effectively. The ads are relentless, the UI patterns are increasingly deceptive, and the signal-to-noise ratio has never been worse.
Users are tired of it and are looking for something better.
For the better part of the last two decades, the web browser, at least, had stabilized and become boring in all the best ways. Chrome, Firefox, and Safari each have their quirks, but they’re generally aligned on how they parse and present the web. Web developers can build with confidence that their code won’t break due to rendering oddities or opinionated JavaScript implementations.
This wasn’t always the case. If you were around in the 1990s, you’ll recall the rollercoaster of the first browser war between Microsoft’s Internet Explorer and Netscape Navigator. The belligerents left web developers exhausted and the web divided. Optimizing for one browser could mean breaking things in another. Some websites relied on ActiveX, others on Java applets or custom tags. Standards existed mostly on paper; in practice, we were just hoping for the best. That era eventually gave us real governance for CSS and JavaScript, but it also left behind a mess of compatibility issues and technical debt that took years to clean up (the JavaScript Date object, anyone?).
Now, we’re seeing the casting call for the next chapter in this West Side Story, except this time it’s no longer a matter of style rendering or plugin support. It’s about the nature of the browser itself, what it is, what it does, and how we surf the web. This new era isn’t defined by visual differences or performance benchmarks. It’s going to be defined by intelligence and automation. The new turf war will be a dance-off between traditional and AI browsers.
AI is creeping its way into everything, so why not the browser? The idea is that the browser doesn’t just show you the web but actively interprets and automates it on your behalf. Of all the wacky ways I’ve seen AI integrated into things (most of which it should stay out of), the browser seems pretty logical to me.
In response to the endless chore of clicking through pages and sifting through ad-laden recipes or longform articles, we’re increasingly comfortable typing a question like “Can my dog eat pineapple?” and getting a direct, synthesized answer. Google’s SGE, ChatGPT, Gemini, and Claude are already doing this, pulling content from across the web and delivering it in digestible, context-rich summaries. The idea of “search” as a list of blue links is giving way to something closer to a digital assistant. This shift is subtle in daily use, but monumental in its implications.
Right now, the leading players are Comet (Perplexity) and Dia (The Browser Company of New York). These are just the first two serious candidates to enter the market; I’d expect more entrants, like OpenAI, soon. Word on the street is that Google has been considering this for Chrome for some time now, and naturally, Apple will wait until the dust settles and then introduce something with smooth edges. Regardless of how we get there, what I do know is that three years from now, the browsing experience will be wildly different from what it is today. The intrinsic personalization of tools like ChatGPT and Gemini will deliver an internet mechanically crafted for each individual user, and agentic tooling will automate many of the mundane tasks we’re required to perform.
Traditional websites, as destinations, are beginning to show some strain. If the user never actually “visits” your site but instead gets a high-confidence answer derived from your content, what happens to your homepage? Your navigation bar? Your carefully honed call-to-action button? These artifacts of visual UX lose relevance when the browser becomes an agent — an interface layer that distills, redirects, and reinterprets information in real time.
Websites won’t become obsolete; they’ll become source material for an experience you’ll control less of.
I don’t think this marks the death of the website, as some of my peers claim; rather, the website will remain the central support pillar of every digital footprint because it is still the most authoritative source of first-party brand information. The website provides much of the raw material for generative systems to digest. Remember that LLMs are text-based models, and they need a lot of text to feed on.
Large language models and agentic systems pull context, meaning, structure, and intent from the HTML itself. Technologies like the Model Context Protocol (MCP) are emerging to facilitate more advanced interactions. Think of it as schema.org on steroids: actionable, queryable, and native to LLM workflows.
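To make that concrete, here is a minimal, hypothetical sketch of the kind of machine-readable structure schema.org already makes possible: a recipe page embedding JSON-LD that an agent can read directly instead of scraping the rendered layout. Every name and value below is invented for illustration.

<!-- Hypothetical schema.org markup embedded in a recipe page as JSON-LD; all values are illustrative. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Pineapple Fried Rice",
  "author": { "@type": "Person", "name": "Example Author" },
  "totalTime": "PT25M",
  "recipeYield": "4 servings",
  "recipeIngredient": ["2 cups cooked rice", "1 cup diced pineapple"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Heat a wok over high heat." },
    { "@type": "HowToStep", "text": "Stir-fry the rice and pineapple for five minutes." }
  ]
}
</script>

An answer engine or agentic browser can lift the ingredients and steps straight out of that block without fighting the page’s layout or ads; MCP and similar efforts aim to push the same idea further, from describing content to exposing actions an agent can take.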
The future of web design will have to account for both audiences: not only the human experience, but also the machine experience (MUX), in which AI agents interact with the site directly. Visual UX will coexist alongside conversational experiences, meaning the website will grow from a collection of human-readable documents into an interoperable system for agentic AI.
This, without a doubt, is the most significant change to the internet since the release of JavaScript and dancing hamsters. Some might argue that it’s the most significant since the GUI replaced the command line, and there’s truth to that, especially since this is changing how we access and consume information. The big risk here is the loss of transparency. As users become accustomed to agentic summaries, the provenance of information becomes harder to verify. The source fades. The editorial judgment that went into creating the content is flattened into a personalized bullet point. There’s power in that, but also danger.
Today’s web developer is already a thing of the past. The next generation will need to focus on more than just front-end design and component libraries. They’re going to be architecting for dual audiences: one human and one agentic. They’ll need to be experts in building knowledge infrastructure.
What comes out the other side of this is a web that is increasingly dual-natured: readable and explorable by humans, and parseable and trustworthy to machines. It’s a return to the early ethos of the internet, which was meant to be semantic, accessible, and decentralized.
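That ethos is less exotic than it sounds. As a small, hypothetical before-and-after, here is the same content published as anonymous div soup and then as semantic HTML that a screen reader and an AI agent can both parse without guessing; the class names and copy are invented for the example.

<!-- Before (hypothetical): structure only a sighted human can infer from styling. -->
<div class="box box--title">Can My Dog Eat Pineapple?</div>
<div class="box">Yes, in small amounts. Fresh pineapple is safe for most dogs.</div>

<!-- After (hypothetical): the same content with its structure made explicit. -->
<article>
  <h1>Can My Dog Eat Pineapple?</h1>
  <p>Yes, in small amounts. Fresh pineapple is safe for most dogs.</p>
</article>

Nothing in the second version is new; it is the same accessibility guidance the web has had for years, which is why the agentic turn reads more like a return than a revolution.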
The first browser war gave us CSS, JavaScript, and a blueprint for a media-rich web. It also left behind years of technical debt and bloat. The second time around will usher in a new kind of information reality.
But this round will come with consequences of its own. Sites that are opaque or overly interactive may fall out of the agentic context window. Developers who ignore the AI transition and agent readiness may find their platforms skipped entirely.
Ultimately, the high scorers of this new Web 4.0 won’t just render fast or look sleek; they’ll provide value to both humans and AI agents. This isn’t the death of the web. It’s just another inflection point.
The browser is dead. Long live the browser.