The post-visual web

December 29, 2025 · 6 min read

There’s a quiet assumption baked into most web development: that humans will keep visiting websites with their eyes.

I’m not convinced that’s true for the next decade.

I think the dominant interface to the internet is shifting from visual browsing to natural language querying. And once that becomes the default, the most valuable websites won’t be the ones that look impressive. They’ll be the ones that are easy for machines to ingest, reason over, and cite.

In other words: the web becomes less of a showroom and more of a dataset.

llms.txt, MCP, and the new distribution layer

If you want a single heuristic for where things are going, it’s this: any framework that pairs an llms.txt file with an MCP server pointed at its documentation is already living in the future.

Not because AI is trendy, but because it acknowledges a change in the primary consumer of docs. Historically, docs were written for humans. The ideal doc experience was visual: navigation, search, code blocks, examples, cross-links.

In a natural-language-first world, the ideal doc experience is semantic: structured, stable, parseable, and fast to retrieve.

llms.txt is a signal. MCP is the pipe. Together they turn a documentation website into an interface that an agent can use as a reliable knowledge source, not just a page a human reads.
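For readers who haven’t seen one: the llms.txt proposal is just a markdown file served at /llms.txt, with an H1 title, a blockquote summary, and H2 sections containing link lists. A hypothetical sketch (the project and URLs are invented):

```markdown
# ExampleDocs

> Hypothetical docs for a web framework. Everything here is
> illustrative, not a real project.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first build
- [Routing](https://example.com/docs/routing.md): file-based routing reference

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The whole point is that an agent can fetch this one flat file and immediately know which pages are worth retrieving, with no rendering step.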

And once you see docs this way, it becomes obvious that it won’t stop at docs.

The real shift: from visiting to asking

Today, if I want a haircut, the old internet asks me to do something weird.

  • open a browser
  • search for Paul’s Barbers
  • click through a map pack
  • open a website
  • scan for prices, hours, location, reviews, booking

That workflow is visual because the interface is visual. But the intent isn’t visual. The intent is: Is Paul good, nearby, and available?

In the near future, I won’t visit Paul’s website first. I’ll ask my LLM: Is Paul’s Barbers good? What are the prices? Can I book for Friday? Is parking annoying? Do they do fades well?

The LLM will answer by synthesizing data from places it trusts: reviews, maps, socials, and, if Paul is smart in this future, Paul’s own website.

And here’s the punchline: Paul’s best website for that world is boring.

Static HTML. Minimal images. Minimal interaction. Clean headings. Clear prices. Clear hours. Clear location. Clear booking link. A page that loads instantly and reads like a structured fact sheet.
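As markup, that fact sheet might be nothing more than the following. This is a sketch, and all the business details are invented:

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Paul's Barbers — Prices, Hours, Booking</title>
</head>
<body>
  <h1>Paul's Barbers</h1>
  <h2>Prices</h2>
  <ul>
    <li>Haircut — £18</li>
    <li>Skin fade — £22</li>
  </ul>
  <h2>Hours</h2>
  <p>Tue–Sat, 9:00–18:00</p>
  <h2>Location</h2>
  <p>12 Example Street (free parking on side streets)</p>
  <h2>Booking</h2>
  <p><a href="https://example.com/book">Book online</a></p>
</body>
</html>
```

No framework, no hydration, nothing to execute. Every fact sits under a heading that names it.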

Perfect for bots. Great for humans too, but optimized for the actual distribution channel: the model.

Websites as semantic pages

I think websites will converge toward flat, semantic info with light CSS.

Not because design stops mattering, but because the job of the website changes.

Right now, websites do three things at once.

  • persuade humans
  • rank on Google
  • serve as a brand artifact

In an agent-mediated world, websites start doing something else.

They become canonical sources of truth that machines can query.

That changes the architecture. It changes the incentives. It changes what good UX even means.

Good UX becomes: low latency, high clarity, and high parseability.

A beautiful parallax animation is irrelevant if the model can’t reliably extract your prices and hours without executing a JavaScript carnival.
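One way to make that extraction cheap is to publish the facts as structured data alongside the prose, for example a schema.org block embedded in a `<script type="application/ld+json">` tag. A sketch with invented details:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Paul's Barbers",
  "url": "https://example.com",
  "priceRange": "££",
  "openingHours": "Tu-Sa 09:00-18:00",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "12 Example Street",
    "addressLocality": "Exampletown"
  }
}
```

Crawlers have parsed this format for years for rich search results; agents inherit the same benefit for free.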

JavaScript is not free anymore

Yes, bots can read and render JavaScript. But it’s slow.

And this is the subtle point a lot of people miss: the constraint isn’t whether a bot can do it. The constraint is whether it’s acceptable at scale, within the latency budget of a good response.

If an agent is synthesizing an answer from 20 sources, and 8 of them require heavy client-side rendering to extract basic facts, the agent either:

  • slows down
  • avoids those sources
  • uses partial data and becomes less accurate

None of those outcomes are good for you.
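One way to picture the trade-off: a hypothetical agent with a fixed latency budget keeps the cheapest sources and drops the rest. The function, names, and cost numbers below are all invented for illustration:

```python
def pick_sources(sources, budget_ms):
    """Greedy sketch: keep cheap-to-extract sources first until the
    latency budget is spent. `sources` maps name -> estimated cost in ms."""
    chosen, spent = [], 0
    for name, cost in sorted(sources.items(), key=lambda kv: kv[1]):
        if spent + cost <= budget_ms:
            chosen.append(name)
            spent += cost
    return chosen

# Static pages cost tens of milliseconds to fetch and parse; heavy
# client-side rendering can cost seconds per page.
sources = {
    "static-fact-sheet": 40,
    "reviews-api": 120,
    "maps-listing": 150,
    "js-heavy-spa": 3500,
}
print(pick_sources(sources, budget_ms=1000))
# → ['static-fact-sheet', 'reviews-api', 'maps-listing']
```

Under any selection policy like this, the JavaScript-heavy site is the first thing cut, regardless of how good its content is.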

So JavaScript becomes something you spend like money. You use it when it buys you something real. Not because you can.

This pushes the web toward an older truth that we’ve collectively forgotten: native HTML is the fastest API.
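That claim is easy to demonstrate: a flat semantic page can be scraped with nothing but the standard library. A rough sketch, where the page and parser are invented for illustration and deliberately not robust:

```python
from html.parser import HTMLParser

class FactSheetParser(HTMLParser):
    """Collect the text under each <h2> heading of a flat, semantic
    page. Shows how little machinery static HTML needs."""
    def __init__(self):
        super().__init__()
        self.facts = {}
        self._key = None
        self._in_h2 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._in_h2:
            self._key = text
            self.facts[self._key] = []
        elif self._key:
            self.facts[self._key].append(text)

page = """
<h1>Paul's Barbers</h1>
<h2>Hours</h2><p>Tue-Sat, 9:00-18:00</p>
<h2>Prices</h2><ul><li>Haircut: £18</li><li>Fade: £22</li></ul>
"""

parser = FactSheetParser()
parser.feed(page)
print(parser.facts["Hours"])  # one fetch, zero JavaScript execution
# → ['Tue-Sat, 9:00-18:00']
```

Compare that with spinning up a headless browser, waiting for hydration, and hoping the selectors are stable. The semantic page is the faster API.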

My advice: build for machines without forgetting humans

If I were building a new framework today, or even evolving a design system, my goal would be simple.

Ship blazing fast, semantic HTML by default. Use CSS heavily. Use as little JavaScript as possible. Treat interactivity as a scarce resource.

This is why Astro is such a strong example of a meta-framework. It’s opinionated about shipping less JavaScript. It makes the default outcome closer to what both humans and agents want: fast pages with clear structure.

If I were working on something like ShadCN or MUI, I’d orient the roadmap around the same principle: components that degrade gracefully into semantic markup, minimal client runtime, and predictable structure.

The aesthetic layer matters, but it should sit on top of a stable semantic core. The semantic core is what the model reads. The aesthetic layer is what the human enjoys if they still choose to visit.

The uncomfortable prediction

The uncomfortable implication is that the internet as we know it may become a secondary view.

Humans won’t stop using screens, but screens will stop being the primary router of attention. Natural language becomes the front door. The website becomes the backing store.

You won’t win by building the prettiest site.

You’ll win by being the source your user’s model trusts when it answers: Should I go to Paul’s Barbers?

And in that world, boring is a feature.

A static page with clear, semantic information and light CSS is not a compromise. It’s an adaptation to a new distribution layer.

The web doesn’t become less human. It becomes less visual.

And the developers who internalize that early will build the frameworks everyone else eventually migrates to.