tomaskavka.dev
A personal site built as an experiment in dual-layer content — every page has a visual layer for humans and a structured AI view for LLMs, eliminating the copy-paste step into Claude, ChatGPT, and Perplexity.
The problem
I kept copy-pasting web content into Claude to analyze, summarize, or work with it. Every time — select text, copy, switch tabs, paste, lose formatting. The web wasn't built for how we actually consume information now. What if pages could speak both languages natively?
The question
How will we consume information on the web in the future?
Today’s pattern is broken: you read a page, then copy-paste it into an AI to actually work with it. Summarize this article. Compare these two APIs. Extract the key decisions from this RFC. The web page is just an intermediary — a thing you screenshot or paste before the real work begins.
This site is an experiment in shortening that workflow. Not necessarily the final answer — but a way to explore the question.
The idea: dual-layer content
The concept is simple — every page renders two views from the same source:
- Visual layer — what you see. Typography, layout, device mockups, blueprint drawings
- AI view — an overlay where you can write a prompt or pick shortcuts like “Summarize” or “Extract key points,” see exactly what context will be sent, and choose whether to open the answer in Claude, ChatGPT, or Perplexity
No copying. No switching tabs. The headline says it all: “Skip copy-paste.”
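The "open the answer in Claude, ChatGPT, or Perplexity" step can be sketched as plain deep links: combine the prompt with the page context and URL-encode the result. The query-parameter patterns below are assumptions about each tool's URL scheme, not taken from the site's source.

```typescript
// Sketch: build "open in AI tool" links from a prompt plus page context.
// The query-param URL patterns are assumptions, not verified contracts.
const AI_TOOLS = {
  claude: "https://claude.ai/new?q=",                 // assumed format
  chatgpt: "https://chatgpt.com/?q=",                 // assumed format
  perplexity: "https://www.perplexity.ai/search?q=",  // assumed format
} as const;

type Tool = keyof typeof AI_TOOLS;

// Combine the user's prompt (or a shortcut like "Summarize") with the
// page's Markdown context into a single URL the overlay can open.
function buildToolLink(tool: Tool, prompt: string, context: string): string {
  const payload = `${prompt}\n\n---\n\n${context}`;
  return AI_TOOLS[tool] + encodeURIComponent(payload);
}
```

This keeps the context preview honest: the exact string shown to the user is the exact string encoded into the link.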
How it evolved
Getting to the final UX took three iterations:
- Cyberpunk mode — a visual switch to a separate layer with raw Markdown for copying. Looked strange, and the user still had to copy manually
- Floating button — copied Markdown to clipboard and prompted the user to paste into a chatbot. Better, but the switching step remained
- Overlay with input — the version that stuck. Prompt field, predefined shortcuts, context preview, and direct links to AI tools
Under the hood, each page computes an aiMarkdown prop from the same content collections that power the visual rendering. No separate CMS, no sync issues — one source, two outputs.
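The single-source, two-output idea can be sketched as one serialization function over a content entry. The field names below are illustrative assumptions, not the site's actual collection schema.

```typescript
// Sketch: one content entry feeds both the visual layer and the AI view.
// Field names are illustrative assumptions, not the real schema.
interface ContentEntry {
  title: string;
  description: string;
  tags: string[];
  body: string; // Markdown that also drives the visual rendering
}

// Serialize the same entry into the aiMarkdown string the overlay sends.
// Metadata the design shows implicitly (title, tags) is made explicit.
function toAiMarkdown(entry: ContentEntry): string {
  return [
    `# ${entry.title}`,
    ``,
    `> ${entry.description}`,
    ``,
    `Tags: ${entry.tags.join(", ")}`,
    ``,
    entry.body,
  ].join("\n");
}
```

Because both outputs derive from the same entry, there is nothing to keep in sync — a change to the content changes both views.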
What I learned
The dual-layer approach didn’t convince me that this is the future of the web. What it reinforced is that we often use the web today in a way it wasn’t originally designed for — as input context for AI tools, with the page itself being just an intermediary.
Maybe the more interesting direction is something like WebMCP — structured interfaces that AI agents can work with directly, instead of scraping HTML.
Open questions
This is version one. The experiment raised more questions than it answered:
- Is dual-layer content worth the effort, or will standards like WebMCP make it redundant?
- What interaction patterns emerge when content is natively machine-readable?
- Does this change what you write, knowing both humans and LLMs will read it?
Implementation notes
The dual-layer approach is surprisingly cheap to implement. Astro's content collections already structure the data — the AI view is just a different serialization of the same source. The harder question is what to put in the AI layer: too little and it's useless, too much and you're duplicating the page. The sweet spot is structured context that the visual design conveys implicitly but LLMs need explicitly.
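One concrete reading of that sweet spot: turn the hierarchy that typography conveys implicitly into explicit structure. A hypothetical helper, not the site's actual code, might derive an indented outline from the page's Markdown headings:

```typescript
// Sketch: surface document structure explicitly for LLMs. The visual
// layer conveys hierarchy through typography; the AI layer can state it
// outright as an outline. Hypothetical helper, not the site's code.
function extractOutline(markdown: string): string[] {
  return markdown
    .split("\n")
    .filter((line) => /^#{1,6}\s/.test(line))
    .map((line) => {
      const depth = line.match(/^#+/)![0].length;
      // Indent two spaces per heading level to make nesting explicit.
      return "  ".repeat(depth - 1) + line.replace(/^#+\s*/, "");
    });
}
```

Prepending such an outline to the AI view adds structure without duplicating the page body.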