What makes the AI chatbots and agents feel light and clean, here and now in 2026? Is it an innate architectural resistance to advertising, to attention hacks, to adversarial crud? No — it’s that they are simply new! The language models in 2026 are Google in 1999, Twitter in 2009. Their vast conjoined industry of influence hasn’t yet arisen … though it is stirring.
Missed this news last week, but Microsoft is working on a content marketplace to help creators license access to their work to AI providers. Now comes word from an AWS conference that Amazon is doing the same thing. This follows the release of the “official” Really Simple Licensing (RSL) spec, which gives content providers a way to communicate access and licensing terms for their content to bots. So far, though, no bots I’m aware of have indicated they’ll support RSL. Google has made some noise about monetization tools, but I’m not aware of them pursuing anything along these lines.
More reading:
- Microsoft says it’s building an app store for AI content licensing at The Verge 🔒
- A pay-to-scrape AI licensing standard is now official at The Verge 🔒
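The core idea behind specs like RSL is that licensing terms live in a machine-readable document a bot can check before it scrapes. As a rough sketch of that idea (the element names and document shape below are hypothetical illustrations, not the actual RSL schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical licensing document -- illustrative only, NOT the real RSL schema.
LICENSE_XML = """
<license>
  <content url="https://example.com/articles/">
    <permits use="search" />
    <permits use="ai-training" payment="per-crawl" />
  </content>
</license>
"""

def allowed_uses(xml_text: str, page_url: str) -> dict:
    """Return {use: payment-or-None} for the content block covering page_url."""
    root = ET.fromstring(xml_text)
    uses = {}
    for content in root.iter("content"):
        # A bot would match the page it wants against the declared URL prefix.
        if page_url.startswith(content.get("url", "")):
            for permit in content.iter("permits"):
                uses[permit.get("use")] = permit.get("payment")
    return uses

print(allowed_uses(LICENSE_XML, "https://example.com/articles/2026/some-post"))
# {'search': None, 'ai-training': 'per-crawl'}
```

A crawler built this way could decline to fetch pages whose terms it hasn’t agreed to, which is the behavior the marketplaces above would presumably broker payment for.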
I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too.
Ted Chiang, via Kottke
Last spring I decided I wanted to restart this blog, and I realized I could use it as an opportunity to get more comfortable with the new generation of AI coding platforms. I wrote about the experience on the Viget blog. I’ve also included the post after the jump for posterity.
Catching up on some reading today and came across this:
[Claude Code] has the potential to transform all of tech. I also think we’re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away. (This is not to say that there aren’t many issues with AI aside from these things, of course.)
It wouldn’t surprise me to see more artisanal teams, startups, and small businesses spring up to give that second set a home. But I think we’ll also see more startups and projects created by the first set, too.
That’s Ben Werdmuller (emphasis mine), responding to Simon Willison’s recap of the year in LLMs (Simon also published some predictions for 2026).
Going to start collecting different AI design metaphors. Drew Breunig described Gods, Interns, and Cogs. Geoffrey Litt is thinking about Copilots and HUDs.
Would love to see more documentation, like this March 2025 post from Google, educating content creators on how to live in the AI search future.
I love a good nerdy usability blog post (thx Baymard)
tl;dr: Dropdowns kinda suck except in very limited and specific circumstances.
Reuters made a little game about Cozy Games (via NiemanLab)
Related, I don’t think I would have survived 2020 nearly as well without the Untitled Goose Game
Learning about DeepSeek today …
- This great FAQ from Ben Thompson digs into why this is a big deal, why it might be good in the long run, and also provides nice background on different reinforcement learning approaches.
- You’d be right to think DeepSeek, an AI model from a Chinese company, doesn’t want to talk about some things.
- DeepSeek was surprisingly cheap to train and is very cheap to use (compared to similar models from OpenAI and Anthropic).
After plenty of discussions and tons of exploration, I think we can simplify the world of AI use cases into three simple, distinct buckets:
- Gods: Super-intelligent, artificial entities that do things autonomously.
- Interns: Supervised copilots that collaborate with experts, focusing on grunt work.
- Cogs: Functions optimized to perform a single task extremely well, usually as part of a pipeline or interface.
