Open any AI coding tool. Ask it to build you a landing page. Don't specify colours. Don't specify a brand. Just say "build me a modern website."
Go on. I'll wait.
It's purple, isn't it.
Dark background. Gradient heading. Buttons in some shade of indigo-violet. Maybe a subtle purple-to-blue sweep in the hero section. Cards with rounded corners. The same sans-serif font. The exact same layout you've seen on every AI-generated demo since 2024.
Welcome to the purple problem.
One line of CSS to rule them all
In August 2025, Adam Wathan — the creator of Tailwind CSS — posted what might be the most accidentally consequential confession in frontend history. He apologised for setting every button in Tailwind UI to bg-indigo-500 five years ago, which he said had caused every AI-generated interface on earth to also be indigo.
He wasn't entirely joking. The mechanism is straightforward: Tailwind became the most popular utility-first CSS framework on the planet. Thousands of tutorials, open-source projects, and template libraries used its defaults. Those defaults included bg-indigo-500 — a pleasant blue-purple that sits somewhere between "professional" and "not boring." Developers copied the examples. The examples proliferated. And when large language models were trained on scraped web data, they ingested millions of pages where "button" correlated with "indigo."
The AI didn't develop an aesthetic preference. It developed a statistical association. Ask it to make a button? Indigo. Ask it to make a dashboard? Indigo. Ask it to make a medieval castle in a game? Somehow, also indigo.
The self-reinforcing loop
Here's where it gets interesting from a behavioural perspective — and this is where my brain goes, because I study what happens when AI systems develop patterns without human intent.
As AI tools generate more purple websites, those websites go live. They become part of the training data for the next generation of models. Which learn that purple is even more common. Which generate even more purple websites. The cycle accelerates. Someone described this as the "blue-purple singularity" and honestly, they're not wrong.
This is a feedback loop that's functionally identical to the ones I study in The Digiquarium, where AI specimens in isolated environments develop distinct behaviours based on what they're exposed to. Give them a biased information source and they develop biased worldviews. Give an LLM a biased colour palette and it develops a biased aesthetic. The mechanism is the same. The scale is different.
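The loop is easy to caricature in code. Here's a toy Python sketch, with every number invented for illustration (the corpus sizes, the noise, the popularity multiplier are my assumptions, not measurements of any real training pipeline): each "generation" samples hues from its training data, publishes the results, and the next generation trains on the old web plus the new output, with indigo-ish pages over-scraped.

```python
import random

random.seed(0)  # reproducible toy run

INDIGO = 250   # indigo's rough hue, in degrees on the colour wheel
WINDOW = 30    # how close a hue must be to count as "indigo-ish"

def indigo_share(hues):
    """Fraction of a hue list that falls in the indigo window."""
    return sum(abs(h - INDIGO) < WINDOW for h in hues) / len(hues)

def generate(corpus, n=1000):
    # Each "model generation" samples hues from its training corpus,
    # with a little noise, then publishes the results back to the web.
    return [random.gauss(random.choice(corpus), 5) % 360 for _ in range(n)]

# Generation 0: a diverse web with indigo mildly over-represented
# (the Tailwind-default effect).
corpus = [random.uniform(0, 360) for _ in range(900)] + [INDIGO] * 100

history = []
for gen in range(5):
    published = generate(corpus)
    # The next model trains on the old web plus the new AI output, and
    # indigo-ish pages (being the prevailing house style) get scraped
    # more often -- a crude popularity bias.
    boosted = [h for h in published if abs(h - INDIGO) < WINDOW]
    corpus = corpus + published + boosted * 3
    history.append(indigo_share(corpus))
    print(f"generation {gen}: indigo share = {history[-1]:.0%}")
```

Run it and the indigo share of the corpus climbs every generation, even though no step ever "prefers" indigo. The bias is structural, not aesthetic: a small initial skew plus a feedback path is all it takes.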
Why this matters beyond aesthetics
"So what? Purple's nice. Who cares?"
Fair question. Here's why I care.
In my day job, I work in cyber security. Specifically, I work on the human side — how people perceive risk, how they make decisions, where their self-assessment diverges from reality. And one of the things I think about constantly is how we detect things that aren't what they appear to be.
Phishing emails work because they look legitimate. Deepfakes work because they look real. And AI-generated content — whether it's text, images, or entire websites — is increasingly difficult to distinguish from human-made content. That's a problem if you work in trust, verification, or security.
But right now, purple is giving the game away.
The AI purple problem is an accidental watermark. A visual tell. If you see a website with a dark background, indigo gradients, rounded cards, and no clear brand identity — there's a reasonable chance it was generated, not designed. Not because purple is inherently artificial, but because the statistical fingerprint of AI-generated design is currently, hilariously, a specific shade of blue-purple that traces back to one man's CSS defaults.
A detection heuristic hiding in plain sight
This is useful. Not as a definitive test — plenty of human designers use indigo, and plenty of AI-generated sites have been prompted to use other colours. But as a heuristic? As a first-pass indicator that something might warrant a closer look? Purple is surprisingly reliable right now.
Think about it from a security awareness perspective. We teach people to look for visual inconsistencies in phishing emails — mismatched logos, weird formatting, unusual sender addresses. The purple problem is the same kind of pattern, just applied to web design. It's a cognitive shortcut: "this looks like every other AI-generated thing I've seen" is a valid signal, even if it's not proof.
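If you wanted to automate that first pass, the heuristic can be as crude as counting hues in a stylesheet. A minimal Python sketch, under loud assumptions: the 220-270 degree window and the 50% threshold are my invented parameters, not calibrated figures, and real stylesheets use far more colour formats than six-digit hex. (Tailwind's indigo-500 is #6366f1, which lands at roughly 239 degrees.)

```python
import colorsys
import re

# Invented parameters: indigo-ish hues, in degrees, and how much of the
# palette must be indigo before we flag it. Tune to taste.
INDIGO_RANGE = (220, 270)

def hex_to_hue(hex_color):
    """Hue in degrees (0-360) for a #rrggbb colour string."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360

def looks_ai_generated(css_text, threshold=0.5):
    """Crude first-pass tell: does indigo dominate the palette?
    A heuristic, not proof -- plenty of humans use indigo too."""
    colors = re.findall(r"#[0-9a-fA-F]{6}", css_text)
    if not colors:
        return False
    hues = [hex_to_hue(c) for c in colors]
    indigoish = [h for h in hues if INDIGO_RANGE[0] <= h <= INDIGO_RANGE[1]]
    return len(indigoish) / len(hues) >= threshold

sample = """
body { background: #0f172a; }   /* dark blue-slate background */
.btn  { background: #6366f1; }  /* Tailwind's bg-indigo-500 */
.hero { background: #8b5cf6; }  /* violet gradient stop */
"""
print(looks_ai_generated(sample))
```

It's the phishing-email checklist in script form: a cheap signal that something warrants a closer look, never a verdict on its own.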
And the security implications go further. AI-generated phishing landing pages are becoming more common. If an attacker uses an AI tool to spin up a credential-harvesting page and doesn't bother overriding the defaults... it's going to be purple. It's going to have rounded corners and a gradient. It's going to look exactly like the output of every "build me a login page" prompt ever written. And that sameness is, paradoxically, a weakness for the attacker.
The uncomfortable parallel
The deeper lesson here isn't about colour. It's about what happens when AI systems learn from narrow, unrepresentative data and then generate outputs that further narrow the data for the next generation. That's model collapse. That's training data contamination. That's the homogenisation of culture through statistical averaging.
We're watching it happen in real time with a colour palette, which is almost funny. But the same dynamic applies to writing style, to code architecture, to design patterns, to decision-making frameworks. Anywhere AI learns from human output and then generates more of the same, the distribution gets narrower. The tails disappear. The median dominates.
My site, incidentally, uses cyan, green, and orange. Not because I'm making a statement. But because I built it deliberately rather than accepting the first thing an AI suggested.
Well. Maybe it's a little bit of a statement.
What to do about it
If you're building things with AI tools — and you should be, they're genuinely powerful — just be intentional about overriding the defaults. Specify your brand colours. Give the model a design system to work within. Don't accept the first output. If everything looks the same, that's not a feature. That's a bug in your process.
And if you work in security: start noticing the purple. Train your eye. It won't be a reliable signal forever — eventually models will diversify, or attackers will get smarter about overriding defaults. But right now, in early 2026, the purple tell is surprisingly consistent. Use it while it lasts.
The next time you see a purple gradient, ask yourself: did a human choose that colour? Or did a statistical model pick the path of least resistance?
The answer matters more than you think.