The post
Here's a LinkedIn post. Anthropic published it last week — 81,000 people interviewed about their hopes and fears for AI. Carousel, two links. You've scrolled past a hundred like it.
Now imagine handing this to an AI with zero context and saying: "write something like this." What would it need to know?
Turns out, quite a lot. And almost none of it is about writing.
The artifact
Start with what's in front of you. A LinkedIn post.
Text, maybe an attachment — image, carousel, video. No headlines, no formatting beyond line breaks. A character limit exists, but it barely matters: the text folds after the first line or two, and everything before "see more" is the only thing guaranteed to be read.
We're writing it for people to read. So two things need to happen. The post needs to show up — that's the algorithm, which rewards engagement, dwell time, conversation. And someone needs to actually stop and read — that's the first line, the visual, the opening beat.
A carousel earns swipes, which counts as engagement. A screenshot or image earns a pause. The text either earns the expand or it doesn't. If it doesn't, nothing else matters.
This is the artifact level. How LinkedIn works. True whether you're posting about research, a product launch, or a hiring push. The channel doesn't care about your message. It has its own physics.
The campaign
Zoom out. What are we posting about this time?
There's the research — what they found. 80,508 conversations across 159 countries, 70 languages. Hopes, fears, and the central finding: they're entangled. The same capability that helps also threatens. Both live inside the same person.
There's the tool — Anthropic Interviewer. A version of Claude that conducts conversations, listens, adapts follow-up questions. A product, or at least a named capability worth surfacing.
There's the methodology — 81,000 in-depth qualitative interviews in a week. That wasn't a thing before. The research team calls it "a new form of social science." The study about AI is itself proof that AI works.
And there are the stories. A Ukrainian soldier using AI for emotional survival during war. A butcher in Chile who'd touched a PC three times in his life, now building software. A mute worker who built a text-to-speech bot and talks to friends for the first time.
Four kinds of substance inside one study. Each demands its own context and depth of knowledge. Each could lead a different post. Someone picks.
The strategy
Zoom out again. Why this angle? Why now?
An AI company could talk about this research in a dozen ways. Lead with scale. Lead with stories. Lead with methodology as proof of capability. Lead with the entanglement finding. Lead with the audience's own question — "is AI real yet for my work?" — and answer it with 81,000 first-person accounts.
These are strategic options — the menu of moves available right now. Not all at once. You pick one and make it specific: this is for these people, the thing that should stick is this, the shift we're going for is that.
That's the strategy-level decision. And it's the smallest piece of the whole puzzle — maybe three sentences. But it's the piece that determines whether the post has a point or just has information.
(This is also, if we're being honest, the piece most often skipped. Not because people are lazy — because by the time you've gathered the research summary and checked the brand guidelines and figured out what the carousel template looks like, you're already late and the post needs to go out today.)
The brand
Zoom out one more time.
Anthropic is the AI safety company — building frontier AI while thinking hardest about what could go wrong. Capability and responsibility, held in tension. That's the founding story, the hiring pitch, the public identity.
This study is that identity turned into action. They used their own AI to ask 81,000 people what scares them about AI. And the central finding — that hopes and fears are entangled, produced by the same capabilities — is their worldview confirmed by data.
The research isn't content marketing. They need this data to build responsibly. The post is a byproduct of the actual work. Which is the most credible kind of brand communication there is — earned through action, not crafted through messaging.
The brand level doesn't change per post. It doesn't change per campaign. It's the foundation that was there before anyone opened a blank LinkedIn draft, and it'll be there after. It determines not what you say, but what only you can say.
The assembly
So here's what a full prompt might look like — an assembly of four blocks, each owned by someone who already knows their piece.
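As a minimal sketch, the assembly can be pictured as concatenation: four standing blocks plus one per-post task, joined into a single prompt. The spec names and contents below are illustrative placeholders, not Anthropic's actual documents.

```python
# Hypothetical sketch of the four-block assembly.
# Spec names and contents are illustrative, not real internal documents.

SPECS = {
    "Brand Spec": "Anthropic is the AI safety company: frontier capability "
                  "held in tension with responsibility.",
    "Channel Spec": "LinkedIn: everything before 'see more' must earn the "
                    "expand; carousels earn swipes, images earn a pause.",
    "Feature Spec": "Study: 80,508 conversations, 159 countries, 70 languages. "
                    "Central finding: hopes and fears are entangled.",
    "Intent Spec": "For working professionals asking 'is AI real yet for my "
                   "work?' The one thing that should stick: the people hoping "
                   "and the people fearing are the same people.",
}

def assemble_prompt(specs: dict[str, str], task: str) -> str:
    """Concatenate the standing specs plus the per-post task into one prompt."""
    blocks = [f"## {name}\n{body}" for name, body in specs.items()]
    blocks.append(f"## Task\n{task}")
    return "\n\n".join(blocks)

print(assemble_prompt(SPECS, "Write the LinkedIn post."))
```

The point of the sketch is the ownership model, not the code: three of the four blocks are maintained elsewhere and simply referenced; only the Intent Spec and the task line are written fresh.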
This looks like a creative brief. It should. It's text, and it describes a task. The difference is who writes it and when. A brief is a one-off artifact. Specs are infrastructure.
A brief is authored from scratch by the person who happens to be writing the post. They reconstruct the brand positioning from memory, guess the channel constraints, skim the research for a pull quote, and invent the angle under deadline. It's a scavenger hunt dressed up as a document.
Here, each block already exists. The research team wrote the Feature Spec when the study shipped — not for this post, but because the study needed documenting. The brand team maintains the Brand Spec — it doesn't change per post or per campaign. The social lead keeps the Channel Spec current — how LinkedIn works this quarter, what formats earn reach. None of these people are writing a brief. They're maintaining what they know, in a form that can travel.
The only block that's genuinely new each time is the Intent Spec. What's this post for, who's it for, what's the one thing that should stick. Three sentences. That's the actual creative decision. Everything else was supposed to be there before anyone opened a blank draft.
(And if it isn't there — if the brand positioning lives in someone's head and the channel knowledge lives in someone else's gut — then the person writing the post has to reinvent all four levels from scratch. Which is exactly what happens. Every time.)
The thread
I wrote about this structural problem in "Why Marketing Needs Specs" — the thinking never assembles, knowledge doesn't compound, you can't debug what you can't see. This is what that looks like from the inside of one post. The knowledge existed at every level. It just didn't travel.
And remember the opening — hand this to an AI with zero context. That's not really a thought experiment about AI. That's what happens to every new hire, every agency partner, every team member who picks up a brief and doesn't have the full picture. They fill the gaps. With judgment if they're experienced. With the average of everything they've seen if they're not.
Sound familiar? It should. That's how AI fills gaps too.