Ilya Petrov

Growth You Get. Every Tuesday, 7am CET

What Went Into This Post

Every LinkedIn post carries an invisible mountain of knowledge. Let's climb one.

The post

Here's a LinkedIn post. Anthropic published it last week — 81,000 people interviewed about their hopes and fears for AI. Carousel, two links. You've scrolled past a hundred like it.

Now imagine handing this to an AI with zero context and saying: "write something like this." What would it need to know?

Turns out, quite a lot. And almost none of it is about writing.

The artifact

Start with what's in front of you. A LinkedIn post.

Text, maybe an attachment — image, carousel, video. No headlines, no formatting beyond line breaks. Character limit exists but doesn't matter because the text folds after the first line or two. Everything before "see more" is the only thing guaranteed to be read.

We're writing it for people to read. So two things need to happen. The post needs to show up — that's the algorithm, which rewards engagement, dwell time, conversation. And someone needs to actually stop and read — that's the first line, the visual, the opening beat.

A carousel earns swipes, which counts as engagement. A screenshot or image earns a pause. The text either earns the expand or it doesn't. If it doesn't, nothing else matters.

This is the artifact level. How LinkedIn works. True whether you're posting about research, a product launch, or a hiring push. The channel doesn't care about your message. It has its own physics.

The campaign

Zoom out. What are we posting about this time?

There's the research — what they found. 80,508 conversations across 159 countries, 70 languages. Hopes, fears, and the central finding: they're entangled. The same capability that helps also threatens. Both live inside the same person.

There's the tool — Anthropic Interviewer. A version of Claude that conducts conversations, listens, adapts follow-up questions. A product, or at least a named capability worth surfacing.

There's the methodology — 81,000 in-depth qualitative interviews in a week. That wasn't a thing before. The research team calls it "a new form of social science." The study about AI is itself proof that AI works.

And there are the stories. A Ukrainian soldier using AI for emotional survival during war. A butcher in Chile who'd touched a PC three times in his life, now building software. A mute worker who built a text-to-speech bot and talks to friends for the first time.

Four kinds of substance inside one study. Each requires context and in-depth knowledge. Each could lead a different post. Someone picks.

The strategy

Zoom out again. Why this angle? Why now?

An AI company could talk about this research in a dozen ways. Lead with scale. Lead with stories. Lead with methodology as proof of capability. Lead with the entanglement finding. Lead with the audience's own question — "is AI real yet for my work?" — and answer it with 81,000 first-person accounts.

These are strategic options — the menu of moves available right now. Not all at once. You pick one and make it specific: this is for these people, the thing that should stick is this, the shift we're going for is that.

That's the strategy-level decision. And it's the smallest piece of the whole puzzle — maybe three sentences. But it's the piece that determines whether the post has a point or just has information.

(This is also, if we're being honest, the piece most often skipped. Not because people are lazy — because by the time you've gathered the research summary and checked the brand guidelines and figured out what the carousel template looks like, you're already late and the post needs to go out today.)

The brand

Zoom out one more time.

Anthropic is the AI safety company — building frontier AI while thinking hardest about what could go wrong. Capability and responsibility, held in tension. That's the founding story, the hiring pitch, the public identity.

This study is that identity turned into action. They used their own AI to ask 81,000 people what scares them about AI. And the central finding — that hopes and fears are entangled, produced by the same capabilities — is their worldview confirmed by data.

The research isn't content marketing. They need this data to build responsibly. The post is a byproduct of the actual work. Which is the most credible kind of brand communication there is — earned through action, not crafted through messaging.

The brand level doesn't change per post. It doesn't change per campaign. It's the foundation that was there before anyone opened a blank LinkedIn draft, and it'll be there after. It determines not what you say, but what only you can say.

The assembly

So here's what a full prompt might look like — an assembly of four blocks, each owned by someone who already knows their piece.

# LinkedIn Post: Global AI Perception Study, March 2026

**Channel Spec: LinkedIn Company Page**
*Owned by: social/channel lead. Updated quarterly.*

Format: text post + carousel attachment (up to 20 slides, designed separately). Text and carousel complement, not repeat — text carries the argument, carousel carries evidence and data.

Text folds after ~2 lines on mobile, ~3 on desktop. Everything before "see more" is the only guaranteed impression — treat the first line as a headline even though LinkedIn doesn't have headlines. Front-load the number or the tension, not the company name.

Algorithm signals, in rough order of weight: comment velocity in the first hour, dwell time (long-form text that holds attention), saves, shares to DMs (dark social — high signal), clicks to external links (counts but competes with dwell). Carousel swipes count as engagement and extend session time. Native documents (.pdf uploads) slightly outperform image carousels for reach but underperform for engagement quality.

Company page posts index lower than personal profiles on organic reach — budget 15–20% of the personal-profile benchmark. Offset with employee reshares (first 30 minutes), strategic tagging, and comment seeding from team accounts. Don't ask questions in the post to prompt comments — it reads as engagement bait on company pages and suppresses reach.

Audience composition for Anthropic's page skews ML researchers, developers, and tech-adjacent knowledge workers. Policy and safety content draws a different cohort — expect slower engagement but higher share-to-comment ratio.

**Feature Spec: "What 81,000 People Want From AI"**
*Owned by: research team. Written when the study ships.*

Study: In December 2025, we invited Claude.ai users to have an open conversation with Anthropic Interviewer — a version of Claude designed to conduct qualitative interviews. It listens, adapts follow-up questions, and lets participants steer the conversation toward what matters to them.

Scale: 80,508 participants. 159 countries. 70 languages. The methodology itself is novel — in-depth qualitative research at this scale wasn't possible before AI. The research team describes it as "a new form of social science."

Central finding: hopes and fears about AI aren't opposing camps. They're tensions held within the same person. The capability that helps you learn might erode your thinking. The time you save gets eaten by a faster treadmill. Five paired tensions emerged, all entangled.

Notable stories: a Ukrainian soldier using AI for emotional survival during war; a butcher in Chile who'd touched a PC three times in his life, now building software with AI; a mute factory worker who built a text-to-speech bot and speaks to friends for the first time; a lawyer in India who overcame a lifelong math phobia.

Assets: full research article (long-form, ~8,000 words), interactive quote wall (browseable by region, concern, vision), methodology appendix, downloadable dataset.

**Intent Spec: Announcement Post**
*Owned by: campaign/product marketing lead. Written per post.*

Goal: announce the study and drive traffic to the full article and quote wall. The post is a teaser — people should click, not walk away satisfied. Lead with the scale and credibility of the methodology. The "entanglement" finding is the hook, not the stories (save those for follow-up posts).

**Brand & Positioning Spec: Anthropic**
*Owned by: brand/comms. Updated annually or at strategic inflection points.*

Core narrative: Anthropic is the AI safety company — building frontier AI while thinking hardest about what could go wrong. Capability and responsibility, held in tension. Not anti-AI, not utopian. The tension is the identity.

This study is that identity in action: we used our own AI to ask 81,000 people what they fear about our AI. The central finding — benefits and risks aren't separate, they're entangled — mirrors the founding thesis. This is not content marketing. We need this data to build responsibly. The post is a byproduct of the work, not the point of the work.

Voice on institutional channels: factual, understated, precise. Let the numbers and the participants speak. Don't editorialize findings, don't celebrate the company, don't frame it as a PR moment. The credibility comes from restraint.

This looks like a creative brief. It should. It's text, and it describes a task. The difference is who writes it and when. A brief is a one-off artifact. Specs are infrastructure.

A brief is authored from scratch by the person who happens to be writing the post. They reconstruct the brand positioning from memory, guess the channel constraints, skim the research for a pull quote, and invent the angle under deadline. It's a scavenger hunt dressed up as a document.

Here, each block already exists. The research team wrote the Feature Spec when the study shipped — not for this post, but because the study needed documenting. The brand team maintains the Brand Spec — it doesn't change per post or per campaign. The social lead keeps the Channel Spec current — how LinkedIn works this quarter, what formats earn reach. None of these people are writing a brief. They're maintaining what they know, in a form that can travel.

The only block that's genuinely new each time is the Intent Spec. What's this post for, who's it for, what's the one thing that should stick. Three sentences. That's the actual creative decision. Everything else was supposed to be there before anyone opened a blank draft.

(And if it isn't there — if the brand positioning lives in someone's head and the channel knowledge lives in someone else's gut — then the person writing the post has to reinvent all four levels from scratch. Which is exactly what happens. Every time.)
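If you want the mechanics made literal, here's a minimal sketch in Python of what "specs as infrastructure" could look like: three standing blocks that already exist, plus one fresh intent block, joined into a single prompt. Every name and placeholder body below is illustrative, not an actual tool or file layout.

```python
# Standing specs: maintained by their owners, not rewritten per post.
# Contents are placeholders standing in for the full spec text.
STANDING_SPECS = {
    "Channel Spec: LinkedIn Company Page": (
        "Owned by: social/channel lead. Updated quarterly.\n"
        "Format, fold behavior, algorithm signals, audience notes."
    ),
    "Feature Spec: What 81,000 People Want From AI": (
        "Owned by: research team. Written when the study ships.\n"
        "Study design, scale, central finding, stories, assets."
    ),
    "Brand & Positioning Spec: Anthropic": (
        "Owned by: brand/comms. Updated annually.\n"
        "Core narrative, voice, what only we can say."
    ),
}

def assemble_prompt(task: str, intent_spec: str) -> str:
    """Join the standing specs with the one per-post intent block."""
    blocks = [f"# {task}"]
    for title, body in STANDING_SPECS.items():
        blocks.append(f"**{title}**\n{body}")
    # The only freshly written piece: three sentences of intent.
    blocks.append(f"**Intent Spec**\n{intent_spec}")
    return "\n\n".join(blocks)

prompt = assemble_prompt(
    "LinkedIn Post: Global AI Perception Study",
    "Announce the study and drive clicks to the full article. "
    "The hook is the entanglement finding, not the stories.",
)
```

The point of the sketch is the asymmetry: three of the four inputs are lookups, and only `intent_spec` is authored at posting time.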

The thread

I wrote about this structural problem in "Why Marketing Needs Specs" — the thinking never assembles, knowledge doesn't compound, you can't debug what you can't see. This is what that looks like from the inside of one post. The knowledge existed at every level. It just didn't travel.

And remember the opening — hand this to an AI with zero context. That's not really a thought experiment about AI. That's what happens to every new hire, every agency partner, every team member who picks up a brief and doesn't have the full picture. They fill the gaps. With judgment if they're experienced. With the average of everything they've seen if they're not.

Sound familiar? It should. That's how AI fills gaps too.