Hey, AI, Make Me a Growth Plan (2/?): The Research
There's a number in every strategy deck. Everyone nods at it. Now AI found it too, cited the source, made a nice table. It's on the Internet! So it must be true.
I'm exploring an AI-only workflow, so I'll skip the "AI as assistant" angle — using it to process your data, clean transcripts, summarize interviews. That works fine. The interesting question is whether AI can do the research itself.
Three modes of AI research are in play here.
Gathering — AI reads the web, pulls reports, synthesizes openly available information. It's doing the legwork, not the thinking.
Referencing — AI finds someone else's findings and presents them as building blocks. It's a librarian, not a researcher.
Pattern-surfacing — and this one's easy to miss. AI itself is a form of research. It's a probability engine trained on the world's text, which means its outputs lean toward whatever conclusions were most common in the training data. This makes it surprisingly good for discovery — surfacing patterns, mapping landscapes, finding things you didn't know to look for. And genuinely dangerous for decisions — because popular conclusions and correct conclusions aren't the same thing.
The make-or-break question turns out to be simple: can AI access the data? Public information, published reports, open web — it handles those well. Licensed databases, gated research — it can work with them if connected, but usually isn't. Your internal data — CRM, product analytics, sales pipeline — obviously not available unless you link it properly. And primary research — sitting across from a customer, hearing the hesitation in their voice — not even close (just yet).
For our experiment, this means AI can handle chapters 1 through 3 (market, audience, competitors) and honestly can't touch chapter 4 (internal data). Claude actually recommended skipping it. Nice! Knowing what you can't do is underrated, even for a probability engine.
I've been looking at data for two decades, and here's what I believe: people use numbers as a habit. Not as a thought — instead of a thought.
Say we learn there are 43 million developers. Or 37 million. How does this change your plan? It doesn't. You nod, put the number in a slide, and move on. The number creates the feeling of rigor without producing any actual insight.
There are times you need numbers. Often not.
What I'm actually looking for are stories and repetitions. What's going on, not "how much exactly." When independent sources tell the same directional story, that's signal. When a number appears in every pitch deck but traces back to a single survey with a 260-person sample — that's decoration.
This matters more with AI, because AI is exceptionally good at producing decoration. It pulls numbers confidently, presents them cleanly, and never flags that maybe you should check the methodology. The most dangerous thing about AI research isn't hallucination — it's confirmation theater. You already had a hypothesis. AI made it look researched. The test is simple: did the research change any of your priors? If not, it wasn't research. It was a better-dressed version of what you already believed.
Looking back at the growth plan frames from Part 1, only one was detailed enough to drive research: the first one, from the direct story-based prompt. The rest covered the same territory with less depth.
I did a meta-prompt exercise with Claude Opus — showed it our work so far, shared my requirements, and asked it to create a research prompt. It proposed a chapter-by-chapter approach. I fed the prompts into deep research with extended thinking.
Three things I was solving for in every prompt: structured output I can review without drowning, reasoning transparency so I can catch bad assumptions early, and validation hooks so I can check claims against reality. The tabs below have the very long prompts (which are mostly assembled from the structure documents in Part 1), outcomes, and my reactions.
# Chapter 1: Market Landscape — Deep Research Prompt for CLion
## Context
I'm building a growth plan for **CLion**, JetBrains' cross-platform IDE for C and C++ development. This is Chapter 1 of a five-part growth planning framework focused on Discovery — building a factual foundation before any strategy or opinion.
The output of this chapter directly feeds into subsequent chapters on Customer Understanding, Competitive Landscape, and Current State assessment. Accuracy and source quality matter more than comprehensiveness — I'd rather have fewer data points I can trust than a wall of unverifiable claims.
## What I Need
Work through each of the three sections below **sequentially**. Each section has a defined output artifact — produce it before moving to the next section.
---
### Section 1.1: Market Definition
Define the market CLion operates in. Answer:
- **Problem category:** What job does CLion do? How would a C/C++ developer describe their need before knowing CLion exists?
- **Boundaries:**
- Geographic scope (where does JetBrains sell? any restrictions?)
- Customer type scope (individual developers, teams, enterprises, students, embedded, game dev, systems programming — who's in, who's out?)
- Use case scope (what's clearly in scope for CLion vs. adjacent JetBrains products like ReSharper C++ or Rider?)
- **Adjacent markets** that blur the boundaries: VS Code with C++ extensions, general-purpose IDEs with C++ support, embedded/RTOS-specific toolchains
**Output → Market Definition Statement:** One clear paragraph defining what market we're analyzing, readable by someone with no context.
---
### Section 1.2: Market Sizing
Build TAM/SAM/SOM from the ground up with explicit assumptions.
**TAM:**
- How many C/C++ developers exist globally? (use Stack Overflow surveys, JetBrains Developer Ecosystem surveys, Evans Data, SlashData, GitHub Octoverse, etc.)
- What percentage use or would use a dedicated IDE (vs. text editors, vi/emacs, or basic editors)?
- What would they pay annually? (use JetBrains' own pricing as one benchmark, competitor pricing as others)
**SAM:**
- Which segments can CLion realistically serve given its current capabilities? (e.g., desktop app dev yes, deeply embedded bare-metal maybe less so)
- Geographic or platform constraints?
- What's the realistic subset of TAM?
**SOM:**
- What market share does CLion plausibly hold today? (use any available proxy data: JetBrains survey results, download estimates, community size)
- What share is realistic in 1-3 years based on comparable product trajectories?
**Present the output as a structured table** with columns: Data Point | Value | Source (with URL) | Confidence Level (High/Medium/Low)
---
### Section 1.3: Market Dynamics
Analyze the forces shaping this market:
- **Growth rate:** Is the C/C++ developer tools market expanding or contracting? What's driving this? (Rust adoption impact, embedded/IoT growth, AI/ML infrastructure in C++, gaming, automotive)
- **Technology shifts:** How do AI coding assistants (Copilot, Cursor, etc.) affect the IDE market? What about cloud IDEs? Language server protocol standardization?
- **Competitive dynamics:** Is the market consolidating (VS Code dominance) or fragmenting (specialized tools)?
- **Economic sensitivity:** How does developer tooling spending respond to tech industry cycles?
- **Regulatory/industry factors:** Any relevant trends (automotive safety standards requiring C++, government/defense C++ usage, export controls)?
**Output → Market Dynamics Summary:** A structured brief on where this market is heading and why, with timeframes where possible.
---
## Source and Citation Requirements
These are strict:
1. **Every quantitative claim** (market size, growth rate, user count, revenue figure, percentage) must include a direct clickable URL to the source.
2. **Preferred sources** (in priority order):
- JetBrains Developer Ecosystem Survey (annual)
- Stack Overflow Developer Survey (annual)
- GitHub Octoverse
- SlashData / Developer Nation surveys
- Evans Data Corporation
- TIOBE Index, RedMonk rankings, IEEE Spectrum
- Company earnings reports, press releases, blog posts
- Gartner, IDC, Forrester reports (link to summary if paywalled, note it's paywalled)
3. **When sources conflict**, present both values with their respective sources and note which you consider more reliable and why.
4. **When no direct source exists** and you're estimating, mark it clearly:
`[ESTIMATE: derived from [Source A](URL) and [Source B](URL) — reasoning: ...]`
5. **When making an assumption**, flag it:
`[ASSUMPTION: ... — basis: ...]`
6. **Do not fabricate or approximate URLs.** If you cannot find a direct source for a claim, say so explicitly rather than guessing a URL.
---
## Formatting
- Use clear headers matching the section numbers (1.1, 1.2, 1.3)
- Produce each output artifact as a distinct, labeled block
- Write in direct prose — no filler, no "it's worth noting that," no hedging preamble
- After all three sections, add a final section: **Confidence Gaps** — list the 3-5 data points you're least confident about or couldn't find reliable sources for, so I know where to validate or supplement with internal data
Summary: CLion operates in a ~$1.8–2.7B global C/C++ IDE market serving 18–20 million developers, where it holds an estimated 15–33% share (roughly $40–80M in revenue) alongside Visual Studio and VS Code as the three dominant tools. The market is growing at 6–7% CAGR driven by embedded/automotive/AI infrastructure demand, but faces structural transformation from AI-native coding tools (Cursor, Copilot) and longer-term erosion from Rust adoption — with JetBrains' key strategic challenge being whether deep language intelligence can sustain premium pricing as AI assistants become table-stakes within 2 years.
My thoughts: I like the market definition and the market dynamics summary. Overall, though, too many numbers: it's like trying to cook when most of the ingredients smell off and are past their expiration date. Well, that's what I asked for.
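To see what those ingredients look like up close, here's a minimal sketch of the multiplication chain behind a TAM figure like the one in the summary. Every input is a placeholder I picked for illustration, not a sourced number; the point is how far the headline moves when any single assumption wobbles.

```python
# Toy TAM/SAM/SOM chain. Every input below is a made-up placeholder, not data.
developers = 20_000_000        # global C/C++ developers (varies by survey)
dedicated_ide_share = 0.60     # fraction who'd use a dedicated IDE vs. an editor
price_per_year = 200.0         # blended annual price, USD
serviceable_share = 0.30       # fraction of the TAM CLion could realistically serve
obtainable_share = 0.20        # share of SAM plausibly winnable

tam = developers * dedicated_ide_share * price_per_year
sam = tam * serviceable_share
som = sam * obtainable_share
print(f"TAM ≈ ${tam/1e9:.1f}B   SAM ≈ ${sam/1e6:.0f}M   SOM ≈ ${som/1e6:.0f}M")

# Nudge a single assumption and the headline moves by a billion dollars or more.
for share in (0.4, 0.6, 0.8):
    tam_alt = developers * share * price_per_year
    print(f"dedicated-IDE share {share:.0%} -> TAM ≈ ${tam_alt/1e9:.1f}B")
```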
# Chapter 2: Customer Understanding — Deep Research Prompt for CLion
## Context
I'm building a growth plan for **CLion**, JetBrains' cross-platform IDE for C and C++ development. This is Chapter 2 of the Discovery phase, focused on understanding who buys, why they buy, and how they decide.
**This chapter depends on Chapter 1 (Market Landscape).** Use the market definition and boundaries established there — specifically the problem category (professional C/C++ development tooling), customer type scope, and use case boundaries. If you completed Chapter 1 in a prior session, treat those outputs as given context. If not, use your best understanding of CLion's market positioning.
**Important constraint:** The framework calls for internal data sources (support tickets, sales recordings, NPS scores, churn data) that aren't publicly available. Focus this research on what *can* be found externally — public community signals, reviews, forum discussions, surveys, and observable behavior patterns. Flag where internal data would be needed to complete the picture.
---
## What I Need
Work through each of the three sections below **sequentially**. Each section has a defined output artifact — produce it before moving to the next section.
---
### Section 2.1: Customer Segmentation
Identify 5-7 distinct customer segments for CLion. These should be based on observable differences in needs, behavior, and value — not just demographics.
**Segmentation dimensions to research for each segment:**
- **Who they are:** Role, industry, company size, experience level
- **What they build:** Type of C/C++ work (embedded systems, game engines, desktop apps, OS/systems, HPC/scientific, automotive, robotics, audio/DSP, etc.)
- **Segment size:** How many developers fit this profile? (use developer survey data, industry employment data, GitHub/community signals)
- **Accessibility:** How hard are they to reach? Where do they congregate online and offline?
- **Need intensity:** How much do they need a dedicated C/C++ IDE vs. getting by with alternatives?
- **Willingness to pay:** Budget reality — are they individual purchasers, does the company buy tools, is there a tooling budget?
- **Current tools:** What do they predominantly use today? (VS Code, Visual Studio, vim/emacs, Eclipse CDT, Qt Creator, CLion, other)
- **Fit with CLion:** How well does CLion's current feature set serve their specific workflow?
**Use data from:** JetBrains Developer Ecosystem surveys, Stack Overflow surveys, Reddit communities (r/cpp, r/embedded, r/gamedev, r/C_Programming), Hacker News discussions, LinkedIn job data patterns, industry reports on embedded/automotive/gaming workforce.
**Output → Segment Profiles:** A structured profile for each segment using the template above, with estimated segment size and a preliminary attractiveness rating (High/Medium/Low) based on size × accessibility × need intensity × willingness to pay.
---
### Section 2.2: Buyer Journey Mapping
Map the buying journey for the **top 3 most attractive segments** from 2.1. Developer tool purchases follow distinct patterns depending on whether it's an individual decision or an organizational one — capture both paths where relevant.
**For each segment, map these stages:**
**1. Trigger**
- What event or pain point makes them look for a better C/C++ development environment?
- Is it a new project, frustration with current tools, team scaling, onboarding, platform change?
**2. Research**
- Where do C/C++ developers look for tool recommendations? (specific subreddits, forums, YouTube channels, blogs, colleagues, conference talks)
- What search queries do they use? What questions do they ask?
- How much weight do they give to peer recommendations vs. reviews vs. feature lists?
**3. Evaluation**
- What criteria matter most when choosing a C/C++ IDE? (CMake support, debugger quality, code navigation, refactoring, cross-platform, performance on large codebases, plugin ecosystem, price)
- Do they trial first? For how long?
- Who else is involved in the decision? (for enterprise: procurement, security review, IT approval)
- How do they compare options? (side-by-side testing, reading comparisons, asking peers)
**4. Purchase**
- Individual license vs. team/organizational purchase — how does each work?
- JetBrains subscription model — is this a friction point or accepted?
- What role does the All Products Pack play vs. standalone CLion?
- Common objections or hesitations at the purchase stage?
**5. Onboarding**
- What does successful CLion adoption look like in the first week/month?
- Known pain points: project configuration, CMake/Makefile setup, indexing large codebases, debugger setup for embedded targets
- Where do people get stuck and give up?
**6. Expansion/Advocacy**
- What drives a solo user to advocate for team adoption?
- What drives upgrades (individual → team license, adding plugins)?
- What makes CLion users vocal advocates or detractors?
**Research sources for journey mapping:** Reddit threads about IDE switching decisions, JetBrains community forums, YouTube "CLion vs X" comparison videos, blog posts about C++ development setup, Twitter/X discussions, developer conference talks about tooling.
**Output → Journey Maps:** Narrative journey map for each of the top 3 segments, highlighting the key decision points, friction moments, and influence channels.
---
### Section 2.3: Voice of Customer Data
What do CLion's customers and prospects actually say? This section is research-intensive — go deep into public forums, reviews, and community discussions.
**Sources to mine:**
- **Reddit:** r/cpp, r/Clion, r/C_Programming, r/embedded, r/gamedev — search for CLion mentions, IDE comparison threads, complaints, praise
- **JetBrains issue tracker (YouTrack):** Top voted CLion issues, most discussed feature requests
- **Twitter/X:** Sentiment around CLion, common complaints and praise
- **Hacker News:** Threads about CLion releases, JetBrains discussions
- **YouTube comments:** Under CLion tutorials and comparison videos
- **G2, Capterra, TrustRadius:** CLion reviews — both positive and negative
- **Stack Overflow:** Questions tagged with CLion, common pain points
- **JetBrains blog post comments:** User reactions to CLion feature announcements
**Synthesize findings into these five lists, with representative quotes or paraphrased evidence for each:**
1. **Top 5 reasons people choose CLion** (why they buy/adopt)
2. **Top 5 reasons people don't choose CLion** (why they pass or choose alternatives)
3. **Top 5 reasons people leave CLion** (why they churn — switch to VS Code, Visual Studio, vim, etc.)
4. **Top 5 feature requests** (most wanted capabilities that CLion doesn't have or does poorly)
5. **Top 5 complaints** (ongoing pain points from current users)
**For each item in each list:**
- State the finding
- Provide 2-3 representative quotes or evidence points with source links
- Note which customer segment(s) from 2.1 this applies to most
**Output → Customer Evidence Library:** The five synthesized lists above, organized with source links and segment tags.
---
## Source and Citation Requirements
Same standards as Chapter 1:
1. **Every factual claim** must include a direct clickable URL to the source.
2. **Preferred sources for this chapter:**
- JetBrains Developer Ecosystem Survey (for tool usage and developer demographics)
- Stack Overflow Developer Survey (for tool preferences and developer profiles)
- Reddit threads (link to specific threads, not just subreddits)
- JetBrains YouTrack (link to specific issues where possible)
- Review platforms: G2, Capterra, TrustRadius (link to CLion product pages)
- Specific blog posts, forum threads, or discussions — not generic site links
3. **When paraphrasing community sentiment**, link to at least one representative thread or comment.
4. **When estimating segment sizes**, use the same convention:
`[ESTIMATE: derived from [Source A](URL) and [Source B](URL) — reasoning: ...]`
5. **Do not fabricate or approximate URLs.** If you can't find a source, say so.
---
## Formatting
- Use clear headers matching the section numbers (2.1, 2.2, 2.3)
- Produce each output artifact as a distinct, labeled block
- Direct prose — no filler or hedging preamble
- Segment profiles in 2.1 should use a consistent structure across all segments for easy comparison
- Journey maps in 2.2 should follow the six-stage structure exactly
## Confidence Gaps
After all three sections, add a final section listing:
- Data points you couldn't find reliable external sources for
- Areas where internal JetBrains data (support tickets, churn analysis, NPS, sales objections) would significantly change the picture
- Segments where you're least confident in the size estimate or behavioral patterns
Summary: CLion's addressable market of 5–8 million C/C++ developers splits into seven segments, with systems/infrastructure developers, embedded engineers, and audio/DSP specialists offering the highest growth potential due to their need intensity, willingness to pay, and alignment with CLion's CMake-first, cross-platform value proposition. The dominant barriers to expansion are performance on large codebases (the #1 reason users churn to VS Code) and limited non-CMake build system support, while the May 2025 free non-commercial edition fundamentally changes the acquisition funnel for hobbyists and students.
My thoughts: Segmentation is helpful and looks reasonable. The buyer journeys may be overkill (keeping in mind none of this is based on real customer research). The voice-of-customer part is cool but requires further validation.
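If you want to sanity-check the High/Medium/Low calls from section 2.1, the attractiveness rating is just a product of four rough scores. A minimal sketch, with hypothetical segments and ratings that come from me rather than from the research:

```python
# Sketch of the 2.1 attractiveness rating: size x accessibility x need
# intensity x willingness to pay. Segments and ratings here are hypothetical.
SCORES = {"Low": 1, "Medium": 2, "High": 3}

segments = {
    "Embedded engineers":     ("High", "Medium", "High", "Medium"),
    "Game engine developers": ("Medium", "High", "High", "High"),
    "Hobbyists and students": ("High", "High", "Low", "Low"),
}

def attractiveness(ratings):
    score = 1
    for r in ratings:
        score *= SCORES[r]
    return score  # ranges from 1 (all Low) to 81 (all High)

for name, ratings in sorted(segments.items(), key=lambda kv: -attractiveness(kv[1])):
    print(f"{name:24s} {attractiveness(ratings):3d}")
```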
# Chapter 3: Competitive Landscape — Deep Research Prompt for CLion
## Context
I'm building a growth plan for **CLion**, JetBrains' cross-platform IDE for C and C++ development. This is Chapter 3 of the Discovery phase, focused on understanding who else is fighting for the same customers.
**Dependencies:** This chapter builds on Chapter 1 (Market Landscape) for market boundaries, and its output feeds directly into Chapter 2 (Customer Understanding) — particularly buyer journey mapping and voice of customer, where knowing the competitive alternatives shapes how customers evaluate and decide. If Chapter 1 outputs are available from a prior session, use the market definition as the boundary for what counts as "competing."
**Scope note:** The competitive landscape for C/C++ development tooling is unusually layered. CLion competes against full IDEs, against VS Code with extensions, against terminal-based workflows (vim/neovim + LSP), and against the inertia of "what I already use." The analysis needs to capture all these layers, not just the obvious IDE-vs-IDE comparison.
---
## What I Need
Work through each of the three sections below **sequentially**.
---
### Section 3.1: Competitor Identification
Cast a wide net, then categorize. The goal is a comprehensive map of everything a C/C++ developer might choose instead of CLion.
**Direct competitors** (dedicated C/C++ IDEs or IDEs with first-class C/C++ support):
- Visual Studio (Microsoft) — the Windows-dominant option
- VS Code + C/C++ extensions (Microsoft) — the free, extensible option
- Eclipse CDT
- Qt Creator
- Any others with meaningful market presence
**Indirect competitors** (solve the same problem differently):
- Vim/Neovim with LSP (clangd) + plugins — the terminal-native workflow
- Emacs with LSP/Irony
- Sublime Text with LSP
- Xcode (Apple platforms only)
- Other JetBrains IDEs that partially overlap (ReSharper C++ in Visual Studio, Rider for Unreal)
**Substitutes** (entirely different approaches):
- Cloud IDEs (GitHub Codespaces, Gitpod) with C++ support
- AI-native editors (Cursor, Windsurf) as emerging category
- "Do nothing" — staying with whatever came with the OS or build system
**Potential entrants:**
- Companies or products that could credibly add C/C++ IDE capabilities (e.g., could Zed or another new editor become a serious C++ environment?)
- Platform vendors expanding developer tool suites
**For each competitor, note:** Name, category, parent company, approximate user base or market presence indicator, primary platform(s), and one-line positioning statement.
**Output → Competitor Map:** Categorized table of all identified competitors across all four types.
---
### Section 3.2: Competitor Deep Dives
Build detailed profiles for the **top 5 competitors that matter most** to CLion. Based on market signals, these are likely: Visual Studio, VS Code (with C++ extensions), Qt Creator, Eclipse CDT, and the vim/neovim + LSP ecosystem. Adjust if your research suggests a different top 5.
**For each, research and document:**
**Company/Project Overview**
- Owner/maintainer, business model, funding (for open source: governance model, key sponsors)
- Team size or contributor base
- Strategic focus — is C/C++ their core focus or one of many languages?
**Product Analysis**
- Core C/C++ features: build system support (CMake, Meson, Makefile, Bazel), debugger, code navigation, refactoring, code completion/IntelliSense, static analysis
- Pricing: free vs. paid tiers, what's behind the paywall
- Platform support: Windows, macOS, Linux, remote development
- Strengths — what are they genuinely best at? (cite community evidence where possible)
- Weaknesses — where do users consistently complain? (cite specific threads, reviews, issues)
**Go-to-Market**
- Target segments: who are they primarily serving?
- Positioning and messaging: how do they describe themselves?
- Primary acquisition channels: how do new users find them?
- Extension/plugin ecosystem: how vibrant? What does it signal about community investment?
- Partnership strategy: notable integrations, bundling deals, OEM relationships
**Market Position**
- Estimated user base or market share (use survey data: JetBrains Dev Ecosystem, Stack Overflow, etc.)
- Growth trajectory: gaining or losing share? Over what timeframe?
- Recent momentum signals: major releases, community growth, hiring patterns
- Customer perception: synthesize from reviews, Reddit sentiment, community discussions
**Strategic Direction**
- Recent major product announcements (last 12-18 months)
- Publicly stated roadmap or priorities
- Hiring patterns: what roles are they hiring for? (check careers pages)
- Likely next moves: based on trajectory and announcements, what will they do next?
- AI strategy: how are they integrating AI coding assistance? This is a critical competitive dimension right now.
**Output → Competitor Profiles:** Structured brief on each of the top 5, following the template above consistently.
---
### Section 3.3: Competitive Positioning Analysis
Synthesize the profiles into positioning insights. Instead of just 2×2 grids (which are hard to produce in text), provide:
**Positioning dimensions that matter to C/C++ developers:**
1. **Price vs. C/C++ feature depth** — map where each competitor sits
2. **Ease of setup/onboarding vs. power for large/complex projects** — the classic tradeoff
3. **Generalist (multi-language) vs. C/C++ specialist** — where does each focus?
4. **Lightweight/fast vs. full-featured/heavy** — editor vs. IDE spectrum
5. **Individual developer focus vs. team/enterprise focus** — who are they really built for?
**For each dimension:** describe where CLion and its top competitors sit, using evidence from the deep dives.
**Then identify:**
- **Crowded positions:** Where is everyone fighting? (likely: free + good-enough C++ support, i.e., the VS Code zone)
- **White space:** Underserved positions that represent opportunity (e.g., is there a gap in "enterprise-grade C++ IDE with seamless embedded/cross-compilation support"?)
- **CLion's current position:** Where it actually sits today based on product reality and customer perception
- **Positioning vulnerabilities:** Where CLion is most threatened and by whom
- **Positioning opportunities:** Where CLion could credibly move that competitors can't easily follow
**Output → Competitive Positioning Analysis:** Narrative analysis covering the five dimensions, with the crowded/white space/vulnerability/opportunity synthesis.
---
## Source and Citation Requirements
Same standards as prior chapters:
1. **Every factual claim** (user numbers, pricing, feature comparisons, market share data) must include a direct clickable URL.
2. **Preferred sources:**
- JetBrains Developer Ecosystem Survey, Stack Overflow Developer Survey (for market share/usage data)
- Official product websites and documentation (for features and pricing)
- Official blogs, release notes, changelogs (for strategic direction)
- GitHub repositories (for contributor activity, release cadence, star counts as proxy signals)
- Careers/jobs pages (for hiring pattern analysis)
- Reddit, HN, community forums (for customer perception — link specific threads)
- G2, Capterra, TrustRadius (for review aggregation)
- Company press releases, earnings calls, investor presentations (for business data)
3. **When comparing features**, base it on current stable releases, not beta or announced features. Note the version you're referencing.
4. **When estimating market share or user base**, show the calculation:
`[ESTIMATE: derived from [Source A](URL) and [Source B](URL) — reasoning: ...]`
5. **Do not fabricate URLs.** If you can't source a claim, say so.
---
## Formatting
- Use clear headers matching section numbers (3.1, 3.2, 3.3)
- Competitor profiles in 3.2 should follow identical structure for easy cross-comparison
- Direct prose — no filler
- Positioning analysis in 3.3 should be narrative with embedded evidence, not just a list
## Confidence Gaps
After all three sections, add a final section listing:
- Competitors you suspect matter but couldn't find enough data on
- Market share estimates you're least confident in
- Strategic direction claims that are speculative vs. evidence-based
- Areas where the competitive picture is shifting fast enough that data may already be outdated
Summary: The C/C++ IDE market has reached an unusual three-way parity, with CLion, Visual Studio, and VS Code each holding roughly 25–30% of C++ developers—a dramatic shift from Visual Studio's historical dominance. CLion's defensible position lies in its cross-platform C++ specialization and deep tooling (Constexpr Debugger, Nova engine, comprehensive refactoring), but it faces pressure from VS Code's "good enough for free" ecosystem below and AI-native editors like Cursor (17.9% developer adoption, $1B+ ARR) emerging above.
My thoughts: Overall, pretty strong. The competitor deep dives are detailed enough, and the positioning assessment is on point. We'll use a lot of this in the next chapters.
Two unexpected things came out of the research. The smaller one: the most fun part turned out to be the history of C and C++ — how the languages emerged, what trade-offs got baked in decades ago, and how those decisions still shape the market today. Worth reading separately if you like origin stories.
The bigger one was a gap. The audience research looked at people only in the context of the product — what IDE they use, what languages they code in, what frustrates them at work. And that's a miss. A huge one.
So I ran an additional chapter that wasn't in the original scope: audience in context. Not "C++ developers as CLion prospects," but what surrounds and defines these people as people. What their industries look like, what pressures they face, what their careers feel like — not when they're staring at another memory error, but when they're thinking about whether to stay in embedded or move to something that pays more.
C and C++ developers hold titles that are far more specialized than the generic "Software Engineer" label common in web development. The vocabulary includes Embedded Software Engineer, Firmware Developer, Systems Programmer... rather than the management-track titles more common in other ecosystems. This reflects a culture that values deep individual technical contribution.
This is transferable to any product. Your audience research probably describes users. It probably doesn't describe humans. AI defaults to the same pattern — it thinks about people through your product's lens, because that's how most marketing content is written. You have to explicitly steer it somewhere wider.
AI research gives you a map. A good-looking one — labels, borders, proportions. The problem is: some coastlines are real, and some interiors are imagined. Early cartographers drew accurate ports they'd visited, then filled the middle of continents with plausible guesses. AI does the same with data.
So I spent a separate session questioning the methodology behind the numbers AI gave me. Not the numbers — the methodology.
One example. Several sources suggested CLion holds roughly a third of the C++ IDE market — an "equal thirds" split with Visual Studio and VS Code. Sounds plausible. AI presented it cleanly. But the survey behind the claim uses a multi-select question. Someone who uses CLion daily but opens VS Code for quick edits counts toward both. "Equal thirds" doesn't mean equal primary usage — it means equal mention in a checkbox list. A different survey, with a single-select "primary tool" question, shows Visual Studio at 64%.
Then there's who's answering. The JetBrains survey skews toward JetBrains users (acknowledged by JetBrains itself). Stack Overflow's survey skews toward web developers — embedded engineers, one of C++'s biggest segments, are 3% of respondents versus 32% in C++-specific surveys.
After triangulating across sources, realistic CLion primary-tool market share is probably 15–25%. Not 33%. That's not a rounding error — that's a different strategic reality.
AI caught none of this. Every number compiled clean. Full analysis is in this document.
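To make the multi-select mechanism concrete, here's a toy sketch with made-up respondents, not real survey data. Secondary tools collect mentions too, so mention shares can look like rough parity while primary usage stays heavily skewed.

```python
from collections import Counter

# Made-up respondents for illustration: (primary tool, other tools also used).
respondents = (
    [("Visual Studio", ["VS Code", "CLion"])] * 3
    + [("Visual Studio", ["VS Code"])]
    + [("Visual Studio", [])] * 2
    + [("VS Code", ["CLion"])]
    + [("VS Code", [])]
    + [("CLion", ["VS Code"])] * 2
)

primary = Counter(p for p, _ in respondents)   # single-select: primary tool only
mentions = Counter(p for p, _ in respondents)  # multi-select: primary plus secondaries
for _, others in respondents:
    mentions.update(others)

n = len(respondents)
for tool in ("Visual Studio", "VS Code", "CLion"):
    print(f"{tool:14s}  mentioned by {mentions[tool]/n:4.0%}   primary for {primary[tool]/n:4.0%}")
# Mention rates come out roughly comparable (60% / 80% / 60%),
# while primary usage is 60% / 20% / 20% — "equal thirds" vs. a very different reality.
```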
Roger von Oech's creative process has four roles: Explorer, Artist, Judge, Warrior. The idea is to stay conscious of which role you're in and not mix them. I like this as a stopping rule for research: you keep exploring until you have so many thoughts and ideas that you can't help but jump into creation.
I'm there. The discovery stage is about capturing what exists — facts, no judgment. I have enough data to move forward, and enough hard-won understanding of where that data is reliable and where it's decoration.
Next time, we start making sense of it. Diagnosis: where could we go?