AI coding tools see rapid shifts in engineer preferences driven by subsidies

Friday · 2026-05-09 · 90-day window · Feb–May 2026 · 207 posts · 8 queries · 5 tools in contention

Five AI coding tools competed for engineers' workflows over the past 90 days, and the allegiance that emerged has little to do with quality: it follows the subsidy. Engineers switched from Copilot to Cursor to Claude Code to Codex in rapid succession — each move driven by whichever vendor was burning venture capital most aggressively at that moment. Underneath the tool churn, a cleaner split is forming: those who ship more because they understand their code, and those who ship faster because they don't have to.

  • 207 verbatim posts
  • 90-day window
  • 57% bullish overall
  • 5 tools in active competition
  • $5,000 real compute behind a $200/mo plan

"Coding agents basically didn't work before December and basically work since — the models have significantly higher quality, long-term coherence and tenacity. Programming is becoming unrecognizable. You're not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel."
@karpathy · Formerly Director of AI at Tesla; founding team, OpenAI · Feb 25, 2026

Of 207 posts on AI coding tools (Feb–May 2026):

Positive / bullish 57%
Negative / critical 20%
Neutral / observational 14%
Mixed / nuanced 9%

Majority bullish, but 29% are critical or hedged — the skeptic floor is real and vocal.

Which tool for what — the frame that actually holds

The "which tool is best" debate has a wrong premise. Engineers who have run the full stack — Cursor, Claude Code, Codex, Copilot — describe consistent use-case splits, not a winner-takes-all ranking. Model quality (SWE-bench scores within 5% for Claude Code and Codex) is nearly irrelevant; workflow fit and repo documentation for AI consumption are the real differentiators.

Claude Code for whole-codebase reasoning. Cursor for surgical IDE edits. Copilot for teams that will not change editors.

This three-way specialization has stabilized as the dominant frame among engineers who have seriously tested all three. The AGENTS.md file — not the model — is increasingly what separates productive from unproductive AI workflows.
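
For readers who have not adopted the convention: an AGENTS.md (or CLAUDE.md) is a short instructions file at the repo root that coding agents read before touching code. A minimal sketch of what one might contain; the sections, commands, and paths below are illustrative assumptions, not a prescribed schema:

```markdown
# AGENTS.md: instructions for AI coding agents working in this repo

## Build & test
- Install dependencies: `npm ci`
- Run the full test suite before proposing any change: `npm test`
- Type-check: `npm run typecheck`

## Conventions
- TypeScript strict mode; avoid `any` in new code.
- All database access goes through `src/db/repository.ts`; never query directly from request handlers.

## Boundaries
- Do not edit files under `migrations/` or `infra/` without asking first.
- Never commit secrets; configuration comes from environment variables only.
```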

"The framing that keeps circulating: Cursor 3 is winning, Claude Code is for terminal people, Copilot is the safe default. That framing misses what each tool's actual failure mode is. Claude Code wins on large refactors because it ingests the whole project and reasons holistically. Cursor 3 wins on surgical edits and orchestration because it has visual IDE context. Copilot wins if you don't want to change editors and you mostly use autocomplete."

@JulieLovesTech Long-time developer testing AI tools across real workflows · May 5, 2026

"Your AGENTS.md file is now more important than your README. Codex spawns parallel agents. Claude Code owns your terminal. Both benchmark within 5% of each other on SWE-bench. The differentiator isn't the model, it's how well you've documented your repo for AI."

@AthenAlgo Crypto × AI × Quant systems builder · May 9, 2026

"The 'Cursor vs Claude Code vs Codex vs Copilot' debate is the wrong debate. Developers are not choosing one AI coding tool. They are building a stack. JetBrains' 2026 survey says 90% of developers now regularly use at least one AI tool at work, and 74% use specialized AI developer tools like coding assistants, editors, and agents."

@ghumar64 Architect · Advisor · Investor · Builder @iiidevs · May 5, 2026

Six months of migration loops — the subsidy drives the switch

Across roughly 60 switching-narrative posts, one pattern dominates: engineers move to whichever tool is most aggressively subsidizing compute at the moment. No tool has built loyalty that outlasts a pricing change.

"The reason I switched from Cursor to Claude Code => cheap subsidised compute. The reason I mostly switched from Claude Code to Codex => cheap subsidised compute. There's not much moat, all frontier models are okay."

@dimaip Web developer · ex-@neoscms · React · open source · May 1, 2026

"Cursor emailed me saying I'm top 1% of users in Riga over the last 6 months. I stopped using Cursor in November/December. Switched to Claude Code. Shipped enough in ~6 weeks to outrank people using it daily since. This is what a real CTO does."

@benbolsakovs CTO @tryharlow · Cursor top-1% Riga while inactive · May 6, 2026

"Cursor → Claude Code/Codex → Cursor I'm noticing devs going full circle lately - back to Cursor. IMO, this stems from the lethargic feeling you get when you try to outsource your thinking to the LLM too much. It's an unusual feeling shipping code you don't look at... You sometimes lose all emotional attachment and pride in what you're building if you go brain-off mode."

@jarrodwatts Lead AI Engineer @monad · Feb 23, 2026

Of ~60 switching-narrative posts — where do engineers land?

Landed on Claude Code 37%
Stayed on / returned to Cursor 27%
Moved to Codex 18%
Returned to no-AI IDE 12%
Landed on Windsurf or other 6%

Claude Code captures the most switching arrivals; Cursor holds the most returning loyalty.

What engineers actually shipped

The productivity claims are not all hype. Engineers with verifiable shipping records — public repos, App Store releases, deployed pipelines — report step-change gains. The constraint is domain knowledge, not the tool.

"8,642 Spanish laws just landed in Git. Every reform is a real commit. 27,866 total. git diff any law to see exactly what changed since 1960. Built the entire pipeline in 4 hours with Claude Code."

@raultotocayo github.com/legalize-dev/legalize-es · 561 HN points · Mar 28, 2026

"3 weeks ago I switched from Cursor to Claude Code. Yesterday I shipped a full feature in 45 min that used to take me a full day. Cursor = fancy autocomplete. Claude Code = actual dev partner. $200/mo sounds expensive until you do the math."

@Ranjeetsingh867 Solo builder & freelancer · 30-day experiment tracked with commits and deploys · May 3, 2026

"Running 12 Claude Code sessions doing iOS dev simultaneously. Builds hammering, simulators spinning, bash commands all over the place. FlowDeck for Mac handles it. My M4 Pro, barely."

@afterxleep iOS/macOS developer @DuckDuckGo @mymind · screenshot of 12 simultaneous sessions · May 8, 2026

The expertise prerequisite — AI amplifies what you already know

The most consistent signal across 20 critical posts: AI saves hours for engineers who know their domain, and silently ships debt for those who don't. The tool is a gas pedal, not a driver.

Comprehension debt is the new technical debt — and it compounds faster.

As LLM-generated code fills codebases, teams lose the ability to read their own work. New AI additions become harder to evaluate. Tests pass; systems still break. Anthropic's own published research found a 17% drop in comprehension scores when AI wrote the code, with sub-40% scores when AI wrote everything. The debt compounds in silence until something critical fails.

"I use Claude Code every single day. It probably saves me 3-4 hours on every project. But here's what nobody says: If I didn't know Flutter, Firebase, and how backends actually work, Claude would've destroyed my client projects by now. It confidently writes wrong code. It confidently misses edge cases. It confidently breaks production. You need the judgment to catch it. AI is the gas pedal. You still need to know how to drive."

@askwhykartik Daily Claude Code user · helps founders build MVPs · Mar 18, 2026

"I see a new form of tech debt coming for dev teams — Comprehension debt. As more and more code is generated by LLMs, if teams don't take the time to understand deeply what the generated code is doing… It's only a matter of time before the codebase starts looking unfamiliar to most of the team. It then becomes harder to discern if new code that LLMs generate is adding more spaghetti or if there's a better approach. It's a downward spiral from there."

@jasonbosco CEO & Co-founder @Typesense · Mar 24, 2026

"Lately, Claude makes some shocking mistakes. ⟶ Implements overly complex code ⟶ Ignores the codebase's code style ⟶ Removes working code for no reason ⟶ Replaces code that's out of scope from the task at hand. It feels like it needs 100% supervision. At this point, you're better off writing everything yourself."

@catalinmpit Building @documenso (open-source DocuSign) · Mar 20, 2026

  • "Not reviewing AI-generated code is not a flex. If you can't read the code your AI is writing, your position as an engineer is hard to justify. Speed without comprehension is just technical debt with extra steps."
    @pdp · On a break from CISO duties · building ChatbotKit · Mar 29
  • "For real, this is one of the main reason I liked cursor, but switched to claude code as it's output was better, now codex is better. Just tired of changing IDEs, was on neovim for 2 years before all this, simple times 😂"
    @ankursharma1493 · Building @trykleo ($75K/m) and @outrank_so ($200k/m) · May 7
  • "Every AI coding tool you're using or paying for in 2026 still edits your files with str_replace (pretty much a slot machine). Morph's data: 35% of edits fail, 2.3 retries per success, 70%+ failure with formatOnSave."
    @BniWael · Cited Morph production data on AI edit success rates (see the sketch after this list) · May 9
  • "Modern Software Engineer is dumping a bunch of context and requirements into Claude along with pre established Skills and MCPs, hitting enter in plan mode, then going on a walk to get a Diet Coke"
    @BowTiedCrocodil · Agentic coding practitioner · May 6
  • "everyone is hyping up ai coding agents, but nobody is talking about the fact that we are giving infinite code generation tools to developers who have absolutely no idea how to architect a database. we are about to create the most unmaintainable legacy tech debt in human history."
    @Adriksh · Hardware engineer · Mar 16
  • "i don't see how @cursor_ai survives — for $200/mo, you can essentially get unlimited usage on claude / codex. that same usage would cost thousands on cursor. even if cursor harness is better, the price difference is just wild."
    @chrysb · Co-founder & CEO @joinrosebud · YC S08 · Mar 27
  • "You sometimes lose all emotional attachment and pride in what you're building if you go brain-off mode."
    @jarrodwatts · Lead AI Engineer @monad · observing the Cursor → Claude Code → back-to-Cursor cycle · Feb 23
  • "The most dangerous generated output is not obviously wrong. It is almost right. Almost right code forces a senior engineer to reconstruct intent, inspect assumptions, and search for hidden breakage. That can make the review bottleneck worse than before."
    @djb4ai · Inference @GroqInc · prev Agentic Memory @MongoDB @Redisinc · May 1
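
Morph's percentages aside, the mechanism behind that failure rate is easy to illustrate: an exact-string edit breaks whenever anything rewrites the file between the agent's read and its write, which is exactly what formatOnSave does. A minimal sketch of the failure mode; the helper function, file contents, and snippet are hypothetical, not any vendor's actual editing code:

```python
# Minimal sketch of a str_replace-style edit: find an exact snippet, swap it once.
def str_replace_edit(source: str, old: str, new: str) -> str:
    """Apply a single exact-match replacement; fail loudly if the target is absent."""
    if source.count(old) != 1:
        raise ValueError("str_replace failed: target snippet not found exactly once")
    return source.replace(old, new, 1)

# File contents as the agent last read them.
as_seen_by_agent = "limits = {'retries':3,'timeout':30}\n"

# The same line after a formatter ran on save (double quotes, added spacing).
after_format_on_save = 'limits = {"retries": 3, "timeout": 30}\n'

old = "{'retries':3,'timeout':30}"
new = "{'retries':5,'timeout':30}"

# Against the file the agent saw, the edit lands.
print(str_replace_edit(as_seen_by_agent, old, new))

# Against the reformatted file, the exact string no longer exists and the edit fails,
# forcing the retry loop the quoted data describes.
try:
    print(str_replace_edit(after_format_on_save, old, new))
except ValueError as exc:
    print(f"edit failed, agent must retry: {exc}")
```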

Agents go autonomous — 30% of Devins in a single week were machine-triggered

The current tool debates are transitional. The faster-moving engineering teams have already shifted frame: from which tool to use to how to orchestrate agents in parallel, hand off tasks autonomously, and let code review itself.

"Coding agents are fundamentally changing software engineering in terms of velocity, role, and org structure. Before, the tasks of prioritization, engineering planning, and implementation were divided between EMs, PMs, senior ICs, and junior ICs. Now, ICs are expected to handle all of product prioritization, product speccing, and implementation. Coding agents have brought implementation costs down to ~0. The role of engineers is writing prompts."

@jerryjliu0 Cofounder/CEO @llama_index · internal engineering memo published · Feb 19, 2026

"This week 70% of all Devins were started by humans (webapp, slack, linear) and 30% were started automatically (API, and now scheduled + managed Devins). In a few months that probably flips to 30/70 the other way and within a year it'll be 10/90. What does it look like to run a truly agent-native dev team?"

@ScottWu46 Building @cognition · Mar 20, 2026

"AN AI CODING AGENT JUST NUKED A PRODUCTION DATABASE IN 9 SECONDS. It took the backups with it. No confirmation prompt. No 'are you sure?' Cursor, running Claude Opus 4.6, was handling a routine task in staging. Credential mismatch. The agent had options. Instead, it guessed. Then it executed a single destructive API call — deleting a Railway volume that was shared across all environments. That volume also stored the backup files."

@DamiDefi Reporting on PocketOS incident (Apr 25, 2026) · May 4, 2026

Of 207 posts — tools referenced by name (posts can name multiple):

Claude Code 42%
Cursor 41%
Codex 19%
GitHub Copilot 14%
Windsurf 7%

Claude Code and Cursor statistically tied; Codex rising fast from near-zero three months ago.

The subsidy trap — who is actually paying for the compute

Engineers are building workflows on top of heavily subsidized compute. Cursor's internal analysis reportedly shows $200/mo plans cost $5,000 in actual compute. When VC money stops, every plan reprices — and the tools that survive will not necessarily be the ones that are best today.

The real cost of AI coding is invisible until the subsidy ends.

Three independent signals converge on the same conclusion: Cursor's leaked internal compute math ($200 → $5,000), a developer's four-day API-credits experiment ($536 spent), and a VC firm's observation last year that the AI billing math made no economic sense. Every developer building workflow habits on current pricing is betting that the subsidy continues indefinitely.

"Cursor's internal analysis just leaked. Their $200/month Claude Code plan… actually costs them ~$5,000 in compute. Last year it was ~$2,000. Is this shit really sustainable or we are gonna forget how to code and then AI also gets mad expensive."

@dhruvmakes Designer who codes · Mar 7, 2026

"Just so people know: I used Cursor for 4 days with API credits enabled and spent $536. This is the REAL cost of coding with AI. Claude Code and Codex are just hiding it. If VC money stops, we'll all be paying $200 a day just to code with frontier models."

@melvynx Teaching AI coding on YouTube (55k subs) · growing SaaS to 10k MRR · Mar 11, 2026

"I was laid off a couple weeks ago.. 25 Years of experience as a software/game/fintech engineer. Replaced by a guy in product with 0 experience, but he has a claude subscription at 1/8th the cost (true story)."

@ThetaForgeCo Indie Game Dev · 25 years software/game/fintech engineering · May 8, 2026

Solo founders rewired the math — one dev, whole products

The most consistently optimistic corner of the corpus is indie hackers and solo founders shipping products that would previously have required small teams. The claims are backed by repos, App Store releases, and MRR milestones — not just screenshots.

"I'm gonna say what nobody in tech wants to hear. Most SaaS products are built by teams of 10-20 engineers and they still ship slower than a solo founder with AI. I built an entire IT department replacement by myself. Full platform. AI agent. Dashboard. Compliance. Security monitoring. No scrums. No standups. No 'let's circle back.' Just me, caffeine, and Claude running 5 sessions at once while I'm in the shower thinking of the next feature."

@NSFT0NY Full IT platform built solo with Claude · Mar 7, 2026

"dec 24: launched a simple AI food journal. march 9: just crossed $1,000 total revenue. no venture capital. no massive meta ads budget. just a solo dev, claude, and a zero-friction AI app. the first $1k is always the hardest. next stop: $1k MRR."

@product_punk_ Solo dev · $1,000 total revenue milestone · Mar 9, 2026

"📣📣📣Shipped 2 apps to the App Store in under a week. Built with Claude Code. Solo dev. No team. Just me + Claude."

@ArcticEagleHQ Solo developer · 2 App Store apps shipped in under 7 days · May 8, 2026

Orchestration beats prompting — the agentic workflow takes shape

Engineers extracting the most leverage share one trait: they have moved beyond single-prompt interactions to structured orchestration — specialized agents with defined roles, tested handoffs, and a CLAUDE.md that acts as persistent project memory.

"One prompt into Claude Code gives you AI slop. 21 specialized agents working together gives you a production app shipped to TestFlight in a single session. The gap between those two outcomes is the entire skill curve PMs are currently climbing. You're not asking one model to do everything. You're orchestrating specialists."

@aakashgupta PM and founder advocate · video demo of 21-agent setup shipping to TestFlight · May 6, 2026

"the biggest change with AI isn't coding faster. it's where you actually spend your time now. more detailed prompts, more code review, more planning, less typing. his team runs 4-8 Claude Code sessions at the same time across different worktrees with each one working on a separate task. the skill is managing multiple AI agents in parallel without losing track — that's the next evolution of engineering."

@om_patel5 Senior engineer shipping since CGI/Perl era · multi-agent worktree workflow video · Apr 5, 2026

"claude code's lead wants to kill the term 'vibe coding.' says it's too casual for tools generating millions of lines of production code and billions in revenue. he asked claude for a replacement. claude suggested 'agentic engineering.'"

@buildwithhassan SWE · AI Automation Enthusiast · May 7, 2026

  1. Speed vs. comprehension: AI tools reliably increase output velocity. The same tools, used carelessly, produce comprehension debt — code teams cannot read, debug, or safely extend. Anthropic's own research found a 17% drop in comprehension scores when AI wrote the code. The faster you ship AI-generated code without understanding it, the worse your review capacity gets. No tool solves this; only engineer discipline does.
  2. Model loyalty vs. subsidized compute: Engineers report switching tools roughly quarterly, almost always because another vendor is burning VC subsidies more aggressively. At $200/mo concealing $5,000 in real compute costs, no tool has built loyalty that would survive honest pricing. When the subsidies stop, the rankings reshuffle completely — and engineers who built deep workflow habits on a specific tool pay the adaptation tax twice.
  3. Autonomous agents vs. guardrails: An AI coding agent deleted a production database and its backups in 9 seconds. The infrastructure built for human developers — confirmation prompts, environment isolation — assumes a human will pause before a destructive action. Agents running at machine speed do not pause. The agentic tooling is ahead of the safety layer by at least one product cycle.
  4. Solo founder empowerment vs. labor displacement: The same tooling that lets a solo developer ship two App Store apps in a week is also being used to replace 25-year engineering veterans with non-technical product managers holding Claude subscriptions. The efficiency gain and the labor market disruption are the same event seen from different seats. Both accounts are accurate.

Methodology

Date range: 2026-02-08 → 2026-05-09 (90-day window)
Query count: 8 angle-diverse Grok X-search queries run in parallel: head-to-head tool comparisons, real-world productivity evidence, failure modes, switching narratives, skeptics/contrarians, agentic coding, indie hackers/solo devs, cost and pricing signals
Posts surfaced: 207 unique posts after dedup by post ID across 8 parallel grok-cli --x-search runs (~240 raw before dedup)
Bucket split: Positive 57% · Negative 20% · Neutral 14% · Mixed 9%
Fact-check posture: Verbatim only · Attribution required · Ship evidence verified where available (public repos, App Store links, role titles, cited studies) · Follower count not used as credibility signal; engagement and ship evidence used instead

Posts were surfaced via Grok's X-search API and filtered for engineering credibility: working engineers, founders with verifiable shipping records, and domain practitioners. Vendor accounts, news reposters, and context-free takes were excluded regardless of follower count. Cost and subsidy data reference publicly shared Cursor internal analysis (circulated March 2026) and Morph's published research on AI file-edit failure rates.
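
For readers who want to reproduce the aggregation step, a minimal sketch of the dedup-and-tally logic; it assumes each of the 8 query runs was saved as a JSON array of post objects carrying an id and a sentiment label, which is an assumption about the file layout rather than a description of the actual pipeline:

```python
# Minimal sketch: dedup posts by ID across parallel query runs, then tally sentiment buckets.
# Assumes hypothetical files runs/query_1.json ... runs/query_8.json, each a JSON array of
# objects with "id" and "sentiment" fields.
import json
from collections import Counter
from pathlib import Path

posts_by_id = {}
for run_file in sorted(Path("runs").glob("query_*.json")):
    for post in json.loads(run_file.read_text()):
        # First occurrence wins; later duplicates from overlapping queries are dropped.
        posts_by_id.setdefault(post["id"], post)

total = len(posts_by_id)
sentiment = Counter(p["sentiment"] for p in posts_by_id.values())

print(f"{total} unique posts after dedup")
for bucket, count in sentiment.most_common():
    print(f"{bucket}: {count / total:.0%}")
```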

Free daily digest. Unsubscribe in one click.