Why AI Will Not Bring Sustainable Competitive Advantage
An Evidence Dossier for (Marketing) Leaders
Why read this article: This evidence dossier compiles peer-reviewed research, top-tier strategy journals, and rigorous industry studies to answer one question: can AI provide a sustainable competitive advantage in marketing? The honest answer challenges the prevailing CMO narrative and points to where defensible advantage actually lives in 2026.
Across boardrooms in 2026, marketing leaders are making the same bet at the same time.
They are investing record sums in AI. Content generation, predictive analytics, agentic SEO, dynamic creative. And they are doing it in the belief that this investment will set them apart from competitors who are doing the exact same thing, with the exact same tools, from the exact same vendors, often with strikingly similar prompts.
This is the quiet paradox of marketing in 2026. The technology every CMO is racing to adopt is, by definition, the one technology that cannot differentiate any of them.
And yet the strategic assumption underneath the entire AI-in-marketing conversation, that early and aggressive adoption will translate into competitive advantage, has gone almost completely unchallenged. It deserves to be challenged. Not with opinions, but with evidence.
The Thesis, in One Sentence
AI is becoming marketing infrastructure, not differentiation, and the strongest case for this comes from the founder of modern strategy theory himself.
In May 2025, Jay B. Barney, author of the most-cited paper in strategic management (Barney, 1991), published an article in MIT Sloan Management Review with a deliberately direct title: Why AI Will Not Provide Sustainable Competitive Advantage (Wingate et al., 2025).
Barney’s framework, taught in every MBA strategy course on earth, holds that a resource creates sustained advantage only when it is Valuable, Rare, Inimitable, and Organized (VRIO). His verdict on AI: valuable yes, rare no, inimitable no. “How can AI be the centerpiece of a sustained competitive advantage when everyone has it? We argue that it simply cannot” (Wingate et al., 2025).
Far from being a source of differentiation, AI is, in their words, a source of homogenization.
The Evidence for Commoditization
This is not a philosophical position. The data is brutal, and the collapse has only accelerated.
Inference costs fell 280× in 18 months, and they have kept falling: Inference is what AI models do every time they answer a prompt. It is priced per million tokens, the unit of language AI processes (a million tokens is roughly 750,000 words, about the length of ten average novels). Stanford’s 2025 AI Index documented that running a model with GPT-3.5-level capability fell from $20 per million tokens in November 2022 to $0.07 by October 2024 (Maslej et al., 2025). The decline has not stopped. GPT-4 launched at $30 per million input tokens in March 2023. GPT-5 nano in 2026 lists at $0.05, a reduction of over 99% in three years. Stanford’s 2026 AI Index adds that hardware costs continue to fall 30% annually, and energy efficiency improves 40% annually (Maslej et al., 2026).
Free models have caught up with paid ones: Open-weight models, AI models whose underlying code anyone can download and run, used to lag well behind closed commercial models from OpenAI, Anthropic, and Google. On Chatbot Arena, the leading head-to-head benchmark for AI quality, the performance gap between top open-weight and top closed-weight models shrank from 8.04% to 1.7% in a single year (Maslej et al., 2025).
The same intelligence now runs on much smaller models: Two years ago, hitting 60% accuracy on MMLU, a standard test of general knowledge across 57 subjects, required a 540 billion parameter model. By 2024, a 3.8 billion parameter model cleared the same threshold. That is a 142× compression of the resources needed to achieve the same intelligence (Maslej et al., 2025).
And the gap between regions has nearly vanished: Three years ago, frontier AI development was almost exclusively American. By March 2026, the performance gap between the top US and top Chinese models had narrowed to just 2.7 percentage points (Maslej et al., 2026).
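For readers who want to sanity-check the headline ratios above, here is a minimal back-of-envelope calculation in Python, using only the prices and parameter counts already cited in this section (the variable names are illustrative, not from any source):

```python
# Back-of-envelope check of the commoditization figures cited above.
# All inputs come from the text; variable names are illustrative.

gpt35_nov_2022 = 20.00   # $ per million tokens, GPT-3.5-level, Nov 2022
gpt35_oct_2024 = 0.07    # $ per million tokens, same capability, Oct 2024
print(f"Inference cost decline: ~{gpt35_nov_2022 / gpt35_oct_2024:.0f}x")
# ~286x, reported as 280x in the AI Index

gpt4_mar_2023 = 30.00    # $ per million input tokens at GPT-4 launch
gpt5_nano_2026 = 0.05    # $ per million input tokens, GPT-5 nano list price
print(f"Three-year price reduction: {1 - gpt5_nano_2026 / gpt4_mar_2023:.1%}")
# 99.8%, i.e. "over 99%"

params_60_mmlu_2022 = 540e9  # parameters needed for 60% MMLU, 2022
params_60_mmlu_2024 = 3.8e9  # parameters needed for 60% MMLU, 2024
print(f"Parameter compression: ~{params_60_mmlu_2022 / params_60_mmlu_2024:.0f}x")
# ~142x
```

The point of the sketch is not precision but direction: every ratio describes the same steep, ongoing decline in the cost of a fixed level of capability.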
The translation for marketing leaders is simple. The AI capability your competitor uses today, you can buy at the same price tomorrow. What used to be a moat is now a utility, and it is getting cheaper faster than any technology in modern business history.
The Marketing-Specific Evidence
Marketing is where this homogenization hits hardest, and the peer-reviewed evidence is precise.
AI raises individual creativity but lowers collective novelty. A Science Advances study by Doshi and Hauser (2024) found AI-assisted stories were 8.9 to 10.7% more similar to each other than human-written ones. The authors call it “an increase in individual creativity at the risk of losing collective novelty.” Marketing teams using the same models produce work that converges, even when each piece looks polished on its own.
AI-generated content wins on search, which means more brands sound the same in the same places. A Marketing Science field experiment showed AI-generated SEO content is “virtually indistinguishable” from work by SEO experts, and outperforms human writers in search rankings (Reisenbichler et al., 2022). The same engine that wins for one brand wins for every brand using it.
Adoption is already mainstream, accelerating the convergence. A Patterns study from Stanford documented that by late 2024, roughly 24% of US corporate press releases were LLM-assisted (Liang et al., 2025). The share of brand communication shaped by the same handful of models is no longer marginal.
Brand voice itself is converging. Linguistic analysis of marketing copy shows that LLM-signature phrases like “delve into,” “in an era of,” and “navigating the landscape” appear several times more frequently in published marketing text in 2024 than in 2022. Even when individual texts read polished, the collective voice of marketing is becoming statistically recognizable as machine-shaped.
The cause is mechanical, not cultural. The very technique that makes ChatGPT and Claude usable, reinforcement learning from human feedback, measurably reduces output diversity (Padmakumar & He, 2024; Kirk et al., 2024). The polished corporate voice of generative AI is a statistical mode, and every brand using these models converges toward it.
👉 In a nutshell: When every brand has access to the same models, runs them at marginal cost, and prompts them in similar ways, output differentiation is mathematically excluded. Convergence is no longer a risk. It is the default setting.
The Floor Rises. The Ceiling Does Not.
Generative AI lowers the floor of marketing output dramatically. Anyone with a prompt can produce a usable brief, a decent ad concept, a publishable blog post. Tasks that once required a junior team now take minutes.
The peer-reviewed evidence backs this up precisely. Brynjolfsson, Li and Raymond (2025), in a Quarterly Journal of Economics field study of 5,179 customer support agents, found that AI assistance raised novice performance by 34% while leaving the most skilled performers nearly unchanged. AI is a floor-raising technology by design.
But the ceiling has not moved.
AI models are trained to predict the statistically most likely next token. By construction, they produce the average, not the exceptional. The polished, plausible, well-structured middle of the distribution. That is genuinely useful for most marketing work. It is also exactly why AI cannot produce the brand-defining campaign, the category-creating positioning, or the insight that makes a strategy memorable.
The pattern extends well beyond creative work. The same ceiling effect applies to every strategic discipline in marketing. AI can summarize a category landscape, but it cannot tell you which segment is worth owning. AI can draft a positioning statement that scans well, but it cannot decide whether your brand should fight for differentiation in feature space or in meaning space. AI can list ten innovation directions, but it cannot weigh which one fits your organization’s capabilities, culture, and risk appetite. These are judgment calls that depend on context the model does not have, on consequences the model cannot weigh, and on conviction that comes from years of pattern recognition the model has never lived through.
In a recent conversation with the team at Jung von Matt, one of Germany’s most awarded creative agencies, the point came up exactly this way. Real creative excellence still comes from human experience, taste, and judgment. AI can contribute to every sub-step of the work. It accelerates the brief, generates variations, tests concepts. But the differentiating idea, the one that makes a brand recognizable for a decade, comes from people doing things AI is structurally incapable of doing.
The same is true for strategic decisions. AI can run the analysis. Humans still have to make the call.
👉 In a nutshell: AI raises the floor of competence. It does not lower the cost of excellence. The work that defines a brand, the positioning that wins a category, the strategy that holds for a decade, still requires the rarest resource in marketing: human judgment trained by experience.
The Strongest Counter-Argument
The best case against the homogenization thesis comes from Iansiti and Lakhani at Harvard. In Competing in the Age of AI, they argue that AI-native firms win through superior operating models. Their canonical example: Ant Financial serves over ten times more customers than the largest US banks with less than one-tenth the employees (Iansiti & Lakhani, 2020).
We see the same pattern playing out among startups right now. Companies moving first to build AI-native operating models are unlocking remarkable efficiencies, achieving with small teams what used to require large ones. A handful of engineers and marketers can run operations that would have demanded entire departments three years ago.
That looks like a sustainable advantage. It is not.
Iansiti and Lakhani themselves write that “the software of these new types of company is often open source. Competitive advantage has moved from the production technologies to the data these companies have amassed” (Iansiti & Lakhani, 2020).
Translation: the AI is commodity. The advantage sits in everything around it: data, processes, and organizational design.
That is, in different words, the argument for the five moats developed below. Any advantage built on a technology lasts only until competitors get the same technology. The first movers among AI-native startups are real winners today, but only until competitors recognize the pattern and rebuild around it. AI factories had a meaningful lead five years ago. That lead is compressing now. The moat always migrates to whatever has not yet been commoditized.
So What Actually Works?
If AI is infrastructure, where does the moat live? The literature converges on five sources:
Proprietary data, but only when activated: Brynjolfsson et al.’s (2021) analysis of predictive analytics and Wedel and Kannan’s (2016) Journal of Marketing synthesis show that raw data does not create advantage. Data combined with IT capital, educated workers, and workflow integration generates up to $918,000 in additional sales versus matched competitors. The data is the easy part. The complementary capabilities are the moat.
Consider Spotify Wrapped: Every streaming service has listening data. Only Spotify turned that data into an annual cultural moment competitors cannot replicate even with identical inputs.
Brand and distinctive assets, which matter more as AI mediates discovery: When 80% of consumers trust the brands they use, more than they trust business, media, government, or NGOs (Edelman, 2025), and when LLMs increasingly curate purchase choices, distinctive brand assets become survival infrastructure. Longoni and Cian (2022) found in Journal of Marketing that 76.8% of consumers prefer AI for utilitarian choices but 81.2% prefer humans for hedonic ones. Brand still matters. In some categories, it matters more.
Patagonia is the cleanest illustration: Decades of consistent environmental activism mean that when a customer asks an LLM for sustainable outdoor brands, Patagonia surfaces first. The brand position is so distinctive that AI amplifies it rather than dilutes it.
Trust, which is migrating away from AI vendors and toward brands: Trust in AI companies fell from 61% to 53% globally between 2019 and 2024, and from 50% to 35% in the US (Edelman, 2024). A 13-experiment study in Organizational Behavior and Human Decision Processes found that disclosing AI use systematically erodes trust, including when the AI use is high quality (Schilke & Reimann, 2025). Brands that earned trust before the AI wave have an asset competitors cannot prompt into existence.
Human judgment at the jagged frontier: The most-cited workplace AI experiment to date, by Dell’Acqua and colleagues at Harvard, BCG, and MIT (2023), showed that AI-augmented consultants completed 12.2% more tasks, 25.1% faster, and at 40% higher quality on tasks inside AI’s capability frontier. On tasks outside that frontier, they were 19 percentage points more likely to produce wrong answers. The professionals who lost performance “blindly adopt[ed] AI output and interrogate[d] it less.” Judgment about when to trust the machine is the new craft. And the work at the very top, the strategy, the positioning, the brand-defining idea, remains structurally beyond AI, because AI optimizes toward the statistical middle, not the meaningful edge.
Speed of organizational learning: An MIT Sloan and BCG study of 3,000+ managers found that only 10% of organizations achieve significant financial benefits from AI, and the differentiator is mutual human-AI learning (Ransbotham et al., 2020). BCG’s 2025 study went further. Only 5% of firms are “future-built,” but they outperform laggards by 1.7× on revenue growth and 3.6× on three-year total shareholder return. They allocate AI investment 10-20-70: 10% to algorithms, 20% to data, 70% to people and processes (Apotheker et al., 2025).
The Strategic Test
Here is the question every marketing leader should be asking right now:
If your top three competitors deployed a comparable AI stack to the one you are building right now, similar models, similar tools, similar agents, what would still make your brand the obvious choice for your customers?
If the answer is nothing, you are building infrastructure, not strategy.
If the answer is clear, a unique data asset wired into customer experience, a brand with distinctive assets, trust earned through human accountability, judgment that works at the jagged edge, an organization that learns faster than the team across the street, then AI becomes a multiplier on something defensible.
That is the difference between an AI-enabled brand and an AI-dependent one.
But Not Investing Is Also a Strategic Mistake
A marketing leader might draw the wrong conclusion: if AI is just infrastructure, perhaps the smart move is to wait, let competitors waste their budgets, and pick up the proven tools later. The evidence rules this position out.
Infrastructure decisions, by definition, are not optional. Electricity was not a competitive advantage by 1930, but firms without electricity went out of business. The same logic now applies to AI.
The performance gap between AI leaders and laggards is now measurable and widening. Babina et al. (2024), publishing in the Journal of Financial Economics and drawing on resume data covering 535 million individuals and 180 million job postings, found that a one-standard-deviation increase in firm AI investment is associated with 20.3% higher sales growth, 21.9% higher employment growth, and 22.4% higher market valuation growth. The authors note that “AI-powered growth concentrates among the ex-ante largest firms, leading to higher industry concentration and reinforcing winner-take-most dynamics.” This is not a productivity story. It is a market-structure story.
Customer expectations have shifted irreversibly. Salesforce’s 2024 State of the AI Connected Customer survey of 15,015 consumers and 1,570 business buyers found that the share of customers who feel brands treat them as unique individuals nearly doubled from 39% in 2023 to 73% in 2024 (Salesforce, 2024). The AI-enabled customer experience that was a differentiator 18 months ago is now table stakes.
Talent is migrating toward AI-enabled organizations. Microsoft and LinkedIn’s 2024 Work Trend Index found that 66% of leaders would not hire someone without AI skills, and 71% would prefer a less experienced candidate with AI skills over a more experienced one without (Microsoft & LinkedIn, 2024). The most AI-fluent employees are precisely the ones most willing to leave laggard organizations.
The cost of inaction is not falling behind on AI. It is falling out of the market entirely.
The Implementation Trap: Why Most Investments Fail Anyway
So AI is infrastructure, not advantage. And not investing is fatal. But here is the third side of the triangle that completes the strategic picture: even investing usually fails.
The MIT NANDA initiative’s 2025 GenAI Divide report estimated that of $30 to $40 billion in enterprise GenAI investment, roughly 95% of organizations are seeing zero measurable P&L impact (Challapally et al., 2025). Only about 5% of pilots reach production. McKinsey’s 2025 State of AI survey arrived at a strikingly similar number from a different angle: just 6% of firms qualify as AI high performers (Singla et al., 2025).
For large enterprises, the gap is even more pronounced. This is the territory I explored in The AI Innovator’s Dilemma in Marketing. The pattern Christensen (1997) identified almost three decades ago applies to AI with painful precision. Large organizations have legacy systems, established processes, internal politics, and embedded incentive structures that resist exactly the kind of fundamental rewiring AI requires. They run pilots. They publish announcements. They do not transform.
The diagnosis comes from McKinsey’s own data (Singla et al., 2025): out of 25 organizational attributes tested, workflow redesign has the largest effect on AI EBIT impact. Only 21% of firms have fundamentally redesigned even some workflows. The other 79% are layering AI on top of legacy processes and wondering why the ROI never arrives.
BCG (Apotheker et al., 2025) calls this the 10-20-70 rule: 10% of AI transformation work is the algorithms, 20% is the data and technology backbone, and 70% is people and processes. Most enterprises invest in the opposite ratio.
The cost of not investing in AI is real and rising. But the cost of investing without organizational transformation is even higher: a fully funded, fully visible failure that drains budget, talent, and credibility.
What This Means for Marketing Leaders
The three findings combine into a single strategic position.
AI is infrastructure. It will not, on its own, differentiate your brand. Stop expecting it to.
Not investing is not a viable strategy. Laggards face widening performance gaps, customer defection, talent flight, and structural disruption.
Investing without organizational rearchitecture is the most expensive mistake of all. The 95% who fail are not failing because they bought the wrong models. They are failing because they bought models and assumed the rest would follow.
The defensible position is to invest in AI as infrastructure so you are not left behind, while investing disproportionately in the things AI cannot commoditize: proprietary data, brand, trust, human judgment, and organizational learning speed. The 10% on algorithms keeps you in the game. The 70% on people and processes is where the moat actually lives.
Three Moves for the Next 90 Days
If the analysis above is right, the playbook for the next quarter is not a tool list. It is three deliberate moves.
Move 1: Audit your moat, not your stack: Stop counting tools. Map the five sources of defensible advantage against your organization. Where do you have proprietary data that is genuinely activated, not just stored in a CDP? Where is your brand distinctive enough to survive LLM mediation? Where is trust earned rather than claimed? Where do your people exercise judgment AI cannot replicate? Where does your organization learn faster than the team across the street? Each gap is a strategic priority.
Move 2: Redesign one workflow end-to-end before adding another tool: The McKinsey data is unambiguous: workflow redesign has the single largest effect on AI EBIT impact (Singla et al., 2025). Pick the workflow where the gap between current and possible performance is largest. Redesign it from first principles. Then layer AI in. Resist the temptation to deploy three more tools while the existing ones sit half-integrated.
Move 3: Invert your AI investment ratio: Most marketing organizations spend 70% on algorithms and tools, 20% on data, and 10% on people and processes. The 10-20-70 rule of future-built firms inverts that exactly (Apotheker et al., 2025). Pause your next AI tool purchase. Reallocate the budget to training, role redesign, and process work. The visible 30% of the iceberg is where most CMOs invest. The invisible 70% is where the harvest lives.
None of these moves require a new tool. All three require a different theory of where competitive advantage comes from in 2026.
Key Takeaways
AI fails Barney’s VRIO test: It is valuable but neither rare nor inimitable. By the framework that defines modern strategy theory, AI cannot be a sustainable competitive advantage (Wingate et al., 2025).
The evidence for commoditization is overwhelming: Inference costs collapsed 280× in 18 months, frontier model prices have fallen over 99% in three years, and small models now match what required 142× more parameters two years ago (Maslej et al., 2025, 2026).
Marketing is where homogenization hits hardest: AI-assisted content is more similar across brands (Doshi & Hauser, 2024), SEO output is indistinguishable from expert work (Reisenbichler et al., 2022), and 24% of US corporate press releases are already LLM-assisted (Liang et al., 2025).
AI raises the floor, not the ceiling: AI improves novice performance by 34% but leaves top performers nearly unchanged (Brynjolfsson, Li, & Raymond, 2025). The work that defines a brand, including positioning and strategic choices, still requires human judgment trained by experience.
Five sources of defensible advantage remain: proprietary data activated through complementary capabilities, brand and distinctive assets, trust, human judgment at the jagged frontier, and speed of organizational learning.
Not investing is fatal: AI leaders outperform laggards 1.7× on revenue growth and 3.6× on three-year TSR (Apotheker et al., 2025). The cost of inaction is widening, not narrowing.
But 95% of AI investments fail to deliver P&L impact: Workflow redesign, not technology selection, is the differentiator (Challapally et al., 2025). The 10-20-70 rule is what matters: 70% of the work is people and processes.
If this resonated, the question worth sitting with this week is the strategic test above. Send it to your team. The answers will tell you a lot about where your real moat lives.
Yours,
Prof. Dr. Andreas Fuchs 🦊🎓
Sources
Apotheker, J., de Bellefonds, N., Luther, A., Forth, P., Franke, M., & Kropp, M. (2025, September). The widening AI value gap: Build for the future 2025. Boston Consulting Group.
Babina, T., Fedyk, A., He, A., & Hodson, J. (2024). Artificial intelligence, firm growth, and product innovation. Journal of Financial Economics, 151, Article 103745.
Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120.
Brynjolfsson, E., Jin, W., & McElheran, K. (2021). The power of prediction: Predictive analytics, workplace complements, and business performance. Business Economics, 56(4), 217–239.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2025). Generative AI at work. Quarterly Journal of Economics, 140(2), 889–942.
Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025). The GenAI divide: State of AI in business 2025 (Project NANDA Report). MIT Media Lab.
Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business School Press.
Dell’Acqua, F., McFowland, E., III, Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality (Harvard Business School Working Paper No. 24-013).
Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), Article eadn5290.
Edelman. (2024). 2024 Edelman Trust Barometer, Tech sector supplement.
Edelman. (2025). 2025 Edelman Trust Barometer special report: Brand trust, From we to me.
Iansiti, M., & Lakhani, K. R. (2020). Competing in the age of AI: Strategy and leadership when algorithms and networks run the world. Harvard Business Review Press.
Kirk, R., et al. (2024). Understanding the effects of RLHF on LLM generalisation and diversity. In Proceedings of ICLR 2024.
Liang, W., Zhang, Y., Codreanu, M., Wang, J., Cao, H., & Zou, J. (2025). The widespread adoption of large language model-assisted writing across society. Patterns, 6(9), Article 101326.
Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. Journal of Marketing, 86(1), 91–108.
Maslej, N., Fattorini, L., Perrault, R., et al. (2025). The AI Index 2025 annual report. Institute for Human-Centered AI, Stanford University.
Maslej, N., Fattorini, L., Perrault, R., et al. (2026). The AI Index 2026 annual report. Institute for Human-Centered AI, Stanford University.
Microsoft & LinkedIn. (2024). 2024 work trend index annual report. Microsoft Corporation.
Padmakumar, V., & He, H. (2024). Does writing with language models reduce content diversity? In Proceedings of ICLR 2024.
Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., Chu, M., & LaFountain, B. (2020). Expanding AI’s impact with organizational learning. MIT Sloan Management Review and Boston Consulting Group.
Reisenbichler, M., Reutterer, T., Schweidel, D. A., & Dan, D. (2022). Frontiers: Supporting content marketing with natural language generation. Marketing Science, 41(3), 441–452.
Salesforce. (2024). State of the AI connected customer (7th ed.). Salesforce.
Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, Article 104405.
Singla, A., Sukharevsky, A., & Yee, L. (2025). The state of AI in 2025: Agents, innovation, and transformation. McKinsey & Company.
Wedel, M., & Kannan, P. K. (2016). Marketing analytics for data-rich environments. Journal of Marketing, 80(6), 97–121.
Wingate, D., Burns, B. L., & Barney, J. B. (2025, May 8). Why AI will not provide sustainable competitive advantage. MIT Sloan Management Review.



