Marketing's Strange Loop

Marketing spent the last fifteen years becoming an engineering problem. Now it's becoming marketing again.

The shift happened quietly but completely. Between 2010 and 2015, the discipline transformed from persuasion into optimization, from brand-building into algorithm-hacking, from storytelling into reverse-engineering Google's ranking factors. But with AI models now mediating how customers discover brands, the engineering playbook has hit a wall. You can't game a neural network the way you gamed PageRank. The strange result: marketing is returning to its pre-internet roots—authentic brand building, compelling narratives, and genuine authority. Not because someone decided this was better, but because the architecture of AI systems makes manipulation functionally impossible.

The great unbundling of marketing and persuasion

In April 2012, Andrew Chen published a blog post that crystallized something already happening in Silicon Valley. "Growth Hacker is the New VP Marketing," he declared, arguing that marketing had fundamentally changed: "Instead of asking, 'How do I get customers for my product?' and answering with TV ads and PR, growth hackers would answer with A/B tests, landing pages, viral factor, and email deliverability."

The timing was perfect. Facebook had reached hundreds of millions of users. Google commanded 90%+ of search traffic. For the first time, "superplatforms" gave marketers API-level access to massive audiences. The question shifted from "how do we tell our story?" to "how do we exploit the algorithm?"

Sean Ellis had coined "growth hacker" in 2010 out of necessity—traditional marketers couldn't replace him at startups because they lacked the technical skills his role required. What Ellis needed wasn't someone who understood brand positioning or creative campaigns. He needed someone whose "true north is growth" and who would "scrutinize everything by its potential impact on scalable growth." Marketing became a metrics problem. Creativity became overhead.

The results were spectacular. Dropbox's referral program drove 3,900% growth in 15 months—from 100,000 to 4 million users—by offering free storage to both referrer and referee. Airbnb reverse-engineered Craigslist (despite no public API) to tap into tens of millions of users. Facebook's growth team discovered that getting users to "7 friends in 10 days" was the key retention metric, then built everything around that single number. Hotmail had pioneered the approach in 1996 with "PS: I Love You. Get your free email at Hotmail" in every email footer, reaching 12 million users in 18 months.

These weren't marketing campaigns. They were engineering solutions to distribution problems.

When SEO became computer science

The mindset infected search optimization completely. By 2010, SEO had evolved from "make good content and get links" into a sophisticated reverse-engineering discipline. The goal was simple: figure out Google's ranking algorithm, then exploit it.

The tactics were brazen. Content farms like Demand Media pumped out 7,000 articles per day—equivalent to four English-language Wikipedias annually—paying writers $10-15 apiece to answer popular search queries in minutes, with titles like "Does Baking Soda Cure Cancer?" Quality didn't matter. Search volume did. By late 2009, Demand Media was a top-20 web property in the U.S. When they IPO'd in January 2011, they hit a $1.5 billion market cap with shares closing up 33% on day one.

Elsewhere, SEOs built Private Blog Networks—expired domains purchased for their authority, then used to create artificial backlinks. They stuffed keywords at 5-10% density (versus today's 1-2%). They bought footer links from hundreds of sites. They created doorway pages, cloaked content, and automated spam across thousands of forums. The industry developed sophisticated tools—Moz's Domain Authority scores, SEMrush's competitor analysis, Ahrefs' backlink crawler—all designed to measure and manipulate ranking factors.

Google's algorithm had over 200 ranking factors, but they were knowable, measurable, and therefore exploitable. PageRank was elegant mathematics: links were votes, voting weight came from the importance of the linking page, iterate until rankings stabilized. This made it a solvable problem. If you could measure backlink velocity, domain authority, keyword density, title tag optimization, and internal linking architecture, you could engineer your way to the top of search results.
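The iterate-until-stable mechanics described above can be sketched in a few lines. This is a toy illustration, not Google's production system; the three-page graph is invented, though 0.85 is the damping factor from the original PageRank formulation:

```python
# Toy PageRank: links are votes, weighted by the importance of the linking page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform importance
    for _ in range(iterations):  # iterate until rankings stabilize
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # each link passes a share of the linking page's importance
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, B links back to A.
scores = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
print(max(scores, key=scores.get))  # B collects the most "votes"
```

Because every quantity here is explicit and deterministic, an SEO who could estimate the link graph could predict—and manufacture—the output. That is precisely the property LLMs lack.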

The cat-and-mouse game was fierce. Google's Panda update in February 2011 hit 12% of all search queries—one of the largest algorithmic changes in history. Demand Media's traffic collapsed. They posted a $6.4 million loss in Q4 2011 versus a $1 million profit the year before. Unique visitors dropped 56% over the following year. Their market value evaporated. Google's Penguin update in April 2012 explicitly targeted link schemes. The Hummingbird update in 2013 shifted from keyword matching to semantic understanding. Exact Match Domain updates in 2012 killed the practice of ranking on keyword-rich domains alone.

But SEOs adapted. They built more sophisticated PBNs with better "footprint" hiding. They moved to guest blogging for links (until Google targeted that too). The arms race continued because the fundamental dynamic held: Google's algorithm, however complex, was ultimately a deterministic system following explicit rules. With enough data and testing, you could find the patterns and exploit them.

Why large language models break the game

Large language models operate on entirely different principles. They don't rank pages—they generate text. They don't follow rules—they recognize patterns. The distinction changes everything.

Traditional search works through PageRank and keyword matching. LLMs use transformer architectures trained on hundreds of billions of text tokens. During training, neural networks with billions of interconnected parameters adjust their connection weights to predict language patterns. These models convert words into numerical vectors (embeddings) where semantic similarity means mathematical proximity. "Puppy," "kitten," and "infant" cluster together not because someone programmed that relationship, but because the model learned it from patterns in training data.

The critical difference: LLMs have no explicit ranking factors to reverse-engineer. There's no list of 200 things to optimize. Knowledge is distributed across millions of neural weights learned through unsupervised pattern recognition. You can't optimize for "keyword density" because LLMs work in vector space, not keyword space. You can't build link schemes because LLMs prioritize brand mentions across trusted sources, not backlink counts. You can't manipulate meta tags because LLMs ingest entire pages and evaluate semantic clarity, not HTML elements.

When someone searches "best running shoes for beginners," an LLM doesn't match keywords to documents. It converts the query to a vector [0.87, -0.45, 0.63, -0.12], finds content with similar embeddings, and synthesizes an answer from multiple sources. If your content discusses "easy athletic footwear for novice runners" [0.85, -0.47, 0.60, -0.10], the model understands the semantic equivalence despite entirely different keywords. The system evaluates meaning, not signals.
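Using the toy four-dimensional vectors above (real embedding models produce hundreds or thousands of dimensions), the "semantic equivalence" check is just a distance measure in vector space—cosine similarity is the standard choice:

```python
import math

def cosine_similarity(a, b):
    """Directional closeness of two vectors: 1.0 means identical meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.87, -0.45, 0.63, -0.12]   # "best running shoes for beginners"
doc_a = [0.85, -0.47, 0.60, -0.10]   # "easy athletic footwear for novice runners"
doc_b = [-0.30, 0.90, 0.05, 0.70]    # unrelated content (invented for contrast)

print(cosine_similarity(query, doc_a))  # near 1.0: semantically equivalent
print(cosine_similarity(query, doc_b))  # far lower: different meaning
```

Note what's absent: no keyword counts, no link graph, no tag weights. The only thing the comparison rewards is content whose meaning genuinely matches the query.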

Self-attention mechanisms let transformers process entire sequences simultaneously, determining which parts of content are relevant to which questions. When evaluating "The animal didn't cross the road because it was too tired" versus "because it was too wide," attention mechanisms understand whether "it" refers to the animal or the road. This contextual sophistication makes simple gaming impossible.
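A minimal sketch of the scaled dot-product attention behind that disambiguation, with made-up two-dimensional vectors (real models use learned projections over hundreds of dimensions and many attention heads):

```python
import math

def softmax(xs):
    """Convert raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys):
    """Scaled dot-product attention weights for one query over a sequence."""
    d = len(query)
    # similarity of the query to every token's key, scaled by sqrt(d)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    return softmax(scores)

# Toy resolution of "it": hypothetical key vectors for "animal" and "road",
# and query vectors for "it" shaped by the "...too tired" / "...too wide" context.
keys = [[1.0, 0.1], [0.1, 1.0]]                # "animal", "road"
weights_tired = attention([0.9, 0.1], keys)    # context: "...too tired"
weights_wide = attention([0.1, 0.9], keys)     # context: "...too wide"
print(weights_tired)  # more weight on "animal"
print(weights_wide)   # more weight on "road"
```

Because these weights are computed fresh from the surrounding context of every sentence, there is no static "attention factor" a marketer could stuff or spoof.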

Perhaps most importantly, LLMs generate responses by synthesizing across their entire training corpus. They evaluate your content relative to billions of other documents, looking for patterns that indicate expertise, clarity, and authority. As one researcher noted: "You can't fake clarity, authority, or substance once the model is on the page." The evaluation is holistic, not algorithmic.

The return of brand building and narrative

Research from Hard Numbers found that traditional editorial media account for 61% of all AI-generated responses, and dominate especially when AI evaluates trustworthiness (65%) and brand value (72%). When AI systems need to recommend brands, they rely on the same signals humans do: reputation, consistency, credible mentions in authoritative sources.

Semrush discovered that visitors who find a brand through AI answers are 4.4 times more valuable than those from traditional search—they're pre-qualified by an AI "endorsement." Adobe Analytics reported that generative AI traffic to U.S. retail websites surged 1,200% between July 2024 and February 2025. Companies are adapting fast. Taskrabbit now optimizes press releases and executive quotes not just for media pickup but for AI summaries. Their VP of Marketing revealed that ensuring content "directly answers questions people may be asking AI chatbots" has become a core strategy.

The philosophical shift is captured perfectly in industry analysis that distinguishes "persuading algorithms" from "educating intelligence." In the SEO era, marketers manipulated ranking signals—keyword density, backlink profiles, title tags. In the AI era, they provide clear, structured, credible information that helps models understand and accurately represent brands.

Jason Barnard, a pioneer in what he calls "Answer Engine Optimization," frames it as treating AI "not as a simple list-maker to be influenced, but as an intelligence to be taught." The language is pedagogical: letting AI understand, communicating what you're about, providing structured facts. This isn't manipulation—it's building a coherent, consistent brand identity that AI systems can confidently recognize and recommend.

Kathleen Booth, SVP of Marketing at Pavilion, put it bluntly: "The increased saturation of AI-generated content is driving renewed interest and investment in the brand as a way to stand out. When everything looks the same, the power of a strong, authentic brand cannot be overstated."

This creates a strange inversion. The engineering era taught us to distrust brand building as unmeasurable and focus on concrete metrics: CTR, bounce rate, conversion rate, Domain Authority scores. But AI systems evaluate exactly what traditional brand building creates: genuine authority, clear positioning, consistent messaging, credible third-party validation. You can't engineer your way to that. You have to build it.

Full circle, but different

Marketing hasn't simply reverted to its pre-internet form. The tools are wildly different—we have real-time analytics, A/B testing, sophisticated measurement, API integrations. The technical foundation of modern marketing remains. What's changed is the strategy layer.

In the growth hacking era, technical sophistication was the competitive advantage. Whoever could reverse-engineer the algorithm fastest won. When growth teams could reach 10+ million consumers through Facebook's APIs, when exact-match domains could rank on domain names alone, when content farms could arbitrage Google's algorithm at scale, the winners were those who treated marketing as an optimization problem.

Google's algorithmic evolution and now AI's architectural differences have closed that window. You still need technical excellence—clean site architecture, semantic HTML, schema markup, fast load times. You still need data and measurement. But these are table stakes. The differentiator has shifted back to the pre-engineering fundamentals: Do you have a clear brand? Can you articulate authentic value? Do credible sources validate your expertise? Is your narrative compelling and consistent?

The irony is thick. We spent fifteen years building infrastructure to game algorithms, only to discover that the next generation of systems—the ones that matter more because they directly answer user questions rather than just ranking links—reward exactly what we optimized away. Brand. Authority. Trust. Story.

Marketing became an engineering problem when algorithms were simple enough to reverse-engineer. It's becoming marketing again because neural networks are too sophisticated to game. What looked like disruption was a detour. The strange loop closes where it begins—with the basic work of building brands worth recommending.

Keller Maloney

Unusual - Founder
