When someone has a buying question today—Which tools fit my stack? What's best for a team like ours?—they start by asking ChatGPT. This creates an odd new marketing problem: you're no longer competing for position on a static list. You're competing for preference inside a conversation.

The conversation typically goes like this. A buyer starts broad ("What are the best options for X?"), then narrows ("Which integrates natively with Microsoft 365?"), then narrows again ("Okay, which one is friendlier for non-technical teams?"). By the time a choice gets made, the model has accumulated context about the buyer's stack, constraints, and preferences. In that setting, showing up in the first response doesn't mean much. What matters is whether the model recommends you after the follow-ups—after it understands what the buyer actually needs.

Most companies stop at visibility. We think that's half the problem. The harder half is preference.

Fighting the last war: AEO/GEO

The emerging field of "AI search optimization"—sometimes called AEO or GEO—treats this challenge like SEO with a twist. Dozens of new tools track which prompts mention you. They measure your "share" of prompts relative to competitors. You "optimize" for long-tail prompts, like you would for long-tail keywords. It's useful as far as it goes. Knowing whether you show up matters. But it's a single-turn mindset applied to a multi-turn problem.

In traditional search, a query produces a ranked list and you fight to climb it. Chat works differently. The model isn't just ranking—it's reasoning. It asks follow-ups, accumulates context, then makes a single recommendation. The average ChatGPT conversation is eight messages long. Treating the first prompt like a final answer misses the entire middle of the process—the part where the buyer's constraints get specific and the model decides who actually fits.

Prompt share tells you about visibility. It tells you very little about what wins the recommendation when the buyer adds their real constraints.

Models as high-leverage influencers

Anyone who has tried more than one AI model—ChatGPT, Claude, Grok, Gemini—notices they have distinct personalities. Each one develops a kind of latent opinion about brands: who you're for, what you're good at, whether you're credible, how you compare to alternatives.

This is surprisingly good news. Models behave more like human reviewers than search engines. They don't need to be gamed; they need to be convinced. If you give them clear claims, real proof, and consistent corroboration across sources, they reward you with trust. In a conversational setting, trust looks like recommendations.

Models build their view of your brand from everything they can read: product docs, integration guides, comparison pages, case studies, reviews, developer threads, credible third-party sources. What consistently helps is clarity, specificity, and corroboration. Titles that say the literal thing. Claims that are concrete rather than aspirational. Proof that matches the claim. The more your owned properties—and the broader web around you—tell the same true story, the more confidently a model can retrieve and defend that story when a buyer pushes for detail.

Over time, this coalesces into a durable narrative: "great for startups but not enterprise-ready," "native Microsoft fit," "premium service justified by outcomes," "excellent for regulated industries but heavier to implement." Your job is to figure out which of these opinions you need to change, then change them with precision.

Why we call it AI Marketing

At Unusual, we built our entire method around a simple insight: if models form opinions like people, then the work of changing those opinions looks more like brand marketing than technical optimization. We call it AI Marketing—marketing to AI.

The sequence is straightforward: Survey → Plan → Deploy → Monitor.

Survey. We start with an "AI brand survey." How do models currently describe you, compare you, and recommend you across scenarios that matter? Where do they place you on axes like premium vs. budget, enterprise vs. startup fit, depth vs. simplicity? This surfaces opinion gaps—those frustrating, slightly-off assumptions that cost you recommendations without you realizing it.
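As a rough sketch, a survey like this can be expressed as a small prompt matrix: buyer scenarios crossed with the questions you want answered on each axis. The scenarios and wording below are illustrative, not our actual battery.

```python
from itertools import product

# Illustrative only: two hypothetical buyer scenarios and three
# survey questions, crossed into a battery of concrete prompts.
SCENARIOS = [
    "a 50-person startup choosing {category} tools",
    "an enterprise IT team evaluating {category} vendors",
]
QUESTIONS = [
    "How would you describe {brand} for {scenario}?",
    "Would you call {brand} premium or budget for {scenario}, and why?",
    "Would you recommend {brand} over its main alternatives for {scenario}?",
]

def build_survey(brand, category):
    """Expand every (scenario, question) pair into a concrete prompt."""
    prompts = []
    for scenario_tpl, question in product(SCENARIOS, QUESTIONS):
        scenario = scenario_tpl.format(category=category)
        prompts.append(question.format(brand=brand, scenario=scenario))
    return prompts

survey = build_survey("Parley", "legal drafting")
print(len(survey))  # 2 scenarios x 3 questions = 6 prompts
```

Run the same battery against each model you care about, and the answers become the baseline you measure opinion shifts against.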

Plan. From there, we choose the few opinions that matter most for your goals. If you need to be seen as enterprise-ready, the content strategy must show why, not assert it. If Microsoft-native workflow is your edge, the model needs to see what "native" means in literal terms: versions, file types, steps. We craft proof-rich arguments grounded in detail a model can parse and reuse.

Deploy. We publish on surfaces models can crawl reliably. Many teams opt for an AI-friendly subdomain (ai.brand.com) so they can iterate without risking their main site architecture or design language. Others prefer to keep everything in their CMS. Either way, the principle stays the same: make it easy for models to read. Use literal titles, clean structure, and link to corroborating sources so the model sees a coherent, consistent signal.
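In practice, "surfaces models can crawl reliably" starts with not blocking the crawlers. A minimal robots.txt for a dedicated subdomain might look like the sketch below; the user-agent tokens shown (GPTBot, ClaudeBot) are the ones vendors have published as of this writing, and you should verify them against each vendor's current crawler documentation before shipping.

```
# robots.txt on ai.brand.com — explicitly welcome AI crawlers.
# User-agent tokens change; check vendor docs before relying on these.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

Sitemap: https://ai.brand.com/sitemap.xml
```

The same logic applies in reverse: if your main site's robots.txt blocks these agents, your carefully written comparison pages never reach the model at all.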

Monitor. We measure recommendation share in representative scenarios. We track referral growth. We watch for opinion deltas—movement in the specific assertions that matter to you. When the model's view shifts, we refresh the content. The work compounds.

What actually moves the needle

A model can't recommend you confidently if it can't explain why. The assets that change minds are the ones that make explanation easy:

Head-to-head comparisons that are fair, literal, and honest about tradeoffs. When should someone pick you? When shouldn't they?

Fit narratives that specify who you're for and who you're not for—by company size, stack, and use case—so the model can match you to the right buyer.

Integration proof that spells out "native" in specifics: the objects, formats, versions, and steps that remove ambiguity.

Credibility assets that reduce uncertainty: customer logos and quotes, audits, security documentation, implementation timelines.

Pricing logic that explains the "why" behind premium or budget positioning. Models repeat reasoning, not just numbers.

None of this is about gaming a system. It's about putting the best, most legible version of the truth where the model can find and reuse it.

A case in point

One of our first customers is a legal-tech startup called Parley, which builds AI tools for immigration lawyers. The following conversation pattern happened repeatedly after we started working with them.

A prospective buyer would start with a broad question about visa-application drafting tools. ChatGPT would sketch the landscape and name a few familiar products. Parley wasn't in that first sweep—typical in a crowded category where general answers gravitate toward well-known names.

Then the buyer would add the constraint that mattered to them: they wanted to draft directly inside Microsoft Word. On that follow-up, the model would search the web, find Parley's AI-optimized page, then pivot. It recommended Parley's Word add-in and described, in plain terms, how it fits a legal team's day-to-day drafting workflow.

What changed wasn't Parley's brand awareness. It was the presence of clear, crawlable content that spelled out exactly that Word-native use case. Parley had a page that literally framed the question—Word-native immigration drafting vs. general assistants—and backed it with concrete details. Faced with a specific constraint and specific proof, the model revised its recommendation.

The first answer gets you into the conversation. The second question—anchored to a real constraint—decides the recommendation. Visibility earns a mention. Opinion, supported by precise, verifiable content, earns the win.

Different starting points, same underlying logic

Startups fight to be seen at all. They often need to earn inclusion in the third-party sources models consult in those initial searches—places like G2 and credible Reddit threads. Once routinely visible, they typically need to escape the "toy software" label with concrete proof: credible customers, implementation timelines, compliance documentation, platform-grade signals.

Enterprises are visible by default. Their challenge is fit clarity. Without careful stewardship, models can default to "legacy" or "overkill" for many contexts.

In both cases, the work is the same: pick two or three opinions that most block recommendations and concentrate on moving them with specific, corroborated content.

Why honesty compounds

You can't bluff your way to durable recommendations. Models triangulate. If you oversell an integration, the rest of the web—or your own documentation—will contradict you. The model treats this like any human would: as a negative signal.

The right strategy is simpler and more powerful. Publish the clearest, most honest articulation of where you shine. Name the tradeoffs. Show your work. Align the model's opinion with reality, and you'll earn both AI trust and human trust that compounds over time.

Where to publish

Models read the open web. You'll get the best results where crawling is reliable, iteration is fast, and claims stay consistent across properties.

A dedicated subdomain (ai.brand.com) is often the most pragmatic path. You can ship and update quickly as the model's understanding evolves, and it keeps AI-optimized content separate from your human-facing pages and search-engine-optimized content. If you prefer centralized management, that works too.

The non-negotiables are literalness and consistency. Title pages for what they are. Keep structure clean. Make sure your claims agree with your docs, product UI, and third-party mentions.

Measuring what matters

If the goal is recommendations, measure recommendations.

We track recommendation share—in realistic scenarios, how often does the model choose you? We monitor referral growth—the pipeline influenced by model-initiated journeys. We watch opinion deltas—are the model's key assertions shifting in the direction you want?
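Conceptually, the math is simple. A minimal sketch, assuming you log which brand a model recommended in each surveyed scenario and score key assertions between survey rounds (the records and scores below are made up for illustration):

```python
# Hypothetical sketch: recommendation share and opinion deltas
# computed from logged survey results.

def recommendation_share(records, brand):
    """Fraction of surveyed scenarios in which `brand` won the recommendation.
    Each record is a (scenario, recommended_brand) pair."""
    if not records:
        return 0.0
    wins = sum(1 for _, recommended in records if recommended == brand)
    return wins / len(records)

def opinion_delta(before, after):
    """Per-assertion change between two survey snapshots, where each
    snapshot maps an assertion (e.g. 'enterprise-ready') to a 0-1 score."""
    return {k: round(after[k] - before[k], 2) for k in before}

records = [
    ("word-native drafting", "Parley"),
    ("broad legal AI tools", "CompetitorA"),
    ("word-native drafting", "Parley"),
    ("budget option", "CompetitorB"),
]
print(recommendation_share(records, "Parley"))  # 0.5
```

The hard part isn't the arithmetic; it's running the same scenarios consistently enough, across models and over time, that the deltas mean something.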

Where feasible, we also add lightweight conversation diagnostics: intake notes or self-reports that capture how often a buyer heard about you from an AI assistant. It's imperfect but directional, and it keeps teams focused on the right work.

How we work

Some teams want software that helps them diagnose opinion gaps, plan narratives, generate model-friendly assets, and keep them fresh over time. Others prefer full-service—we run the research and analysis, produce and maintain the content, and report on recommendation share and referral lift.

Either way, the aim is the same: change the model's opinion in the places it matters, then keep it aligned as your product and market evolve.

Where to start

The first step of any good sales process is understanding the target customer. We take the same approach to "selling" to AI models.

In a short working session, we can diagnose how LLMs currently think and talk about your brand. Then we identify at least one opinion you'd like to change that will meaningfully improve your AI presence. From there, the work is concrete.

Book a consult

Unusual helps brands understand—and change—the way AI models think and talk about them. On average, customers grow their AI referrals roughly 40% month over month.

Keller Maloney

Unusual - Founder
