The two statistics that tell the whole story
Two numbers surface the core tension in AI search content strategy.
The first: Perplexity cites content updated in the last 30 days at roughly twice the rate it cites content older than a year. The second: Ahrefs analyzed 17 million citations and found AI-cited content is 25.7% fresher than organic Google results, but the average cited URL is still roughly three years old.
Both are true simultaneously. Which means AI search engines aren't uniformly biased toward fresh content or toward evergreen; they split the work by query type. Recency-implying queries get fresh sources. Stable-demand queries get older, more authoritative ones. This single mechanical fact, once internalized, rewrites your content calendar.
Key takeaway: The content type doesn't determine citation odds in AI search. The query does. Your job is to match content type to query type, not to pick a single format and apply it universally.
Three content types, reframed by how AI retrieves them
Editorial tradition defines evergreen, seasonal, and trending by publication intent. AI search retrieval defines them by query behavior. The difference is strategic.
Try Passionfruit Labs free → Track citation share across ChatGPT, Perplexity, Gemini, and AI Overviews.
| Content type | Query signal | Example queries | 5-year demand curve |
|---|---|---|---|
| Evergreen content | No recency implied | "what is retrieval-augmented generation," "how to set canonical tags" | Flat |
| Seasonal content | Calendar-anchored | "Black Friday deals 2026," "Q4 budget planning," "tax deduction changes" | Recurring spikes |
| Trending content | Recency-mandatory | "GPT-5 release," "latest Google algorithm update" | Sharp spike, fast decay |
Google has described this logic in its own documentation for years. Per Google's ranking systems guide, Google operates various "query deserves freshness" systems designed to show fresher content for queries where it would be expected. The 2011 Freshness Update formally named three categories where recency matters: recent events, regularly recurring events, and frequently updated topics. LLMs apply the same logic with tighter windows and sharper penalties.
How freshness is weighted across ChatGPT, Perplexity, Gemini, and AI Overviews
The aggregate 25.7% freshness premium hides big platform-level differences. Here's how each behaves in practice:
| Platform | Freshness sensitivity | Sweet spot | Notes |
|---|---|---|---|
| Perplexity | Highest | Content ≤30 days old | Freshness penalty kicks in at 60–90 days; ~47% of top citations from Reddit |
| ChatGPT | High but mixed | Recent + authoritative | Cites URLs 393–458 days newer than organic Google results; rewards micro-authority |
| Google AI Overviews | Moderate | Inherits Google signals | Slightly older content than ChatGPT prefers; multimodal content over-indexes |
| Claude | Moderate | Structured pages with clear heading hierarchy | ~44% of citations go to blogs; structural signals beat date |
| Gemini | Moderate | Recent + authoritative | Freshness penalty is real but not as steep as Perplexity's |
Aggregated distribution across platforms:
~65% of AI citations go to content published in the last year
~80% to content in the last two years
~90% to content in the last three years
The ~10% long tail is dominated by Wikipedia, government domains, and category-defining reference works
Pro tip: If your primary AI visibility concern is Perplexity, run a quarterly refresh on Tier 1 pages. If it's Google AI Overviews, prioritize E-E-A-T signals and multimodal assets instead of chasing freshness.
Where seasonal content earns its keep
Seasonal content produces the highest citation return per content hour in AI search. The reason: seasonal queries are almost always comparative ("best Cyber Monday deals for project management software") or explicitly time-anchored ("2026 federal tax deduction changes"). Both are patterns LLMs disproportionately cite; comparative queries earn roughly three times the AI visibility of how-to queries in the same category.
Three rules separate seasonal content that wins from seasonal content that wastes the opportunity:
Rule 1: Publish at 15% of peak search volume, not at peak. AI crawlers need lead time to index, and citation signals need weeks to accumulate. Publishing in peak week means competing with fresh drops for a citation window that's already closing. For a November peak, September publishes consistently out-cite November publishes.
Rule 2: Update one perennial URL instead of creating a new one annually. This is the single biggest seasonal content mistake. A /holiday-gift-guide-runners/ URL, refreshed yearly, compounds citation history on one canonical page. Separate /2023-…/, /2024-…/, /2025-…/ URLs each start from zero every year. AI search engines have no memory that they're the same asset.
Rule 3: Put explicit year entities in body copy and schema. An H2 reading "What changed for Black Friday 2026" plus an Article schema with a current dateModified does more than a generic title. Entity recognition is how retrieval systems resolve temporal context.
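As an illustration, here's roughly what that markup could look like, generated in Python. The headline, URL, and dates below are placeholders, not a prescription; the load-bearing parts are the explicit year entity in the headline and an honest dateModified.

```python
import json
from datetime import date

# Illustrative Article JSON-LD for a perennial seasonal page. All values are
# placeholders; emit the output inside a <script type="application/ld+json">
# tag on the page itself.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Holiday Gift Guide for Runners: What Changed for Black Friday 2026",
    "url": "https://example.com/holiday-gift-guide-runners/",
    "datePublished": "2023-10-01",
    # Only bump dateModified when the body has substantially changed;
    # cosmetic date edits get detected and treated as stale (see below).
    "dateModified": date.today().isoformat(),
}

print(json.dumps(article_schema, indent=2))
```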
Common mistake: Treating seasonal content as trending content — scrambling in peak week, losing to competitors who started eight weeks earlier on citation accumulation they could have owned.
Trending content and the 30-day citation window
Trending content has the highest ceiling and the steepest cliff in AI search. A well-timed post on a breaking topic can capture 40–50% citation share on day two and be invisible by day forty-five.
Three non-obvious mechanics matter:
IndexNow is the trending cheat code: For Perplexity (via Bing) and ChatGPT (via its Bing integration), submitting URLs through IndexNow drops crawl time from 48–72 hours to under 24 (a minimal submission sketch closes this section). On a 30-day window, losing 2–3 days to crawl lag is losing 10% of the opportunity.
Backlinks arrive too late to matter: Link accumulation runs on a weeks-to-months timeline. Trending battles are decided in days. What matters instead: clean schema, parseable structure, a recognized author entity, and existing brand mentions in the retrieval corpus.
Watch for "ghost citations:" A piece can be cited heavily while the brand goes unmentioned — the AI uses the content as reference material to recommend competitors. Trending content is especially prone to this because retrieval systems reach for the freshest available source even when brand association is weak.
The hard question: Is your brand entity strong enough that a citation converts to a recommendation? If the answer is no, trending content subsidizes competitors. Don't publish it.
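Here's the minimal IndexNow submission sketch promised above. It follows the public protocol documented at indexnow.org; the host, key, and URL are placeholders, and the key must match a verification text file hosted on your own domain.

```python
import requests

# Submit freshly published URLs via the IndexNow protocol (indexnow.org),
# which Bing consumes and which Perplexity and ChatGPT inherit via Bing.
ENDPOINT = "https://api.indexnow.org/indexnow"
HOST = "example.com"          # placeholder: your domain
KEY = "your-indexnow-key"     # placeholder: your generated key

payload = {
    "host": HOST,
    "key": KEY,
    "keyLocation": f"https://{HOST}/{KEY}.txt",  # key file you host for verification
    "urlList": ["https://example.com/news/breaking-topic-analysis"],
}

resp = requests.post(ENDPOINT, json=payload, timeout=10)
print(resp.status_code)  # 200 or 202 means the submission was accepted
```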
Evergreen is the infrastructure (and why it decays faster than you think)
The compounding case for evergreen content is still real. Foundational guides and pillar pages dominate the older-than-two-years slice of AI retrieval and drive the bulk of long-term organic ROI.
What's changed is the decay curve. Content that stayed relevant for 24–36 months in traditional search now feels outdated in 6–9 months in AI search, especially for anything with a current-state dimension (pricing, tool comparisons, best-of lists, market data). Perplexity's freshness penalty kicks in around 60–90 days. Even ChatGPT, the most forgiving major platform, prefers meaningfully updated content for queries where underlying reality could have shifted.
Google's Helpful Content guidelines sharpen the point. Google explicitly asks creators whether they're changing the date of pages to make them seem fresh when the content has not substantially changed and frames that practice as working against the site, not for it. AI search engines apply the same logic more aggressively: if the text hasn't changed, retrieval systems detect the mismatch and treat the page as stale regardless of the modified date.
Tiered refresh cadence (recommended):
| Tier | Content type | Refresh cadence | Update depth |
|---|---|---|---|
| Tier 1 | >20% of organic revenue, or 3+ prompt categories | Quarterly | Full data refresh, new examples, expanded FAQs, fresh outbound links |
| Tier 2 | Mid-traffic pillar pages | Biannual | Stats and examples updated, light structural revisions |
| Tier 3 | Long-tail support content | Annual | Date accuracy, broken-link fixes, terminology |
Match refresh frequency to topic decay rate, not to a uniform calendar:
Technology tutorials: 30–40% annual decay
Best-of / comparison lists: 25–35%
Statistical guides: 15–25%
Foundational how-tos: 5–10%
"How to tie a tie" can sit for three years. "Best AI content tools in 2026" is stale in six months.
The 60/30/10 portfolio (and when to break it)
For most B2B and DTC brands, a 60/30/10 split is the right starting baseline:
60% evergreen content — pillar pages, category-defining reference content, tool comparisons, foundational how-tos. Long-term AI citation durability lives here.
30% seasonal content — perennial URLs updated annually, launched at 15% of peak demand.
10% trending content — published fast, tied to topics with existing brand entity authority.
Three adjustment rules bend this baseline:
Rule 1 — comparative-heavy category. If >70% of AI citations in your category go to comparative queries ("best X for Y," "X vs. Y"), shift 10 points from evergreen to seasonal. Seasonal is structurally comparative and over-indexes in comparative retrieval.
Rule 2 — weak brand entity. If branded search volume is flat, third-party presence is thin (Reddit, G2, Trustpilot, industry publications), cut trending to 0% until entity strength catches up. Trending without entity strength funds competitors via ghost citations.
Rule 3 — fast-moving technical category. For AI tools, crypto, developer platforms: evergreen drops to ~40%, trending climbs to ~25%. In these categories the "evergreen" topic itself moves too fast for the traditional split.
Decision tree (text version):
Start at 60/30/10, then apply the rules in order:
>70% of category citations go to comparative queries? Shift 10 points from evergreen to seasonal (50/40/10).
Branded search flat and third-party presence thin? Cut trending to 0% until entity strength catches up.
Fast-moving technical category? Move to ~40% evergreen, ~35% seasonal, ~25% trending.
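The same logic as a compact Python sketch. The thresholds and splits come from the three rules; reallocating cut trending points back to evergreen is our assumption, since the article leaves the destination open.

```python
def portfolio_split(comparative_share: float,
                    weak_brand_entity: bool,
                    fast_moving_category: bool) -> dict[str, int]:
    """Apply the three adjustment rules to the 60/30/10 baseline."""
    split = {"evergreen": 60, "seasonal": 30, "trending": 10}

    if fast_moving_category:            # Rule 3 replaces the baseline outright
        split = {"evergreen": 40, "seasonal": 35, "trending": 25}
    elif comparative_share > 0.70:      # Rule 1: shift 10 points to seasonal
        split["evergreen"] -= 10
        split["seasonal"] += 10

    if weak_brand_entity:               # Rule 2: trending funds competitors
        split["evergreen"] += split["trending"]   # reallocation is our assumption
        split["trending"] = 0

    return split

print(portfolio_split(0.8, weak_brand_entity=True, fast_moving_category=False))
# -> {'evergreen': 60, 'seasonal': 40, 'trending': 0}
```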
Measurement split:
Track evergreen content via AI citation frequency over rolling 90-day windows.
Track seasonal content by citation share within the peak 14-day window.
Track trending content by day-1-to-day-30 citation velocity.
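A minimal sketch of those three measurements, assuming a daily citation log per page (a {date: count} mapping, however you collect it, manually or via a tool):

```python
from datetime import date, timedelta

def rolling_citations(log: dict[date, int], end: date, window: int = 90) -> int:
    """Evergreen metric: total citations in a rolling window ending at `end`."""
    start = end - timedelta(days=window)
    return sum(n for d, n in log.items() if start < d <= end)

def peak_window_share(ours: dict[date, int], everyone: dict[date, int],
                      peak_start: date) -> float:
    """Seasonal metric: our share of all citations in the 14-day peak window."""
    end = peak_start + timedelta(days=14)

    def in_window(log: dict[date, int]) -> int:
        return sum(n for d, n in log.items() if peak_start <= d < end)

    total = in_window(everyone)
    return in_window(ours) / total if total else 0.0

def citation_velocity(log: dict[date, int], published: date) -> float:
    """Trending metric: citations per day over days 1-30 after publication."""
    cited = sum(n for d, n in log.items()
                if published < d <= published + timedelta(days=30))
    return cited / 30
```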
Common misconceptions
"Evergreen means no updates." Not in AI search. Evergreen content in 2026 is a product with an SLA, not a post-and-forget artifact.
"Freshness = changing the modified date." Google and major LLMs detect this. Surface-level date edits can actively demote recency-sensitive pages.
"Trending content is risk-free traffic." Not if your brand entity is weak. Ghost citations subsidize competitors.
"All platforms treat freshness the same way." Perplexity penalizes aggressively at 60–90 days. Google AI Overviews barely penalizes at all. Strategy should differ by platform priority.
"More content = more AI citations." Volume without maintenance = flat citation graphs. We see this constantly.
Get a GEO audit → 50-URL AI visibility diagnosis within 10 business days.
Current trends in AI search (2026)
Platform fragmentation is widening. Citation overlap between ChatGPT and Perplexity for the same query is under 15%. Single-platform measurement is no longer enough.
Entity authority is eating keyword optimization. LLMs evaluate topical depth and credibility directly; backlink-centric signals are losing weight.
Multimodal content over-indexes in Google AI Overviews. Video, images with clean alt text, and diagrams earn disproportionate citation share.
Reddit and G2 are infrastructure now. Third-party platforms feed AI retrieval heavily — especially Perplexity. A thin presence there caps AI visibility regardless of site quality.
Troubleshooting & edge cases
| Symptom | Likely cause | Fix |
|---|---|---|
| Flat citation graph despite publishing volume | Evergreen decay outpacing new publishes | Pause new content. Run a Tier 1 refresh on top 10 revenue pages. |
| Trending posts get cited, brand never named | Weak brand entity → ghost citations | Cut trending. Invest in branded search, Reddit, G2, earned media. |
| Perplexity citations drop sharply at ~90 days | Freshness penalty kicking in | Quarterly update cadence for Perplexity-priority pages. |
| Seasonal URL performs worse each year | Creating a new URL annually instead of updating one | Consolidate to a perennial URL; 301 prior years. |
| ChatGPT cites but Google AI Overviews doesn't | Strong content, weak E-E-A-T / authority signals | Prioritize author bylines, expert quotes, multimodal assets. |
| Content refresh didn't move citations | Cosmetic edit (date only, no substance change) | Republish with ≥25% new material: data, examples, sections. |
When not to use this framework:
Pre-product-market-fit brands with <10 content assets. Build foundation first; portfolio logic is premature.
Regulated industries where publication timing is locked (pharma, finance, legal). Work with compliance first.
Brands with zero current AI search visibility. Fix technical crawlability before worrying about the split.
Talk to an expert → Book a 30-minute diagnostic call.
How this framework compares to alternatives
| Approach | What it gets right | What it misses | Best for |
|---|---|---|---|
| 60/30/10 portfolio (this piece) | Matches content type to query behavior in AI search | Requires disciplined measurement across platforms | Most B2B / DTC brands |
| Pure evergreen compounding | Lower operational overhead | Leaves seasonal + trending citations on the table | Small teams with <5 content hours/week |
| Trend-chasing / news-jacking | Fast citation spikes | High ghost-citation risk for weak brands | Brands with established entity strength |
| Pillar-cluster only | Strong topical authority | Poor seasonal comparative visibility | Technical documentation / SaaS docs |
| Platform-specific optimization (Perplexity-first) | Highest freshness returns | Single-platform risk as landscape shifts | Brands where Perplexity drives >40% of AI traffic |
When the 60/30/10 framework is the wrong choice: If your category has zero seasonal behavior (e.g., pure B2B SaaS with no recurring buying cycles), drop seasonal to 10–15% and reallocate to evergreen. Don't force seasonal content where no seasonal demand exists.
How Passionfruit implements this
Most content teams already know their calendar needs to change for AI search. The hard part is diagnosing what's already broken — which evergreen pages are decaying, which seasonal URLs are fragmenting citation history, whether brand entity strength can support trending content at all.
Passionfruit runs a three-part engagement built exactly around this framework:
1. AI search readiness audit. We pull your top 50 URLs, score each on freshness decay risk, entity strength, and platform citation presence across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Deliverable: a tiered list with refresh priorities and a portfolio diagnosis (where your current 60/30/10 actually sits).
2. 90-day rollout:
Weeks 1–4 (setup): Audit complete, Tier 1 pages identified, refresh calendar built, measurement baselines locked across all four major platforms.
Weeks 5–8 (adoption): First Tier 1 refresh cycle shipped, IndexNow submission pipeline live, seasonal campaign runway started for the next peak.
Weeks 9–12 (optimization): Citation velocity reviewed, portfolio split adjusted against the three decision-tree rules, Tier 2 refresh begins.
3. Passionfruit Labs (self-serve). For teams that want to track AI citation share across platforms themselves, Labs provides a prompt-testing infrastructure with weekly citation monitoring — so you can watch the portfolio work without a full engagement.
Proof points from prior work (named clients, real numbers):
Typsy Beauty: 14x ROI on organic investment across SEO + GEO
Necesera: +31% purchases from organic, driven by seasonal perennial-URL consolidation
A kitchen appliances brand: +20% non-branded organic revenue after a Tier 1 refresh cycle
When Passionfruit is not a fit:
Teams with <$50k/year content budgets (DIY with the framework above)
Pre-launch brands with no existing content library
In-house teams that already have a dedicated GEO lead (we complement; we don't replace)
FAQs
How often should I update evergreen content for AI search?
Match cadence to decay rate, not to a calendar. Tier 1 pages (top revenue drivers) deserve quarterly updates. Tier 2 pages, biannual. Tier 3, annual. For fast-moving technical topics, compress Tier 1 to every 60–75 days because Perplexity's freshness penalty starts around 60–90 days.
Does seasonal content compound year-over-year in AI search?
Yes, but only if you update one perennial URL. Creating a new URL each year ("2024-holiday-guide," "2025-holiday-guide") resets citation history annually. One consolidated URL accumulates link equity, citation signals, and entity associations across years.
Why did my content stop getting cited in Perplexity after a few months?
Perplexity's freshness penalty is the steepest of the major platforms. Citations drop significantly on content older than 60–90 days unless the page is either earning new backlinks consistently or being meaningfully updated. A full data refresh typically restores citation share within 3–4 weeks.
Should small teams publish trending content if brand authority is weak?
No. Trending content without brand entity strength creates "ghost citations" — your content gets cited, but competitors get recommended. Invest in branded search demand, third-party presence (Reddit, G2, Trustpilot), and earned media first. Add trending content once branded search volume is climbing without paid support.
What's the difference between AI citations and AI recommendations?
A citation is a URL in an AI-generated answer. A recommendation is the brand being named as a solution. They don't track together: Seer Interactive documented one brand cited 100+ times in 25 days with zero brand mentions. Track both separately.
How much should a GEO-focused content strategy cost?
For in-house teams, expect 20–30% of existing SEO budget redirected to AI-specific work (content refreshes, schema, visibility tracking, third-party citation building). For agency engagements, diagnostic audits typically run $5k–$15k; ongoing retainers $5k–$25k/month depending on scope and portfolio size.
Can AI-generated content earn AI citations?
Yes, with a caveat. AI-assisted content performs fine when humans add original data, first-hand experience, and expert review. Pure AI-generated content without editorial oversight loses on both E-E-A-T signals and entity association strength. The winning formula is AI-for-efficiency plus human-for-depth.
Which platform should I prioritize first?
Depends on where your audience is. B2B buyers skew toward ChatGPT and Google AI Overviews. Researchers and analysts skew toward Perplexity. Creators and developers skew toward Claude. Run a one-month manual audit across all four with your top 20 category queries before committing a budget.
Is traditional SEO still worth investing in?
Yes. Google still drives the majority of organic traffic for most categories, and ChatGPT's Bing integration inherits many traditional SEO signals. The shift isn't away from SEO — it's toward SEO + GEO as parallel investments measured separately.
What's the fastest way to test this framework?
Start with a two-column audit: your top 20 URLs by traffic, and whether each is currently cited in ChatGPT and Perplexity for its intended query. The gap between ranking and citation is your priority refresh list.
How do I measure AI citations without an enterprise tool?
Run 20 category queries manually each month across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Log mentions, citations, position, and sentiment in a spreadsheet. Compare month-over-month. Three months of manual audits teaches you what to pay for when you're ready to automate.
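One way to keep that spreadsheet consistent is to append rows from a script. A minimal sketch (the CSV fields are illustrative, not a standard):

```python
import csv
from datetime import date

FIELDS = ["date", "platform", "query", "mentioned", "cited", "position", "sentiment"]

def log_audit_row(path: str, platform: str, query: str, mentioned: bool,
                  cited: bool, position: int | None, sentiment: str) -> None:
    """Append one query/platform observation to the audit CSV."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:  # empty file: write the header first
            writer.writerow(FIELDS)
        writer.writerow([date.today().isoformat(), platform, query,
                         mentioned, cited, position or "", sentiment])

# Example: one observation from a monthly Perplexity run
log_audit_row("ai_audit.csv", "Perplexity", "best project management software",
              mentioned=False, cited=True, position=2, sentiment="neutral")
```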
Does trending content hurt evergreen performance?
Only if you're using the same URL for both. Keep trending content on dated slugs (/news/ or /updates/) separate from evergreen pillars. This prevents freshness penalties on trending from bleeding into evergreen rankings — and prevents evergreen decay from pulling down your recent-news visibility.