From audit to first AI citation in 14 days — what actually changes on your site

A question we hear at the decision point, almost always: "If we begin now, when do we see something change?"

Fair question. It deserves a fair answer. AI search optimization is not a switch you flip. Citations in ChatGPT, Perplexity, and Google AI Overviews do not appear on a fixed schedule, and any company that promises a specific outcome by a specific date is, with respect, selling something else. What we can describe, honestly, because we have watched it unfold on the kind of small-business site we audit weekly, is the typical pattern of the first fourteen days. The work that gets done. The signals that appear. And the things that simply do not change in two weeks, no matter how much one might wish them to.

Day 1 — The audit and the scope

Everything begins with the standings audit. Sixty seconds of input, and the Licheo agents return a picture of where the site currently stands: which AI surfaces can read it, which schema is missing, which pages are citation-eligible and which are not, what the editorial layer looks like, what the Google Business Profile is missing. It is not a vanity report. It is a working document, the kind you can open on a Monday morning and act on by Tuesday.

From this audit, scope is set. Not every page is worth rewriting in the first two weeks. Typically, we identify the three or four service pages with the most potential to become citation sources, the location signals that need tightening, and the one editorial piece that the owner or a practitioner can authentically author. The scope, in the end, is small, focused, and honest. By the evening, both sides know what the next thirteen days will look like. No mystery, no fog.

Days 2–3 — The technical foundation

This is where the heavy lifting happens early, because it must. Without a clean technical foundation, every later improvement sits on sand.

Schema is injected first. LocalBusiness for the entity itself. Service schema for each primary service page. Organization schema with the proper sameAs links pointing to LinkedIn, GBP, industry directories, anywhere the entity legitimately appears. FAQPage schema on pages where genuine Q&A exists, never invented. If there is a leadership team, Person schema for the principals, with credentials, certifications, and a real biography. We validate everything through Google's Rich Results Test and Schema.org's official validator before it goes live; one malformed JSON-LD block can quietly poison the whole graph.
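To make this concrete, here is a minimal sketch of the kind of LocalBusiness block injected in this step, built as a Python dict and serialised to JSON-LD. Every business detail below (name, address, phone, sameAs URLs) is a placeholder, not a real client:

```python
import json

# Minimal LocalBusiness JSON-LD sketch -- all business details are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "url": "https://www.example.com",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
        "addressCountry": "US",
    },
    # sameAs ties the entity to its other legitimate profiles.
    "sameAs": [
        "https://www.linkedin.com/company/example-plumbing",
        "https://www.yelp.com/biz/example-plumbing-springfield",
    ],
}

# This string goes inside a <script type="application/ld+json"> tag in the page head.
json_ld = json.dumps(local_business, indent=2)
```

The same dict-then-dumps pattern extends to the Service, Organization, and Person blocks described above, and the output is what gets run through the validators before going live.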

The llms.txt file is published next, at the root of the domain. This is a small but increasingly important signal, originally proposed by Jeremy Howard in September 2024: a plain-language index that tells AI crawlers what the site is about, where the authoritative pages live, and what content is canonical. It is not a magic file. It does not force anyone to cite you. But it removes ambiguity, and AI systems reward unambiguous sources.
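As a sketch of what gets published, an llms.txt following the markdown-based format of Howard's proposal might look like this; every name and URL below is a placeholder:

```
# Example Plumbing Co.

> Family-run plumbing company serving Springfield, IL since 1998. Emergency
> repairs, drain cleaning, and water heater installation.

## Services
- [Drain cleaning](https://www.example.com/services/drain-cleaning): process, pricing, and timelines
- [Water heaters](https://www.example.com/services/water-heaters): repair versus replacement guidance

## About
- [Meet the team](https://www.example.com/about): licensed plumbers with bios and credentials
```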

The robots.txt is then cleaned. We have seen, more than once, well-meaning sites that accidentally block GPTBot, PerplexityBot, ClaudeBot, or Google-Extended without realising it. Usually the culprit is a security plugin that decided, on its own initiative, that "bot" means "bad." If you are blocking the crawlers, you are not getting cited. Simple as that. By the end of Day 3, the site is technically legible to the AI surfaces that matter. Foundational work. Not glamorous.
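Once the cleanup is done, the relevant section of robots.txt is short and explicit. A minimal sketch of the rules that make the site legible to the crawlers named above:

```
# Explicitly welcome the AI crawlers that matter for citations.
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```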

Days 4–7 — The page rewrites

Now the writing begins. Four days of focused work on the service or practice pages that the audit flagged as highest leverage.

What changes on these pages? Several things at once.

The opening paragraph is rewritten to be citation-ready: a clear, self-contained answer to the question the page implicitly answers. Not a marketing preamble. A direct, useful paragraph that an AI model could pull verbatim and present as a recommendation. We have written elsewhere about what makes a paragraph citation-ready, and the principle is the same here. Clarity, specificity, and the absence of fluff.

Sub-sections are added with question-shaped H2s. The questions are real ones, drawn from actual customer conversations the owner has had, not invented to game an algorithm. The truth is, when we ask an owner "what do clients ask you in the first phone call?", the list of five or six questions that comes back is almost always better than anything we could have written from outside the business. Below each H2, a focused answer of two to four sentences. This structure mirrors the way AI systems retrieve and quote: passage by passage, not page by page.

An FAQ block is added at the bottom of each rewritten page. Not five generic questions. Three to six specific questions that genuinely come up in the business, with answers that contain concrete details. Process. Timeline. What to expect. What is included, what is not. FAQPage schema wraps the block, so the structure is machine-readable as well as human-readable.
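Wrapped in FAQPage schema, one such question looks like this in JSON-LD; the question and answer below are hypothetical examples, not client copy:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a typical water heater replacement take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most residential replacements take three to five hours, including removal of the old unit. We confirm the timeline during the initial call and arrive with the replacement unit on the scheduled day."
      }
    }
  ]
}
```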

Internal links are added between the rewritten pages and the rest of the site, so the entity graph tightens. By Day 7, three or four pages have moved from generic marketing copy to genuinely citable sources.

Days 8–10 — Google Business Profile and the review layer

Now the work shifts outward, beyond the website itself.

The Google Business Profile is brought to completion. Every field filled: categories (primary and secondary), services with descriptions, attributes, hours, holiday hours, service area where relevant. Photos are reviewed. Enough of them, recent ones, varied, with descriptive filenames (not "IMG_8472.jpg"). The Q&A section is seeded with three or four real questions and proper answers from the business, rather than left to whatever a passerby might post on a Sunday afternoon.

Review-response automation is configured. Not bot replies. Automation here means alerting the owner the moment a review arrives, with a draft response ready for human approval. Speed matters; specificity matters more. Reviews that receive thoughtful, personalised responses signal a healthy entity to both Google and the AI surfaces that pull from Google's index.

Directory gaps are corrected. Bing Places, Apple Business Connect, Yelp, and the relevant industry directories (Avvo for lawyers, Healthgrades for clinicians, Houzz for trades, and so on). The goal is consistency: same name, same address, same phone, same description, everywhere the entity legitimately appears. By Day 10, the off-site signals match the on-site signals.

Days 11–14 — The editorial layer

This is the part most small businesses skip, and it is precisely what separates the sites that get cited from the sites that do not.

In the final stretch, the first owner-authored or practitioner-authored article is published. Not a ghostwritten generic post. A real piece, on a real topic the practitioner knows from experience, with a real byline, a real photograph, real credentials, and Person schema connecting all of it. The article addresses a question the business has answered, in person, dozens of times. The voice is the practitioner's voice. The expertise is verifiable.

Why does this matter so much in the first fourteen days? Because AI systems weigh authorship signals heavily. A site with anonymous marketing copy is one of thousands. A site with a named practitioner who has demonstrably written about a topic, on the other hand, becomes a preferred source. Google's E-E-A-T framework (the extra E added in December 2022 for "Experience") was the public signal, and the AI surfaces that pull from Google's index inherited the same bias.

Alongside the article, citation tracking is initialised. We begin querying ChatGPT, Perplexity, and Google AI Overviews with the prompts the business actually wants to be found for. Not vanity prompts. Prompts a real customer would type, sitting on the couch at nine in the evening with a problem they want solved. The baseline is captured, so that over the following weeks and months, change can be measured rather than assumed. The methodology behind this is something we have written about separately in how to measure GEO results, and it applies from Day 14 onward.
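The baseline itself can be as simple as a dated log of prompts and outcomes. A minimal sketch in Python, with hypothetical prompts standing in for a real client's list:

```python
import csv
from datetime import date

# Hypothetical baseline: the prompts a real customer would type, the surface
# queried, and whether the business was cited on the day of capture.
baseline = [
    # (prompt, surface, cited, citing_url)
    ("best emergency plumber in Springfield IL", "ChatGPT", False, ""),
    ("best emergency plumber in Springfield IL", "Perplexity", False, ""),
    ("water heater replacement cost Springfield", "Google AI Overviews", False, ""),
]

# Capture the baseline so later runs can be compared row by row.
with open("citation_baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "prompt", "surface", "cited", "citing_url"])
    for prompt, surface, cited, url in baseline:
        writer.writerow([date.today().isoformat(), prompt, surface, cited, url])
```

Re-running the same prompts on the same surfaces in later weeks, and appending the results, is what turns "we feel more visible" into a measurable change.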

By the end of Day 14, the site has changed in concrete, observable ways: schema present, llms.txt live, robots clean, priority pages rewritten, GBP complete, first authored article published, tracking baseline captured.

What does NOT change in fourteen days

Here is the part that matters, and the part nobody wants to put in writing. So we will.

Rankings on the most competitive head terms do not change in fourteen days. They simply do not. If your industry's most contested keyword has been dominated by three large competitors for five years, two weeks of clean work will not displace them. Anyone who tells you otherwise is selling a fantasy.

Citation frequency in ChatGPT and Perplexity may not change measurably in fourteen days either. The AI surfaces refresh on their own schedule: ChatGPT's web index updates on a cadence OpenAI does not publish, and Perplexity's retrieval is closer to live but still shaped by the underlying source weighting. The lag between a site improvement and a visible citation can be days, weeks, or sometimes longer. We have seen first citations appear within two weeks. We have also seen them take six. Both are normal.

Domain authority, in any meaningful sense, does not change in fourteen days either. Backlinks take time. Mentions take time. Trust takes time. There is no shortcut here, and the agencies that promise one are usually pointing at the wrong metric.

What does change is the underlying eligibility. The site moves from "not a candidate for citation" to "a candidate for citation." Whether and when the surfaces choose to cite is, in the end, their decision, not ours. Our job is to make the site the kind of source they should reasonably choose. After that, patience.

What this is and is not

This is a typical pattern, not a guarantee. Every site begins from a different starting point. A site with strong existing content and a clean technical foundation may move faster; a site with deeper issues (duplicate content, manual penalties, an entity confused across multiple business names, a CMS that strips schema on every republish) will need more than fourteen days simply to clear the underbrush. The audit is what tells us where a particular site actually stands.

Fourteen days is the beginning, not the end. The editorial layer compounds over months, not weeks. What happens between Day 15 and Day 90 is where most of the actual citation growth tends to occur, and that is a story for another post.

Frequently asked questions

How long does AI search optimization take to show real results? The technical foundation is set within the first two weeks. Visible citation growth typically appears between Day 30 and Day 90, depending on the site's starting position and the competitiveness of the queries. Anyone promising specific results by a specific date is overselling.

Can I really get cited by ChatGPT within two weeks? Sometimes, yes. Often, no. First citations within fourteen days do happen on sites that begin from a reasonable baseline, but they are not the rule. What is reliable is that the site becomes citation-eligible in fourteen days.

What are the quickest GEO wins for a small business? Schema injection, llms.txt publication, robots.txt cleanup, GBP completion, and rewriting two or three service pages with citation-ready paragraphs and FAQ blocks. These are the highest-leverage moves in the first two weeks. For a fuller picture, the GEO checklist for AI search lays out the full set in order.

Is fourteen days enough to see ranking changes in Google? No. Rankings on competitive terms move on a much longer horizon. Fourteen days is enough to fix technical eligibility and begin the editorial work that drives later movement — not to displace established competitors on head terms.

What happens after Day 14? The editorial cadence continues — typically one or two authored pieces per month — and citation tracking runs continuously. The audit is re-run at Day 30 and Day 90 to measure what has actually shifted, what has not, and what to adjust.


The first fourteen days run from the standings audit. Start at licheo.com/seo-standings — sixty seconds, no email required.