I just rebuilt my site. The part I want to walk through is the part you can't see: the layer search engines and LLMs actually read to interpret what the site is.
That layer is structured data. Schema. Entity signals. Machine-readable declarations underneath the page that tell engines what they're looking at, who runs it, and what each page is for. I treat my own site as the showcase for the work. What works here, works there. What I'd ask a client to trust me with, I ship on my own first.
There isn't a proven recipe for how every engine interprets your content across the web. There are practices the field broadly agrees on, and those are the ones I shipped. Google's own position is that AI Overviews and AI Mode require no special technical signals beyond what standard search already needs; interpretation is converging across surfaces, not fragmenting. Then I sat down to write this post and caught three things I'd been doing wrong. Here's how the layer works, what I shipped on mine, and what to look at when you check yours.
Do top-ranking pages even need schema?
Honest answer: probably not, in the strict sense. Google's own docs say AI Overviews and AI Mode require no special technical signals beyond standard search. Pages win featured snippets, FAQ blocks, and AI Overview citations every day with thin schema or none.
The exception is documented rich results. Where Google documents a specific outcome (Recipe cards, Product star ratings, Job listings), schema does the documented thing. The hedge is for AI search and citations, not for traditional rich results.
I still ship structured data carefully on every site I work on. The reason is simpler than the SEO industry makes it sound. It's best practice.
Best practice is the work the field broadly agrees produces clean signals. Readable structure. Predictable types. Accurate fields. Every responsible practitioner ships it, every time, even on the parts nobody sees.
Why ship it when sites rank without it? The asymmetry: careful schema costs hours, sloppy schema costs positions you'll never know you missed. And the standard: any site whose competitors ship clean schema can't afford to leave its own layer messy.
I won't tell you schema is what makes engines understand you. It's one input. It's the well-defined one. I will tell you I'd never sign off on a site that didn't have it. That's the bet, and it's the bet I'd want a client to take with me.

What actually counts as basics in 2026?
Schema markup is structured data. It's a machine-readable vocabulary that lets a page declare what it is and what's on it.
Google, Bing, Yahoo, and Yandex created it in 2011, and Schema.org has maintained the vocabulary since. Most schema today lives in JSON-LD blocks in the page source. Invisible to humans, native to machines.
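Here's the shape of one, stripped to the minimum. It sits inside a <script type="application/ld+json"> tag in the page source; the name and URL below are placeholders, not markup from any real site:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com/"
}
```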
Four schema types belong on every site, regardless of vertical. They're not optional. They're the floor.
Organization (or LocalBusiness)
The single canonical declaration of who runs the site, where, and how to reach you. Lives on the home page with a stable identifier. Every site needs one of these.
Local businesses use LocalBusiness instead, with address, area served, and contact details.
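A minimal sketch of the LocalBusiness version, with placeholder values standing in for the real firm:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "@id": "https://example.com/#organization",
  "name": "Example SEO",
  "url": "https://example.com/",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Morristown",
    "addressRegion": "NJ",
    "addressCountry": "US"
  },
  "areaServed": "Morris County, NJ"
}
```

The @id fragment is the stable identifier. Every other block on the site can point back to it.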
WebSite
Declares the site itself: name, URL, publisher.
Helps search and AI engines pick the right brand name when they describe you in results. Without it, you're handing the engines a guess.
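Sketched with the same placeholder domain; publisher points back at the organization's @id rather than repeating its fields:

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "@id": "https://example.com/#website",
  "name": "Example SEO",
  "url": "https://example.com/",
  "publisher": { "@id": "https://example.com/#organization" }
}
```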
WebPage or Article
Page-level typing. The "what is this page" layer.
Article or BlogPosting on editorial content. The right WebPage subtype on everything else.
Tells engines whether they're reading a homepage, an about page, a product page, or a piece of editorial content.
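For a non-editorial page, the typing can be this small. AboutPage here is one of the stock WebPage subtypes, and the values are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "AboutPage",
  "name": "About Example SEO",
  "url": "https://example.com/about/",
  "isPartOf": { "@id": "https://example.com/#website" }
}
```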
FAQPage
Where the page actually has FAQ content. Don't sprinkle it on every page.
Mark it where there are real questions and answers. Works as a fit signal, not a checkbox.
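The structure is Question and Answer pairs under mainEntity. The Q/A below is invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does an engagement take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most engagements run eight to twelve weeks, depending on scope."
      }
    }
  ]
}
```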
These four ship clean on every site I work on. The rest is layered work, and what to layer depends on what your site is. More on that next.
What I shipped on the rebuild: the basics

Every page on the rebuild carries the four basics. LocalBusiness and WebSite anchor the set: LocalBusiness on the home page declares the firm, WebSite alongside it declares the site, and both carry stable identifiers the rest of the site references on every URL.
Page-level typing changes by page. Here's how it lands.
Home page
LocalBusiness with @id /#organization. WebSite alongside it. FAQPage covering the questions a prospect actually asks before booking a call. Three blocks. No more.
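A skeleton of that emission, fields trimmed to show the wiring rather than my actual values. The three nodes ride in one @graph and reference each other by @id:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    { "@type": "LocalBusiness", "@id": "https://example.com/#organization" },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "publisher": { "@id": "https://example.com/#organization" }
    },
    { "@type": "FAQPage", "@id": "https://example.com/#faq" }
  ]
}
```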
About page
WebPage variant. Specifically ProfilePage, since the about page is about a person (me). The Person block lives there too as the canonical author entity for the whole site. No FAQPage; the page doesn't have FAQs.
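The shape, with a placeholder name standing in for mine:

```json
{
  "@context": "https://schema.org",
  "@type": "ProfilePage",
  "url": "https://example.com/about/",
  "mainEntity": {
    "@type": "Person",
    "@id": "https://example.com/about/#person",
    "name": "Jane Example",
    "jobTitle": "SEO & Answer Engine Optimization Specialist",
    "worksFor": { "@id": "https://example.com/#organization" }
  }
}
```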
Service pages (/aeo/, /attorney-seo/, /local-seo/)
WebPage with the Service overlay. Service is layered work, not basics; I'll cover it in the next section. FAQPage where the page has real Q/A. Standard service-page pattern.
Blog posts
BlogPosting with the editorial fields populated: datePublished, dateModified, author, image. FAQPage where the post has FAQ content; not every post does.
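A trimmed sketch with placeholder values. Note the author is a full Person block carrying the stable @id, not just a name string:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Example Post Title",
  "datePublished": "2026-01-15",
  "dateModified": "2026-02-02",
  "image": "https://example.com/images/example-cover.jpg",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/about/#person",
    "name": "Jane Example",
    "url": "https://example.com/about/"
  }
}
```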
That's the basics walkthrough. Nothing exotic, nothing custom.
The work isn't in inventing new types. It's in using the right type for what the page actually is, and emitting it cleanly every time.
What I layered on top, and why
Basics give you the floor. Layered work is where you start matching schema to what the page actually is.
This is also where Google's documented outcomes get specific. Some layered schemas trigger named search features. Others don't, and you ship them anyway because they're the accurate signal for the page.
The clearest cases I've worked on are the ones where Google documents the outcome and the surface fills in. Product schema: star ratings, price, availability. Review markup: aggregate ratings. VideoObject: video rich results and key moments. Documented outcomes, shipped, materialized.
The pattern holds across other documented types (Recipe cards, Event listings, Google for Jobs). Where Google has built a specific surface and you ship the right schema cleanly, the surface fills in.
No hedging.
Here's what I layered on the rebuild, and the reasoning behind each call.
BreadcrumbList
Every non-home page emits one. Google explicitly supports breadcrumb display in SERP, and the rich result is real. Low cost, documented outcome, ships everywhere.
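The shape Google documents, with placeholder URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Local SEO", "item": "https://example.com/local-seo/" }
  ]
}
```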
Service
On the three service pages: /aeo/, /attorney-seo/, /local-seo/. Service schema declares what each page sells: serviceType, areaServed, provider.
Google doesn't trigger a specific SERP rich result for Service the way it does for Recipe or Product. The case is weaker. I ship it anyway because it's the accurate type for the page, it informs the LocalBusiness service catalog, and engines that read structured data get cleaner signal about what njseo offers.
This is the "set it up anyway" category. Best-practice declaration, not documented outcome trigger.
Person and ProfilePage
ProfilePage on /about/, with Person as mainEntity. Person also emits inline on every BlogPosting as the author entity.
Google documents Person for creator and author profile contexts, and ProfilePage as a content type for first-hand perspectives. The strongest signal is on author bylines, where Person tied to BlogPosting helps engines understand who wrote what. For a solo practitioner, the canonical Person and ProfilePage on /about/ become the entity hub the rest of the site references back to.
This is the schema side of E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), Google's framework for evaluating content credibility. Author bylines, ProfilePage credentials, and sameAs links to authoritative external profiles are the machine-readable version of saying who's behind a page. LLMs increasingly lean on similar author and source signals when grounding answers.
It's also the layer most relevant to generative engine optimization (GEO). LLM brand summaries draw on cross-web entity consolidation, and Person with sameAs to Wikidata, LinkedIn, X, and other profiles is how your site tells the machines you're a single coherent entity.
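The consolidation move looks like this, with a placeholder name and dummy profile URLs throughout; the Wikidata ID in particular is invented:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/about/#person",
  "name": "Jane Example",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/in/example",
    "https://x.com/example"
  ]
}
```

Note what's not in the list: the site's own URL. More on that in the mistakes section below.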
Same pattern. Some documented signal, some entity-disambiguation work. Ship it because it's the accurate type for the page.
That's the layered tier. Ship the right type for what the page actually is, every time.
Some layered schemas have concrete documented outcomes (Recipe, Product, Event, JobPosting, BreadcrumbList). Others are foundational without specific SERP triggers (Service, Person on solo sites). Both are work worth doing.
Caught my own mistakes
I shipped the schema work first. Then I sat down to write this post and read the docs again. Three calls turned out to be wrong.
Worth catching, even after eighteen years.
Using my own name as a label
I'd been building the @id fragment on my own About page record from my name, the equivalent of /about/#firstname-lastname. The practitioner convention is a generic type label instead, /about/#person, so the identifier doesn't quietly leak personal info into every third-party tool that parses the markup.
Small fix. Small but real reason.
Assuming Google connects the dots between pages
I'd assumed Google would follow links between my pages. The blog post mentions me as author with a link to my About page; Google reads the About page; the entity gets resolved.
It doesn't work that way. Each page is read in isolation. Google doesn't follow these references across pages.
The practitioner solution is to repeat the full author block on every blog post, not just point to the About page. Stable identifier on the link, full data alongside it.
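Concretely, the broken version was an author field carrying nothing but a pointer, "author": { "@id": "https://example.com/about/#person" }. The fixed version keeps the pointer and carries the data with it (placeholder values again):

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Any Post on the Site",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/about/#person",
    "name": "Jane Example",
    "url": "https://example.com/about/",
    "jobTitle": "SEO & Answer Engine Optimization Specialist"
  }
}
```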
Pointing my schema back at itself
I'd pointed sameAs back at my own site. That's the field where the schema lists external profiles (LinkedIn, X, Crunchbase), and it exists to prove you're the same entity as someone known elsewhere. Pointing it back at your own site does no work.
Dropped the self-reference. Kept the external ones.
These are the kind of mistakes that don't break anything visibly. They just quietly weaken the entity signal. Worth catching.
What's coming next, briefly
Schema is the layer most ready to ship today. The next ones are forming.
llms.txt is a proposed AI-readable manifest at the site root. AI crawl controls let you decide what GPTBot, OAI-SearchBot, ClaudeBot, and PerplexityBot see. MCP, the Model Context Protocol, is the agent layer: how AI moves from reading sites to taking actions on them.
I haven't shipped all of these on njseo yet. I'll write about each as I do.
FAQ
How do I check what schema my site already has?
View any page's source and search for <script type="application/ld+json">. That's where the schema lives. You can also use Google's Rich Results Test or Schema.org's validator, both of which let you paste a URL and see what types are on the page. Most sites have something. Many sites have the wrong something, or generic blocks generated by a CMS plugin nobody reviewed.
Structured data in 2026 isn't a ranking trick. It isn't the whole story of how search engines and LLMs interpret your site, either. It's the well-defined part. The part with documented vocabulary, agreed-on patterns, and tools that tell you whether you got it right.
The basics matter most. The layered work depends on what your site is. Some of it has documented SERP outcomes; some of it doesn't.
The honest part is admitting nobody can sell you a recipe for how engines read everything. The responsible part is doing the well-defined work carefully, because the asymmetry favors careful structure over sloppy structure on every site I've ever audited.
I'll keep doing it. Same way I have for eighteen years.
Book a free intro call. I'll look at your site's current schema, tell you what's solid, what's missing, and whether an engagement makes sense.
No pitch, no pressure.

WRITTEN BY
SEO & Answer Engine Optimization Specialist
I'm an independent SEO and answer engine optimization specialist based in Morris County. I help small businesses rank in Google, and now in ChatGPT, Perplexity, and Google's AI Overviews. No agency overhead. No junior account managers. Just focused, expert work.
