Run an SEO audit before you start SEO work so you set the baseline and avoid guessing. You can also run one straight after any major change like a migration, redesign, CMS change, or domain change because these shifts often break crawl paths, indexing, and page templates. Conduct one when organic traffic drops and you do not know why, when rankings stall for months, or when Search Console shows indexation or coverage issues. If the conversion rate drops on organic landing pages, an audit helps you separate tracking problems, intent mismatch, UX issues, and technical blockers.
For high growth sites, run audits quarterly so you catch problems early and keep the roadmap current. For stable sites, run them twice a year so you stay ahead of gradual technical drift, content decay, and competitor changes.
Tools you need
Google Search Console shows how Google crawls, indexes, and surfaces your site in search. Use it to find coverage problems, indexing exclusions, sitemap issues, manual actions, mobile usability issues, and query level performance. It is the source of truth for what Google sees, what it is ignoring, and where errors exist.
GA4 or another analytics platform shows what users do after they land on your site. Use it to measure organic sessions, engagement, conversions, revenue, and which landing pages drive outcomes. It helps you separate ranking problems from conversion problems, and it tells you which pages matter most for business results.
A crawler like Screaming Frog simulates a search engine crawl and turns your site into a dataset. Use it to find broken pages, redirects, redirect chains, duplicate titles, missing metadata, canonical issues, thin templates, index bloat patterns, internal linking gaps, and orphan pages. It is how you move from opinions to a full URL level inventory.
A rank tracker monitors where you rank over time for your target keywords. Use it to spot trend changes, measure progress after fixes, segment performance by location or device, and identify volatility tied to updates or site changes. It is optional because Search Console already shows queries and clicks, but rank tracking is cleaner for consistent keyword monitoring.
A backlink tool like Ahrefs or Semrush shows who links to you, what pages attract links, anchor text patterns, and how your link profile compares to competitors. Use it to identify risky link patterns, find link opportunities, run competitor gap analysis, and support digital PR planning. It also helps you validate whether authority is a constraint for certain topics.
PageSpeed Insights and Lighthouse show performance and experience issues, with a focus on Core Web Vitals and key front end problems. Use them to diagnose slow templates, render blocking scripts, heavy images, layout shifts, and interaction delays. They help you prioritise technical changes that affect both rankings and user conversions.
Nice to have for advanced work
Server log access shows what bots and users hit on your site at a request level. Use it to confirm what Googlebot crawls, how often it crawls key templates, where crawl budget gets wasted, and whether important URLs get skipped. It is one of the best ways to diagnose crawl traps, index bloat, and slow discovery issues on large sites.
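Once you have raw access logs, counting Googlebot requests per URL is a small scripting job. The sketch below is a minimal example, assuming the common combined log format with the user agent as the final quoted field; the regex and the `googlebot_hits` function name are illustrative and you would adapt them to your server's log format (and ideally verify Googlebot by reverse DNS, which this sketch skips).

```python
import re
from collections import Counter

# Matches the request path, status code, and the final quoted field (user agent)
# in a combined-format access log line. Adapt to your server's actual format.
LOG_PATTERN = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3}) .*?"([^"]*)"$')

def googlebot_hits(log_lines):
    """Count requests per path from user agents claiming to be Googlebot."""
    hits = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match and "Googlebot" in match.group(3):
            hits[match.group(1)] += 1
    return hits
```

Run this over a day or a week of logs and compare the most-crawled paths against your priority templates: heavy crawling of parameter URLs or filters next to thin crawling of key categories is the classic crawl-budget-waste pattern.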
A site search tool for internal search terms shows what people type into your on site search bar. Use it to uncover demand you are not covering well, identify missing categories or content, find product naming mismatches, and spot friction points where users cannot find key pages through navigation. It feeds both SEO content planning and UX fixes.
A QA environment for testing fixes before release lets you validate changes without risking production. Use it to test redirects, canonicals, robots rules, sitemap output, schema changes, internal linking modules, and template edits. It reduces the chance you fix one issue and create three new ones.
A project board to manage backlog gives you control of execution. Use it to prioritise by impact and effort, assign owners, track dependencies, set due dates, and document proof for fixes. Audits fail when findings stay in documents. A board turns findings into shipped work.
Inputs to collect before you crawl
Start with the business goal for SEO so the audit focuses on outcomes, not noise. Leads, ecommerce revenue, bookings, calls, and quote requests each change what pages matter, what conversion paths you check, and what you prioritise first. Next lock in target locations, whether you want Australia wide visibility, a specific state, a city like Melbourne, or defined service areas, because location intent affects page structure, internal linking, and how you assess competitors.
Then confirm the core products and services you want to push, prioritised by margin, category, and strategic importance, so you can separate vanity traffic from profitable traffic. Capture known constraints early, like CMS limitations, dev release cycles, and content approval workflows, because a perfect fix that cannot ship is wasted effort. Review previous work, including past audits, migrations, penalties, and agencies, because many issues are repeat problems or side effects of earlier changes.
Finally, secure access to Search Console, GA4, Tag Manager, the CMS, CDN, hosting, and a dev contact, because audits stall without data, and you need the right people to implement fixes properly.
How to perform an SEO audit step by step
Step 1. Set scope and success criteria
Step 1 is about stopping scope creep and making sure the audit answers the business question. The deliverable is a one page scope statement that spells out what is included and what is not. List the site sections in scope, including any subdomains and languages, because each one can behave like a separate site. Call out the templates in scope, such as blog, category, product, service, location, and help pages, because most issues are template driven and fixes need to be applied at the template level, not URL by URL.
Define crawl depth upfront, whether you are doing a full crawl or a representative sample, because large sites can waste time and produce noisy results if you crawl without limits. Set the performance review window, usually the last 3 months for recent trend signals and the last 12 months for seasonality and long term movement. Finally, lock in KPIs tied to the business, such as leads, revenue, organic sessions, and conversions, so every finding is prioritised based on impact, not personal preference.
Step 2. Baseline performance and demand
Step 2 sets the baseline so you know what is working, what is failing, and where to focus. Check the organic traffic trend to spot growth, decline, or volatility, then check the organic conversions trend to confirm whether the traffic change is a business problem or a reporting issue. Pull the top landing pages by conversions because these pages deserve protection first.
Look at brand vs non brand to understand whether performance depends on people already searching for your name, since non brand is where most scalable growth sits. In Search Console, review top queries and pages to see what Google already rewards you for, then isolate pages losing clicks and impressions to find the fastest wins and the likely causes, such as indexation, intent mismatch, cannibalisation, or title changes.
To do this properly, use GA4 and Search Console together. In GA4, run the landing pages report filtered to Organic Search so you can see sessions and conversions by entry page. In Search Console, use the performance report and switch between queries and pages to map demand to URLs.
Compare the last 28 days to the prior 28 days to catch recent shifts, and compare the last 3 months to the same period last year to account for seasonality and long term movement. This gives you a clear short term and long term picture before you touch technical fixes.
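The period-over-period comparison can be automated from two Search Console exports. This is a minimal sketch, assuming each export has already been reduced to a dict of page URL to clicks; the `pages_losing_clicks` function name and the 20 percent threshold are illustrative, not a fixed standard.

```python
# Flag pages whose clicks dropped between two periods, sorted by absolute loss
# so the biggest traffic losers surface first.
def pages_losing_clicks(current, previous, min_drop_pct=20):
    """Return (page, prev_clicks, cur_clicks, drop_pct) for pages losing clicks."""
    losers = []
    for page, prev_clicks in previous.items():
        cur_clicks = current.get(page, 0)
        if prev_clicks > 0:
            drop_pct = (prev_clicks - cur_clicks) / prev_clicks * 100
            if drop_pct >= min_drop_pct:
                losers.append((page, prev_clicks, cur_clicks, round(drop_pct, 1)))
    return sorted(losers, key=lambda row: row[1] - row[2], reverse=True)
```

Running this on 28-day and year-over-year pairs gives you two ranked lists of declining pages to investigate before touching anything technical.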
If this step fails, you are flying blind. No conversion tracking means you cannot tie SEO work to outcomes, so priorities become guesswork. Conversions not attributed to landing pages usually means the conversion events are not configured properly or reporting is not built to show entry page impact.
If Search Console is not connected or the data is missing, you lose visibility into indexing, query performance, and the pages driving impressions and clicks. That makes it hard to diagnose drops, stalls, or opportunities.
Fix it by tightening the measurement layer. In GA4, confirm you have events set up for real lead actions such as form submissions, calls, bookings, and quote requests. Then confirm the right events are marked as conversions so reports reflect value, not noise. In Search Console, confirm the correct property is verified, the domain or URL prefix property matches the site setup, and data is flowing.
Collect evidence by exporting a list of top landing pages with sessions and conversions from GA4, plus exporting top queries and top pages from Search Console so the audit has a defensible baseline.
Step 3. Crawl the site and capture the facts
Step 3 is where the audit stops being opinion and becomes a dataset. Configure your crawler to collect both raw HTML and rendered output where JavaScript affects content, links, or structured data. Start by respecting robots.txt so you see the site as search engines are meant to see it. Then, if you need diagnostics, run a second crawl that ignores robots.txt so you can identify what is blocked and whether those blocks are hurting important sections.
Crawl via internal links first to understand real discoverability, then run a second pass using XML sitemap URLs so you catch pages that exist but are poorly linked. Make sure you capture canonicals, hreflang, pagination, structured data, titles, H1, meta descriptions, and response codes, and also record click depth and inlinks so you can diagnose architecture and internal linking strength.
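Reconciling the link-based crawl against the sitemap is a simple set comparison once both URL lists exist. Below is a minimal sketch, assuming you have the crawled URL list as an export and the sitemap as an XML string; the function names are illustrative.

```python
import xml.etree.ElementTree as ET

# The standard sitemap XML namespace, per the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(sitemap_xml):
    """Extract <loc> values from a sitemap XML string."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

def reconcile_urls(crawled_urls, listed_urls):
    """Compare link-discovered URLs against sitemap URLs."""
    crawled, listed = set(crawled_urls), set(listed_urls)
    return {
        # In the sitemap but unreachable via internal links: poorly linked pages.
        "poorly_linked": sorted(listed - crawled),
        # Discoverable by links but absent from the sitemap.
        "missing_from_sitemap": sorted(crawled - listed),
    }
```

Pages that only exist in the sitemap are the ones to feed into your internal linking fixes in step 5.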
A crawl passes when you have a clean export set that represents all key templates and sections, and the data is stable enough to act on. It fails when the crawler gets blocked and misses whole areas, when parameters create infinite crawl traps, or when you see a high percentage of redirects and errors which usually point to broken internal linking, outdated URLs, or poor redirect rules.
Fix this by controlling parameters inside the crawler, adding XML sitemap URLs to the crawl scope, and limiting crawl depth for the first run so you get a signal fast, then expanding once you have control. The evidence you collect should be consistent across audits so results are comparable over time.
Export all internal URLs, response codes, redirects and redirect chains, canonicals, duplicate titles, duplicate meta descriptions, H1, pagination, hreflang, orphan pages if available, and an inlinks report for key pages.
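Duplicate titles are easy to surface from that export with a small grouping pass. This is a minimal sketch, assuming the export has been reduced to (url, title) pairs; normalising case and whitespace before grouping catches near-duplicates that exact matching misses.

```python
from collections import defaultdict

def duplicate_titles(rows):
    """Group URLs by normalised title; return only titles used more than once."""
    by_title = defaultdict(list)
    for url, title in rows:
        by_title[title.strip().lower()].append(url)
    return {title: urls for title, urls in by_title.items() if len(urls) > 1}
```

The same pattern works for duplicate meta descriptions and duplicate H1s: swap the second column of the export.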
Step 4. Indexation and crawl controls
Step 4 checks whether Google is able to crawl and index the right parts of your site, and whether you are controlling what should stay out of the index. Use Search Console to review index coverage, which pages are excluded and the reasons for exclusion, sitemap submission and processing status, and any manual actions or security issues.
Also, check enhancements and structured data reports because persistent errors there often indicate template problems. Outside Search Console, review robots.txt rules to confirm you are not blocking important paths or required assets, and check meta robots tags and X-Robots-Tag headers because these can quietly set noindex, nofollow, or other directives across large parts of a site.
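Checking both noindex sources for a URL can be scripted. The sketch below is illustrative: it takes a dict of response headers and the raw HTML for one URL, and it assumes the meta tag writes `name` before `content`, which is common but not guaranteed, so treat it as a first-pass filter rather than a definitive check.

```python
import re

# Matches <meta name="robots" content="..."> and captures the content value.
# Assumes name comes before content; reversed attribute order is not handled.
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']', re.I
)

def is_noindexed(headers, html):
    """True if the X-Robots-Tag header or the meta robots tag contains noindex."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    match = META_ROBOTS.search(html)
    return bool(match and "noindex" in match.group(1).lower())
```

Run it across the URL list from your crawl export to catch template-wide noindex directives that nobody remembers setting.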
You pass this step when key templates are indexable and indexed, critical coverage errors are not impacting priority pages, sitemaps are valid and processed, and there are no manual actions. Conversely, you fail when indexed pages far exceed meaningful pages, which usually signals index bloat from filters, tags, internal search results, or thin template pages.
You also fail when many pages are excluded due to duplicate or alternate canonical, which often points to poor canonical logic or inconsistent internal linking. Soft 404 patterns indicate pages that return a success code but look empty or unhelpful to Google, and blocked important paths means robots.txt or meta robots rules are preventing discovery and indexing of pages you need to rank.
Fix this by tightening crawl and index controls. Reduce index bloat by applying noindex to low value templates and parameter driven pages, and ensure those pages are not included in XML sitemaps. Correct canonicals so each indexable page points to its preferred URL, and align internal linking to that preferred version. Resolve soft 404 by either improving the page content so it meets intent and value expectations, or by returning the correct status code, often a 404 or 410 for removed pages.
Update robots.txt to allow critical resources and paths, and remove accidental blocks on key sections. Capture evidence by exporting the Search Console Pages report and Sitemap report, and take a screenshot of the manual actions tab for documentation.
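Soft 404 candidates can be shortlisted from your crawl export before you inspect them by hand. This is a minimal sketch, assuming rows of (url, status_code, word_count); the 50-word threshold is illustrative and should be tuned per template, since a thin product page and a thin blog post look very different.

```python
def soft_404_candidates(rows, min_words=50):
    """Flag URLs returning 200 with very little body content."""
    return [url for url, status, words in rows if status == 200 and words < min_words]
```

Each candidate then gets one of the two fixes above: improve the content, or return a real 404 or 410.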
Step 5. Site architecture and internal linking
Step 5 focuses on how your site is organised and how link pathways guide both Google and AI systems through your content. The goal is a structure that clearly shows what you do, how topics relate, and which pages are most important. Check for a clean hierarchy from the homepage to category or hub pages, then down to detail pages like services, products, or locations.
Confirm you have hub pages for major topics or services, and that those hubs link out to the right child pages. Review click depth for priority pages, identify orphan pages with no internal links, and confirm breadcrumbs exist, stay consistent, and reflect your URL structure. Also make sure navigation does not hide key pages behind scripts, forms, or elements that do not render as normal links, because that weakens discovery and dilutes authority flow.
You pass when priority pages sit within three clicks from a relevant hub, each priority page receives internal links from related pages, no indexable pages are orphaned, and breadcrumbs align with the URL structure.
You fail when important pages sit five clicks deep, blog posts exist in isolation with no pathways to commercial pages, or you have many near duplicate location pages that do not have a strong hub model and do not earn internal links.
Fix this by building topic hubs and service hubs, adding contextual internal links inside body content where they make sense, improving breadcrumb trails so they mirror the real hierarchy, and adding related links blocks on key templates to create consistent pathways at scale.
Capture evidence with a crawl depth report for priority URLs, inlinks counts for those URLs, and a list of orphan pages so your recommendations are backed by data and are easy to validate after changes go live.
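Click depth and orphan detection both fall out of a breadth-first walk over the internal link graph. The sketch below assumes you have built the graph as a dict of URL to linked URLs from your crawler's outlinks export; the function names and the homepage key are illustrative.

```python
from collections import deque

def click_depths(link_graph, start="/"):
    """Breadth-first search from the homepage; returns {url: clicks_from_start}."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

def orphan_pages(all_urls, link_graph, start="/"):
    """URLs in the full inventory that no internal link path reaches."""
    return sorted(set(all_urls) - set(click_depths(link_graph, start)))
```

Any priority URL with a depth above three, or appearing in the orphan list, goes straight onto the internal linking backlog.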
Step 6. Technical SEO checks
Step 6 is the technical layer, and it is the fastest place to find issues that block crawling, waste authority, and create poor user journeys. Start by checking 4xx errors, most commonly 404s for missing pages, and 5xx errors, which are server failures. Both can kill organic performance if they affect pages Google expects to crawl or pages users land on from search.
Next, check redirects, focusing on redirect chains where one redirect leads to another, and redirect loops where URLs bounce endlessly. Also look for incorrect redirect patterns, such as sending many old URLs to the homepage, because this breaks relevance, reduces retained rankings after changes, and often causes indexation confusion.
You pass when there are no 5xx errors on indexable pages, redirect chains are not longer than one hop for priority URLs, loops do not exist, and every redirect sends users and bots to the closest equivalent page. Fix problems by updating redirect rules at the server or platform level, restoring missing pages where they should still exist, and replacing blanket redirects with mapped redirects that preserve intent, topic, and URL relationships.
For evidence, export a 4xx list with inlinks so you know which broken URLs are still linked internally, export a 5xx list with the time observed so devs can correlate to logs and incidents, and export a redirect chains list so you can prioritise fixes by impact and by how many important pages are affected.
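Chains and loops can be traced programmatically from the redirects report. This is a minimal sketch, assuming the report has been reduced to a dict of source URL to target URL; the `max_hops` cap and function names are illustrative.

```python
def trace_chain(redirects, url, max_hops=10):
    """Follow redirects from a URL; return (final_url, hops, is_loop)."""
    seen, hops = {url}, 0
    while url in redirects and hops < max_hops:
        url = redirects[url]
        hops += 1
        if url in seen:          # revisiting a URL means the chain loops
            return url, hops, True
        seen.add(url)
    return url, hops, False

def problem_redirects(redirects):
    """Flag every source whose chain loops or runs longer than one hop."""
    issues = {}
    for source in redirects:
        final, hops, loop = trace_chain(redirects, source)
        if loop or hops > 1:
            issues[source] = {"final": final, "hops": hops, "loop": loop}
    return issues
```

The output maps directly to the fix: point each flagged source straight at its final destination, and break any loop by choosing one canonical target.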