Part 2: The Claude Code Prompt I Use to QA an Optimizely CMS 13 Upgrade
Apr 17, 2026
In Part 1, I shared the Claude Code prompt that kicks off a CMS 12 to 13 upgrade - analysing the codebase, planning the migration steps and doing the heavy lifting on the code changes. That gets you a long way. But the upgrade isn't done just because the site starts.
The gap between "it compiled" and "it's actually correct" is where things go wrong in production, and it's where upgrades earn their reputation. Manually comparing two environments across dozens of URLs is tedious, inconsistent, and easy to rush. So here's the second prompt in the workflow: the one I use to systematically QA the upgraded site against the reference.
The approach
The idea is straightforward. You have two environments: the reference site (the known-good CMS 12 baseline) and the upgraded site (your CMS 13 candidate). The prompt treats these as a pair — crawling the reference site two hops from the homepage, deriving the equivalent URLs on the upgraded site, and testing each pair against a defined set of checks.
Two hops is a deliberate choice. It's deep enough to exercise most of your template types — homepages, landing pages, product listings, content pages — without going so deep that you're waiting an hour for results or racking up unnecessary token usage.
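To make the traversal concrete, here's a minimal Python sketch of the pairing and two-hop crawl the prompt describes. The hostnames are placeholders and the link extractor is injected so the logic runs offline; in the actual workflow Claude Code does this itself rather than running a script:

```python
from urllib.parse import urljoin, urlparse
from collections import deque

REFERENCE = "https://www.example-ref.com"  # hypothetical CMS 12 baseline
UPGRADED = "https://www.example-new.com"   # hypothetical CMS 13 candidate

def derive_pair(reference_url: str) -> tuple[str, str]:
    """Map a reference URL to its equivalent path on the upgraded host."""
    path = urlparse(reference_url).path or "/"
    return reference_url, urljoin(UPGRADED, path)

def crawl_two_hops(get_links, start: str = REFERENCE + "/", max_hops: int = 2):
    """Breadth-first crawl, at most `max_hops` link-follows from the start page.

    `get_links(url)` returns the hrefs found on a page; it's injected here
    so the traversal itself is testable without network access.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        url, depth = frontier.popleft()
        yield derive_pair(url)
        if depth >= max_hops:
            continue  # deep enough: don't follow links from this page
        for link in get_links(url):
            absolute = urljoin(url, link)
            # Stay on the reference host; skip pages already queued
            if (urlparse(absolute).netloc == urlparse(REFERENCE).netloc
                    and absolute not in seen):
                seen.add(absolute)
                frontier.append((absolute, depth + 1))
```

Each yielded pair is a reference URL plus its derived upgraded-site counterpart, ready for the per-URL checks.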
Here's the prompt:
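(The original prompt isn't reproduced in this excerpt. Based on the structure described in this post - numbered steps, a check table, an acceptable-differences list, a PASS/FAIL report - a sketch of its general shape might look like the following. The placeholder hosts, the specific checks, and the wording are all illustrative, not the author's actual prompt.)

```
You are QA-testing a CMS 13 upgrade against a CMS 12 reference site.

Reference site: <REFERENCE_URL>
Upgraded site:  <UPGRADED_URL>

1. Crawl the reference site breadth-first, two hops from the homepage.
2. For each reference URL, derive the equivalent URL on the upgraded site.
3. Fetch each URL pair and run every check in the table below.
4. Ignore the acceptable differences listed below.
5. Produce the report in the format specified at the end.

Checks (per URL pair):
- HTTP status code matches
- Page title matches
- No raw translation key paths (e.g. /Login/Form/Label/...) in the body
- Main content present on both pages

Acceptable differences: analytics IDs, cache-busting query strings,
build hashes.

Report format: PASS/FAIL per URL, followed by failures grouped by
category with affected URLs listed under each.
```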
What Claude Code does with this
One thing worth noting: Claude Code handles structured prompts like this really well. The numbered steps, the check table, the acceptable differences list — that level of instruction is exactly what it needs to stay on task across what can be a long-running operation crawling and comparing dozens of URLs. It won't drift, summarise early, or skip checks because it got bored halfway through. Give it structure and it follows it.
The report format at the end is also deliberate. PASS/FAIL per URL means you can scan the output quickly, and the grouped failure categories tell you immediately what kind of problem you're dealing with rather than leaving you to triage a wall of text.
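As a rough illustration of that report shape, here's a hypothetical grouping function in Python. The tuple format and category names are assumptions for the sketch; in practice Claude Code emits the report directly as text rather than via code:

```python
from collections import defaultdict

def render_report(results):
    """Render (url, check, passed) tuples as a scannable QA report.

    Emits one PASS/FAIL line per URL, then failures grouped by check
    category so the kind of problem is visible at a glance.
    """
    by_url = defaultdict(list)
    for url, check, passed in results:
        by_url[url].append((check, passed))

    lines = []
    failures = defaultdict(list)  # check category -> affected URLs
    for url, checks in by_url.items():
        failed = [check for check, ok in checks if not ok]
        lines.append(f"{'FAIL' if failed else 'PASS'}  {url}")
        for check in failed:
            failures[check].append(url)

    for check, urls in failures.items():
        lines.append(f"\n{check} ({len(urls)} URLs):")
        lines.extend(f"  - {u}" for u in urls)
    return "\n".join(lines)
```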
The translation key issue it caught in the wild
Translation key failures are a classic upgrade gotcha and easily missed in manual testing because you have to be looking at the right page at the right moment. On a recent upgrade run, the QA prompt flagged this in the report:
🟡 Site-wide — Raw translation keys in login/account form overlay
Strings like /Login/Form/Label/Email and /Shared/Address/Form/Label/FirstName render verbatim. Missing XML translation resource entries for the login/account/address components.
The XML translation files for those components hadn't been picked up in the upgraded project. It's the kind of thing that looks fine in a quick smoke test but you'd have to actually interact with the login or account overlay to spot it. The prompt caught it because it's specifically scanning the page body for that key path pattern on every URL it tests.
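That scan amounts to a pattern match over the fetched body. A minimal sketch, assuming leaked keys surface as slash-separated CamelCase paths like the ones above (the exact regex is an assumption, and it will false-positive on CamelCase URL paths, so real results need a quick eyeball):

```python
import re

# Assumed shape of a leaked Optimizely XML resource key: three or more
# slash-separated CamelCase segments, e.g. /Login/Form/Label/Email
KEY_PATTERN = re.compile(r"/(?:[A-Z][A-Za-z0-9]*/){2,}[A-Z][A-Za-z0-9]*")

def find_raw_translation_keys(body: str) -> list[str]:
    """Return key-path-looking strings that leaked into the page body."""
    return KEY_PATTERN.findall(body)
```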
What it won't cover
To be clear about the limits: this is an anonymous crawl, so anything behind authentication won't be tested. It's also not a visual regression tool: it won't catch a layout that's broken but technically returning content. And if you have JavaScript-heavy interactions or client-side rendering, the body it fetches won't reflect what a real browser renders. You still need a human for all of that. Think of this as the systematic baseline check that clears the obvious failures before you spend time on the nuanced ones.
Conclusion
Used together, the migration prompt from Part 1 and this QA prompt cover the full upgrade arc from codebase analysis through to verified, production-ready output. It's the fastest I've seen a CMS 13 upgrade come together.
Running a CMS 13 upgrade? I'd love to hear what you're finding — find me on LinkedIn or X.
This blog represents personal views and experiences. For official Optimizely documentation, visit docs.developers.optimizely.com.