FAQ & Troubleshooting
Contents
How Meticulous Works:
- How does Meticulous handle network requests / BE calls?
- How does Meticulous ensure test coverage over different user types, data variants, or feature flag combinations?
- How does Meticulous choose which sessions to run?
- If Meticulous records sessions from half-finished branches on localhost won't that cause issues with the tests?
CI Setup:
- How does Meticulous compute the URL to simulate a session against?
- What branches does Meticulous need to run on, and against which environments?
- Can I record sessions from one environment (for example, production, or localhost) and simulate them against another environment (for example, a preview URL)?
Recorder Setup:
- Why does the Meticulous recorder script need to be the first script to execute?
Questions
How does Meticulous handle network requests? / How does Meticulous ensure test coverage over different user types, data variants, or feature flag combinations?
By default, Meticulous will record the network responses (XHR, Fetch & WebSockets) in a session at the time it is recorded. These responses will be stored alongside the session, and when the session is later replayed against another commit Meticulous will automatically stub out the requests with the appropriate responses. Similarly, Meticulous will record and replay local storage, session storage and cookie values.
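For intuition, here is a highly simplified TypeScript sketch of the record-and-replay idea. This is not Meticulous's actual recorder or replay code; the map-based storage and the function names are purely illustrative.

```ts
// A highly simplified sketch of request record-and-replay, for intuition only;
// this is not Meticulous's recorder or replay code.
const recordedResponses = new Map<string, { status: number; body: string }>();

// During recording: let the request hit the real backend, but store the
// response alongside the session, keyed by method and URL.
async function recordingFetch(input: string, init?: RequestInit): Promise<Response> {
  const response = await fetch(input, init);
  const body = await response.clone().text();
  recordedResponses.set(`${init?.method ?? "GET"} ${input}`, {
    status: response.status,
    body,
  });
  return response;
}

// During replay: serve the stored response instead of calling the backend,
// so every simulation sees exactly the data the original user saw.
async function replayFetch(input: string, init?: RequestInit): Promise<Response> {
  const stored = recordedResponses.get(`${init?.method ?? "GET"} ${input}`);
  if (stored) {
    return new Response(stored.body, { status: stored.status });
  }
  // Requests you have chosen not to stub can still pass through to the backend.
  return fetch(input, init);
}
```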
This means that if you have two sessions recorded under different users, and the network responses return different data for each user, then when the sessions are replayed each session will get the correct original data and you'll be able to test over both cases.
Meticulous's session selection algorithms will automatically select sessions to cover all the different user types, data variants, and feature flag combinations that lead to different behaviour in your code/app. See the Selecting Which Sessions to Run page for details.
Automatically stubbing out the network responses allows Meticulous to ensure your tests are fast, fully deterministic, and free of flakes and side effects. If you make a breaking change to your network API and the recorded responses get out of date, then Meticulous will automatically swap out the older session for one or more newer ones that cover the same lines of code / edge cases.
However, if you wish to test your backend code with Meticulous, you can do so by selecting which subset of requests to stub in the 'Network Stubbing' tab in your Meticulous project's settings. If you're using NextJS with the app directory, then Meticulous will automatically pass through requests for React server components if you select the 'Stub all requests, apart from requests for server components and static assets' option. This is the default behaviour for NextJS apps that use the app directory.
How does Meticulous choose which sessions to run?
See the Selecting Which Sessions to Run page for details.
If Meticulous records sessions from half-finished branches on localhost won't that cause issues with the tests?
The answer is no: Meticulous is designed to handle this case. It does so via two strategies:
- Meticulous doesn't use every recorded session as a test, just a subset that covers the maximum number of distinct edge cases and lines/branches of code. Broken sessions get filtered out by the session selection algorithms.
- Meticulous takes the base screenshots for comparison at replay time instead of at record time. When you open a PR we replay the selected sessions twice: once on the base commit and once on the head commit of the PR. We take screenshots and compare them (see the sketch after this list). If Meticulous does replay a session from localhost that, for example, clicks on a feature that isn't pushed up yet, then that 'broken' session will generate the same screenshots when replayed against both the base and the head commit. So it won't create any false diffs.
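To make the second point concrete, here is a minimal TypeScript sketch of the replay-twice-and-compare idea. It is not Meticulous's actual implementation; `replaySession`, `Screenshot`, and the hash-based comparison are hypothetical stand-ins.

```ts
// Conceptual sketch only; not Meticulous's actual implementation.
// `replaySession` and the hash-based comparison are hypothetical stand-ins.
type Screenshot = { name: string; imageHash: string };
type ReplaySession = (sessionId: string, appUrl: string) => Promise<Screenshot[]>;

// Both sides of the comparison are produced by replaying the same session at
// PR time, so a session recorded against a half-finished local branch renders
// identically on the base and head commits and therefore creates no diff.
async function screenshotDiffs(
  replaySession: ReplaySession,
  sessionId: string,
  baseCommitUrl: string,
  headCommitUrl: string
): Promise<string[]> {
  const baseShots = await replaySession(sessionId, baseCommitUrl);
  const headShots = await replaySession(sessionId, headCommitUrl);
  const baseByName = new Map(baseShots.map((s) => [s.name, s.imageHash] as const));
  // Report only screenshots whose contents differ between base and head.
  return headShots
    .filter((s) => baseByName.get(s.name) !== s.imageHash)
    .map((s) => s.name);
}
```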
How does Meticulous compute the URL to simulate a session against?
When Meticulous simulates sessions, it is configured to simulate them against a particular base URL, which will likely be different to the URL the session was recorded at.
For example, if Meticulous is set up with GitHub Actions, then the base URL will be the URL you pass as the appUrl to report-diffs-action, for example http://localhost:3000.
If Meticulous is set up to use preview URLs, from Vercel or similar services, then the base URL will be the preview URL of the deployment, for example https://tps-reports-app-37tz-initech.vercel.app. If there are multiple deployments, Meticulous will look for one to an environment that is included under Environments to Test Against in your Meticulous project settings.
When simulating a session, Meticulous takes the URL the session was recorded at and swaps out the origin with the new base URL. So if the session was recorded at https://www.initech.com/some/path?query=paramValue, and you're running the Meticulous tests against https://tps-reports-app-37tz-initech.vercel.app, then Meticulous will simulate the session at https://tps-reports-app-37tz-initech.vercel.app/some/path?query=paramValue.
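The origin swap can be expressed in a few lines with the standard URL API. This is only an illustrative TypeScript sketch of the rewriting described above, not Meticulous's own code, and computeSimulationUrl is a hypothetical name.

```ts
// An illustrative sketch of the origin swap described above, using the
// standard URL API; this is not Meticulous's own code.
function computeSimulationUrl(recordedUrl: string, baseUrl: string): string {
  const recorded = new URL(recordedUrl);
  const base = new URL(baseUrl);
  // Keep the recorded path, query and hash, but serve them from the new origin.
  return new URL(recorded.pathname + recorded.search + recorded.hash, base.origin).toString();
}

// Using the example from the text:
// computeSimulationUrl(
//   "https://www.initech.com/some/path?query=paramValue",
//   "https://tps-reports-app-37tz-initech.vercel.app"
// )
// => "https://tps-reports-app-37tz-initech.vercel.app/some/path?query=paramValue"
```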
You'll therefore need to make sure that the base URL you are simulating sessions against (https://tps-reports-app-37tz-initech.vercel.app) serves up the same app under the same configuration as the base URL sessions are recorded on (https://www.initech.com).
What branches does Meticulous need to run on, and against which environments?
Meticulous works by simulating sessions against the head commit of each pull request and comparing the results to the base commit of the pull request.
It therefore needs to run on your main branch (e.g. main, master or develop) so that it has visual snapshots to compare against, and it also needs to run on any branch that you open pull requests from.
If you're using Vercel, Netlify, or similar preview URLs, then Meticulous will compare snapshots from the preview URL of the base commit on the main branch to snapshots from the preview URL of the head commit of the pull request branch.
In this case, the environment variables and configuration you use to run & build your app need to be the same for the deployments of the main branch (production deploys) and the deployments of pull request branches (preview deploys). If this isn't the case, Meticulous could display false screenshot differences.
For example, if you configure production deploys of your app (from the main branch) to have a blue banner, and preview deploys of your app (from pull request branches) to have a red banner, then Meticulous would display screenshot diffs of the banner changing from blue to red for every screen. You want to make sure that the only screenshot diffs Meticulous shows are due to changes in the code introduced by the pull request being tested, rather than environmental differences between the environments being tested against.
You can learn how to avoid this here, and you can learn more about testing across environments here.
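As a purely illustrative TypeScript sketch (the environment variable name and the banner example are hypothetical), this is the kind of environment-dependent configuration that produces false diffs on every screen:

```ts
// Hypothetical example: configuration that differs between production deploys
// and preview deploys leaks into every screenshot Meticulous compares.
const isProduction = process.env.DEPLOY_ENV === "production"; // assumed env var name

// Avoid: a blue banner on main-branch deploys and a red banner on PR preview
// deploys means Meticulous reports a banner diff on every screen of every PR.
const bannerColor = isProduction ? "blue" : "red";

// Prefer: keep anything that affects rendering identical across the
// environments that Meticulous records on and replays against.
const consistentBannerColor = "blue";
```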
If, instead of preview URLs, you're using the report-diffs-action GitHub action, then Meticulous will compare snapshots from running your app from the base commit of the main branch to snapshots from running your app from the head commit of the pull request branch. In this case it's similarly important to make sure that you compile and run your app with the same configuration for both the main branch and the pull request branches.
Can I record sessions from one environment (for example, production, or localhost) and simulate them against another environment (for example, a preview URL)?
Yes. However, the sessions may fail to simulate if there are significant differences between the environments. Please see the Record and Simulate on Different Environments page for more details.
Why does the Meticulous recorder script need to be the first script to execute?
See the Ensure Recorder Captures All Requests page for more details.
Where can I reach out for support?
Reach out to eng@meticulous.ai and we'll be happy to help. You can also join our community Discord.