# Web Test Expectations and Baselines

The primary function of the web tests is as a regression test suite; this
means that, while we care about whether a page is being rendered correctly, we
care more about whether the page is being rendered the way we expect it to. In
other words, we look more for changes in behavior than we do for correctness.

[TOC]

All web tests have "expected results", or "baselines", which may be one of
several forms. The test may produce one or more of:

* A text file containing JavaScript log messages.
* A text rendering of the Render Tree.
* A screen capture of the rendered page as a PNG file.
* WAV files of the audio output, for WebAudio tests.

For any of these types of tests, baselines are checked into the web_tests
directory. The filename of a baseline is the same as that of the corresponding
test, but the extension is replaced with `-expected.{txt,png,wav}` (depending
on the type of test output). Baselines usually live alongside tests, except
when baselines vary by platform; read
[Web Test Baseline Fallback](web_test_baseline_fallback.md) for more details.
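
For example, a hypothetical test `web_tests/foo/bar/test.html` that produces
text output might have baselines laid out like this (the `platform/mac`
directory is just an illustration; see the fallback document above for how
platform directories are searched):

```
web_tests/foo/bar/test.html                       # the test itself
web_tests/foo/bar/test-expected.txt               # generic baseline
web_tests/platform/mac/foo/bar/test-expected.txt  # Mac-specific baseline
```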

Lastly, we also support the concept of "reference tests", which check that two
pages are rendered identically (pixel-by-pixel). As long as the two pages'
outputs match, the test passes. For more on reference tests, see
[Writing ref tests](https://trac.webkit.org/wiki/Writing%20Reftests).

## Failing tests

When the output doesn't match, there are two potential reasons for it:

* The port is performing "correctly", but the output simply won't match the
  generic version. The usual reason for this is things like form controls,
  which are rendered differently on each platform.
* The port is performing "incorrectly" (i.e., the test is failing).

In both cases, the convention is to check in a new baseline (aka rebaseline),
even though that file may be codifying errors. This helps us maintain test
coverage for all the other things the test is testing while we resolve the bug.

*** promo
If a test can be rebaselined, it should always be rebaselined instead of adding
lines to TestExpectations.
***

Bugs at [crbug.com](https://crbug.com) should track fixing incorrect behavior,
not lines in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations). If a
test is never supposed to pass (e.g. it's testing Windows-specific behavior, so
can't ever pass on Linux/Mac), move it to the
[NeverFixTests](../../third_party/blink/web_tests/NeverFixTests) file. That
gets it out of the way of the rest of the project.

There are some cases where you can't rebaseline and, unfortunately, we don't
have a better solution than either:

1. Reverting the patch that caused the failure, or
2. Adding a line to TestExpectations and fixing the bug later.

Of these two options, **reverting the patch is strongly preferred**.
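
For reference, a suppression line in TestExpectations looks roughly like the
following; the bug number, platform tag, test path, and result keyword here
are placeholders, and the comments at the top of the file describe the full
syntax:

```
crbug.com/123456 [ Mac ] foo/bar/test.html [ Failure ]
```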

These are the cases where you can't rebaseline:

* The test is a reference test.
* The test gives different output in release and debug; in this case, generate
  a baseline with the release build, and mark the debug build as expected to
  fail.
* The test is flaky, crashes, or times out.
* The test is for a feature that hasn't shipped on some platforms yet, but
  will shortly.

## Handling flaky tests

The
[flakiness dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html)
is a tool for understanding a test's behavior over time. Originally designed
for managing flaky tests, it shows a timeline view of the test's results. The
tool may be overwhelming at first, but
[the documentation](https://dev.chromium.org/developers/testing/flakiness-dashboard)
should help. Once you decide that a test is truly flaky, you can suppress it
using the TestExpectations file, as described below.

We do not generally expect Chromium sheriffs to spend time trying to address
flakiness, though.

## How to rebaseline

Since baselines themselves are often platform-specific, updating baselines in
general requires fetching new test results after running the test on multiple
platforms.

### Rebaselining using try jobs

The recommended way to rebaseline for a currently-in-progress CL is to use
results from try jobs, via the command-line tool
`third_party/blink/tools/blink_tool.py rebaseline-cl`; a sketch of the full
command sequence is shown after these steps:

1. First, upload a CL.
2. Trigger try jobs by running `blink_tool.py rebaseline-cl`. This should
   trigger jobs on
   [tryserver.blink](https://ci.chromium.org/p/chromium/g/tryserver.blink/builders).
3. Wait for all try jobs to finish.
4. Run `blink_tool.py rebaseline-cl` again to fetch new baselines.
5. Commit the new baselines and upload a new patch.

This way, the new baselines can be reviewed along with the changes, which helps
the reviewer verify that the new baselines are correct. It also means that there
is no period of time when the web test results are ignored.

#### Handling bot timeouts

When a change will cause many tests to fail, the try jobs may exit early
because the number of failures exceeds the limit, or the try jobs may time out
because more time is needed for the retries. Rebaselining based on such
results is not recommended. The solution is to temporarily increase the number
of shards in
[test_suite_exceptions.pyl](https://source.chromium.org/chromium/chromium/src/+/main:testing/buildbot/test_suite_exceptions.pyl)
in your CL. Change the values back to their original values before sending the
CL to the CQ.
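
The sketch below only illustrates the shape of such a change; the suite and
builder names are made up, and the exact schema is described by the comments
in `test_suite_exceptions.pyl` itself, so model your edit on an existing entry
rather than on this one:

```
# testing/buildbot/test_suite_exceptions.pyl (hypothetical entry)
{
  'blink_web_tests': {
    'modifications': {
      'linux-rel': {
        'swarming': {
          'shards': 12,  # temporarily increased while rebaselining
        },
      },
    },
  },
}
```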

#### Options

The set of tests for which `blink_tool.py rebaseline-cl` tries to download new
baselines depends on its arguments; example invocations are shown after the
list below.

* By default, it tries to download all baselines for tests that failed in the
  try jobs.
* If you pass `--only-changed-tests`, then only tests modified in the CL will
  be considered.
* You can also explicitly pass a list of test names, and then just those tests
  will be rebaselined.
* If some of the try jobs failed to run, and you wish to continue rebaselining
  assuming that there are no platform-specific results for those platforms,
  you can add the flag `--fill-missing`.
* By default, it finds the try jobs by looking at the latest patchset. If you
  have finished try jobs that are associated with an earlier patchset and you
  want to use them instead of scheduling new try jobs, you can add the flag
  `--patchset=n` to specify the patchset. This is very useful when the CL has
  'trivial' patchsets that are created e.g. by editing the CL description.
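
For example (the test name and patchset number are only illustrative):

```bash
# Rebaseline only the tests modified in the CL.
third_party/blink/tools/blink_tool.py rebaseline-cl --only-changed-tests

# Rebaseline just the named tests.
third_party/blink/tools/blink_tool.py rebaseline-cl foo/bar/test.html

# Use try job results from patchset 3 instead of scheduling new try jobs.
third_party/blink/tools/blink_tool.py rebaseline-cl --patchset=3
```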

### Rebaseline script in results.html

The web test results.html page linked from a bot job result page provides an
alternative way to rebaseline tests for a particular platform; a sketch of the
console steps follows the list below.


* In the bot job result page, find the web test results.html link and click it.
* Choose "Rebaseline script" from the dropdown list after "Test shown ... in format".
* Click "Copy report" (or manually copy part of the script for the tests you
  want to rebaseline).
* In a local console, change directory into
  `third_party/blink/web_tests/platform/<platform>`.
* Paste.
* Add the files to git and commit.
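
The local part of that workflow is roughly the following (the platform
directory name depends on the bot, and the commit message is only a
placeholder):

```bash
cd third_party/blink/web_tests/platform/<platform>
# Paste the copied rebaseline script here; it writes the new baseline files.
git add .
git commit -m "Rebaseline foo/bar tests for <platform>"
```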

The generated command includes `blink_tool.py optimize-baselines <tests>` which
removes redundant baselines. However, the optimization doesn't work for
flag-specific baselines for now, so the rebaseline script may create redundant
baselines for flag-specific results. We prefer local manual rebaselining (see
below) for flag-specific rebaselines when possible.

### Local manual rebaselining

```bash
third_party/blink/tools/run_web_tests.py --reset-results foo/bar/test.html
```

If there are current expectation files for `web_tests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file used to compare against the actual result when the test is run locally,
i.e. the first file found in the
[baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`web_tests/foo/bar/test-expected.{txt,png}`.
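
You can also pass multiple tests or a whole directory to the command above,
and then review the resulting baseline changes before committing; for example
(the directory name is only illustrative):

```bash
# Rebaseline every test under foo/bar/ and inspect what changed.
third_party/blink/tools/run_web_tests.py --reset-results foo/bar
git status third_party/blink/web_tests
git diff -- third_party/blink/web_tests
```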

When you rebaseline a test, make sure your commit description explains why the
test is being rebaselined.