# Layout Tests

Layout tests are used by Blink to test many components, including but not
limited to layout and rendering. In general, layout tests involve loading pages
in a test renderer (`content_shell`) and comparing the rendered output or
JavaScript output against an expected output file.

This document covers running and debugging existing layout tests. See the
[Writing Layout Tests documentation](./writing_layout_tests.md) if you find
yourself writing layout tests.
Note that we are in the process of renaming "layout tests" to "web tests";
treat the two terms as synonymous. They are also sometimes called "WebKit
tests" or "WebKit layout tests".

[TOC]

## Running Layout Tests

### Initial Setup

Before you can run the layout tests, you need to build the `blink_tests` target
to get `content_shell` and all of the other needed binaries.

```bash
autoninja -C out/Default blink_tests
```
28
29On **Android** (layout test support
30[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
31build and install `content_shell_apk` instead. See also:
32[Android Build Instructions](../android_build_instructions.md).
33
34```bash
Max Morozf5b31fcd2018-08-10 21:55:4835autoninja -C out/Default content_shell_apk
pwnallae101a5f2016-11-08 00:24:3836adb install -r out/Default/apks/ContentShell.apk
37```

On **Mac**, you probably want to strip the `content_shell` binary before
starting the tests. If you don't, you'll have 5-10 instances running
concurrently, all stuck being examined by the OS crash reporter. This may cause
other failures, like timeouts, where they normally don't occur.

```bash
strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
```

### Running the Tests

TODO: mention `testing/xvfb.py`

The test runner script is in
`third_party/blink/tools/run_web_tests.py`.

To specify which build directory to use (e.g. out/Default, out/Release,
out/Debug) you should pass the `-t` or `--target` parameter. For example, to
use the build in `out/Default`, use:

```bash
python third_party/blink/tools/run_web_tests.py -t Default
```

For Android (if your build directory is `out/android`):

```bash
python third_party/blink/tools/run_web_tests.py -t android --android
```

Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations)
won't be run at all, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
test as the only one on the command line (see below). Read the
[Layout Test Expectations documentation](./layout_test_expectations.md) to learn
more about TestExpectations and related files.

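For example, a skipped test can be forced to run by naming it explicitly as the
only test on the command line (the test path below is a hypothetical example):

```bash
# Run a single skipped test by naming it explicitly; skipped tests are
# only excluded when selected implicitly via a directory or wildcard.
python third_party/blink/tools/run_web_tests.py -t Default \
    fast/forms/some-skipped-test.html
```
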
*** promo
Currently only the tests listed in
[SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests)
are run on the Android bots, since running all layout tests takes too long on
Android (and may still have some infrastructure issues). Most developers focus
their Blink testing on Linux. We rely on the fact that the Linux and Android
behavior is nearly identical for scenarios outside those covered by the smoke
tests.
***

To run only some of the tests, specify their directories or filenames as
arguments to `run_web_tests.py` relative to the layout test directory
(`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests,
use:

```bash
python third_party/blink/tools/run_web_tests.py fast/forms
```

Or you could use the following shorthand:

```bash
python third_party/blink/tools/run_web_tests.py fast/fo\*
```

*** promo
Example: To run the layout tests with a debug build of `content_shell`, but only
test the SVG tests, you would run:

```bash
python third_party/blink/tools/run_web_tests.py -t Debug svg
```
***

As a final quick-but-less-robust alternative, you can also just use the
`content_shell` executable to run specific tests by using (for Windows):

```bash
out/Default/content_shell.exe --run-web-tests --no-sandbox full_test_source_path
```

as in:

```bash
out/Default/content_shell.exe --run-web-tests --no-sandbox \
  c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html
```

but this requires a manual diff against expected results, because the shell
doesn't do it for you.

To see a complete list of arguments supported, run:

```bash
python run_web_tests.py --help
```

*** note
**Linux Note:** We try to match the Windows render tree output exactly by
matching font metrics and widget metrics. If there's a difference in the render
tree output, we should see if we can avoid rebaselining by improving our font
metrics. For additional information on Linux Layout Tests, please see
[docs/layout_tests_linux.md](../layout_tests_linux.md).
***

*** note
**Mac Note:** While the tests are running, a number of Appearance settings are
overridden for you so that the right type of scroll bars, colors, etc. are used.
Your main display's "Color Profile" is also changed to make sure color
correction by ColorSync matches what is expected in the pixel tests. The change
is noticeable; how much so depends on the normal level of correction for your
display. The tests do their best to restore your settings when done, but if
you're left in the wrong state, you can manually reset them by going to
System Preferences → Displays → Color and selecting the "right" value.
***

### Test Harness Options

This script has a lot of command line flags. You can pass `--help` to the script
to see a full list of options. A few of the most useful options are below:

| Option | Meaning |
|:----------------------------|:--------------------------------------------------|
| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug`. |
| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
| `--verbose` | Produce more verbose output, including a list of tests that pass. |
| `--reset-results` | Overwrite the current baselines (`-expected.{png|txt|wav}` files) with actual results, or create new baselines if there are no existing baselines. |
| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
| `--driver-logging` | Print C++ logs (`LOG(WARNING)`, etc.). |

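These options can be combined with a test selection. As an illustrative sketch
(flag spellings are taken from the table above), the following runs the
`fast/forms` tests against a debug build with verbose output and full
parallelism:

```bash
# Debug build, verbose output, one child process per core.
python third_party/blink/tools/run_web_tests.py -t Debug \
    --verbose --fully-parallel fast/forms
```
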
## Success and Failure

A test succeeds when its output matches the pre-defined expected results. If any
tests fail, the test script will place the actual generated results, along with
a diff of the actual and expected results, into
`src/out/Default/layout_test_results/`, and by default launch a browser with a
summary and links to the results/diffs.

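If the results browser does not launch automatically, you can open the summary
page by hand. Assuming the default build directory and the `results.html`
summary file the harness writes, something like:

```bash
# Open the most recent results summary (path assumes out/Default).
xdg-open out/Default/layout_test_results/results.html  # Linux
# open out/Default/layout_test_results/results.html    # macOS
```
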
The expected results for tests are in the
`src/third_party/WebKit/LayoutTests/platform` directory or alongside their
respective tests.

*** note
Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
do not have expected result files if all test cases pass.
***

A test that runs but produces the wrong output is marked as "failed", one that
causes the test shell to crash is marked as "crashed", and one that takes longer
than a certain amount of time to complete is aborted and marked as "timed out".
A row of dots in the script's output indicates one or more tests that passed.

## Test expectations

The
[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) file
(and related files) contains the list of all known layout test failures. See the
[Layout Test Expectations documentation](./layout_test_expectations.md) for more
on this.

## Testing Runtime Flags

There are two ways to run layout tests with additional command-line arguments:

* Using `--additional-driver-flag`:

  ```bash
  python run_web_tests.py --additional-driver-flag=--blocking-repaint
  ```

  This tells the test harness to pass `--blocking-repaint` to the
  `content_shell` binary.

  It will also look for flag-specific expectations in
  `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The
  suppressions in this file override the main TestExpectations file.

* Using a *virtual test suite* defined in
  [LayoutTests/VirtualTestSuites](../../third_party/WebKit/LayoutTests/VirtualTestSuites).
  A virtual test suite runs a subset of layout tests under a specific path with
  additional flags. For example, you could test a (hypothetical) new mode for
  repainting using the following virtual test suite:

  ```json
  {
    "prefix": "blocking_repaint",
    "base": "fast/repaint",
    "args": ["--blocking-repaint"]
  }
  ```

  This will create new "virtual" tests of the form
  `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
  under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to
  `content_shell` when they are run.

  These virtual tests exist in addition to the original `fast/repaint/...`
  tests. They can have their own expectations in TestExpectations, and their own
  baselines. The test harness will use the non-virtual baselines as a fallback.
  However, the non-virtual expectations are not inherited: if
  `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
  `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
  virtual test to also fail, it needs its own suppression.

  The "prefix" value does not have to be unique. This is useful if you want to
  run multiple directories with the same flags (but see the notes below about
  performance). Using the same prefix for different sets of flags is not
  recommended.

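For illustration, flag-specific expectations and virtual-test suppressions use
the same syntax as the main TestExpectations file. Hypothetical entries (the bug
number and test path are placeholders) might look like:

```
# In LayoutTests/FlagExpectations/blocking-repaint:
crbug.com/123456 fast/repaint/foo.html [ Fail ]

# In the main TestExpectations file, the virtual test needs its own line:
crbug.com/123456 virtual/blocking_repaint/fast/repaint/foo.html [ Fail ]
```
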
For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing.
Consider the following when choosing between them:

* The
  [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
  and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
  will run all virtual test suites in addition to the non-virtual tests.
  Conversely, a flag-specific expectations file won't automatically cause the
  bots to test your flag - if you want bot coverage without virtual test suites,
  you will need to set up a dedicated bot for your flag.

* Due to the above, virtual test suites incur a performance penalty for the
  commit queue and the continuous build infrastructure. This is exacerbated by
  the need to restart `content_shell` whenever flags change, which limits
  parallelism. Therefore, you should avoid adding large numbers of virtual test
  suites. They are well suited to running a subset of tests that are directly
  related to the feature, but they don't scale to flags that make deep
  architectural changes that potentially impact all of the tests.

* Note that using wildcards in virtual test path names (e.g.
  `virtual/blocking_repaint/fast/repaint/*`) is not supported.

## Tracking Test Failures

All bugs associated with layout test failures must have the
[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how
much you know about the bug, assign the status accordingly:

* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible
  duplicate of an existing bug, or a real failure.
* **Untriaged** -- Confirmed, but unsure of priority or root cause.
* **Available** -- You know the root cause of the issue.
* **Assigned** or **Started** -- You will fix this issue.

When creating a new layout test bug, please set the following properties:

* Components: a sub-component of Blink
* OS: **All** (or whichever OS the failure is on)
* Priority: 2 (1 if it's a crash)
* Type: **Bug**
* Labels: **Test-Layout**

You can also use the _Layout Test Failure_ template, which pre-sets these
labels for you.

## Debugging Layout Tests

After the layout tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
`content_shell` window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps and
tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first;
    see the Debugging HTTP Tests section below. Navigate to
    `http://localhost:8000/` and proceed from there.)
    The best tests describe what they're looking for, but not all do, and
    sometimes things they're not explicitly testing are still broken. Compare
    it to Safari, Firefox, and IE if necessary to see if it's correct. If
    you're still not sure, find the person who knows the most about it and
    ask.
  * Some tests only work properly in content_shell, not Chrome, because they
    rely on extra APIs exposed there.
  * Some tests only work properly when they're run in the layout-test
    framework, not when they're loaded into content_shell directly. The test
    should mention that in its visible text, but not all do. So try that too.
    See "Running the Tests", above.
* If you think the test is correct, confirm your suspicion by looking at the
  diffs between the expected result and the actual one.
  * Make sure that the diffs reported aren't important. Small differences in
    spacing or box sizes are often unimportant, especially around fonts and
    form controls. Differences in wording of JS error messages are also
    usually acceptable.
  * `python run_web_tests.py path/to/your/test.html --full-results-html`
    produces a page including links to the expected result, actual result,
    and diff.
  * Add the `--sources` option to `run_web_tests.py` to see exactly which
    expected result it's comparing to (a file next to the test, something in
    platform/mac/, something in platform/chromium-win/, etc.)
  * If you're still sure it's correct, rebaseline the test (see below).
    Otherwise...
* If you're lucky, your test is one that runs properly when you navigate to it
  in content_shell normally. In that case, build the Debug content_shell
  project, fire it up in your favorite debugger, and load the test file from a
  `file:` URL.
  * You'll probably be starting and stopping content_shell a lot. In VS,
    to save navigating to the test every time, you can set the URL to your
    test (`file:` or `http:`) as the command argument in the Debugging section
    of the content_shell project Properties.
  * If your test contains a JS call, DOM manipulation, or other distinctive
    piece of code that you think is failing, search for that in the Chrome
    solution. That's a good place to put a starting breakpoint to start
    tracking down the issue.
  * Otherwise, you're running in a standard message loop just like in Chrome.
    If you have no other information, set a breakpoint on page load.
* If your test only works in full layout-test mode, or if you find it simpler to
  debug without all the overhead of an interactive session, start
  content_shell with the command-line flag `--run-web-tests`, followed by the
  URL (`file:` or `http:`) of your test. More information about running layout
  tests in content_shell can be found
  [here](./layout_tests_in_content_shell.md).
  * In VS, you can do this in the Debugging section of the content_shell
    project Properties.
  * Now you're running with exactly the same API, theme, and other setup that
    the layout tests use.
  * Again, if your test contains a JS call, DOM manipulation, or other
    distinctive piece of code that you think is failing, search for that in
    the Chrome solution. That's a good place to put a starting breakpoint to
    start tracking down the issue.
  * If you can't find any better place to set a breakpoint, start at the
    `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
    `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
* Debug as usual. Once you've gotten this far, the failing layout test is just a
  (hopefully) reduced test case that exposes a problem.

### Debugging HTTP Tests

To run the server manually to reproduce/debug a failure:

```bash
cd src/third_party/blink/tools
python run_blink_httpd.py
```

The layout tests are served from `http://127.0.0.1:8000/`. For example, to
run the test
`LayoutTests/http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests behave differently if you go to `127.0.0.1` vs. `localhost`, so use
`127.0.0.1`.
pwnallae101a5f2016-11-08 00:24:38380
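To confirm the server is up before debugging, you can fetch one of the aliased
resources (assuming the default port of 8000):

```bash
# Expect an "HTTP/1.1 200 OK" status line if the server is running.
curl -sI http://127.0.0.1:8000/js-test-resources/js-test.js | head -n 1
```
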
To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
running, use `taskkill` or the Task Manager on Windows, or `killall` or
Activity Monitor on macOS.
pwnallae101a5f2016-11-08 00:24:38384
Mathias Bynens172fc6b2018-09-05 09:39:43385The test server sets up an alias to the `LayoutTests/resources` directory. For
386example, in HTTP tests, you can access the testing framework using
pwnallae101a5f2016-11-08 00:24:38387`src="/js-test-resources/js-test.js"`.

### Tips

Check https://test-results.appspot.com/ to see how a test did in the most recent
~100 builds on each builder (as long as the page is being updated regularly).

A timeout will often also be a text mismatch, since the wrapper script kills
content_shell before it has a chance to finish. The exception is if the test
finishes loading properly, but somehow hangs before it outputs the bit of text
that tells the wrapper it's done.

Why might a test fail (or crash, or time out) on buildbot, but pass on your
local machine?

* If the test finishes locally but is slow, more than 10 seconds or so, that
  would be why it's called a timeout on the bot.
* Otherwise, try running it as part of a set of tests; it's possible that a test
  one or two (or ten) before this one is corrupting something that makes this
  one fail.
* If it consistently works locally, make sure your environment looks like the
  one on the bot (look at the top of the stdio for the webkit_tests step to see
  all the environment variables and so on).
* If none of that helps, and you have access to the bot itself, you may have to
  log in there and see if you can reproduce the problem manually.

### Debugging DevTools Tests

* Add `debug_devtools=true` to `args.gn` and compile:
  `autoninja -C out/Default devtools_frontend_resources`
  > Debug DevTools lets you avoid having to recompile after every change to the
  > DevTools front-end.
* Do one of the following:
  * Option A) Run from the `chromium/src` folder:
    `third_party/blink/tools/run_web_tests.sh
    --additional-driver-flag='--debug-devtools'
    --additional-driver-flag='--remote-debugging-port=9222'
    --time-out-ms=6000000`
  * Option B) If you need to debug an http/tests/inspector test, start httpd
    as described above. Then, run content_shell:
    `out/Default/content_shell --debug-devtools --remote-debugging-port=9222 --run-web-tests
    http://127.0.0.1:8000/path/to/test.html`
* Open `http://localhost:9222` in a stable/beta/canary Chrome, and click the
  single link to open the DevTools with the test loaded.
* In the loaded DevTools, set any required breakpoints and execute `test()` in
  the console to actually start the test.

NOTE: If the test is an HTML file, it is a legacy test, so you need to add
`window.debugTest = true;` to your test code as follows:

```javascript
window.debugTest = true;
function test() {
  /* TEST CODE */
}
```
Steve Kobese123a3d42017-07-20 01:20:30441## Bisecting Regressions
442
443You can use [`git bisect`](https://git-scm.com/docs/git-bisect) to find which
444commit broke (or fixed!) a layout test in a fully automated way. Unlike
445[bisect-builds.py](http://dev.chromium.org/developers/bisect-builds-py), which
446downloads pre-built Chromium binaries, `git bisect` operates on your local
447checkout, so it can run tests with `content_shell`.
448
449Bisecting can take several hours, but since it is fully automated you can leave
450it running overnight and view the results the next day.
451
452To set up an automated bisect of a layout test regression, create a script like
453this:
454
Mathias Bynens172fc6b2018-09-05 09:39:43455```bash
Steve Kobese123a3d42017-07-20 01:20:30456#!/bin/bash
457
458# Exit code 125 tells git bisect to skip the revision.
459gclient sync || exit 125
Max Morozf5b31fcd2018-08-10 21:55:48460autoninja -C out/Debug -j100 blink_tests || exit 125
Steve Kobese123a3d42017-07-20 01:20:30461
Kent Tamuraa045a7f2018-04-25 05:08:11462third_party/blink/tools/run_web_tests.py -t Debug \
Steve Kobese123a3d42017-07-20 01:20:30463 --no-show-results --no-retry-failures \
464 path/to/layout/test.html
465```
466
467Modify the `out` directory, ninja args, and test name as appropriate, and save
468the script in `~/checkrev.sh`. Then run:
469
Mathias Bynens172fc6b2018-09-05 09:39:43470```bash
Steve Kobese123a3d42017-07-20 01:20:30471chmod u+x ~/checkrev.sh # mark script as executable
472git bisect start <badrev> <goodrev>
473git bisect run ~/checkrev.sh
474git bisect reset # quit the bisect session
475```

## Rebaselining Layout Tests

*** promo
To automatically rebaseline tests across all Chromium platforms, using the
buildbot results, see
[How to rebaseline](./layout_test_expectations.md#How-to-rebaseline).
Alternatively, to manually run a test and rebaseline it on your workstation,
read on.
***

```bash
cd src/third_party/blink
python tools/run_web_tests.py --reset-results foo/bar/test.html
```

If there are current expectation files for `LayoutTests/foo/bar/test.html`,
the above command will overwrite the current baselines at their original
locations with the actual results. The current baseline means the `-expected.*`
file used to compare against the actual result when the test is run locally,
i.e. the first file found in the
[baseline search path](https://cs.chromium.org/search/?q=port/base.py+baseline_search_path).

If there are no current baselines, the above command will create new baselines
in the platform-independent directory, e.g.
`LayoutTests/foo/bar/test-expected.{txt,png}`.

When you rebaseline a test, make sure your commit description explains why the
test is being rebaselined.

### Rebaselining Flag-Specific Expectations

Though we prefer the Rebaseline Tool to local rebaselining, the Rebaseline Tool
doesn't support rebaselining flag-specific expectations.

```bash
cd src/third_party/blink
python tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
```

New baselines will be created in the flag-specific baselines directory, e.g.
`LayoutTests/flag-specific/enable-flag/foo/bar/test-expected.{txt,png}`.

Then you can commit the new baselines and upload the patch for review.

However, it's difficult for reviewers to review a patch containing only new
files. You can follow the steps below for easier review.

1. Copy existing baselines to the flag-specific baselines directory for the
   tests to be rebaselined:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --copy-baselines foo/bar/test.html
   ```
   Then add the newly created baseline files, commit, and upload the patch.
   Note that the above command won't copy baselines for passing tests.

2. Rebaseline the test locally:
   ```bash
   third_party/blink/tools/run_web_tests.py --additional-driver-flag=--enable-flag --reset-results foo/bar/test.html
   ```
   Commit the changes and upload the patch.

3. Request review of the CL and tell the reviewer to compare the patch sets that
   were uploaded in step 1 and step 2 to see the differences in the rebaselines.

## web-platform-tests

In addition to layout tests developed and run just by the Blink team, there is
also a shared test suite; see [web-platform-tests](./web_platform_tests.md).

## Known Issues

See
[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra)
for issues related to Blink tools, including the layout test runner.

* If QuickTime is not installed, the plugin tests
  `fast/dom/object-embed-plugin-scripting.html` and
  `plugins/embed-attributes-setting.html` are expected to fail.