Move various scripts from //third_party/WebKit/Tools/Scripts to //third_party/blink/tools

* bisect-test-ordering -> bisect_web_test_ordering.py
* debug-renderer -> debug_renderer
* debug-webkit-tests -> debug_web_tests
* print-stale-test-expectations-entries ->
    print_stale_test_expectations_entries.py
* print-json-test-results -> print_web_test_json_results.py
* print-test-ordering -> print_web_test_ordering.py
* print-layout-test-times -> print_web_test_times.py
* print-layout-test-types -> print_web_test_types.py
* read-checksum-from-png -> read_checksum_from_png.py
* run-blink-httpd -> run_blink_httpd.py
* run-blink-websocketserver -> run_blink_websocketserver.py
* run-blink-wptserve -> run_blink_wptserve.py
* try-flag -> try_flag.py
* update-flaky-expectations -> update_flaky_expectations.py

Note that we decided to rename LayoutTests to web_tests.
https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/KKNbuzj-3HY/H8FWgtKrBgAJ
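
The doc changes below describe order bisection; the core idea behind
bisect_web_test_ordering.py can be sketched like this (a simplified
illustration only, not the script's actual implementation; `fails_when_run`
is a hypothetical stand-in for re-running the web tests on a candidate
ordering, e.g. via `run-webkit-tests --order=none --test-list=...`):

```python
def bisect_ordering(tests, fails_when_run):
    """Narrow an ordered test list down to a small set that still makes
    the final test fail.

    tests: test names in run order; the last entry is the failing test.
    fails_when_run: callback that "runs" the given ordering and returns
        True if the last test fails.
    """
    target = tests[-1]
    candidates = tests[:-1]
    changed = True
    while changed and len(candidates) > 1:
        changed = False
        half = len(candidates) // 2
        # Try dropping each half; keep the drop if the failure persists.
        for chunk in (candidates[:half], candidates[half:]):
            remaining = [t for t in candidates if t not in chunk]
            if fails_when_run(remaining + [target]):
                candidates = remaining
                changed = True
                break
    return candidates + [target]


# Toy example: the failure reproduces whenever "c" runs before the target.
culprits = bisect_ordering(
    ["a", "b", "c", "d", "fail"],
    lambda order: "c" in order,
)
print(culprits)  # ['c', 'fail']
```

Dropping halves rather than single tests keeps the number of (slow) test
runs roughly logarithmic in the list length when one earlier test is to
blame.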

Bug: 829697
Change-Id: Ia3dd95f6b42e337deb79deb3e4524aded378d78f
Reviewed-on: https://chromium-review.googlesource.com/1018702
Reviewed-by: Quinten Yearsley <[email protected]>
Commit-Queue: Kent Tamura <[email protected]>
Cr-Commit-Position: refs/heads/master@{#552369}
diff --git a/docs/testing/identifying_tests_that_depend_on_order.md b/docs/testing/identifying_tests_that_depend_on_order.md
index e62ef1f..6225167 100644
--- a/docs/testing/identifying_tests_that_depend_on_order.md
+++ b/docs/testing/identifying_tests_that_depend_on_order.md
@@ -14,30 +14,30 @@
 ### Bisect test ordering
 
 1. Run the tests such that the test in question fails.
-2. Run `./Tools/Scripts/print-test-ordering` and save the output to a file. This
+2. Run `./tools/print_web_test_ordering.py` and save the output to a file. This
    outputs the tests run in the order they were run on each content_shell
    instance.
 3. Create a file that contains only the tests run on that worker in the same
    order as in your saved output file. The last line in the file should be the
    failing test.
 4. Run
-   `./Tools/Scripts/bisect-test-ordering --test-list=path/to/file/from/step/3`
+   `./tools/bisect_web_test_ordering.py --test-list=path/to/file/from/step/3`
 
-The bisect-test-ordering script should spit out a list of tests at the end that
-causes the test to fail.
+The bisect_web_test_ordering.py script should print a list of tests at the
+end that causes the test to fail.
 
 *** promo
-At the moment bisect-test-ordering only allows you to find tests that fail due
-to a previous test running. It's a small change to the script to make it work
-for tests that pass due to a previous test running (i.e. to figure out which
-test it depends on running before it). Contact ojan@chromium if you're
+At the moment bisect_web_test_ordering.py only allows you to find tests that
+fail due to a previous test running. It's a small change to the script to make
+it work for tests that pass due to a previous test running (i.e. to figure out
+which test it depends on running before it). Contact ojan@chromium.org if you're
 interested in adding that feature to the script.
 ***
 
 ### Manual bisect
 
-Instead of running `bisect-test-ordering`, you can manually do the work of step
-4 above.
+Instead of running `bisect_web_test_ordering.py`, you can manually do the work
+of step 4 above.
 
 1. `run-webkit-tests --child-processes=1 --order=none --test-list=path/to/file/from/step/3`
 2. If the test doesn't fail here, then the test itself is probably just flaky.
@@ -58,7 +58,7 @@
 #### Run tests in a random order and diagnose failures
 
 1. Run `run-webkit-tests --order=random --no-retry`
-2. Run `./Tools/Scripts/print-test-ordering` and save the output to a file. This
+2. Run `./tools/print_web_test_ordering.py` and save the output to a file. This
    outputs the tests run in the order they were run on each content_shell
    instance.
 3. Run the diagnosing steps from above to figure out which tests
diff --git a/docs/testing/layout_tests.md b/docs/testing/layout_tests.md
index 02e1251..f063a86c 100644
--- a/docs/testing/layout_tests.md
+++ b/docs/testing/layout_tests.md
@@ -356,8 +356,8 @@
 To run the server manually to reproduce/debug a failure:
 
 ```bash
-cd src/third_party/WebKit/Tools/Scripts
-./run-blink-httpd
+cd src/third_party/blink/tools
+./run_blink_httpd.py
 ```
 
 The layout tests will be served from `http://127.0.0.1:8000`. For example, to
@@ -368,7 +368,7 @@
 tests will behave differently if you go to 127.0.0.1 vs localhost, so use
 127.0.0.1.
 
-To kill the server, hit any key on the terminal where `run-blink-httpd` is
+To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
 running, or just use `taskkill` or the Task Manager on Windows, and `killall` or
 Activity Monitor on MacOS.
 
diff --git a/docs/testing/writing_layout_tests.md b/docs/testing/writing_layout_tests.md
index a000b91..18d58e8 100644
--- a/docs/testing/writing_layout_tests.md
+++ b/docs/testing/writing_layout_tests.md
@@ -319,8 +319,8 @@
 manually to reproduce or debug a failure:
 
 ```bash
-cd src/third_party/WebKit/Tools/Scripts
-./run-blink-httpd
+cd src/third_party/blink/tools
+./run_blink_httpd.py
 ```
 
 The layout tests will be served from `http://127.0.0.1:8000`. For example, to
@@ -330,7 +330,7 @@
 tests will behave differently if you go to 127.0.0.1 instead of localhost, so
 use 127.0.0.1.
 
-To kill the server, hit any key on the terminal where `run-blink-httpd` is
+To kill the server, hit any key on the terminal where `run_blink_httpd.py` is
 running, or just use `taskkill` or the Task Manager on Windows, and `killall` or
 Activity Monitor on MacOS.