Commit 15355b21 authored by Xianzhu Wang, committed by Commit Bot

web_tests/FlagSpecificConfig

It specifies the name of the flag-specific expectation file
under web_tests/FlagExpectations and the baseline directory under
web_tests/flag-specific, in the following format:

  {
    "name": "short-name",
    "args": ["--arg1", "--arg2"]
  }

When at least --additional-driver-flag=--arg1 and
--additional-driver-flag=--arg2 are in run_web_tests.py command line,
or --flag-specific=short-name is in the command line,
we will find web_tests/FlagExpectations/short-name for the additional
expectation file and web_tests/flag-specific/short-name for the
additional baseline directory.
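The matching described above can be sketched in Python. This is a minimal illustration only; `find_config_name`, `configs`, and `specified_flags` are hypothetical names, and the real implementation additionally asserts that no two configs match the same number of flags:

```python
# Sketch of the flag-specific config lookup described above.
# Not the actual Port implementation; names are illustrative.

def find_config_name(configs, specified_flags):
    """Return the config name whose args are all present in specified_flags,
    preferring the config that matches the most flags; otherwise fall back
    to the first specified flag with leading '-'s stripped."""
    best = None
    for name, args in configs.items():
        # To match, the specified flags must contain all config args.
        if all(arg in specified_flags for arg in args):
            if best is None or len(configs[best]) < len(args):
                best = name
    if best:
        return best
    return specified_flags[0].lstrip('-') if specified_flags else None
```

For example, with the config above, passing both `--arg1` and `--arg2` resolves to `short-name`, while passing only `--arg1` falls back to the stripped flag name `arg1`.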

Bug: 1019501
Change-Id: Idf0621abc89efc18cd17f9a9612074aa9d7de298
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1894069
Commit-Queue: Xianzhu Wang <wangxianzhu@chromium.org>
Reviewed-by: Steve Kobes <skobes@chromium.org>
Cr-Commit-Position: refs/heads/master@{#712013}
parent 4e88a3c4
......@@ -68,11 +68,12 @@ python third_party/blink/tools/run_web_tests.py -t android --android
Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations)
won't be run at all, generally because they cause some intractable tool error.
won't be run by default, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
test as the only one on the command line (see below). Read the
[Web Test Expectations documentation](./web_test_expectations.md) to learn
more about TestExpectations and related files.
test on the command line (see below) or in a file specified with --test-list
(however, --skipped=always can make the tests marked as `[ Skip ]` always be skipped).
Read the [Web Test Expectations documentation](./web_test_expectations.md) to
learn more about TestExpectations and related files.
*** promo
Currently only the tests listed in
......@@ -220,6 +221,31 @@ There are two ways to run web tests with additional command-line arguments:
`web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
suppressions in this file override the main TestExpectations file.
It will also look for baselines in `web_tests/flag-specific/blocking-repaint`.
The baselines in this directory override the fallback baselines.
By default, the name of the expectation file under
`web_tests/FlagExpectations` and the name of the baseline directory under
`web_tests/flag-specific` are derived from the first --additional-driver-flag,
with leading '-'s stripped.
You can also customize the name in `web_tests/FlagSpecificConfig` when
the name is too long or when we need to match multiple additional args:
```json
{
"name": "short-name",
"args": ["--blocking-repaint", "--another-flag"]
}
```
When at least `--additional-driver-flag=--blocking-repaint` and
`--additional-driver-flag=--another-flag` are specified, `short-name` will be
used as the name of the flag-specific expectation file and the baseline
directory. With this config, you can also use `--flag-specific=short-name` as a
shortcut for
`--additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag`.
* Using a *virtual test suite* defined in
[web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites).
A virtual test suite runs a subset of web tests with additional flags, with
......
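Given a resolved name, the two lookup locations described above can be sketched as follows. This is an illustrative helper, not the actual Port filesystem API; `posixpath` is used here only for clarity:

```python
import posixpath

def flag_specific_paths(web_tests_dir, name):
    """Where run_web_tests.py looks for flag-specific expectations and
    baselines, per the scheme described above (illustrative sketch)."""
    expectations = posixpath.join(web_tests_dir, 'FlagExpectations', name)
    baselines = posixpath.join(web_tests_dir, 'flag-specific', name)
    return expectations, baselines
```

So a resolved name of `short-name` maps to `web_tests/FlagExpectations/short-name` and `web_tests/flag-specific/short-name`.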
......@@ -102,6 +102,8 @@ SXG_FINGERPRINT = '55qC1nKu2A88ESbFmk5sTPQS/ScG+8DD7P+2bgFA9iM='
# And one for external/wpt/signed-exchange/resources/127.0.0.1.sxg.pem
SXG_WPT_FINGERPRINT = '0Rt4mT6SJXojEMHTnKnlJ/hBKMBcI4kteBlhR1eTTdk='
# A conservative rule for names that are valid file or directory names.
VALID_FILE_NAME_REGEX = re.compile(r'^[\w\-=]+$')
class Port(object):
"""Abstract class for Port-specific hooks for the web_test package."""
......@@ -251,26 +253,87 @@ class Port(object):
return 'Port{name=%s, version=%s, architecture=%s, test_configuration=%s}' % (
self._name, self._version, self._architecture, self._test_configuration)
def primary_driver_flag(self):
"""Returns the driver flag that is used for flag-specific expectations and baselines. This
is the flag in web_tests/additional-driver-flag.setting, if present, otherwise the
first flag passed by --additional-driver-flag.
@memoized
def _flag_specific_config_name(self):
"""Returns the name of the flag-specific configuration which best matches
self._specified_additional_driver_flags(), or the first specified flag
with leading '-'s stripped if no match in the configuration is found.
"""
specified_flags = self._specified_additional_driver_flags()
if not specified_flags:
return None
best_match = None
configs = self._flag_specific_configs()
for name in configs:
# To match, the specified flags must contain all config args.
args = configs[name]
if any(arg not in specified_flags for arg in args):
continue
# The first config matching the highest number of specified flags wins.
if not best_match or len(configs[best_match]) < len(args):
best_match = name
else:
assert len(configs[best_match]) > len(args), (
'Ambiguous flag-specific configs {} and {}: they match the same number of additional driver args.'
.format(best_match, name))
if best_match:
return best_match
# If no match, fall back to the old mode: use the name of the first specified flag.
return specified_flags[0].lstrip('-')
@memoized
def _flag_specific_configs(self):
"""Reads configuration from FlagSpecificConfig and returns a dictionary from name to args."""
config_file = self._filesystem.join(self.web_tests_dir(), 'FlagSpecificConfig')
if not self._filesystem.exists(config_file):
return {}
try:
json_configs = json.loads(self._filesystem.read_text_file(config_file))
except ValueError as error:
raise ValueError('{} is not a valid JSON file: {}'.format(config_file, error))
configs = {}
for config in json_configs:
name = config['name']
args = config['args']
if not VALID_FILE_NAME_REGEX.match(name):
raise ValueError('{}: name "{}" contains invalid characters'.format(config_file, name))
if name in configs:
raise ValueError('{} contains duplicated name {}.'.format(config_file, name))
if args in configs.itervalues():
raise ValueError('{}: name "{}" has the same args as another entry.'.format(config_file, name))
configs[name] = args
return configs
def _specified_additional_driver_flags(self):
"""Returns the list of additional driver flags specified by the user in
the following ways, concatenated:
1. A single flag in web_tests/additional-driver-flag.setting. If present,
this flag will be the first flag.
2. Flags expanded from --flag-specific=<name> based on the flag-specific config.
3. Zero or more flags passed by --additional-driver-flag.
"""
flags = []
flag_file = self._filesystem.join(self.web_tests_dir(), 'additional-driver-flag.setting')
if self._filesystem.exists(flag_file):
flag = self._filesystem.read_text_file(flag_file).strip()
if flag:
return flag
flags = self.get_option('additional_driver_flag', [])
if flags:
return flags[0]
flags = [flag]
def additional_driver_flags(self):
# Clone list to avoid mutating option state.
flags = list(self.get_option('additional_driver_flag', []))
flag_specific_option = self.get_option('flag_specific')
if flag_specific_option:
configs = self._flag_specific_configs()
assert flag_specific_option in configs, '{} is not defined in FlagSpecificConfig'.format(flag_specific_option)
flags += configs[flag_specific_option]
flags += self.get_option('additional_driver_flag', [])
return flags
if flags and flags[0] == self.primary_driver_flag():
flags = flags[1:]
def additional_driver_flags(self):
flags = self._specified_additional_driver_flags()
if self.driver_name() == self.CONTENT_SHELL_NAME:
flags += [
'--run-web-tests',
......@@ -1298,17 +1361,17 @@ class Port(object):
return test_configurations
def _flag_specific_expectations_path(self):
flag = self.primary_driver_flag()
if flag:
config_name = self._flag_specific_config_name()
if config_name:
return self._filesystem.join(
self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX, flag.lstrip('-'))
self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX, config_name)
def _flag_specific_baseline_search_path(self):
flag = self.primary_driver_flag()
if not flag:
config_name = self._flag_specific_config_name()
if not config_name:
return []
flag_dir = self._filesystem.join(
self.web_tests_dir(), 'flag-specific', flag.lstrip('-'))
self.web_tests_dir(), 'flag-specific', config_name)
platform_dirs = [
self._filesystem.join(flag_dir, 'platform', platform_dir)
for platform_dir in self.FALLBACK_PATHS[self.version()]]
......@@ -1349,7 +1412,7 @@ class Port(object):
"""Returns an OrderedDict of name -> expectations strings."""
expectations = self.expectations_dict()
flag_path = self._filesystem.join(self.web_tests_dir(), 'FlagExpectations')
flag_path = self._filesystem.join(self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX)
if not self._filesystem.exists(flag_path):
return expectations
......@@ -1801,10 +1864,10 @@ class Port(object):
class VirtualTestSuite(object):
def __init__(self, prefix=None, bases=None, args=None):
assert VALID_FILE_NAME_REGEX.match(prefix), "Virtual test suite prefix '{}' contains invalid characters".format(prefix)
assert isinstance(bases, list)
assert args
assert isinstance(args, list)
assert '/' not in prefix, "Virtual test suites prefixes cannot contain /'s: %s" % prefix
self.full_prefix = 'virtual/' + prefix + '/'
self.bases = bases
self.args = args
......
......@@ -461,9 +461,6 @@ class Driver(object):
cmd += self._base_cmd_line()
if self._no_timeout:
cmd.append('--no-timeout')
primary_driver_flag = self._port.primary_driver_flag()
if primary_driver_flag:
cmd.append(primary_driver_flag)
cmd.extend(self._port.additional_driver_flags())
if self._port.get_option('enable_leak_detection'):
cmd.append('--enable-leak-detection')
......
......@@ -160,6 +160,13 @@ def parse_args(args):
default=[],
help=('Additional command line flag to pass to the driver. Specify multiple '
'times to add multiple flags.')),
optparse.make_option(
'--flag-specific',
dest='flag_specific',
action='store',
default=None,
help=('Name of a flag-specific configuration defined in FlagSpecificConfig, '
'as a shortcut for a set of --additional-driver-flag options.')),
optparse.make_option(
'--additional-expectations',
action='append',
......@@ -440,7 +447,7 @@ def parse_args(args):
'--test-list',
action='append',
metavar='FILE',
help='read list of tests to run from file'),
help='read list of tests to run from file, as if they were specified on the command line'),
optparse.make_option(
'--isolated-script-test-filter',
action='append',
......
FlagExpectations stores flag-specific test expectations. To run layout tests
with a flag, use:
web_tests/FlagExpectations stores flag-specific test expectations.
To run layout tests with a flag passed to content_shell, use:
run_web_tests.py --additional-driver-flag=--name-of-flag
......@@ -7,21 +7,20 @@ Create a new file:
FlagExpectations/name-of-flag
These are formatted:
The entries in the file are in the same format as the main TestExpectations file, e.g.
crbug.com/123456 path/to/your/test.html [ Expectation ]
Then run the tests with --additional-expectations:
This file will override the main TestExpectations file when the above command
is run.
run_web_tests.py --additional-driver-flag=--name-of-flag
--additional-expectations=path/to/FlagExpectations/name-of-flag
which will override the main TestExpectations file.
If the name-of-flag is too long, or when multiple additional flags are needed,
you can add an entry in web_tests/FlagSpecificConfig, like
{
"name": "short-name",
"args": ["--name-of-flag1", "--name-of-flag2"]
}
When passing a set of tests via the command line, such as using --test-list,
the SKIP expectation is not respected by default. The option --skipped=always
can be added in order to actually skip those tests.
And create a new file in the same format as the above
FlagExpectations/name-of-flag file:
To run a subset of all tests, with custom expectations, and to properly skip:
run_web_tests.py --additional-driver-flag=--name-of-flag
--additional-expectations=path/to/FlagExpectations/name-of-flag
--test-list=path/to/test-list-file --skipped=always
FlagExpectations/short-name
[
{
"name": "disable-blink-features=LayoutNG",
"args": ["--disable-blink-features=LayoutNG"]
},
{
"name": "disable-site-isolation-trials",
"args": ["--disable-site-isolation-trials"]
},
{
"name": "enable-blink-features=CompositeAfterPaint",
"args": ["--enable-blink-features=CompositeAfterPaint"]
},
{
"name": "enable-blink-features=HeapUnifiedGarbageCollection",
"args": ["--enable-blink-features=HeapUnifiedGarbageCollection"]
},
{
"name": "enable-blink-features=NewSystemColors",
"args": ["--enable-blink-features=NewSystemColors"]
},
{
"name": "enable-features=NetworkService",
"args": ["--enable-features=NetworkService"]
},
{
"name": "enable-features=OverflowIconsForMediaControls",
"args": ["--enable-features=OverflowIconsForMediaControls"]
},
{
"name": "enable-gpu-rasterization",
"args": ["--enable-gpu-rasterization"]
}
]
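A FlagSpecificConfig file like the one above can be parsed and validated with the checks this commit introduces (unique names, unique args, filesystem-safe names). The sketch below mirrors those checks in simplified form and is not the actual Port method:

```python
import json
import re

# Same conservative rule as in the commit: names must be usable as
# file or directory names (word chars, '-', and '=' only).
VALID_FILE_NAME_REGEX = re.compile(r'^[\w\-=]+$')

def parse_flag_specific_config(text):
    """Parse FlagSpecificConfig JSON text into a name -> args dict,
    raising ValueError on invalid or duplicate entries (simplified)."""
    configs = {}
    for entry in json.loads(text):
        name, args = entry['name'], entry['args']
        if not VALID_FILE_NAME_REGEX.match(name):
            raise ValueError('name "%s" contains invalid characters' % name)
        if name in configs:
            raise ValueError('duplicate name "%s"' % name)
        if args in configs.values():
            raise ValueError('name "%s" repeats the args of another entry' % name)
        configs[name] = args
    return configs
```

Note that '=' is deliberately allowed, so names like `enable-blink-features=CompositeAfterPaint` in the listing above pass validation.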
Tests V8 code cache for javascript resources
---First navigation - produce and consume code cache ------
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "script too small"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "already used streamed data"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
cacheProduceOptions : "code"
columnNumber : 0
lineNumber : 0
notStreamedReason : "already used streamed data"
producedCacheSize : <number>
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
cacheConsumeOptions : "code"
cacheRejected : false
columnNumber : 0
consumedCacheSize : <number>
lineNumber : 0
notStreamedReason : "already used streamed data"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
--- Second navigation - from a different origin ------
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "script too small"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1