Commit 15355b21 authored by Xianzhu Wang, committed by Commit Bot

web_tests/FlagSpecificConfig

It specifies the name of the flag-specific expectation file
under web_tests/FlagExpectations and the baseline directory under
web_tests/flag-specific, in the following format:

  {
    "name": "short-name",
    "args": ["--arg1", "--arg2"]
  }

When at least --additional-driver-flag=--arg1 and
--additional-driver-flag=--arg2 are on the run_web_tests.py command line,
or --flag-specific=short-name is on the command line,
web_tests/FlagExpectations/short-name will be used as the additional
expectation file and web_tests/flag-specific/short-name as the
additional baseline directory.
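
For example, with the config above, these two invocations are
equivalent (illustrative command lines):

  run_web_tests.py --additional-driver-flag=--arg1 --additional-driver-flag=--arg2
  run_web_tests.py --flag-specific=short-name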

Bug: 1019501
Change-Id: Idf0621abc89efc18cd17f9a9612074aa9d7de298
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1894069
Commit-Queue: Xianzhu Wang <wangxianzhu@chromium.org>
Reviewed-by: Steve Kobes <skobes@chromium.org>
Cr-Commit-Position: refs/heads/master@{#712013}
parent 4e88a3c4
@@ -68,11 +68,12 @@ python third_party/blink/tools/run_web_tests.py -t android --android
 Tests marked as `[ Skip ]` in
 [TestExpectations](../../third_party/blink/web_tests/TestExpectations)
-won't be run at all, generally because they cause some intractable tool error.
-To force one of them to be run, either rename that file or specify the skipped
-test as the only one on the command line (see below). Read the
-[Web Test Expectations documentation](./web_test_expectations.md) to learn
-more about TestExpectations and related files.
+won't be run by default, generally because they cause some intractable tool error.
+To force one of them to be run, either rename that file or specify the skipped
+test on the command line (see below) or in a file specified with --test-list
+(however, --skipped=always keeps the tests marked as `[ Skip ]` always skipped).
+Read the [Web Test Expectations documentation](./web_test_expectations.md) to
+learn more about TestExpectations and related files.
 
 *** promo
 Currently only the tests listed in
@@ -220,6 +221,31 @@ There are two ways to run web tests with additional command-line arguments:
 `web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
 suppressions in this file override the main TestExpectations file.
 
+It will also look for baselines in `web_tests/flag-specific/blocking-repaint`.
+The baselines in this directory override the fallback baselines.
+
+By default, the name of the expectation file under
+`web_tests/FlagExpectations` and of the baseline directory under
+`web_tests/flag-specific` is the first flag of --additional-driver-flag
+with leading '-'s stripped.
+
+You can also customize the name in `web_tests/FlagSpecificConfig` when
+the name is too long or when we need to match multiple additional args:
+
+```json
+{
+  "name": "short-name",
+  "args": ["--blocking-repaint", "--another-flag"]
+}
+```
+
+When at least `--additional-driver-flag=--blocking-repaint` and
+`--additional-driver-flag=--another-flag` are specified, `short-name` will be
+used as the name of the flag-specific expectation file and baseline directory.
+
+With the config, you can also use `--flag-specific=short-name` as a shortcut
+for `--additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag`.
+
 * Using a *virtual test suite* defined in
 [web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites).
 A virtual test suite runs a subset of web tests with additional flags, with
...
@@ -102,6 +102,8 @@ SXG_FINGERPRINT = '55qC1nKu2A88ESbFmk5sTPQS/ScG+8DD7P+2bgFA9iM='
 # And one for external/wpt/signed-exchange/resources/127.0.0.1.sxg.pem
 SXG_WPT_FINGERPRINT = '0Rt4mT6SJXojEMHTnKnlJ/hBKMBcI4kteBlhR1eTTdk='
 
+# A conservative rule for names that are valid as file or directory names.
+VALID_FILE_NAME_REGEX = re.compile(r'^[\w\-=]+$')
+
 
 class Port(object):
     """Abstract class for Port-specific hooks for the web_test package."""
@@ -251,26 +253,87 @@ class Port(object):
         return 'Port{name=%s, version=%s, architecture=%s, test_configuration=%s}' % (
             self._name, self._version, self._architecture, self._test_configuration)
 
-    def primary_driver_flag(self):
-        """Returns the driver flag that is used for flag-specific expectations and baselines. This
-        is the flag in web_tests/additional-driver-flag.setting, if present, otherwise the
-        first flag passed by --additional-driver-flag.
-        """
-        flag_file = self._filesystem.join(self.web_tests_dir(), 'additional-driver-flag.setting')
-        if self._filesystem.exists(flag_file):
-            flag = self._filesystem.read_text_file(flag_file).strip()
-            if flag:
-                return flag
-        flags = self.get_option('additional_driver_flag', [])
-        if flags:
-            return flags[0]
-
-    def additional_driver_flags(self):
-        # Clone list to avoid mutating option state.
-        flags = list(self.get_option('additional_driver_flag', []))
-        if flags and flags[0] == self.primary_driver_flag():
-            flags = flags[1:]
+    @memoized
+    def _flag_specific_config_name(self):
+        """Returns the name of the flag-specific configuration that best matches
+        self._specified_additional_driver_flags(), or the first specified flag
+        with leading '-'s stripped if no match is found in the configuration.
+        """
+        specified_flags = self._specified_additional_driver_flags()
+        if not specified_flags:
+            return None
+
+        best_match = None
+        configs = self._flag_specific_configs()
+        for name in configs:
+            # To match, the specified flags must contain all config args.
+            args = configs[name]
+            if any(arg not in specified_flags for arg in args):
+                continue
+            # The first config matching the highest number of specified flags wins.
+            if not best_match or len(configs[best_match]) < len(args):
+                best_match = name
+            else:
+                assert len(configs[best_match]) > len(args), (
+                    'Ambiguous flag-specific configs {} and {}: they match the same number of additional driver args.'
+                    .format(best_match, name))
+        if best_match:
+            return best_match
+
+        # If no match, fall back to the old mode: use the name of the first specified flag.
+        return specified_flags[0].lstrip('-')
+
+    @memoized
+    def _flag_specific_configs(self):
+        """Reads the FlagSpecificConfig file and returns a dictionary from name to args."""
+        config_file = self._filesystem.join(self.web_tests_dir(), 'FlagSpecificConfig')
+        if not self._filesystem.exists(config_file):
+            return {}
+
+        try:
+            json_configs = json.loads(self._filesystem.read_text_file(config_file))
+        except ValueError as error:
+            raise ValueError('{} is not a valid JSON file: {}'.format(config_file, error))
+
+        configs = {}
+        for config in json_configs:
+            name = config['name']
+            args = config['args']
+            if not VALID_FILE_NAME_REGEX.match(name):
+                raise ValueError('{}: name "{}" contains invalid characters'.format(config_file, name))
+            if name in configs:
+                raise ValueError('{} contains duplicated name {}.'.format(config_file, name))
+            if args in configs.itervalues():
+                raise ValueError('{}: name "{}" has the same args as another entry.'.format(config_file, name))
+            configs[name] = args
+        return configs
+
+    def _specified_additional_driver_flags(self):
+        """Returns the list of additional driver flags specified by the user in
+        the following ways, concatenated:
+          1. A single flag in web_tests/additional-driver-flag.setting. If present,
+             this flag will be the first flag.
+          2. Flags expanded from --flag-specific=<name> based on the flag-specific config.
+          3. Zero or more flags passed by --additional-driver-flag.
+        """
+        flags = []
+        flag_file = self._filesystem.join(self.web_tests_dir(), 'additional-driver-flag.setting')
+        if self._filesystem.exists(flag_file):
+            flag = self._filesystem.read_text_file(flag_file).strip()
+            if flag:
+                flags = [flag]
+
+        flag_specific_option = self.get_option('flag_specific')
+        if flag_specific_option:
+            configs = self._flag_specific_configs()
+            assert flag_specific_option in configs, '{} is not defined in FlagSpecificConfig'.format(flag_specific_option)
+            flags += configs[flag_specific_option]
+
+        flags += self.get_option('additional_driver_flag', [])
+        return flags
+
+    def additional_driver_flags(self):
+        flags = self._specified_additional_driver_flags()
         if self.driver_name() == self.CONTENT_SHELL_NAME:
             flags += [
                 '--run-web-tests',
@@ -1298,17 +1361,17 @@ class Port(object):
         return test_configurations
 
     def _flag_specific_expectations_path(self):
-        flag = self.primary_driver_flag()
-        if flag:
+        config_name = self._flag_specific_config_name()
+        if config_name:
             return self._filesystem.join(
-                self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX, flag.lstrip('-'))
+                self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX, config_name)
 
     def _flag_specific_baseline_search_path(self):
-        flag = self.primary_driver_flag()
-        if not flag:
+        config_name = self._flag_specific_config_name()
+        if not config_name:
             return []
 
         flag_dir = self._filesystem.join(
-            self.web_tests_dir(), 'flag-specific', flag.lstrip('-'))
+            self.web_tests_dir(), 'flag-specific', config_name)
         platform_dirs = [
             self._filesystem.join(flag_dir, 'platform', platform_dir)
             for platform_dir in self.FALLBACK_PATHS[self.version()]]
@@ -1349,7 +1412,7 @@ class Port(object):
         """Returns an OrderedDict of name -> expectations strings."""
         expectations = self.expectations_dict()
 
-        flag_path = self._filesystem.join(self.web_tests_dir(), 'FlagExpectations')
+        flag_path = self._filesystem.join(self.web_tests_dir(), self.FLAG_EXPECTATIONS_PREFIX)
         if not self._filesystem.exists(flag_path):
             return expectations
@@ -1801,10 +1864,10 @@ class Port(object):
 class VirtualTestSuite(object):
 
     def __init__(self, prefix=None, bases=None, args=None):
+        assert VALID_FILE_NAME_REGEX.match(prefix), "Virtual test suite prefix '{}' contains invalid characters".format(prefix)
         assert isinstance(bases, list)
         assert args
         assert isinstance(args, list)
-        assert '/' not in prefix, "Virtual test suites prefixes cannot contain /'s: %s" % prefix
         self.full_prefix = 'virtual/' + prefix + '/'
         self.bases = bases
         self.args = args
...
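
The best-match rule in _flag_specific_config_name() above can be summarized
with a small self-contained sketch; pick_config_name() is a hypothetical
helper for illustration, not code from this CL:

```python
# Standalone sketch of the best-match rule in _flag_specific_config_name().
def pick_config_name(configs, specified_flags):
    """configs: dict of name -> args list, as parsed from FlagSpecificConfig.

    Returns the best-matching name, or the first specified flag with
    leading '-'s stripped when no config matches.
    """
    if not specified_flags:
        return None
    best_match = None
    for name, args in configs.items():
        # To match, the specified flags must contain all config args.
        if any(arg not in specified_flags for arg in args):
            continue
        # The config matching the most specified flags wins; two matching
        # configs with the same number of args are ambiguous and rejected.
        if best_match is None or len(configs[best_match]) < len(args):
            best_match = name
        else:
            assert len(configs[best_match]) > len(args), 'ambiguous configs'
    return best_match or specified_flags[0].lstrip('-')

configs = {'a': ['--aa'], 'b': ['--aa', '--bb']}
assert pick_config_name(configs, ['--aa']) == 'a'
assert pick_config_name(configs, ['--bb', '--aa']) == 'b'  # more args win
assert pick_config_name(configs, ['--cc']) == 'cc'         # fallback mode
```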
@@ -350,9 +350,8 @@ class PortTest(LoggingTestCase):
         self.assertEqual('\n'.join(port.expectations_dict().keys()),
                          MOCK_WEB_TESTS + 'platform/exists/TestExpectations')
 
-    def test_additional_expectations(self):
-        port = self.make_port(port_name='foo')
-        port.port_name = 'foo'
+    def _make_port_for_test_additional_expectations(self, options_dict={}):
+        port = self.make_port(port_name='foo', options=optparse.Values(options_dict))
         port.host.filesystem.write_text_file(
             MOCK_WEB_TESTS + 'platform/foo/TestExpectations', '')
         port.host.filesystem.write_text_file(
@@ -361,28 +360,38 @@ class PortTest(LoggingTestCase):
             '/tmp/additional-expectations-2.txt', 'content2\n')
         port.host.filesystem.write_text_file(
             MOCK_WEB_TESTS + 'FlagExpectations/special-flag', 'content3')
+        return port
 
-        self.assertEqual('\n'.join(port.expectations_dict().values()), '')
-
-        # pylint: disable=protected-access
-        port._options.additional_expectations = [
-            '/tmp/additional-expectations-1.txt']
-        self.assertEqual('\n'.join(port.expectations_dict().values()), 'content1\n')
-
-        port._options.additional_expectations = [
-            '/tmp/nonexistent-file', '/tmp/additional-expectations-1.txt']
-        self.assertEqual('\n'.join(port.expectations_dict().values()), 'content1\n')
-
-        port._options.additional_expectations = [
-            '/tmp/additional-expectations-1.txt', '/tmp/additional-expectations-2.txt']
-        self.assertEqual('\n'.join(port.expectations_dict().values()), 'content1\n\ncontent2\n')
-
-        port._options.additional_driver_flag = ['--special-flag']
-        self.assertEqual('\n'.join(port.expectations_dict().values()), 'content3\ncontent1\n\ncontent2\n')
+    def test_additional_expectations_empty(self):
+        port = self._make_port_for_test_additional_expectations()
+        self.assertEqual(port.expectations_dict().values(), [])
+
+    def test_additional_expectations_1(self):
+        port = self._make_port_for_test_additional_expectations(
+            {'additional_expectations': ['/tmp/additional-expectations-1.txt']})
+        self.assertEqual(port.expectations_dict().values(), ['content1\n'])
+
+    def test_additional_expectations_nonexistent_and_1(self):
+        port = self._make_port_for_test_additional_expectations(
+            {'additional_expectations': ['/tmp/nonexistent-file',
+                                         '/tmp/additional-expectations-1.txt']})
+        self.assertEqual(port.expectations_dict().values(), ['content1\n'])
+
+    def test_additional_expectations_2(self):
+        port = self._make_port_for_test_additional_expectations(
+            {'additional_expectations': ['/tmp/additional-expectations-1.txt',
+                                         '/tmp/additional-expectations-2.txt']})
+        self.assertEqual(port.expectations_dict().values(), ['content1\n', 'content2\n'])
+
+    def test_additional_expectations_additional_flag(self):
+        port = self._make_port_for_test_additional_expectations(
+            {'additional_expectations': ['/tmp/additional-expectations-1.txt',
+                                         '/tmp/additional-expectations-2.txt'],
+             'additional_driver_flag': ['--special-flag']})
+        self.assertEqual(port.expectations_dict().values(), ['content3', 'content1\n', 'content2\n'])
 
     def test_flag_specific_expectations(self):
         port = self.make_port(port_name='foo')
-        port.port_name = 'foo'
         port.host.filesystem.write_text_file(
             MOCK_WEB_TESTS + 'FlagExpectations/special-flag-a', 'aa')
         port.host.filesystem.write_text_file(
@@ -390,14 +399,13 @@ class PortTest(LoggingTestCase):
         port.host.filesystem.write_text_file(
             MOCK_WEB_TESTS + 'FlagExpectations/README.txt', 'cc')
 
-        self.assertEqual('\n'.join(port.expectations_dict().values()), '')
+        self.assertEqual(port.expectations_dict().values(), [])
         # all_expectations_dict() is an OrderedDict, but its order depends on
         # file system walking order.
-        self.assertEqual('\n'.join(sorted(port.all_expectations_dict().values())), 'aa\nbb')
+        self.assertEqual(sorted(port.all_expectations_dict().values()), ['aa', 'bb'])
 
     def test_flag_specific_expectations_identify_unreadable_file(self):
         port = self.make_port(port_name='foo')
-        port.port_name = 'foo'
 
         non_utf8_file = MOCK_WEB_TESTS + 'FlagExpectations/non-utf8-file'
         invalid_utf8 = '\xC0'
@@ -411,40 +419,138 @@ class PortTest(LoggingTestCase):
         self.assertLog(['ERROR: Failed to read expectations file: \'' +
                        non_utf8_file + '\'\n'])
 
-    def test_driver_flag_from_file(self):
-        # primary_driver_flag() comes from additional-driver-flag.setting file or
-        # --additional-driver-flag. additional_driver_flags() excludes primary_driver_flag().
-        port_a = self.make_port(options=optparse.Values(
-            {'additional_driver_flag': []}))
-        port_b = self.make_port(options=optparse.Values(
-            {'additional_driver_flag': ['--bb']}))
-        port_c = self.make_port(options=optparse.Values(
-            {'additional_driver_flag': ['--bb', '--cc']}))
-
-        self.assertEqual(port_a.primary_driver_flag(), None)
-        self.assertEqual(port_b.primary_driver_flag(), '--bb')
-        self.assertEqual(port_c.primary_driver_flag(), '--bb')
-
-        default_flags = port_a.additional_driver_flags()
-        self.assertEqual(port_b.additional_driver_flags(), default_flags)
-        self.assertEqual(port_c.additional_driver_flags(),
-                         ['--cc'] + default_flags)
-
-        flag_file = MOCK_WEB_TESTS + 'additional-driver-flag.setting'
-        port_a.host.filesystem.write_text_file(flag_file, '--aa')
-        port_b.host.filesystem.write_text_file(flag_file, '--aa')
-        port_c.host.filesystem.write_text_file(flag_file, '--bb')
-
-        self.assertEqual(port_a.primary_driver_flag(), '--aa')
-        self.assertEqual(port_b.primary_driver_flag(), '--aa')
-        self.assertEqual(port_c.primary_driver_flag(), '--bb')
-
-        self.assertEqual(port_a.additional_driver_flags(), default_flags)
-        self.assertEqual(port_b.additional_driver_flags(),
-                         ['--bb'] + default_flags)
-        self.assertEqual(port_c.additional_driver_flags(),
-                         ['--cc'] + default_flags)
+    def test_flag_specific_config_name_from_options(self):
+        port_a = self.make_port(options=optparse.Values({}))
+        # pylint: disable=protected-access
+        self.assertEqual(port_a._specified_additional_driver_flags(), [])
+        self.assertIsNone(port_a._flag_specific_config_name())
+
+        port_b = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb']}))
+        self.assertEqual(port_b._specified_additional_driver_flags(), ['--bb'])
+        self.assertEqual(port_b._flag_specific_config_name(), 'bb')
+
+        port_c = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--cc', '--dd']}))
+        self.assertEqual(port_c._specified_additional_driver_flags(), ['--cc', '--dd'])
+        self.assertEqual(port_c._flag_specific_config_name(), 'cc')
+
+    def test_flag_specific_config_name_from_options_and_file(self):
+        flag_file = MOCK_WEB_TESTS + 'additional-driver-flag.setting'
+
+        port_a = self.make_port(options=optparse.Values({}))
+        port_a.host.filesystem.write_text_file(flag_file, '--aa')
+        # pylint: disable=protected-access
+        self.assertEqual(port_a._specified_additional_driver_flags(), ['--aa'])
+        self.assertEqual(port_a._flag_specific_config_name(), 'aa')
+
+        port_b = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb']}))
+        port_b.host.filesystem.write_text_file(flag_file, '--aa')
+        self.assertEqual(port_b._specified_additional_driver_flags(), ['--aa', '--bb'])
+        self.assertEqual(port_b._flag_specific_config_name(), 'aa')
+
+        port_c = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb', '--cc']}))
+        port_c.host.filesystem.write_text_file(flag_file, '--bb')
+        # We don't remove duplicated flags at this time.
+        self.assertEqual(port_c._specified_additional_driver_flags(), ['--bb', '--bb', '--cc'])
+        self.assertEqual(port_c._flag_specific_config_name(), 'bb')
+
+    def _write_flag_specific_config(self, port):
+        port.host.filesystem.write_text_file(
+            port.host.filesystem.join(port.web_tests_dir(), 'FlagSpecificConfig'),
+            '['
+            '  {"name": "a", "args": ["--aa"]},'
+            '  {"name": "b", "args": ["--aa", "--bb"]},'
+            '  {"name": "c", "args": ["--aa", "--cc"]}'
+            ']')
+
+    def test_flag_specific_config_name_from_options_and_config(self):
+        port_a1 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--aa']}))
+        self._write_flag_specific_config(port_a1)
+        # pylint: disable=protected-access
+        self.assertEqual(port_a1._flag_specific_config_name(), 'a')
+
+        port_a2 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--aa', '--dd']}))
+        self._write_flag_specific_config(port_a2)
+        self.assertEqual(port_a2._flag_specific_config_name(), 'a')
+
+        port_b1 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb']}))
+        self._write_flag_specific_config(port_b1)
+        # No match. Fall back to the first specified flag.
+        self.assertEqual(port_b1._flag_specific_config_name(), 'bb')
+
+        port_b2 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--aa', '--bb']}))
+        self._write_flag_specific_config(port_b2)
+        self.assertEqual(port_b2._flag_specific_config_name(), 'b')
+
+        port_b3 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb', '--aa']}))
+        self._write_flag_specific_config(port_b3)
+        self.assertEqual(port_b3._flag_specific_config_name(), 'b')
+
+        port_b4 = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--bb', '--aa', '--dd']}))
+        self._write_flag_specific_config(port_b4)
+        self.assertEqual(port_b4._flag_specific_config_name(), 'b')
+
+    def test_ambiguous_flag_specific_match(self):
+        port = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--aa', '--bb', '--cc']}))
+        self._write_flag_specific_config(port)
+        # pylint: disable=protected-access
+        self.assertRaises(AssertionError, port._flag_specific_config_name)
+
+    def test_flag_specific_fallback(self):
+        port = self.make_port(options=optparse.Values(
+            {'additional_driver_flag': ['--dd', '--ee']}))
+        self._write_flag_specific_config(port)
+        # pylint: disable=protected-access
+        self.assertEqual(port._flag_specific_config_name(), 'dd')
+
+    def test_flag_specific_option(self):
+        port_a = self.make_port(options=optparse.Values({'flag_specific': 'a'}))
+        self._write_flag_specific_config(port_a)
+        # pylint: disable=protected-access
+        self.assertEqual(port_a._flag_specific_config_name(), 'a')
+
+        port_b = self.make_port(options=optparse.Values(
+            {'flag_specific': 'a', 'additional_driver_flag': ['--bb']}))
+        self._write_flag_specific_config(port_b)
+        self.assertEqual(port_b._flag_specific_config_name(), 'b')
+
+        port_d = self.make_port(options=optparse.Values({'flag_specific': 'd'}))
+        self._write_flag_specific_config(port_d)
+        self.assertRaises(AssertionError, port_d._flag_specific_config_name)
+
+    def test_duplicate_flag_specific_name(self):
+        port = self.make_port()
+        port.host.filesystem.write_text_file(
+            port.host.filesystem.join(port.web_tests_dir(), 'FlagSpecificConfig'),
+            '[{"name": "a", "args": ["--aa"]}, {"name": "a", "args": ["--aa", "--bb"]}]')
+        # pylint: disable=protected-access
+        self.assertRaises(ValueError, port._flag_specific_configs)
+
+    def test_duplicate_flag_specific_args(self):
+        port = self.make_port()
+        port.host.filesystem.write_text_file(
+            port.host.filesystem.join(port.web_tests_dir(), 'FlagSpecificConfig'),
+            '[{"name": "a", "args": ["--aa"]}, {"name": "b", "args": ["--aa"]}]')
+        # pylint: disable=protected-access
+        self.assertRaises(ValueError, port._flag_specific_configs)
+
+    def test_invalid_flag_specific_name(self):
+        port = self.make_port()
+        port.host.filesystem.write_text_file(
+            port.host.filesystem.join(port.web_tests_dir(), 'FlagSpecificConfig'),
+            '[{"name": "a/", "args": ["--aa"]}]')
+        # pylint: disable=protected-access
+        self.assertRaises(ValueError, port._flag_specific_configs)
 
     def test_additional_env_var(self):
         port = self.make_port(options=optparse.Values({'additional_env_var': ['FOO=BAR', 'BAR=FOO']}))
...
@@ -461,9 +461,6 @@ class Driver(object):
         cmd += self._base_cmd_line()
         if self._no_timeout:
             cmd.append('--no-timeout')
-        primary_driver_flag = self._port.primary_driver_flag()
-        if primary_driver_flag:
-            cmd.append(primary_driver_flag)
         cmd.extend(self._port.additional_driver_flags())
         if self._port.get_option('enable_leak_detection'):
             cmd.append('--enable-leak-detection')
...
@@ -160,6 +160,13 @@ def parse_args(args):
             default=[],
             help=('Additional command line flag to pass to the driver. Specify multiple '
                   'times to add multiple flags.')),
+        optparse.make_option(
+            '--flag-specific',
+            dest='flag_specific',
+            action='store',
+            default=None,
+            help=('Name of a flag-specific configuration defined in FlagSpecificConfig, '
+                  'as a shortcut of --additional-driver-flag options.')),
         optparse.make_option(
             '--additional-expectations',
             action='append',
@@ -440,7 +447,7 @@ def parse_args(args):
             '--test-list',
             action='append',
             metavar='FILE',
-            help='read list of tests to run from file'),
+            help='read list of tests to run from file, as if they were specified on the command line'),
         optparse.make_option(
             '--isolated-script-test-filter',
             action='append',
...
-FlagExpectations stores flag-specific test expectations. To run layout tests
-with a flag, use:
+web_tests/FlagExpectations stores flag-specific test expectations.
+To run layout tests with a flag passed to content_shell, use:
 
 run_web_tests.py --additional-driver-flag=--name-of-flag
 
@@ -7,21 +7,20 @@ Create a new file:
 FlagExpectations/name-of-flag
 
-These are formatted:
+The entries in the file are in the same format as in the main TestExpectations file, e.g.
 crbug.com/123456 path/to/your/test.html [ Expectation ]
 
-Then run the tests with --additional-expectations:
-
-run_web_tests.py --additional-driver-flag=--name-of-flag
-  --additional-expectations=path/to/FlagExpectations/name-of-flag
-
-which will override the main TestExpectations file.
-
-When passing a set of tests via the command line, such as using --test-list,
-the SKIP expectation is not respected by default. The option --skipped=always
-can be added in order to actually skip those tests.
-
-To run a subset of all tests, with custom expectations, and to properly skip:
-run_web_tests.py --additional-driver-flag=--name-of-flag
-  --additional-expectations=path/to/FlagExpectations/name-of-flag
-  --test-list=path/to/test-list-file --skipped=always
+This file will override the main TestExpectations file when the above command
+is run.
+
+If name-of-flag is too long, or when multiple additional flags are needed,
+you can add an entry to web_tests/FlagSpecificConfig, like:
+
+{
+  "name": "short-name",
+  "args": ["--name-of-flag1", "--name-of-flag2"]
+}
+
+And create a new file in the same format as the above
+FlagExpectations/name-of-flag file:
+
+FlagExpectations/short-name
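
With such an entry you can then run, for example (illustrative):

run_web_tests.py --flag-specific=short-name

which expands to the configured --additional-driver-flag options and picks up
FlagExpectations/short-name.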
[
{
"name": "disable-blink-features=LayoutNG",
"args": ["--disable-blink-features=LayoutNG"]
},
{
"name": "disable-site-isolation-trials",
"args": ["--disable-site-isolation-trials"]
},
{
"name": "enable-blink-features=CompositeAfterPaint",
"args": ["--enable-blink-features=CompositeAfterPaint"]
},
{
"name": "enable-blink-features=HeapUnifiedGarbageCollection",
"args": ["--enable-blink-features=HeapUnifiedGarbageCollection"]
},
{
"name": "enable-blink-features=NewSystemColors",
"args": ["--enable-blink-features=NewSystemColors"]
},
{
"name": "enable-features=NetworkService",
"args": ["--enable-features=NetworkService"]
},
{
"name": "enable-features=OverflowIconsForMediaControls",
"args": ["--enable-features=OverflowIconsForMediaControls"]
},
{
"name": "enable-gpu-rasterization",
"args": ["--enable-gpu-rasterization"]
}
]
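
For illustration, Port._flag_specific_configs() (shown in the diff above)
parses this file into a name-to-args dictionary, abbreviated here:

```python
# Abbreviated result of parsing the FlagSpecificConfig file above.
{
    'disable-blink-features=LayoutNG': ['--disable-blink-features=LayoutNG'],
    'disable-site-isolation-trials': ['--disable-site-isolation-trials'],
    # ... one entry per config ...
    'enable-gpu-rasterization': ['--enable-gpu-rasterization'],
}
```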
Tests V8 code cache for javascript resources
---First navigation - produce and consume code cache ------
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "script too small"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "already used streamed data"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
cacheProduceOptions : "code"
columnNumber : 0
lineNumber : 0
notStreamedReason : "already used streamed data"
producedCacheSize : <number>
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
v8.compile Properties:
{
data : {
cacheConsumeOptions : "code"
cacheRejected : false
columnNumber : 0
consumedCacheSize : <number>
lineNumber : 0
notStreamedReason : "already used streamed data"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1
--- Second navigation - from a different origin ------
v8.compile Properties:
{
data : {
columnNumber : 0
lineNumber : 0
notStreamedReason : "script too small"
streamed : <boolean>
url : .../devtools/resources/v8-cache-script.js
}
endTime : <number>
startTime : <number>
type : "v8.compile"
}
Text details for v8.compile: v8-cache-script.js:1