Commit fc8e613a authored by Andrew Grieve, committed by Commit Bot

Revert "[SuperSize] Add multiple Container support using .ssargs files."

This reverts commit 19d7a780.

Reason for revert: Breaking CQ (see bug)

Original change's description:
> [SuperSize] Add multiple Container support using .ssargs files.
> 
> This CL adds SuperSize support for multiple containers. Details:
> * Introduce .ssargs file, which is a text file to specify multiple
>   containers for SuperSize-archive to process into a single .size file.
>   * Container file types are auto-detected.
>   * Container files are relative to .ssargs file location, unless
>     absolute paths are given.
> * Introduce Version 1.1 .size file format:
>   * Header fields contain Container specs, now absorbing metadata and
>     section_sizes.
>   * Per-section data become per-container-per-section.
> * For compatibility: Running SuperSize-archive on one container still
>   generates Version 1.0 .size file format.
> * Update SuperSize-console output (including diffs) to accommodate.
> 
> After this CL, basic Trichrome support is available. A recommended
> .ssargs file (3 lines) for the official build is as follows:
> 
> TrichromeLibraryGoogle.apk --name Library
> TrichromeChromeGoogle.minimal.apks --name Chrome --java-only
> TrichromeWebViewGoogle.apk --name WebView --java-only
> 
> Bug: 1040645
> Change-Id: I67df00731d07660163410b59c57854bf56ea651e
> Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2248860
> Commit-Queue: Samuel Huang <huangs@chromium.org>
> Reviewed-by: Andrew Grieve <agrieve@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#780594}

TBR=huangs@chromium.org,agrieve@chromium.org

Change-Id: Ic3175ea3e8ede41e3183c12b5a00a4a09be55a75
No-Try: True
Bug: 1040645, 1097572
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2256593
Reviewed-by: Andrew Grieve <agrieve@chromium.org>
Commit-Queue: Andrew Grieve <agrieve@chromium.org>
Cr-Commit-Position: refs/heads/master@{#780639}
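For context, the reverted change parsed each .ssargs line with shlex plus a non-exiting argparse parser (see the removed _ParseSsargs in the diff below). A minimal sketch, assuming a simplified switch set — parse_ssargs_lines and the handful of switches shown are illustrative, not the full _AddContainerArguments set:

```python
import argparse
import shlex


def parse_ssargs_lines(lines):
    """Illustrative sketch of the reverted _ParseSsargs(): each non-comment
    line specifies one container; only a few of the real switches appear."""
    parser = argparse.ArgumentParser(add_help=False)
    # Make argparse raise instead of calling sys.exit() on bad input
    # (same trick as the reverted code).
    parser.error = lambda msg: (_ for _ in ()).throw(ValueError(msg))
    parser.add_argument('name')
    parser.add_argument('-f')
    parser.add_argument('--apk-file')
    parser.add_argument('--java-only', action='store_true')
    results = []
    for lineno, line in enumerate(lines, 1):
        toks = shlex.split(line, comments=True)  # Strips '#' comments.
        if not toks:  # Blank or comment-only line.
            continue
        try:
            args = parser.parse_args(toks)
        except ValueError as e:
            raise ValueError('Line %d: %s' % (lineno, e.args[0]))
        if args.f and args.f.endswith('.ssargs'):
            raise ValueError('Line %d: cannot nest .ssargs files' % lineno)
        results.append(args)
    return results
```

Comment-only and blank lines yield no container, and nesting another .ssargs file via -f is rejected, mirroring the checks in the removed helper.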
parent 540e2267
@@ -15,7 +15,6 @@ import logging
 import os
 import posixpath
 import re
-import shlex
 import string
 import subprocess
 import sys
@@ -1132,8 +1131,6 @@ def _ParsePakSymbols(symbols_by_id, object_paths_by_pak_id):
           full_name=symbol.full_name, object_path=path, aliases=aliases)
       aliases.append(new_sym)
       raw_symbols.append(new_sym)
-  # Sorting can ignore containers because symbols created here are all in the
-  # same container.
   raw_symbols.sort(key=lambda s: (s.section_name, s.address, s.object_path))
   raw_total = 0.0
   int_total = 0
@@ -1750,47 +1747,50 @@ def _AutoIdentifyInputFile(args):
   elif args.f.endswith('.map') or args.f.endswith('.map.gz'):
     logging.info('Auto-identified --map-file.')
     args.map_file = args.f
-  elif args.f.endswith('.ssargs'):
-    logging.info('Auto-identified --ssargs-file.')
-    args.ssargs_file = args.f
   else:
     return False
   return True


-def _AddContainerArguments(parser):
-  """Add arguments applicable to a single container."""
-  # Main container file arguments. These are mutually-exclusive.
+def AddMainPathsArguments(parser):
+  """Add arguments for _DeduceMainPaths()."""
   parser.add_argument('-f', metavar='FILE',
                       help='Auto-identify input file type.')
   parser.add_argument('--apk-file',
                       help='.apk file to measure. Other flags can generally be '
                       'derived when this is used.')
+  parser.add_argument(
+      '--resources-pathmap-file',
+      help='.pathmap.txt file that contains a maping from '
+      'original resource paths to shortened resource paths.')
   parser.add_argument('--minimal-apks-file',
                       help='.minimal.apks file to measure. Other flags can '
                       'generally be derived when this is used.')
+  parser.add_argument('--mapping-file',
+                      help='Proguard .mapping file for deobfuscation.')
   parser.add_argument('--elf-file',
                       help='Path to input ELF file. Currently used for '
                       'capturing metadata.')
-  # Auxiliary file arguments.
   parser.add_argument('--map-file',
                       help='Path to input .map(.gz) file. Defaults to '
                       '{{elf_file}}.map(.gz)?. If given without '
                       '--elf-file, no size metadata will be recorded.')
-  parser.add_argument('--mapping-file',
-                      help='Proguard .mapping file for deobfuscation.')
-  parser.add_argument('--resources-pathmap-file',
-                      help='.pathmap.txt file that contains a maping from '
-                      'original resource paths to shortened resource paths.')
+  parser.add_argument('--no-source-paths', action='store_true',
+                      help='Do not use .ninja files to map '
+                      'object_path -> source_path')
+  parser.add_argument('--output-directory',
+                      help='Path to the root build directory.')
+  parser.add_argument('--tool-prefix',
+                      help='Path prefix for c++filt, nm, readelf.')
+
+
+def AddArguments(parser):
+  parser.add_argument('size_file', help='Path to output .size file.')
   parser.add_argument('--pak-file', action='append',
                       help='Paths to pak files.')
   parser.add_argument('--pak-info-file',
                       help='This file should contain all ids found in the pak '
                       'files that have been passed in.')
-  # Non-file argument.
   parser.add_argument('--no-string-literals', dest='track_string_literals',
                       default=True, action='store_false',
                       help='Disable breaking down "** merge strings" into more '
@@ -1800,10 +1800,8 @@ def _AddContainerArguments(parser):
                       action='store_true',
                       help='Instead of counting binary size, count number of relative'
                       'relocation instructions in ELF code.')
-  parser.add_argument('--no-source-paths',
-                      action='store_true',
-                      help='Do not use .ninja files to map '
-                      'object_path -> source_path')
+  parser.add_argument('--source-directory',
+                      help='Custom path to the root source directory.')
   parser.add_argument(
       '--java-only', action='store_true', help='Run on only Java symbols')
   parser.add_argument(
@@ -1817,63 +1815,7 @@ def _AddContainerArguments(parser):
       action='store_true',
       help='Include a padding field for each symbol, instead of rederiving '
       'from consecutive symbols on file load.')
+  AddMainPathsArguments(parser)


-def AddArguments(parser):
-  parser.add_argument('size_file', help='Path to output .size file.')
-  parser.add_argument('--source-directory',
-                      help='Custom path to the root source directory.')
-  parser.add_argument('--output-directory',
-                      help='Path to the root build directory.')
-  parser.add_argument('--tool-prefix',
-                      help='Path prefix for c++filt, nm, readelf.')
-  _AddContainerArguments(parser)
-  parser.add_argument('--ssargs-file',
-                      help='Path to SuperSize multi-container arguments file.')
-
-
-def _ParseSsargs(lines):
-  """Parses .ssargs data.
-
-  An .ssargs file is a text file to specify multiple containers as input to
-  SuperSize-archive. After '#'-based comments, start / end whitespaces, and
-  empty lines are stripped, each line specifies a distinct container. Format:
-  * Positional argument: |name| for the container.
-  * Main input file specified by -f, --apk-file, --elf-file, etc.:
-    * Can be an absolute path.
-    * Can be a relative path. In this case, it's up to the caller to supply the
-      base directory.
-    * -f switch must not specify another .ssargs file.
-  * For supported switches: See _AddContainerArguments().
-
-  Args:
-    lines: An iterator containing lines of .ssargs data.
-  Returns:
-    A list of arguments, one for each container.
-  Raises:
-    ValueError: Parse error, including input line number.
-  """
-  container_args_list = []
-  parser = argparse.ArgumentParser(add_help=False)
-  parser.error = lambda msg: (_ for _ in ()).throw(ValueError(msg))
-  parser.add_argument('name')
-  _AddContainerArguments(parser)
-  try:
-    for lineno, line in enumerate(lines, 1):
-      toks = shlex.split(line, comments=True)
-      if not toks:  # Skip if line is empty after stripping comments.
-        continue
-      container_args = parser.parse_args(toks)
-      if set(container_args.name) & set('<>'):
-        parser.error('container name cannot have characters in "<>"')
-      if container_args.f and container_args.f.endswith('.ssargs'):
-        parser.error('cannot nest .ssargs files')
-      container_args_list.append(container_args)
-  except ValueError as e:
-    e.args = ('Line %d: %s' % (lineno, e.args[0]), )
-    raise e
-  return container_args_list


 def _DeduceNativeInfo(tentative_output_dir, apk_path, elf_path, map_path,
@@ -1941,57 +1883,6 @@ def _DeduceAuxPaths(args, apk_prefix):
   return mapping_path, resources_pathmap_path


-def _DeduceDerivedArgs(args, is_top_level_args, on_config_error):
-  setattr(args, 'is_bundle', args.minimal_apks_file is not None)
-  if is_top_level_args:
-    any_path = (args.apk_file or args.minimal_apks_file or args.elf_file
-                or args.map_file or args.ssargs_file)
-    if any_path is None:
-      on_config_error(
-          'Must pass at least one of --apk-file, --minimal-apks-file, '
-          '--elf-file, --map-file, --ssargs-file')
-    setattr(args, 'any_path_within_output_directory', any_path)
-
-
-def _ReadMultipleArgsFromStream(lines, base_dir, err_prefix, args,
-                                on_config_error):
-  try:
-    container_args_list = _ParseSsargs(lines)
-  except ValueError as e:
-    on_config_error('%s: %s' % (err_prefix, e.args[0]))
-  sub_args_list = []
-  for container_args in container_args_list:
-    # Clone |args| keys but assign empty values.
-    sub_args = argparse.Namespace(**{k: None for k in vars(args)})
-    # Copy parsed values to |sub_args|.
-    for k, v in container_args.__dict__.items():
-      # Translate file arguments to be relative to |sub_dir|.
-      if (k.endswith('_file') or k == 'f') and v is not None:
-        v = os.path.join(base_dir, v)
-      sub_args.__dict__[k] = v
-    if sub_args.f is not None:
-      _AutoIdentifyInputFile(sub_args)
-    _DeduceDerivedArgs(sub_args,
-                       is_top_level_args=False,
-                       on_config_error=on_config_error)
-    logging.info('Container: %r' %
-                 {k: v
-                  for k, v in sub_args.__dict__.items() if v is not None})
-    sub_args_list.append(sub_args)
-  return sub_args_list
-
-
-def _ReadMultipleArgsFromFile(args, on_config_error):
-  with open(args.ssargs_file, 'r') as fh:
-    lines = list(fh)
-  err_prefix = 'In file ' + args.ssargs_file
-  # Supply |base_dir| as the directory containing the .ssargs file, to ensure
-  # consistent behavior wherever SuperSize-archive runs.
-  base_dir = os.path.dirname(os.path.abspath(args.ssargs_file))
-  return _ReadMultipleArgsFromStream(lines, base_dir, err_prefix, args,
-                                     on_config_error)
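The reverted code resolved each container's file arguments against the directory holding the .ssargs file, unless an absolute path was given (see _ReadMultipleArgsFromStream / _ReadMultipleArgsFromFile above). A minimal sketch of that resolution, assuming POSIX paths; resolve_container_path is an illustrative name, not part of the real tool:

```python
import os


def resolve_container_path(ssargs_path, file_arg):
    """Resolve a container file argument relative to the .ssargs file's
    directory, unless |file_arg| is already absolute. This mirrors the
    base_dir handling in the reverted helpers."""
    base_dir = os.path.dirname(os.path.abspath(ssargs_path))
    # os.path.join discards base_dir when file_arg is an absolute path,
    # which gives the "unless absolute paths are given" behavior for free.
    return os.path.join(base_dir, file_arg)
```

Anchoring on the .ssargs location (rather than the current working directory) keeps behavior consistent wherever SuperSize-archive is invoked, which is the rationale stated in the removed comment.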
 def _DeduceMainPaths(args, on_config_error):
   """Generates main paths (may be deduced) for each containers given by input.
@@ -2013,12 +1904,13 @@ def _DeduceMainPaths(args, on_config_error):
       apk_path: Path to .apk file that can be opened for processing, but whose
           filename is unimportant (e.g., can be a temp file).
     """
+    # TODO(huangs): Assign distinct names for multiple containers.
+    assert idx == 0
+    container_name = ''
     output_directory = output_directory_finder.Tentative()
     opts = ContainerArchiveOptions(sub_args, output_directory=output_directory)
-    container_name = sub_args.name if hasattr(sub_args, 'name') else None
     if apk_prefix:
-      if not container_name:
-        container_name = apk_prefix
       # Allow either .minimal.apks or just .apks.
       apk_prefix = apk_prefix.replace('.minimal.apks', '.aab')
       apk_prefix = apk_prefix.replace('.apks', '.aab')
@@ -2039,8 +1931,6 @@ def _DeduceMainPaths(args, on_config_error):
           output_directory_finder=output_directory_finder,
           linker_name=linker_name)
       tool_prefix = tool_prefix_finder.Finalized()
-      if not container_name and elf_path:
-        container_name = elf_path
     else:
       # Trust that these values will not be used, and set to None.
       elf_path = None
@@ -2057,19 +1947,14 @@ def _DeduceMainPaths(args, on_config_error):
       size_info_prefix = os.path.join(output_directory, 'size-info',
                                       os.path.basename(apk_prefix))
-    if not container_name:
-      container_name = 'Container %d' % idx
     return (opts, output_directory, tool_prefix, container_name, apk_path,
             mapping_path, apk_so_path, elf_path, map_path,
             resources_pathmap_path, linker_name, size_info_prefix)

-  if args.ssargs_file:
-    sub_args_list = _ReadMultipleArgsFromFile(args, on_config_error)
-  else:
-    sub_args_list = [args]
-  # Each element in |sub_args_list| specifies a container.
+  # One for each container.
+  # TODO(huangs): Add support for multiple containers
+  sub_args_list = [args]
   for idx, sub_args in enumerate(sub_args_list):
     # If needed, extract .apk file to a temp file and process that instead.
     if sub_args.minimal_apks_file:
@@ -2089,9 +1974,17 @@ def Run(args, on_config_error):
     on_config_error('Cannot identify file %s' % args.f)
   if args.apk_file and args.minimal_apks_file:
     on_config_error('Cannot use both --apk-file and --minimal-apks-file.')
-  _DeduceDerivedArgs(args,
-                     is_top_level_args=True,
-                     on_config_error=on_config_error)
+
+  # Deduce arguments.
+  setattr(args, 'is_bundle', args.minimal_apks_file is not None)
+  any_path = (args.apk_file or args.minimal_apks_file or args.elf_file
+              or args.map_file)
+  if any_path is None:
+    on_config_error(
+        'Must pass at least one of --apk-file, --minimal-apks-file, '
+        '--elf-file, --map-file')
+  setattr(args, 'any_path_within_output_directory', any_path)

   knobs = SectionSizeKnobs()
   build_config = {}
...
@@ -177,33 +177,30 @@ class DescriberText(Describer):
     self.recursive = recursive
     self.summarize = summarize

-  def _DescribeSectionSizes(self,
-                            unsummed_sections,
-                            summed_sections,
-                            section_sizes,
-                            indent=''):
+  def _DescribeSectionSizes(self, unsummed_sections, summed_sections,
+                            section_sizes):
     total_bytes, section_names = _GetSectionSizeInfo(unsummed_sections,
                                                     summed_sections,
                                                     section_sizes)
     yield ''
-    yield '{}Section Sizes (Total={} ({} bytes)):'.format(
-        indent, _PrettySize(total_bytes), total_bytes)
+    yield 'Section Sizes (Total={} ({} bytes)):'.format(
+        _PrettySize(total_bytes), total_bytes)
     for name in section_names:
       size = section_sizes[name]
       if name in unsummed_sections:
-        yield '{} {}: {} ({} bytes) (not included in totals)'.format(
-            indent, name, _PrettySize(size), size)
+        yield '  {}: {} ({} bytes) (not included in totals)'.format(
+            name, _PrettySize(size), size)
       else:
         notes = ''
         if name not in summed_sections:
           notes = ' (counted in .other)'
         percent = _Divide(size, total_bytes)
-        yield '{} {}: {} ({} bytes) ({:.1%}){}'.format(
-            indent, name, _PrettySize(size), size, percent, notes)
+        yield '  {}: {} ({} bytes) ({:.1%}){}'.format(name, _PrettySize(size),
                                                       size, percent, notes)

     if self.verbose:
       yield ''
-      yield '{}Other section sizes:'.format(indent)
+      yield 'Other section sizes:'
       section_names = sorted(
           k for k in section_sizes.keys() if k not in section_names)
       for name in section_names:
@@ -212,15 +209,12 @@ class DescriberText(Describer):
           notes = ' (not included in totals)'
         elif name not in summed_sections:
           notes = ' (counted in .other)'
-        yield '{} {}: {} ({} bytes){}'.format(
-            indent, name, _PrettySize(section_sizes[name]), section_sizes[name],
-            notes)
+        yield '  {}: {} ({} bytes){}'.format(name,
+                                             _PrettySize(section_sizes[name]),
+                                             section_sizes[name], notes)

   def _DescribeSymbol(self, sym, single_line=False):
-    container_str = sym.container_short_name
-    if container_str:
-      container_str = '<{}>'.format(container_str)
+    # TODO(huangs): Render and display container name.
     address = 'Group' if sym.IsGroup() else hex(sym.address)
     last_field = ''
@@ -248,13 +242,11 @@ class DescriberText(Describer):
     if last_field:
       last_field = ' ' + last_field
     if sym.IsDelta():
-      yield '{}{}@{:<9s} {}{}'.format(container_str, sym.section, address,
-                                      pss_field, last_field)
+      yield '{}@{:<9s} {}{}'.format(
+          sym.section, address, pss_field, last_field)
     else:
-      l = '{}{}@{:<9s} pss={} padding={}{}'.format(container_str,
-                                                   sym.section, address,
-                                                   pss_field, sym.padding,
-                                                   last_field)
+      l = '{}@{:<9s} pss={} padding={}{}'.format(
+          sym.section, address, pss_field, sym.padding, last_field)
       yield l
       yield '    source_path={} \tobject_path={}'.format(
           sym.source_path, sym.object_path)
@@ -273,17 +265,16 @@ class DescriberText(Describer):
       else:
         pss_field = '{:<14}'.format(pss_field)
     if single_line:
-      yield '{}{}@{:<9s} {} {}{}'.format(container_str, sym.section,
-                                         address, pss_field, sym.name,
-                                         last_field)
+      yield '{}@{:<9s} {} {}{}'.format(
+          sym.section, address, pss_field, sym.name, last_field)
     else:
       path = sym.source_path or sym.object_path
       if path and sym.generated_source:
         path = '$root_gen_dir/' + path
       path = path or '{no path}'
-      yield '{}{}@{:<9s} {} {}'.format(container_str, sym.section, address,
-                                       pss_field, path)
+      yield '{}@{:<9s} {} {}'.format(
+          sym.section, address, pss_field, path)
       if sym.name:
         yield '    {}{}'.format(sym.name, last_field)
@@ -407,7 +398,7 @@ class DescriberText(Describer):
       if group.container_name == '':
         title_parts.append('Section@Address')
       else:
-        title_parts.append('<Container>Section@Address')
+        raise ValueError('Multiple container not yet supported.')
       if self.verbose:
         title_parts.append('...')
       else:
@@ -506,7 +497,7 @@ class DescriberText(Describer):
     group_desc = self._DescribeSymbolGroup(delta_group)
     return itertools.chain(diff_summary_desc, path_delta_desc, group_desc)

-  def _DescribeDeltaDict(self, data_name, before_dict, after_dict, indent=''):
+  def _DescribeDeltaDict(self, data_name, before_dict, after_dict):
     common_items = {
         k: v
         for k, v in before_dict.items() if after_dict.get(k) == v
@@ -517,41 +508,17 @@ class DescriberText(Describer):
     }
     after_items = {k: v for k, v in after_dict.items() if k not in common_items}
     return itertools.chain(
-        (indent + 'Common %s:' % data_name, ),
-        (indent + '  %s' % line for line in DescribeDict(common_items)),
-        (indent + 'Old %s:' % data_name, ),
-        (indent + '  %s' % line for line in DescribeDict(before_items)),
-        (indent + 'New %s:' % data_name, ),
-        (indent + '  %s' % line for line in DescribeDict(after_items)))
+        ('Common %s:' % data_name, ), ('  %s' % line
+                                       for line in DescribeDict(common_items)),
+        ('Old %s:' % data_name, ), ('  %s' % line
+                                    for line in DescribeDict(before_items)),
+        ('New %s:' % data_name, ), ('  %s' % line
+                                    for line in DescribeDict(after_items)))

   def _DescribeDeltaSizeInfo(self, diff):
     desc_list = []
-    # Describe |build_config| and each container. If there is only one container
-    # then support legacy output by reporting |build_config| as part of the
-    # first container's metadata.
     if len(diff.containers) > 1:
-      desc_list.append(
-          self._DescribeDeltaDict('Build config', diff.before.build_config,
-                                  diff.after.build_config))
-      for c in diff.containers:
-        name = c.name
-        desc_list.append(('', ))
-        desc_list.append(('Container: <%s>' % name, ))
-        c_before = diff.before.ContainerForName(
-            name, default=models.Container.Empty())
-        c_after = diff.after.ContainerForName(name,
-                                              default=models.Container.Empty())
-        desc_list.append(
-            self._DescribeDeltaDict('Metadata',
-                                    c_before.metadata,
-                                    c_after.metadata,
-                                    indent='  '))
-        unsummed_sections, summed_sections = c.ClassifySections()
-        desc_list.append(
-            self._DescribeSectionSizes(unsummed_sections,
-                                       summed_sections,
-                                       c.section_sizes,
-                                       indent='  '))
+      raise ValueError('Multiple container not yet supported.')
     else:  # Legacy output for single Container case.
       desc_list.append(
           self._DescribeDeltaDict('Metadata', diff.before.metadata_legacy,
@@ -567,26 +534,14 @@ class DescriberText(Describer):
   def _DescribeSizeInfo(self, size_info):
     desc_list = []
-    # Describe |build_config| and each container. If there is only one container
-    # then support legacy output by reporting |build_config| as part of the
-    # first container's metadata.
     if len(size_info.containers) > 1:
-      desc_list.append(('Build Configs:', ))
-      desc_list.append('  %s' % line
-                       for line in DescribeDict(size_info.build_config))
-      containers = size_info.containers
+      raise ValueError('Multiple container not yet supported.')
     else:
-      containers = [
-          models.Container(name='',
-                           metadata=size_info.metadata_legacy,
-                           section_sizes=size_info.containers[0].section_sizes)
-      ]
-    for c in containers:
-      if c.name:
-        desc_list.append(('', ))
-        desc_list.append(('Container <%s>' % c.name, ))
+      # Support legacy output by reporting |build_config| as part of metadata.
       desc_list.append(('Metadata:', ))
-      desc_list.append('  %s' % line for line in DescribeDict(c.metadata))
+      desc_list.append('  %s' % line
+                       for line in DescribeDict(size_info.metadata_legacy))
+    c = size_info.containers[0]
     unsummed_sections, summed_sections = c.ClassifySections()
     desc_list.append(
         self._DescribeSectionSizes(unsummed_sections, summed_sections,
@@ -604,7 +559,8 @@ def _DescribeSizeInfoContainerCoverage(raw_symbols, container):
   """Yields lines describing how accurate |size_info| is."""
   for section, section_name in models.SECTION_TO_SECTION_NAME.items():
     expected_size = container.section_sizes.get(section_name)
-    in_section = raw_symbols.WhereInSection(section_name, container=container)
+    # TODO(huangs): Also filter by container.
+    in_section = raw_symbols.WhereInSection(section_name)
     actual_size = in_section.size

     if expected_size is None:
@@ -694,15 +650,12 @@ def _DescribeSizeInfoContainerCoverage(raw_symbols, container):
 def DescribeSizeInfoCoverage(size_info):
-  for i, container in enumerate(size_info.containers):
-    if i > 0:
-      yield ''
-    if container.name:
-      yield 'Container <%s>' % container.name
-    # TODO(huangs): Change to use "yield from" once linters allow this.
-    for line in _DescribeSizeInfoContainerCoverage(size_info.raw_symbols,
-                                                   container):
-      yield line
+  # TODO(huangs): Add support for multiple containers.
+  assert len(size_info.containers) == 1
+  c = size_info.containers[0]
+  # TODO(huangs): Change to use "yield from" once linters allow this.
+  for line in _DescribeSizeInfoContainerCoverage(size_info.raw_symbols, c):
+    yield line


 class DescriberCsv(Describer):
...
@@ -150,25 +150,19 @@ def _DiffObj(before_obj, after_obj):
 def _DiffContainerLists(before_containers, after_containers):
-  """Computes diff of Containers lists, matching names."""
-  # Find ordered unique names, preferring order of |container_after|.
-  pairs = collections.OrderedDict()
-  for c in after_containers:
-    pairs[c.name] = [models.Container.Empty(), c]
-  for c in before_containers:
-    if c.name in pairs:
-      pairs[c.name][0] = c
-    else:
-      pairs[c.name] = [c, models.Container.Empty()]
+  """Computes diffs between two lists of Containers."""
+  # TODO(huangs): Add support for multiple containers (needs name matching).
+  assert len(before_containers) == 1
+  assert len(after_containers) == 1
   ret = []
-  for name, [before_c, after_c] in pairs.items():
+  for (before_c, after_c) in zip(before_containers, after_containers):
+    name = after_c.name
+    assert before_c.name == name
     ret.append(
         models.Container(name=name,
                          metadata=_DiffObj(before_c.metadata, after_c.metadata),
                          section_sizes=_DiffObj(before_c.section_sizes,
                                                 after_c.section_sizes)))
-  # This update newly created diff Containers, not existing ones or EMPTY.
-  models.Container.AssignShortNames(ret)
   return ret
...
@@ -37,7 +37,6 @@ def _CreateSizeInfo(aliases=None):
   containers = [
       models.Container(name='', metadata=metadata, section_sizes=section_sizes)
   ]
-  models.Container.AssignShortNames(containers)
   TEXT = models.SECTION_TEXT
   symbols = [
       _MakeSym(models.SECTION_DEX_METHOD, 10, 'a', 'com.Foo#bar()'),
...
@@ -5,9 +5,9 @@
 """Deals with loading & saving .size and .sizediff files.

-The .size file is written in the following format. There are no section
-delimiters, instead the end of a section is usually determined by a row count on
-the first line of a section, followed by that number of rows. In other cases,
-the sections have a known size.
+The .size file is written in the following format. There are no section
+delimiters, instead the end of a section is usually determined by a row count
+on the first line of a section, followed by that number of rows. In other
+cases, the sections have a known size.

 Header
 ------
@@ -116,8 +116,7 @@ import parallel
 # File format version for .size files.
-_SERIALIZATION_VERSION_SINGLE_CONTAINER = 'Size File Format v1'
-_SERIALIZATION_VERSION_MULTI_CONTAINER = 'Size File Format v1.1'
+_SERIALIZATION_VERSION = 'Size File Format v1'

 # Header for .sizediff files
 _SIZEDIFF_HEADER = '# Created by //tools/binary_size\nDIFF\n'
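The row-count framing described in this file's docstring (a count line, then exactly that many rows, no end-of-section delimiter) can be sketched as follows; write_section and read_section are illustrative helpers, not the real serializer:

```python
import io


def write_section(f, rows):
    # A section begins with its row count on one line; the reader then
    # consumes exactly that many rows, so no end delimiter is needed.
    # (Sketch of the framing only, not the actual .size writer.)
    f.write('%d\n' % len(rows))
    for row in rows:
        f.write(row + '\n')


def read_section(f):
    # Read the count line, then exactly that many rows.
    count = int(f.readline())
    return [f.readline().rstrip('\n') for _ in range(count)]


buf = io.StringIO()
write_section(buf, ['.text,1000000', '.rodata,250000'])
buf.seek(0)
```

Because each section carries its own length, sections can be concatenated back to back and read sequentially without any scanning for terminators.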
@@ -172,18 +171,18 @@ def CalculatePadding(raw_symbols):
   # Padding not really required, but it is useful to check for large padding and
   # log a warning.
-  seen_container_and_sections = set()
+  # TODO(huangs): Add support for multiple containers: Need to group by
+  #     container names and sections.
+  seen_sections = set()
   for i, symbol in enumerate(raw_symbols[1:]):
     prev_symbol = raw_symbols[i]
     if symbol.IsOverhead():
       # Overhead symbols are not actionable so should be padding-only.
       symbol.padding = symbol.size
-    if (prev_symbol.container.name != symbol.container.name
-        or prev_symbol.section_name != symbol.section_name):
-      container_and_section = (symbol.container.name, symbol.section_name)
-      assert container_and_section not in seen_container_and_sections, (
-          'Input symbols must be sorted by container, section, then address.')
-      seen_container_and_sections.add(container_and_section)
+    if prev_symbol.section_name != symbol.section_name:
+      assert symbol.section_name not in seen_sections, (
+          'Input symbols must be sorted by section, then address.')
+      seen_sections.add(symbol.section_name)
       continue
     if (symbol.address <= 0 or prev_symbol.address <= 0
         or not symbol.IsNative() or not prev_symbol.IsNative()):
@@ -249,34 +248,21 @@ def _SaveSizeInfoToFile(size_info,
   else:
     raw_symbols = size_info.raw_symbols
-  num_containers = len(size_info.containers)
-  has_multi_containers = (num_containers > 1)
   w = _Writer(file_obj)
-  # "Created by SuperSize" header
+  # Created by supersize header
   w.WriteLine('# Created by //tools/binary_size')
-  if has_multi_containers:
-    w.WriteLine(_SERIALIZATION_VERSION_MULTI_CONTAINER)
-  else:
-    w.WriteLine(_SERIALIZATION_VERSION_SINGLE_CONTAINER)
+  w.WriteLine(_SERIALIZATION_VERSION)
   # JSON header fields
   fields = {
       'has_components': True,
       'has_padding': include_padding,
   }
+  num_containers = len(size_info.containers)
+  has_multi_containers = (num_containers > 1)
   if has_multi_containers:
-    # Write using new format.
-    assert len(set(c.name for c in size_info.containers)) == num_containers, (
-        'Container names must be distinct.')
-    fields['build_config'] = size_info.build_config
-    fields['containers'] = [{
-        'name': c.name,
-        'metadata': c.metadata,
-        'section_sizes': c.section_sizes,
-    } for c in size_info.containers]
+    raise ValueError('Multiple container not yet supported.')
   else:
     # Write using old format.
     fields['metadata'] = size_info.metadata_legacy
@@ -304,18 +290,12 @@ def _SaveSizeInfoToFile(size_info,
     w.WriteLine(comp)
   w.LogSize('components')
-  # Symbol counts by container and section.
-  symbol_group_by_section = raw_symbols.GroupedByContainerAndSectionName()
+  # Symbol counts by section.
+  symbol_group_by_section = raw_symbols.GroupedBySectionName()
   if has_multi_containers:
-    container_name_to_index = {
-        c.name: i
-        for i, c in enumerate(size_info.containers)
-    }
-    w.WriteLine('\t'.join('<%d>%s' %
-                          (container_name_to_index[g.name[0]], g.name[1])
-                          for g in symbol_group_by_section))
+    raise ValueError('Multiple container not yet supported.')
   else:
-    w.WriteLine('\t'.join(g.name[1] for g in symbol_group_by_section))
+    w.WriteLine('\t'.join(g.name for g in symbol_group_by_section))
   w.WriteLine('\t'.join(str(len(g)) for g in symbol_group_by_section))

   def gen_delta(gen, prev_value=0):
@@ -404,35 +384,23 @@ def _LoadSizeInfoFromFile(file_obj, size_path):
   """
   # Split lines on '\n', since '\r' can appear in some lines!
   lines = io.TextIOWrapper(file_obj, newline='\n')
-  _ReadLine(lines)  # Line 0: "Created by SuperSize" header
+  _ReadLine(lines)  # Line 0: Created by supersize header
   actual_version = _ReadLine(lines)
-  if actual_version == _SERIALIZATION_VERSION_SINGLE_CONTAINER:
-    has_multi_containers = False
-  elif actual_version == _SERIALIZATION_VERSION_MULTI_CONTAINER:
-    has_multi_containers = True
-  else:
-    raise ValueError('Version mismatch. Need to write some upgrade code.')
+  assert actual_version == _SERIALIZATION_VERSION, (
+      'Version mismatch. Need to write some upgrade code.')
   # JSON header fields
   json_len = int(_ReadLine(lines))
   json_str = lines.read(json_len)
   fields = json.loads(json_str)
-  assert ('containers' in fields) == has_multi_containers
-  assert ('build_config' in fields) == has_multi_containers
-  assert ('metadata' not in fields) == has_multi_containers
-  assert ('section_sizes' not in fields) == has_multi_containers
+  has_multi_containers = False
+  assert ('containers' in fields) == has_multi_containers
   containers = []
   if has_multi_containers:  # New format.
-    build_config = fields['build_config']
-    for cfield in fields['containers']:
-      c = models.Container(name=cfield['name'],
-                           metadata=cfield['metadata'],
-                           section_sizes=cfield['section_sizes'])
-      containers.append(c)
-  else:  # Old format.
+    raise ValueError('Multiple container not yet supported.')
+  else:
+    # Parse old format, but separate data into build_config and metadata.
     build_config = {}
     metadata = fields.get('metadata')
     if metadata:
@@ -445,7 +413,6 @@ def _LoadSizeInfoFromFile(file_obj, size_path):
         models.Container(name='',
                          metadata=metadata,
                          section_sizes=section_sizes))
-  models.Container.AssignShortNames(containers)
   has_components = fields.get('has_components', False)
   has_padding = fields.get('has_padding', False)
@@ -466,7 +433,7 @@ def _LoadSizeInfoFromFile(file_obj, size_path):
   components = [_ReadLine(lines) for _ in range(num_components)]
   # Symbol counts by section.
-  container_and_section_names = _ReadValuesFromLine(lines, split='\t')
+  section_names = _ReadValuesFromLine(lines, split='\t')
   symbol_counts = [int(c) for c in _ReadValuesFromLine(lines, split='\t')]
   # Addresses, sizes, paddings, path indices, component indices
@@ -493,28 +460,23 @@ def _LoadSizeInfoFromFile(file_obj, size_path):
   if has_padding:
     paddings = read_numeric(delta=False)
   else:
-    paddings = [None] * len(container_and_section_names)
+    paddings = [None] * len(section_names)
   path_indices = read_numeric(delta=True)
   if has_components:
     component_indices = read_numeric(delta=True)
   else:
-    component_indices = [None] * len(container_and_section_names)
+    component_indices = [None] * len(section_names)

   raw_symbols = [None] * sum(symbol_counts)
   symbol_idx = 0
-  for (cur_container_and_section_name, cur_symbol_count, cur_addresses,
-       cur_sizes, cur_paddings, cur_path_indices,
-       cur_component_indices) in zip(container_and_section_names, symbol_counts,
-                                     addresses, sizes, paddings, path_indices,
-                                     component_indices):
+  for (cur_section_name, cur_symbol_count, cur_addresses, cur_sizes,
+       cur_paddings, cur_path_indices,
+       cur_component_indices) in zip(section_names, symbol_counts, addresses,
+                                     sizes, paddings, path_indices,
+                                     component_indices):
     if has_multi_containers:
-      # Extract '<cur_container_idx_str>cur_section_name'.
-      assert cur_container_and_section_name.startswith('<')
-      cur_container_idx_str, cur_section_name = (
-          cur_container_and_section_name[1:].split('>', 1))
-      cur_container = containers[int(cur_container_idx_str)]
+      raise ValueError('Multiple container not yet supported.')
     else:
-      cur_section_name = cur_container_and_section_name
       cur_container = containers[0]
     alias_counter = 0
     for i in range(cur_symbol_count):
...
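The loader above reads several numeric columns with `read_numeric(delta=True)`: sorted values such as symbol addresses are stored as differences from their predecessor, so the serialized integers stay small. A minimal sketch of that delta scheme, with illustrative helper names that are not SuperSize's actual functions:

```python
def delta_encode(values):
    # Store each value as the difference from its predecessor; sorted
    # addresses then serialize as small, compressible integers.
    prev, out = 0, []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    # Inverse transform: a running sum restores the original values.
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

addresses = [0x1000, 0x1008, 0x1010, 0x1040]
encoded = delta_encode(addresses)  # [4096, 8, 8, 48]
assert delta_decode(encoded) == addresses
```

Only the first element carries a large magnitude; every later entry is just the gap to the previous symbol, which is why the format resets `prev_value` per column.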
@@ -309,10 +309,7 @@ class IntegrationTest(unittest.TestCase):
       merged_data_desc = describe.DescribeDict(size_info.metadata_legacy)
       return itertools.chain(merged_data_desc, stats, sym_strs)
     else:
-      build_config = describe.DescribeDict(size_info.build_config)
-      metadata = itertools.chain.from_iterable(
-          describe.DescribeDict(c.metadata) for c in size_info.containers)
-      return itertools.chain(build_config, metadata, stats, sym_strs)
+      raise ValueError('Multiple container not yet supported.')

   @_CompareWithGolden()
   def test_Archive(self):
...
@@ -102,8 +102,6 @@ PAK_SECTIONS = (
     SECTION_PAK_TRANSLATIONS,
 )

-CONTAINER_MULTIPLE = '*'
-
 SECTION_NAME_TO_SECTION = {
     SECTION_BSS: 'b',
     SECTION_BSS_REL_RO: 'b',
@@ -203,19 +201,16 @@ def ClassifySections(section_names):
 class Container(object):
-  """Info for a single SuperSize input file (e.g., APK file).
+  """Info for a a single input file for SuperSize, e.g., an APK file.

   Fields:
     name: Container name. Must be unique among containers, and can be ''.
-    short_name: Short container name for compact display. This also needs to be
-      unique among containers in the same SizeInfo, and can be ''.
     metadata: A dict.
     section_sizes: A dict of section_name -> size.
     classified_sections: Cache for ClassifySections().
   """
   __slots__ = (
       'name',
-      'short_name',
       'metadata',
       'section_sizes',
       '_classified_sections',
@@ -225,31 +220,15 @@ class Container(object):
     # name == '' hints that only one container exists, and there's no need to
     # distinguish them. This can affect console output.
     self.name = name
-    self.short_name = None  # Assigned by AssignShortNames().
     self.metadata = metadata or {}
     self.section_sizes = section_sizes  # E.g. {SECTION_TEXT: 0}
     self._classified_sections = None

-  @staticmethod
-  def AssignShortNames(containers):
-    for i, c in enumerate(containers):
-      c.short_name = str(i) if c.name else ''
-
   def ClassifySections(self):
     if not self._classified_sections:
       self._classified_sections = ClassifySections(self.section_sizes.keys())
     return self._classified_sections

-  @staticmethod
-  def Empty():
-    """Returns a placeholder Container that should be read-only.
-
-    For simplicity, we're not enforcing read-only checks (frozenmap does not
-    exist, unfortunately). Creating a new instance instead of using a global
-    singleton for robustness.
-    """
-    return Container(name='(empty)', metadata={}, section_sizes={})
-
 class BaseSizeInfo(object):
   """Base class for SizeInfo and DeltaSizeInfo.
@@ -282,7 +261,6 @@ class BaseSizeInfo(object):
     self._symbols = symbols
     self._native_symbols = None
     self._pak_symbols = None
-    Container.AssignShortNames(self.containers)

   @property
   def symbols(self):
@@ -318,9 +296,6 @@ class BaseSizeInfo(object):
   def metadata(self):
     return [c.metadata for c in self.containers]

-  def ContainerForName(self, name, default=None):
-    return next((c for c in containers if c.name == name), default)
-
 class SizeInfo(BaseSizeInfo):
   """Represents all size information for a single binary.
@@ -387,10 +362,6 @@ class BaseSymbol(object):
   def container_name(self):
     return self.container.name if self.container else ''

-  @property
-  def container_short_name(self):
-    return self.container.short_name if self.container else ''
-
   @property
   def section(self):
     """Returns the one-letter section."""
@@ -545,17 +516,14 @@ class Symbol(BaseSymbol):
       self.component = ''

   def __repr__(self):
-    if self.container and self.container.name:
-      container_str = '<{}>'.format(self.container.name)
-    else:
-      container_str = ''
-    template = ('{}{}@{:x}(size_without_padding={},padding={},full_name={},'
+    # TODO(huangs): If container name is nonempty then display it.
+    template = ('{}@{:x}(size_without_padding={},padding={},full_name={},'
                 'object_path={},source_path={},flags={},num_aliases={},'
                 'component={})')
-    return template.format(container_str, self.section_name, self.address,
-                           self.size_without_padding, self.padding,
-                           self.full_name, self.object_path, self.source_path,
-                           self.FlagsString(), self.num_aliases, self.component)
+    return template.format(
+        self.section_name, self.address, self.size_without_padding,
+        self.padding, self.full_name, self.object_path, self.source_path,
+        self.FlagsString(), self.num_aliases, self.component)

   def SetName(self, full_name, template_name=None, name=None):
     # Note that _NormalizeNames() will clobber these values.
@@ -594,21 +562,14 @@ class DeltaSymbol(BaseSymbol):
     self.after_symbol = after_symbol

   def __repr__(self):
-    before_container_name = self.before_symbol.container_name
-    after_container_name = self.after_symbol.container_name
-    if before_container_name != after_container_name:
-      container_str = '<~{}>'.format(after_container_name)
-    elif after_container_name != '':
-      container_str = '<{}>'.format(after_container_name)
-    else:
-      container_str = ''
-    template = ('{}{}{}@{:x}(size_without_padding={},padding={},full_name={},'
+    # TODO(huangs): If container name is nonempty then display it.
+    template = ('{}{}@{:x}(size_without_padding={},padding={},full_name={},'
                 'object_path={},source_path={},flags={})')
-    return template.format(DIFF_PREFIX_BY_STATUS[self.diff_status],
-                           container_str, self.section_name, self.address,
-                           self.size_without_padding, self.padding,
-                           self.full_name, self.object_path, self.source_path,
-                           self.FlagsString())
+    return template.format(
+        DIFF_PREFIX_BY_STATUS[self.diff_status], self.section_name,
+        self.address, self.size_without_padding, self.padding,
+        self.full_name, self.object_path, self.source_path,
+        self.FlagsString())

   def IsDelta(self):
     return True
@@ -751,11 +712,11 @@ class SymbolGroup(BaseSymbol):
       'full_name',
       'template_name',
       'name',
+      'container_name',
       'section_name',
       'is_default_sorted',  # True for groups created by Sorted()
   )

   # template_name and full_name are useful when clustering symbol clones.
   def __init__(self,
                symbols,
@@ -774,6 +735,8 @@ class SymbolGroup(BaseSymbol):
     self.full_name = full_name if full_name is not None else name
     self.template_name = template_name if template_name is not None else name
     self.name = name or ''
+    # TODO(huangs): Add support for multiple containers.
+    self.container_name = ''
     self.section_name = section_name or SECTION_MULTIPLE
     self.is_default_sorted = is_default_sorted
@@ -820,20 +783,6 @@ class SymbolGroup(BaseSymbol):
   def index(self, item):
     return self._symbols.index(item)

-  @property
-  def container_name(self):
-    ret = set(s.container_name for s in self._symbols)
-    if ret:
-      return CONTAINER_MULTIPLE if len(ret) > 1 else (ret.pop() or '')
-    return ''
-
-  @property
-  def container_short_name(self):
-    ret = set(s.container_short_name for s in self._symbols)
-    if ret:
-      return CONTAINER_MULTIPLE if len(ret) > 1 else (ret.pop() or '')
-    return ''
-
   @property
   def address(self):
     first = self._symbols[0].address if self else 0
@@ -989,23 +938,13 @@ class SymbolGroup(BaseSymbol):
   def WherePssBiggerThan(self, min_pss):
     return self.Filter(lambda s: s.pss >= min_pss)

-  def WhereInSection(self, section, container=None):
+  def WhereInSection(self, section):
     """|section| can be section_name ('.bss'), or section chars ('bdr')."""
     if section.startswith('.'):
-      if container:
-        short_name = container.short_name
-        ret = self.Filter(lambda s: (s.container.short_name == short_name and s.
-                                     section_name == section))
-      else:
-        ret = self.Filter(lambda s: s.section_name == section)
+      ret = self.Filter(lambda s: s.section_name == section)
       ret.section_name = section
     else:
-      if container:
-        short_name = container.short_name
-        ret = self.Filter(lambda s: (s.container.short_name == short_name and s.
-                                     section in section))
-      else:
-        ret = self.Filter(lambda s: s.section in section)
+      ret = self.Filter(lambda s: s.section in section)
       if section in SECTION_TO_SECTION_NAME:
         ret.section_name = SECTION_TO_SECTION_NAME[section]
     return ret
@@ -1254,9 +1193,6 @@ class SymbolGroup(BaseSymbol):
         lambda s: (same_name_only and s.full_name, id(s.aliases or s)),
         min_count=min_count, group_factory=group_factory)

-  def GroupedByContainerAndSectionName(self):
-    return self.GroupedBy(lambda s: (s.container.name, s.section_name))
-
   def GroupedBySectionName(self):
     return self.GroupedBy(lambda s: s.section_name)
...
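For reference, the reverted v1.1 writer tagged each per-section column label with its container index (the `'<%d>%s'` expression in the removed hunk above), and the loader split it back apart with `split('>', 1)`. A minimal round-trip sketch of that label scheme; the helper names here are hypothetical, not SuperSize APIs:

```python
def encode_label(container_idx, section_name):
    # v1.1 label: container index in angle brackets, then the section name.
    return '<%d>%s' % (container_idx, section_name)

def decode_label(label):
    # Inverse of encode_label(); mirrors the loader's startswith('<') check
    # and single split on the closing '>'.
    assert label.startswith('<')
    idx_str, section_name = label[1:].split('>', 1)
    return int(idx_str), section_name

assert decode_label(encode_label(1, '.text')) == (1, '.text')
assert decode_label('<0>.rodata') == (0, '.rodata')
```

Splitting with `maxsplit=1` keeps any `>` characters inside the section name intact, which is why the loader splits only on the first closing bracket.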
 ********************************************************************************
 Entering interactive Python shell. Quick reference:
-SizeInfo: ContainerForName, all_section_sizes, build_config, containers, metadata, metadata_legacy, native_symbols, pak_symbols, raw_symbols, size_path, symbols
+SizeInfo: all_section_sizes, build_config, containers, metadata, metadata_legacy, native_symbols, pak_symbols, raw_symbols, size_path, symbols
-Symbol: FlagsString, IsBss, IsDelta, IsDex, IsGeneratedByToolchain, IsGroup, IsNameUnique, IsNative, IsOther, IsOverhead, IsPak, IsStringLiteral, IterLeafSymbols, SetName, address, aliases, component, container, container_name, container_short_name, end_address, flags, full_name, generated_source, is_anonymous, name, num_aliases, object_path, padding, padding_pss, pss, pss_without_padding, section, section_name, size, size_without_padding, source_path, template_name
+Symbol: FlagsString, IsBss, IsDelta, IsDex, IsGeneratedByToolchain, IsGroup, IsNameUnique, IsNative, IsOther, IsOverhead, IsPak, IsStringLiteral, IterLeafSymbols, SetName, address, aliases, component, container, container_name, end_address, flags, full_name, generated_source, is_anonymous, name, num_aliases, object_path, padding, padding_pss, pss, pss_without_padding, section, section_name, size, size_without_padding, source_path, template_name
-SymbolGroup (extends Symbol): CountUniqueSymbols, Filter, GroupedBy, GroupedByAliases, GroupedByComponent, GroupedByContainerAndSectionName, GroupedByFullName, GroupedByName, GroupedByPath, GroupedBySectionName, Inverted, IterUniqueSymbols, Sorted, SortedByAddress, SortedByCount, SortedByName, WhereAddressInRange, WhereComponentMatches, WhereFullNameMatches, WhereGeneratedByToolchain, WhereHasAnyAttribution, WhereHasComponent, WhereHasFlag, WhereHasPath, WhereInSection, WhereIsDex, WhereIsGroup, WhereIsNative, WhereIsPak, WhereIsTemplate, WhereMatches, WhereNameMatches, WhereObjectPathMatches, WherePathMatches, WherePssBiggerThan, WhereSizeBiggerThan, WhereSourceIsGenerated, WhereSourcePathMatches, WhereTemplateNameMatches, index, is_default_sorted
+SymbolGroup (extends Symbol): CountUniqueSymbols, Filter, GroupedBy, GroupedByAliases, GroupedByComponent, GroupedByFullName, GroupedByName, GroupedByPath, GroupedBySectionName, Inverted, IterUniqueSymbols, Sorted, SortedByAddress, SortedByCount, SortedByName, WhereAddressInRange, WhereComponentMatches, WhereFullNameMatches, WhereGeneratedByToolchain, WhereHasAnyAttribution, WhereHasComponent, WhereHasFlag, WhereHasPath, WhereInSection, WhereIsDex, WhereIsGroup, WhereIsNative, WhereIsPak, WhereIsTemplate, WhereMatches, WhereNameMatches, WhereObjectPathMatches, WherePathMatches, WherePssBiggerThan, WhereSizeBiggerThan, WhereSourceIsGenerated, WhereSourcePathMatches, WhereTemplateNameMatches, index, is_default_sorted
-DeltaSizeInfo: ContainerForName, after, all_section_sizes, before, build_config, containers, metadata, native_symbols, pak_symbols, raw_symbols, symbols
+DeltaSizeInfo: after, all_section_sizes, before, build_config, containers, metadata, native_symbols, pak_symbols, raw_symbols, symbols
 DeltaSymbol (extends Symbol): after_symbol, before_symbol, diff_status
 DeltaSymbolGroup (extends SymbolGroup): CountsByDiffStatus, WhereDiffStatusIs, diff_status
...