diff -r 000000000000 -r 6474c204b198 tools/performance/layout/perf-doc.html
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/performance/layout/perf-doc.html Wed Dec 31 06:09:35 2014 +0100
@@ -0,0 +1,281 @@
+ Performance Tools for Gecko
+
+Performance Monitoring for Gecko
+
+
+- maintainer: marc attinasi
+- attinasi@netscape.com
+
+
+Brief Overview
+Gecko should be fast. To help us make sure that it is, we monitor
+the performance of the system, specifically in terms of Parsing, Content
+Creation, Frame Creation, and Style Resolution - the core aspects of layout.
+A small set of tools facilitates monitoring performance across build cycles:
+the tools work in conjunction with program output from the Mozilla or
+Viewer applications to produce tables of performance values and historical
+comparisons against builds analysed in the past. The tools, their
+dependencies, and their general care and feeding are the topics of this
+document.
+
+Usage: A five-step plan to enlightenment
+
+
+- First, the tools are all designed to run only on Windows. That is really
+a bummer, but since most of what we are measuring is XP (cross-platform)
+code it should not really matter. Get a Windows NT machine if you want to
+run the tools.
+- Next, you need a build that was created with performance monitoring enabled.
+To create such a build you must compile the Mozilla source with a special
+environment variable set, MOZ_PERF=1, which turns on code that
+accumulates and dumps performance-metrics data. Set this environment
+variable and then build all of Mozilla, or simply use an existing build
+that was built with MOZ_PERF=1 set (a sample session appears after this list).
+- Third, run the script perf.pl to execute Viewer and run through
+the test sites, gathering performance data.
+- Fourth, make sure the script completed, then open the resultant HTML
+file, which is dropped in the Tables subdirectory.
+- Lastly, stare at the table and the values in it and decide whether
+performance is getting better, worse, or staying the same.
+
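+
+To make the steps concrete, here is a minimal sketch of a Windows session.
+The build command and paths are illustrative assumptions (use whatever your
+tree requires); only MOZ_PERF=1 and the perf.pl arguments come from this
+document:
+
+  rem hypothetical session - adjust paths and the build command for your tree
+  set MOZ_PERF=1
+  nmake -f client.mak build_all
+  cd \mozilla\tools\performance\layout
+  perl perf.pl Daily-0215 s:\mozilla\0215 cpu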
+
+
+The PerfTools
+IMPORTANT: The tools created for monitoring performance
+are very tightly coupled to the output of the layout engine. As Viewer (or
+Mozilla) runs, it spits out various timing values to the console. These
+values are captured to files, parsed, and assembled into HTML tables showing
+the amount of CPU time dedicated to parsing the document, creating the
+content model, building the frame model, and resolving style during the
+building of the frame model. All of the scripts that make up the perftool
+are located in the directory \mozilla\tools\performance\layout.
+Running them from another location may work, but it is best to run them
+from there.
+The perl script perf.pl is used to invoke Viewer and direct
+it to load various URLs. The URLs to load are contained in a text file,
+one per line (an example fragment appears after the list below). The file
+40-URL.txt is the baseline file and contains a listing of file-URLs that
+are static, meaning they never change, because they are snapshots of
+popular sites. As the script executes it does two things:
+
+- Invokes Viewer and feeds it the URL-file, capturing the output to another
+file
+- Invokes other perl scripts to process the Viewer output into HTML tables
+
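+
+For reference, perf.pl reads the URL file one line at a time. A hypothetical
+fragment (the actual paths in 40-URL.txt will differ) looks like this:
+
+  file:///S|/Mozilla/Sites/site01/index.html
+  file:///S|/Mozilla/Sites/site02/index.html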
+A set of perl scripts is used to parse the output of the Viewer application.
+These scripts expect the format of the performance data to be stable;
+in other words, if the output format changes, the scripts must be updated
+to match. Here are the files involved in parsing the data and generating
+the HTML table:
+
+- perf.pl : The main script. It orchestrates the running
+of Viewer and the invocation of the other scripts, and finally copies files
+to their correct final locations. An example invocation is:
+'perl perf.pl Daily-0215 s:\mozilla\0215 cpu', where:
+  - Daily-0215 is the name of the build and can be anything you like.
+  - s:\mozilla\0215 is the location of the build. There must be a bin
+directory under the directory you specify, and it must contain the
+MOZ_PERF-enabled build.
+  - cpu indicates that we are timing CPU time. The other option is clock,
+but that is not currently functional because of clock-resolution limits.
+- Header.pl : a simple script that generates the initial
+portion of the HTML file that will show the performance data for the current
+build.
+- AverageTable2.pl : a slightly more complicated script that
+parses the output from Viewer, accumulates data for averaging, and generates
+a row in the HTML table initialized by Header.pl. This file must
+be modified if the performance data output format changes.
+- Footer.pl : a simple script that inserts the last row in
+the HTML table, the averages row. It also terminates the table and closes
+the HTML document.
+- GenFromLogs.pl : a script that generates the HTML table
+from already existing logs. This is used to regenerate a table after the
+QA Partner script has run, in case the table file is lost or otherwise
+needs to be recreated. Also, if old logs are kept, they can be used to
+regenerate their corresponding tables.
+- Uncombine.pl : a script that breaks up a single text file
+containing the timing data for all of the sites into a separate
+file for each individual site (a sketch of this technique appears after
+this list).
+- History.pl : a script that generates an HTML file showing
+a historical comparison of average performance values for the current and
+previous builds.
+
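+As an illustration of the splitting step, the sketch below shows the general
+technique a script like Uncombine.pl can use. It is not the actual script:
+the "Site:" marker is an assumed delimiter, and the real script keys off
+whatever the combined log actually contains.
+
+  # uncombine-sketch.pl : split one combined timing log into per-site files.
+  # ASSUMPTION: each site's section begins with a line like "Site: <name>".
+  use strict;
+  my $out;
+  while (my $line = <STDIN>) {
+      if ($line =~ /^Site:\s*(\S+)/) {    # start of a new site's section
+          close $out if $out;
+          open $out, '>', "$1.txt" or die "cannot write $1.txt: $!";
+      }
+      print $out $line if $out;           # copy the line to the current file
+  }
+  close $out if $out;
+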
+
+The URLs
+It is critical that the URLs we load while measuring performance do
+not change: we want to compare performance characteristics across builds,
+and if the URLs changed we could not make valid comparisons.
+Also, as URLs change they exercise different parts of the application,
+so we really want a consistent set of pages to measure performance against.
+The builds change; the pages do not.
+On February 3, 2000 the top 40 sites were 'snaked' using the tool WebSnake.
+These sites now reside in disk-files and are loaded from those files during
+the load test. The file 40-URL.txt contains a listing of the file-URLs
+created from the web sites. The original web sites should be obvious from
+the file-URLs.
+
+
+NOTE: There are some links to external images in
+the local websites. These should have been resolved by WebSnake but, for
+some reason, were not. They should be made local at some point so we can
+run without a connection to the internet.
+
+
+Historical Data and Trending
+Historical data will be gathered and presented to make it easy for those
+concerned to see how the relative performance of various parts of the product
+changes over time. This historical data is kept in a flat file of comma-delimited
+values where each record is indexed by the pull-date/milestone and buildID.
+(Note that the buildID is not always reliable; the pull-date/milestone,
+however, is provided by the user when the performance package is run, so it
+can be made to be unique.) The Historical Data and Trending table will show
+the averages for Parsing, Content Creation, Frame Creation, Style Resolution,
+Reflow, Total Layout, and Total Page Load time for each build, along with
+a simple bar-graph representation of each record's weight relative to the
+other records in the table. At a later date this can be extended to trend
+individual sites, but for most purposes the roll-up of overall averages is
+sufficient to track the performance trends of the engine.
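+As a concrete illustration, a record in the flat file might look like the
+hypothetical line below: the build name/pull-date and buildID, followed by
+the seven averages (the real field order and units are defined by the
+scripts, so treat this only as a sketch of the record shape):
+
+  Daily-0215,2000021508,123.4,210.9,310.2,150.7,95.3,767.2,1205.6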
+
+The Execution Plan
+Performance monitoring will be run on a weekly basis, and against all Milestone
+builds. The results of the runs will be published for all interested parties
+to see. Interested and/or responsible individuals will review the performance
+data to raise developer awareness of performance problems and issues as
+they arise.
+Currently, the results are published weekly at http://techno/users/attinasi/publish
+
+Revision Control and Archiving
+The scripts are checked into cvs in the directory \mozilla\tools\performance\layout.
+The history.txt file is also checked in to cvs after every run, as are
+the tables produced by the run. Committing the files to cvs is a manual
+operation and should be completed only when the data has been analysed
+and appears valid. Be sure to do the following (a sample commit session
+appears after this list):
+
+- Commit history.txt after each successful run.
+- Add and commit the new table and new trend-table after each successful run
+(in the Tables subdirectory).
+- Commit any changes to the scripts or this document.
+
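+A typical post-run commit session might look like the following. The table
+file names are hypothetical; the cvs commands themselves are standard:
+
+  cvs commit -m "perf results for Daily-0215" history.txt
+  cvs add Tables/Daily-0215.html Tables/Daily-0215-trend.html
+  cvs commit -m "perf tables for Daily-0215" Tables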
+
+
+
+History:
+
+02/04/2000 | Created - attinasi
+03/17/2000 | Removed QA Partner stuff - no longer used
+
+