diff -r 000000000000 -r 6474c204b198 tools/performance/layout/perf-doc.html
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/performance/layout/perf-doc.html	Wed Dec 31 06:09:35 2014 +0100
@@ -0,0 +1,281 @@

Performance Tools for Gecko
Performance Monitoring for Gecko

Maintainer: Marc Attinasi (attinasi@netscape.com)

Brief Overview

Gecko should be fast. To help us make sure that it is, we monitor the performance of the system, specifically in terms of Parsing, Content Creation, Frame Creation and Style Resolution - the core aspects of layout. A small set of tools facilitates this monitoring across build cycles: the tools work in conjunction with program output from the Mozilla or Viewer applications to produce tables of performance values and historical comparisons with builds analysed in the past. The tools, their dependencies, and their general care and feeding are the topics of this document.

Usage: A five-step plan to enlightenment


The PerfTools

IMPORTANT: The tools created for monitoring performance are very tightly coupled to output from the layout engine. As Viewer (or Mozilla) runs, it spits out various timing values to the console. These values are captured to files, parsed, and assembled into HTML tables showing the amount of CPU time dedicated to parsing the document, creating the content model, building the frame model, and resolving style during the building of the frame model. All of the scripts that make up the perftool are located in the directory \mozilla\tools\performance\layout. Running them from another location may work, but it is best to run from there.

The perl script, perf.pl, is used to invoke Viewer and direct it to load various URLs. The URLs to load are contained in a text file, one per line. The file 40-URL.txt is the baseline file: it contains a listing of file-URLs that are static, meaning they never change, because they are snapshots of popular sites. As the script executes it does two things (a rough sketch follows the list):

  1. Invoke Viewer and feed it the URL file, capturing the output to another file
  2. Invoke other perl scripts to process the Viewer output into HTML tables
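To make those two steps concrete, here is a minimal sketch of what the capture-and-process loop amounts to. This is not the actual perf.pl: the Viewer command line (in particular a -f flag taking the URL file), the log file name, and the process-logs.pl script name are assumptions for illustration, so consult perf.pl itself for the real invocation.

    # Rough sketch of the capture-and-process loop; not the real perf.pl.
    # ASSUMPTIONS: viewer accepts "-f <url-file>" to load each URL in turn,
    # its timing output goes to stdout/stderr, and "process-logs.pl" stands
    # in for the real table-generating scripts (hypothetical name).
    use strict;

    my $urlFile = "40-URL.txt";
    my $logFile = "viewer-output.txt";

    # Step 1: run Viewer over the URL file, capturing console output to a log.
    system("viewer -f $urlFile > $logFile 2>&1") == 0
        or die "viewer failed: $?";

    # Step 2: hand the captured log to the parsing/table scripts.
    system("perl process-logs.pl $logFile") == 0
        or die "process-logs.pl failed: $?";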
A set of perl scripts is used to parse the output of the Viewer application. These scripts expect the format of the performance data to remain fixed; in other words, if the output format changes, the scripts must be updated to match. The scripts involved in parsing the data and generating the HTML tables live alongside perf.pl in \mozilla\tools\performance\layout.
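Because the scripts key on the exact shape of the timing lines, a sketch of the parsing stage is shown below. The line format used here (a PERF: prefix, then a phase name and a time, comma-separated) is purely an assumption for illustration; the real Viewer output and the real scripts will differ in detail.

    # Sketch of a timing-log parser; the "PERF:" line format is assumed,
    # not Viewer's real output.
    use strict;

    my (%totals, %counts);    # phase name -> accumulated time / sample count

    open(LOG, "viewer-output.txt") or die "cannot open log: $!";
    while (my $line = <LOG>) {
        # Accumulate one sample per matching line, e.g. "PERF: Parsing, 123.4"
        if ($line =~ /^PERF:\s*([\w ]+\w),\s*([\d.]+)/) {
            $totals{$1} += $2;
            $counts{$1}++;
        }
    }
    close(LOG);

    # Emit one HTML table row per phase: name, total, average.
    print "<table>\n";
    for my $phase (sort keys %totals) {
        printf "<tr><td>%s</td><td>%.1f</td><td>%.1f</td></tr>\n",
               $phase, $totals{$phase}, $totals{$phase} / $counts{$phase};
    }
    print "</table>\n";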

The URLs

It is critical that the URLs we load while measuring performance do not change, because we want to compare performance characteristics across builds; if the URLs changed we could not make valid comparisons. Also, as URLs change they exercise different parts of the application, so we want a consistent set of pages to measure performance against. The builds change, the pages do not.

On February 3, 2000 the top 40 sites were 'snaked' using the tool WebSnake. These sites now reside in disk files and are loaded from those files during the load test. The file 40-URL.txt contains a listing of the file-URLs created from the web sites. The original web sites should be obvious from the file-URLs.
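For illustration only, the entries in 40-URL.txt are file-URLs, one per line, along these lines (the exact paths are hypothetical, not the real contents of the file):

    file:///C|/websites/www.yahoo.com/index.html
    file:///C|/websites/www.cnn.com/index.html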

NOTE: There are some links to external images in the local websites. These should have been resolved by WebSnake but were not, for some reason. They should be made local at some point so we can run without a connection to the internet.

Historical Data and Trending

Historical data will be gathered and presented to make it easy for those concerned to see how the relative performance of various parts of the product changes over time. This historical data is kept in a flat file of comma-delimited values where each record is indexed by the pull-date/milestone and buildID (note that the buildID is not always reliable; however, the pull-date/milestone is provided by the user when the performance package is run, so it can be made unique). The Historical Data and Trending table will show the averages for Parsing, Content Creation, Frame Creation, Style Resolution, Reflow, Total Layout and Total Page Load time for each build, along with a simple bar-graph representation of each record's weight relative to the other records in the table. At a later date this can be extended to trend individual sites; however, for most purposes the roll-up of overall averages is sufficient to track the performance trends of the engine.
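As a sketch of how one record lands in the flat file, the append step could look like the following. The file name history.txt and the set of fields are taken from this document, but the exact field order is an assumption, and the sample values are made up.

    # Append one run's averages to the comma-delimited history file.
    # ASSUMED field order: milestone/pull-date, buildID, then parsing,
    # content creation, frame creation, style resolution, reflow,
    # total layout, total page load.
    use strict;

    my $milestone = "M14-2000-02-04";     # supplied by the user at run time
    my $buildID   = "2000020408";         # note: not always reliable
    my @averages  = (0.41, 0.22, 0.87, 0.65, 1.02, 2.54, 6.31);  # made-up seconds

    open(HIST, ">>history.txt") or die "cannot append to history.txt: $!";
    print HIST join(",", $milestone, $buildID, @averages), "\n";
    close(HIST);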

The Execution Plan

Performance monitoring will be run on a weekly basis, and against all Milestone builds. The results of the runs will be published for all interested parties to see. Interested and/or responsible individuals will review the performance data to raise or lower developer awareness of performance problems and issues as they arise.

Currently, the results are published weekly at http://techno/users/attinasi/publish

Revision Control and Archiving

The scripts are checked into cvs in the directory \mozilla\tools\performance\layout. The history.txt file is also checked in to cvs after every run, as are the tables produced by the run. Committing the files to cvs is a manual operation and should be completed only when the data has been analysed and appears valid. Be sure to do the following (a helper sketch follows the list):
  1. Commit history.txt after each successful run.
  2. Add / commit the new table and new trend-table after each successful run (in the Tables subdirectory).
  3. Commit any changes to the scripts or this document.
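Those three steps can be done by hand with cvs, or wrapped in a small helper along these lines. The table file names and the commit message are placeholders; only the cvs add and cvs commit commands themselves are standard.

    # Sketch of a post-run check-in helper; run only after the data has
    # been reviewed. File names under Tables/ are placeholders.
    use strict;

    my $msg = "perf results for M14, pulled 2000-02-04";   # placeholder message

    # 1. Commit the updated history file.
    system("cvs", "commit", "-m", $msg, "history.txt") == 0
        or die "cvs commit history.txt failed";

    # 2. Add and commit the newly generated tables (cvs add may warn if
    #    the files are already known to cvs; that is harmless here).
    system("cvs", "add", "Tables/table.html", "Tables/trend-table.html");
    system("cvs", "commit", "-m", $msg, "Tables") == 0
        or die "cvs commit Tables failed";

    # 3. Commit any script or document changes in this directory.
    system("cvs", "commit", "-m", $msg, ".") == 0
        or die "cvs commit failed";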

History:

  02/04/2000  Created - attinasi
  03/17/2000  Removed QA Partner stuff - no longer used