xpcom/base/nsCycleCollector.cpp

author       Michael Schloh von Bennewitz <michael@schloh.com>
date         Sat, 03 Jan 2015 20:18:00 +0100
branch       TOR_BUG_3246
changeset    7:129ffea94266
permissions  -rw-r--r--

Conditionally enable double key logic according to:
private browsing mode or the privacy.thirdparty.isolate preference, and
implement it in GetCookieStringCommon and FindCookie where it counts...
With some reservations about how to convince FindCookie callers to test
the condition and pass a nullptr when double key logic is disabled.

/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
/* vim: set ts=8 sts=4 et sw=4 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper; so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine a node is
// black, it's ok to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white
// object has a reference count of zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, they will tell us when they either transition back to being
// black (incremented refcount) or are ultimately deleted.

// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.

// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, don't collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.

// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following properties:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//

#if !defined(__MINGW32__)
#ifdef WIN32
#include <crtdbg.h>
#include <errno.h>
#endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSRuntime.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedef conflicts. */
#include "mozilla/MemoryReporting.h"
#include "mozilla/LinkedList.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsDeque.h"
#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsIMemoryReporter.h"
#include "nsIFile.h"
#include "nsDumpUtils.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include "js/SliceBudget.h"
#include <stdint.h>
#include <stdio.h>

#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"

using namespace mozilla;

//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log either. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to
// "plugins", only automatically log plugin CCs. If set to "all", log
// everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)

// Various parameters of this collector can be tuned using environment
// variables.

struct nsCycleCollectorParams
{
    bool mLogAll;
    bool mLogShutdown;
    bool mAllTracesAll;
    bool mAllTracesShutdown;
    bool mLogThisThread;

    nsCycleCollectorParams() :
        mLogAll      (PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
        mLogShutdown (PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
        mAllTracesAll(false),
        mAllTracesShutdown(false)
    {
        const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
        bool threadLogging = true;
        if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
            if (NS_IsMainThread()) {
                threadLogging = !strcmp(logThreadEnv, "main");
            } else {
                threadLogging = !strcmp(logThreadEnv, "worker");
            }
        }

        const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
        bool processLogging = true;
        if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
            switch (XRE_GetProcessType()) {
                case GeckoProcessType_Default:
                    processLogging = !strcmp(logProcessEnv, "main");
                    break;
                case GeckoProcessType_Plugin:
                    processLogging = !strcmp(logProcessEnv, "plugins");
                    break;
                case GeckoProcessType_Content:
                    processLogging = !strcmp(logProcessEnv, "content");
                    break;
                default:
                    processLogging = false;
                    break;
            }
        }
        mLogThisThread = threadLogging && processLogging;

        const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
        if (allTracesEnv) {
            if (!strcmp(allTracesEnv, "all")) {
                mAllTracesAll = true;
            } else if (!strcmp(allTracesEnv, "shutdown")) {
                mAllTracesShutdown = true;
            }
        }
    }

    bool LogThisCC(bool aIsShutdown)
    {
        return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
    }

    bool AllTracesThisCC(bool aIsShutdown)
    {
        return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
    }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog
{
public:
    TimeLog() : mLastCheckpoint(TimeStamp::Now()) {}

    void
    Checkpoint(const char* aEvent)
    {
        TimeStamp now = TimeStamp::Now();
        uint32_t dur = (uint32_t) ((now - mLastCheckpoint).ToMilliseconds());
        if (dur > 0) {
            printf("cc: %s took %dms\n", aEvent, dur);
        }
        mLastCheckpoint = now;
    }

private:
    TimeStamp mLastCheckpoint;
};
#else
class TimeLog
{
public:
    TimeLog() {}
    void Checkpoint(const char* aEvent) {}
};
#endif

////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

struct PtrInfo;

class EdgePool
{
public:
    // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
    // However, at the end of a block, the last two pointers are a null
    // and then a void** pointing to the next block.  This allows
    // EdgePool::Iterators to be a single word but still capable of crossing
    // block boundaries.

    EdgePool()
    {
        mSentinelAndBlocks[0].block = nullptr;
        mSentinelAndBlocks[1].block = nullptr;
    }

    ~EdgePool()
    {
        MOZ_ASSERT(!mSentinelAndBlocks[0].block &&
                   !mSentinelAndBlocks[1].block,
                   "Didn't call Clear()?");
    }

    void Clear()
    {
        Block *b = Blocks();
        while (b) {
            Block *next = b->Next();
            delete b;
            b = next;
        }

        mSentinelAndBlocks[0].block = nullptr;
        mSentinelAndBlocks[1].block = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return !mSentinelAndBlocks[0].block &&
               !mSentinelAndBlocks[1].block;
    }
#endif

private:
    struct Block;
    union PtrInfoOrBlock {
        // Use a union to avoid reinterpret_cast and the ensuing
        // potential aliasing bugs.
        PtrInfo *ptrInfo;
        Block *block;
    };
    struct Block {
        enum { BlockSize = 16 * 1024 };

        PtrInfoOrBlock mPointers[BlockSize];
        Block() {
            mPointers[BlockSize - 2].block = nullptr; // sentinel
            mPointers[BlockSize - 1].block = nullptr; // next block pointer
        }
        Block*& Next()          { return mPointers[BlockSize - 1].block; }
        PtrInfoOrBlock* Start() { return &mPointers[0]; }
        PtrInfoOrBlock* End()   { return &mPointers[BlockSize - 2]; }
    };

    // Store the null sentinel so that we can have valid iterators
    // before adding any edges and without adding any blocks.
    PtrInfoOrBlock mSentinelAndBlocks[2];

    Block*& Blocks()       { return mSentinelAndBlocks[1].block; }
    Block*  Blocks() const { return mSentinelAndBlocks[1].block; }

public:
    class Iterator
    {
    public:
        Iterator() : mPointer(nullptr) {}
        Iterator(PtrInfoOrBlock *aPointer) : mPointer(aPointer) {}
        Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

        Iterator& operator++()
        {
            if (mPointer->ptrInfo == nullptr) {
                // Null pointer is a sentinel for link to the next block.
                mPointer = (mPointer + 1)->block->mPointers;
            }
            ++mPointer;
            return *this;
        }

        PtrInfo* operator*() const
        {
            if (mPointer->ptrInfo == nullptr) {
                // Null pointer is a sentinel for link to the next block.
                return (mPointer + 1)->block->mPointers->ptrInfo;
            }
            return mPointer->ptrInfo;
        }
        bool operator==(const Iterator& aOther) const
            { return mPointer == aOther.mPointer; }
        bool operator!=(const Iterator& aOther) const
            { return mPointer != aOther.mPointer; }

#ifdef DEBUG_CC_GRAPH
        bool Initialized() const
        {
            return mPointer != nullptr;
        }
#endif

    private:
        PtrInfoOrBlock *mPointer;
    };

    class Builder;
    friend class Builder;
    class Builder {
    public:
        Builder(EdgePool &aPool)
            : mCurrent(&aPool.mSentinelAndBlocks[0]),
              mBlockEnd(&aPool.mSentinelAndBlocks[0]),
              mNextBlockPtr(&aPool.Blocks())
        {
        }

        Iterator Mark() { return Iterator(mCurrent); }

        void Add(PtrInfo* aEdge) {
            if (mCurrent == mBlockEnd) {
                Block *b = new Block();
                *mNextBlockPtr = b;
                mCurrent = b->Start();
                mBlockEnd = b->End();
                mNextBlockPtr = &b->Next();
            }
            (mCurrent++)->ptrInfo = aEdge;
        }
    private:
        // mBlockEnd points to space for null sentinel
        PtrInfoOrBlock *mCurrent, *mBlockEnd;
        Block **mNextBlockPtr;
    };

    size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
        size_t n = 0;
        Block *b = Blocks();
        while (b) {
            n += aMallocSizeOf(b);
            b = b->Next();
        }
        return n;
    }
};

#ifdef DEBUG_CC_GRAPH
#define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                            \
    PR_BEGIN_MACRO                                                             \
    if (NS_IsMainThread()) {                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
    } else {                                                                   \
      Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
    }                                                                          \
    PR_END_MACRO

enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.

struct PtrInfo
{
    void *mPointer;
    nsCycleCollectionParticipant *mParticipant;
    uint32_t mColor : 2;
    uint32_t mInternalRefs : 30;
    uint32_t mRefCount;
private:
    EdgePool::Iterator mFirstChild;

public:

    PtrInfo(void *aPointer, nsCycleCollectionParticipant *aParticipant)
        : mPointer(aPointer),
          mParticipant(aParticipant),
          mColor(grey),
          mInternalRefs(0),
          mRefCount(UINT32_MAX - 1),
          mFirstChild()
    {
        // We initialize mRefCount to a large non-zero value so
        // that it doesn't look like a JS object to the cycle collector
        // in the case where the object dies before being traversed.

        MOZ_ASSERT(aParticipant);
    }

    // Allow NodePool::Block's constructor to compile.
    PtrInfo() {
        NS_NOTREACHED("should never be called");
    }

    EdgePool::Iterator FirstChild()
    {
        CC_GRAPH_ASSERT(mFirstChild.Initialized());
        return mFirstChild;
    }

    // this PtrInfo must be part of a NodePool
    EdgePool::Iterator LastChild()
    {
        CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
        return (this + 1)->mFirstChild;
    }

    void SetFirstChild(EdgePool::Iterator aFirstChild)
    {
        CC_GRAPH_ASSERT(aFirstChild.Initialized());
        mFirstChild = aFirstChild;
    }

    // this PtrInfo must be part of a NodePool
    void SetLastChild(EdgePool::Iterator aLastChild)
    {
        CC_GRAPH_ASSERT(aLastChild.Initialized());
        (this + 1)->mFirstChild = aLastChild;
    }
};

/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * that it allocates the PtrInfo 32K at a time.
 */
class NodePool
{
private:
    enum { BlockSize = 8 * 1024 }; // could be int template parameter

    struct Block {
        // We create and destroy Block using NS_Alloc/NS_Free rather
        // than new and delete to avoid calling its constructor and
        // destructor.
        Block()  { NS_NOTREACHED("should never be called"); }
        ~Block() { NS_NOTREACHED("should never be called"); }

        Block* mNext;
        PtrInfo mEntries[BlockSize + 1]; // +1 to store last child of last node
    };

public:
    NodePool()
        : mBlocks(nullptr),
          mLast(nullptr)
    {
    }

    ~NodePool()
    {
        MOZ_ASSERT(!mBlocks, "Didn't call Clear()?");
    }

    void Clear()
    {
        Block *b = mBlocks;
        while (b) {
            Block *n = b->mNext;
            NS_Free(b);
            b = n;
        }

        mBlocks = nullptr;
        mLast = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return !mBlocks && !mLast;
    }
#endif

    class Builder;
    friend class Builder;
    class Builder {
    public:
        Builder(NodePool& aPool)
            : mNextBlock(&aPool.mBlocks),
              mNext(aPool.mLast),
              mBlockEnd(nullptr)
        {
            MOZ_ASSERT(aPool.mBlocks == nullptr && aPool.mLast == nullptr,
                       "pool not empty");
        }
        PtrInfo *Add(void *aPointer, nsCycleCollectionParticipant *aParticipant)
        {
            if (mNext == mBlockEnd) {
                Block *block = static_cast<Block*>(NS_Alloc(sizeof(Block)));
                *mNextBlock = block;
                mNext = block->mEntries;
                mBlockEnd = block->mEntries + BlockSize;
                block->mNext = nullptr;
                mNextBlock = &block->mNext;
            }
            return new (mNext++) PtrInfo(aPointer, aParticipant);
        }
    private:
        Block **mNextBlock;
        PtrInfo *&mNext;
        PtrInfo *mBlockEnd;
    };

    class Enumerator;
    friend class Enumerator;
    class Enumerator {
    public:
        Enumerator(NodePool& aPool)
            : mFirstBlock(aPool.mBlocks),
              mCurBlock(nullptr),
              mNext(nullptr),
              mBlockEnd(nullptr),
              mLast(aPool.mLast)
        {
        }

        bool IsDone() const
        {
            return mNext == mLast;
        }

        bool AtBlockEnd() const
        {
            return mNext == mBlockEnd;
        }

        PtrInfo* GetNext()
        {
            MOZ_ASSERT(!IsDone(), "calling GetNext when done");
            if (mNext == mBlockEnd) {
                Block *nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
                mNext = nextBlock->mEntries;
                mBlockEnd = mNext + BlockSize;
                mCurBlock = nextBlock;
            }
            return mNext++;
        }
    private:
        // mFirstBlock is a reference to allow an Enumerator to be constructed
        // for an empty graph.
        Block *&mFirstBlock;
        Block *mCurBlock;
        // mNext is the next value we want to return, unless mNext == mBlockEnd
        // NB: mLast is a reference to allow enumerating while building!
        PtrInfo *mNext, *mBlockEnd, *&mLast;
    };

    size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
        // We don't measure the things pointed to by mEntries[] because those
        // pointers are non-owning.
        size_t n = 0;
        Block *b = mBlocks;
        while (b) {
            n += aMallocSizeOf(b);
            b = b->mNext;
        }
        return n;
    }

private:
    Block *mBlocks;
    PtrInfo *mLast;
};

// Declarations for mPtrToNodeMap.

struct PtrToNodeEntry : public PLDHashEntryHdr
{
    // The key is mNode->mPointer
    PtrInfo *mNode;
};

static bool
PtrToNodeMatchEntry(PLDHashTable *table,
                    const PLDHashEntryHdr *entry,
                    const void *key)
{
    const PtrToNodeEntry *n = static_cast<const PtrToNodeEntry*>(entry);
    return n->mNode->mPointer == key;
}

static PLDHashTableOps PtrNodeOps = {
    PL_DHashAllocTable,
    PL_DHashFreeTable,
    PL_DHashVoidPtrKeyStub,
    PtrToNodeMatchEntry,
    PL_DHashMoveEntryStub,
    PL_DHashClearEntryStub,
    PL_DHashFinalizeStub,
    nullptr
};

struct WeakMapping
{
    // map and key will be null if the corresponding objects are GC marked
    PtrInfo *mMap;
    PtrInfo *mKey;
    PtrInfo *mKeyDelegate;
    PtrInfo *mVal;
};

class GCGraphBuilder;

struct GCGraph
{
    NodePool mNodes;
    EdgePool mEdges;
    nsTArray<WeakMapping> mWeakMaps;
    uint32_t mRootCount;

private:
    PLDHashTable mPtrToNodeMap;

public:
    GCGraph() : mRootCount(0)
    {
        mPtrToNodeMap.ops = nullptr;
    }

    ~GCGraph()
    {
        if (mPtrToNodeMap.ops) {
            PL_DHashTableFinish(&mPtrToNodeMap);
        }
    }

    void Init()
    {
        MOZ_ASSERT(IsEmpty(), "Failed to call GCGraph::Clear");
        PL_DHashTableInit(&mPtrToNodeMap, &PtrNodeOps, nullptr,
                          sizeof(PtrToNodeEntry), 32768);
    }

    void Clear()
    {
        mNodes.Clear();
        mEdges.Clear();
        mWeakMaps.Clear();
        mRootCount = 0;
        PL_DHashTableFinish(&mPtrToNodeMap);
        mPtrToNodeMap.ops = nullptr;
    }

#ifdef DEBUG
    bool IsEmpty()
    {
        return mNodes.IsEmpty() && mEdges.IsEmpty() &&
            mWeakMaps.IsEmpty() && mRootCount == 0 &&
            !mPtrToNodeMap.ops;
    }
#endif

    PtrInfo* FindNode(void *aPtr);
    PtrToNodeEntry* AddNodeToMap(void *aPtr);
    void RemoveNodeFromMap(void *aPtr);

    uint32_t MapCount() const
    {
        return mPtrToNodeMap.entryCount;
    }

    void SizeOfExcludingThis(MallocSizeOf aMallocSizeOf,
                             size_t *aNodesSize, size_t *aEdgesSize,
                             size_t *aWeakMapsSize) const {
        *aNodesSize = mNodes.SizeOfExcludingThis(aMallocSizeOf);
        *aEdgesSize = mEdges.SizeOfExcludingThis(aMallocSizeOf);

        // We don't measure what the WeakMappings point to, because the
        // pointers are non-owning.
        *aWeakMapsSize = mWeakMaps.SizeOfExcludingThis(aMallocSizeOf);
    }
};

PtrInfo*
GCGraph::FindNode(void *aPtr)
{
    PtrToNodeEntry *e = static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_LOOKUP));
    if (!PL_DHASH_ENTRY_IS_BUSY(e)) {
        return nullptr;
    }
    return e->mNode;
}

PtrToNodeEntry*
GCGraph::AddNodeToMap(void *aPtr)
{
    PtrToNodeEntry *e = static_cast<PtrToNodeEntry*>(PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_ADD));
    if (!e) {
        // Caller should track OOMs
        return nullptr;
    }
    return e;
}

void
GCGraph::RemoveNodeFromMap(void *aPtr)
{
    PL_DHashTableOperate(&mPtrToNodeMap, aPtr, PL_DHASH_REMOVE);
}

static nsISupports *
CanonicalizeXPCOMParticipant(nsISupports *in)
{
    nsISupports* out;
    in->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                       reinterpret_cast<void**>(&out));
    return out;
}

static inline void
ToParticipant(nsISupports *s, nsXPCOMCycleCollectionParticipant **cp);

static void
CanonicalizeParticipant(void **parti, nsCycleCollectionParticipant **cp)
{
    // If the participant is null, this is an nsISupports participant,
    // so we must QI to get the real participant.

    if (!*cp) {
        nsISupports *nsparti = static_cast<nsISupports*>(*parti);
        nsparti = CanonicalizeXPCOMParticipant(nsparti);
        NS_ASSERTION(nsparti,
                     "Don't add objects that don't participate in collection!");
        nsXPCOMCycleCollectionParticipant *xcp;
        ToParticipant(nsparti, &xcp);
        *parti = nsparti;
        *cp = xcp;
    }
}

struct nsPurpleBufferEntry {
  union {
    void *mObject;                        // when low bit unset
    nsPurpleBufferEntry *mNextInFreeList; // when low bit set
  };

  nsCycleCollectingAutoRefCnt *mRefCnt;

  nsCycleCollectionParticipant *mParticipant; // nullptr for nsISupports
};

class nsCycleCollector;

struct nsPurpleBuffer
{
private:
    struct Block {
        Block *mNext;
        // Try to match the size of a jemalloc bucket, to minimize slop bytes.
        // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries
   900         //   is 16,380 bytes, which leaves 4 bytes for mNext.
   901         // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries
    902         //   is 32,760 bytes, which leaves 8 bytes for mNext.
   903         nsPurpleBufferEntry mEntries[1365];
   905         Block() : mNext(nullptr) {
   906             // Ensure Block is the right size (see above).
   907             static_assert(
   908                 sizeof(Block) == 16384 ||       // 32-bit
   909                 sizeof(Block) == 32768,         // 64-bit
   910                 "ill-sized nsPurpleBuffer::Block"
   911             );
   912         }
   914         template <class PurpleVisitor>
   915         void VisitEntries(nsPurpleBuffer &aBuffer, PurpleVisitor &aVisitor)
   916         {
   917             nsPurpleBufferEntry *eEnd = ArrayEnd(mEntries);
   918             for (nsPurpleBufferEntry *e = mEntries; e != eEnd; ++e) {
   919                 if (!(uintptr_t(e->mObject) & uintptr_t(1))) {
   920                     aVisitor.Visit(aBuffer, e);
   921                 }
   922             }
   923         }
   924     };
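The slop arithmetic in the comment above can be checked directly. This standalone sketch hard-codes the entry and pointer sizes the comment states (12-byte entries with a 4-byte mNext on 32-bit, 24-byte entries with an 8-byte mNext on 64-bit) rather than using the real nsPurpleBufferEntry:

```cpp
#include <cstddef>

// Verify that 1365 entries plus one next pointer exactly fill a
// power-of-two jemalloc bucket, leaving zero slop bytes.
constexpr std::size_t kEntries = 1365;

// 32-bit: 1365 * 12 == 16380, + 4 for mNext == 16384.
static_assert(kEntries * 12 + 4 == 16384,
              "32-bit Block fills a 16 KiB bucket");

// 64-bit: 1365 * 24 == 32760, + 8 for mNext == 32768.
static_assert(kEntries * 24 + 8 == 32768,
              "64-bit Block fills a 32 KiB bucket");
```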
   925     // This class wraps a linked list of the elements in the purple
   926     // buffer.
   928     uint32_t mCount;
   929     Block mFirstBlock;
   930     nsPurpleBufferEntry *mFreeList;
   932 public:
   933     nsPurpleBuffer()
   934     {
   935         InitBlocks();
   936     }
   938     ~nsPurpleBuffer()
   939     {
   940         FreeBlocks();
   941     }
   943     template <class PurpleVisitor>
   944     void VisitEntries(PurpleVisitor &aVisitor)
   945     {
   946         for (Block *b = &mFirstBlock; b; b = b->mNext) {
   947             b->VisitEntries(*this, aVisitor);
   948         }
   949     }
   951     void InitBlocks()
   952     {
   953         mCount = 0;
   954         mFreeList = nullptr;
   955         StartBlock(&mFirstBlock);
   956     }
   958     void StartBlock(Block *aBlock)
   959     {
   960         NS_ABORT_IF_FALSE(!mFreeList, "should not have free list");
   962         // Put all the entries in the block on the free list.
   963         nsPurpleBufferEntry *entries = aBlock->mEntries;
   964         mFreeList = entries;
   965         for (uint32_t i = 1; i < ArrayLength(aBlock->mEntries); ++i) {
   966             entries[i - 1].mNextInFreeList =
   967                 (nsPurpleBufferEntry*)(uintptr_t(entries + i) | 1);
   968         }
   969         entries[ArrayLength(aBlock->mEntries) - 1].mNextInFreeList =
   970             (nsPurpleBufferEntry*)1;
   971     }
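StartBlock above threads every entry onto a free list whose links are distinguished from live entries by setting the pointer's low bit; entries are at least pointer-aligned, so that bit is otherwise always zero. A self-contained sketch of the tagging scheme (Entry, Tag, Untag and IsFreeListLink are illustrative names, not the collector's own):

```cpp
#include <cstdint>

union Entry {
    void* mObject;          // low bit clear: entry holds a live object
    Entry* mNextInFreeList; // low bit set: entry is on the free list
};

// Tag a free-list link by setting the low bit, as StartBlock does.
inline Entry* Tag(Entry* aNext) {
    return reinterpret_cast<Entry*>(reinterpret_cast<uintptr_t>(aNext) | 1);
}

// Strip the tag to recover the real next pointer, as NewEntry does.
inline Entry* Untag(Entry* aLink) {
    return reinterpret_cast<Entry*>(
        reinterpret_cast<uintptr_t>(aLink) & ~uintptr_t(1));
}

// The low bit of the union tells us which interpretation is active.
inline bool IsFreeListLink(const Entry& aEntry) {
    return (reinterpret_cast<uintptr_t>(aEntry.mObject) & 1) != 0;
}
```

The same test appears in Block::VisitEntries, which skips any entry whose low bit is set.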
   973     void FreeBlocks()
   974     {
   975         if (mCount > 0)
   976             UnmarkRemainingPurple(&mFirstBlock);
   977         Block *b = mFirstBlock.mNext;
   978         while (b) {
   979             if (mCount > 0)
   980                 UnmarkRemainingPurple(b);
   981             Block *next = b->mNext;
   982             delete b;
   983             b = next;
   984         }
   985         mFirstBlock.mNext = nullptr;
   986     }
   988     struct UnmarkRemainingPurpleVisitor
   989     {
   990         void
   991         Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
   992         {
   993             if (aEntry->mRefCnt) {
   994                 aEntry->mRefCnt->RemoveFromPurpleBuffer();
   995                 aEntry->mRefCnt = nullptr;
   996             }
   997             aEntry->mObject = nullptr;
   998             --aBuffer.mCount;
   999         }
  1000     };
   1002     void UnmarkRemainingPurple(Block *b)
   1003     {
   1004         UnmarkRemainingPurpleVisitor visitor;
   1005         b->VisitEntries(*this, visitor);
   1006     }
  1008     void SelectPointers(GCGraphBuilder &builder);
  1010     // RemoveSkippable removes entries from the purple buffer synchronously
  1011     // (1) if aAsyncSnowWhiteFreeing is false and nsPurpleBufferEntry::mRefCnt is 0 or
  1012     // (2) if the object's nsXPCOMCycleCollectionParticipant::CanSkip() returns true or
  1013     // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  1014     // (4) If removeChildlessNodes is true, then any nodes in the purple buffer
  1015     //     that will have no children in the cycle collector graph will also be
  1016     //     removed. CanSkip() may be run on these children.
  1017     void RemoveSkippable(nsCycleCollector* aCollector,
  1018                          bool removeChildlessNodes,
  1019                          bool aAsyncSnowWhiteFreeing,
  1020                          CC_ForgetSkippableCallback aCb);
   1022     MOZ_ALWAYS_INLINE nsPurpleBufferEntry* NewEntry()
   1023     {
   1024         if (MOZ_UNLIKELY(!mFreeList)) {
   1025             Block *b = new Block;
   1026             StartBlock(b);
   1028             // Add the new block as the second block in the list.
   1029             b->mNext = mFirstBlock.mNext;
   1030             mFirstBlock.mNext = b;
   1031         }
   1033         nsPurpleBufferEntry *e = mFreeList;
   1034         mFreeList = (nsPurpleBufferEntry*)
   1035             (uintptr_t(mFreeList->mNextInFreeList) & ~uintptr_t(1));
   1036         return e;
   1037     }
   1039     MOZ_ALWAYS_INLINE void Put(void *p, nsCycleCollectionParticipant *cp,
   1040                                nsCycleCollectingAutoRefCnt *aRefCnt)
   1041     {
   1042         nsPurpleBufferEntry *e = NewEntry();
   1044         ++mCount;
   1046         e->mObject = p;
   1047         e->mRefCnt = aRefCnt;
   1048         e->mParticipant = cp;
   1049     }
   1051     void Remove(nsPurpleBufferEntry *e)
   1052     {
   1053         MOZ_ASSERT(mCount != 0, "must have entries");
   1055         if (e->mRefCnt) {
   1056             e->mRefCnt->RemoveFromPurpleBuffer();
   1057             e->mRefCnt = nullptr;
   1058         }
   1059         e->mNextInFreeList =
   1060             (nsPurpleBufferEntry*)(uintptr_t(mFreeList) | uintptr_t(1));
   1061         mFreeList = e;
   1063         --mCount;
   1064     }
   1066     uint32_t Count() const
   1067     {
   1068         return mCount;
   1069     }
   1071     size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
   1072     {
   1073         size_t n = 0;
   1075         // Don't measure mFirstBlock because it's within |this|.
   1076         const Block *block = mFirstBlock.mNext;
   1077         while (block) {
   1078             n += aMallocSizeOf(block);
   1079             block = block->mNext;
   1080         }
   1082         // mFreeList is deliberately not measured because it points into
   1083         // the purple buffer, which is within mFirstBlock and thus within |this|.
   1084         //
   1085         // We also don't measure the things pointed to by mEntries[] because
   1086         // those pointers are non-owning.
   1088         return n;
   1089     }
  1090 };
  1092 static bool
  1093 AddPurpleRoot(GCGraphBuilder &aBuilder, void *aRoot, nsCycleCollectionParticipant *aParti);
   1095 struct SelectPointersVisitor
   1096 {
  1097     SelectPointersVisitor(GCGraphBuilder &aBuilder)
  1098         : mBuilder(aBuilder)
  1099     {}
   1101     void
   1102     Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
   1103     {
   1104         MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
   1105         MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
   1106                    "SelectPointersVisitor: snow-white object in the purple buffer");
   1107         if (!aEntry->mRefCnt->IsPurple() ||
   1108             AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
   1109             aBuffer.Remove(aEntry);
   1110         }
   1111     }
  1113 private:
  1114     GCGraphBuilder &mBuilder;
  1115 };
   1117 void
   1118 nsPurpleBuffer::SelectPointers(GCGraphBuilder &aBuilder)
   1119 {
   1120     SelectPointersVisitor visitor(aBuilder);
   1121     VisitEntries(visitor);
   1123     NS_ASSERTION(mCount == 0, "AddPurpleRoot failed");
   1124     if (mCount == 0) {
   1125         FreeBlocks();
   1126         InitBlocks();
   1127     }
   1128 }
  1130 enum ccPhase {
  1131     IdlePhase,
  1132     GraphBuildingPhase,
  1133     ScanAndCollectWhitePhase,
  1134     CleanupPhase
  1135 };
  1137 enum ccType {
  1138     SliceCC,     /* If a CC is in progress, continue it. Otherwise, start a new one. */
  1139     ManualCC,    /* Explicitly triggered. */
  1140     ShutdownCC   /* Shutdown CC, used for finding leaks. */
  1141 };
  1143 #ifdef MOZ_NUWA_PROCESS
  1144 #include "ipc/Nuwa.h"
  1145 #endif
  1147 ////////////////////////////////////////////////////////////////////////
  1148 // Top level structure for the cycle collector.
  1149 ////////////////////////////////////////////////////////////////////////
  1151 typedef js::SliceBudget SliceBudget;
  1153 class JSPurpleBuffer;
   1155 class nsCycleCollector : public nsIMemoryReporter
   1156 {
  1157     NS_DECL_ISUPPORTS
  1158     NS_DECL_NSIMEMORYREPORTER
  1160     bool mActivelyCollecting;
  1161     bool mFreeingSnowWhite;
  1162     // mScanInProgress should be false when we're collecting white objects.
  1163     bool mScanInProgress;
  1164     CycleCollectorResults mResults;
  1165     TimeStamp mCollectionStart;
  1167     CycleCollectedJSRuntime *mJSRuntime;
  1169     ccPhase mIncrementalPhase;
  1170     GCGraph mGraph;
  1171     nsAutoPtr<GCGraphBuilder> mBuilder;
  1172     nsAutoPtr<NodePool::Enumerator> mCurrNode;
  1173     nsCOMPtr<nsICycleCollectorListener> mListener;
  1175     nsIThread* mThread;
  1177     nsCycleCollectorParams mParams;
  1179     uint32_t mWhiteNodeCount;
  1181     CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  1182     CC_ForgetSkippableCallback mForgetSkippableCB;
  1184     nsPurpleBuffer mPurpleBuf;
  1186     uint32_t mUnmergedNeeded;
  1187     uint32_t mMergedInARow;
  1189     JSPurpleBuffer* mJSPurpleBuffer;
  1191 public:
  1192     nsCycleCollector();
  1193     virtual ~nsCycleCollector();
  1195     void RegisterJSRuntime(CycleCollectedJSRuntime *aJSRuntime);
  1196     void ForgetJSRuntime();
   1198     void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB)
   1199     {
   1200         CheckThreadSafety();
   1201         mBeforeUnlinkCB = aBeforeUnlinkCB;
   1202     }
   1204     void SetForgetSkippableCallback(CC_ForgetSkippableCallback aForgetSkippableCB)
   1205     {
   1206         CheckThreadSafety();
   1207         mForgetSkippableCB = aForgetSkippableCB;
   1208     }
  1210     void Suspect(void *n, nsCycleCollectionParticipant *cp,
  1211                  nsCycleCollectingAutoRefCnt *aRefCnt);
  1212     uint32_t SuspectedCount();
  1213     void ForgetSkippable(bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing);
  1214     bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);
  1216     // This method assumes its argument is already canonicalized.
  1217     void RemoveObjectFromGraph(void *aPtr);
  1219     void PrepareForGarbageCollection();
  1221     bool Collect(ccType aCCType,
  1222                  SliceBudget &aBudget,
  1223                  nsICycleCollectorListener *aManualListener);
  1224     void Shutdown();
  1226     void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
  1227                              size_t *aObjectSize,
  1228                              size_t *aGraphNodesSize,
  1229                              size_t *aGraphEdgesSize,
  1230                              size_t *aWeakMapsSize,
  1231                              size_t *aPurpleBufferSize) const;
  1233     JSPurpleBuffer* GetJSPurpleBuffer();
  1234 private:
  1235     void CheckThreadSafety();
  1236     void ShutdownCollect();
  1238     void FixGrayBits(bool aForceGC);
  1239     bool ShouldMergeZones(ccType aCCType);
  1241     void BeginCollection(ccType aCCType, nsICycleCollectorListener *aManualListener);
  1242     void MarkRoots(SliceBudget &aBudget);
  1243     void ScanRoots(bool aFullySynchGraphBuild);
  1244     void ScanIncrementalRoots();
  1245     void ScanWeakMaps();
  1247     // returns whether anything was collected
  1248     bool CollectWhite();
  1250     void CleanupAfterCollection();
  1251 };
  1253 NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)
  1255 /**
  1256  * GraphWalker is templatized over a Visitor class that must provide
  1257  * the following two methods:
  1259  * bool ShouldVisitNode(PtrInfo const *pi);
  1260  * void VisitNode(PtrInfo *pi);
  1261  */
  1262 template <class Visitor>
   1263 class GraphWalker
   1264 {
  1265 private:
  1266     Visitor mVisitor;
  1268     void DoWalk(nsDeque &aQueue);
   1270     void CheckedPush(nsDeque &aQueue, PtrInfo *pi)
   1271     {
   1272         if (!pi) {
   1273             MOZ_CRASH();
   1274         }
   1275         if (!aQueue.Push(pi, fallible_t())) {
   1276             mVisitor.Failed();
   1277         }
   1278     }
  1280 public:
  1281     void Walk(PtrInfo *s0);
  1282     void WalkFromRoots(GCGraph &aGraph);
  1283     // copy-constructing the visitor should be cheap, and less
  1284     // indirection than using a reference
  1285     GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor) {}
  1286 };
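The two-method Visitor contract documented above can be exercised on a toy graph. IntNode, Walk and CountingVisitor below are invented stand-ins for PtrInfo, GraphWalker and the real scan visitors; the sketch shows how ShouldVisitNode gates revisits so a breadth-first walk over a cyclic graph still terminates:

```cpp
#include <deque>
#include <vector>

// Stand-in for PtrInfo: a node with outgoing edges and a visited flag.
struct IntNode {
    std::vector<IntNode*> mChildren;
    bool mVisited = false;
};

// Breadth-first walk parameterized over a Visitor that provides the same
// two methods GraphWalker requires: ShouldVisitNode() and VisitNode().
template <class Visitor>
void Walk(IntNode* aStart, Visitor& aVisitor) {
    std::deque<IntNode*> queue{aStart};
    while (!queue.empty()) {
        IntNode* node = queue.front();
        queue.pop_front();
        if (aVisitor.ShouldVisitNode(node)) {
            aVisitor.VisitNode(node);
            for (IntNode* child : node->mChildren)
                queue.push_back(child);
        }
    }
}

// A visitor that marks nodes and counts how many it saw; refusing
// already-visited nodes is what makes cycles safe to traverse.
struct CountingVisitor {
    int mCount = 0;
    bool ShouldVisitNode(const IntNode* aNode) { return !aNode->mVisited; }
    void VisitNode(IntNode* aNode) { aNode->mVisited = true; ++mCount; }
};
```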
  1289 ////////////////////////////////////////////////////////////////////////
  1290 // The static collector struct
  1291 ////////////////////////////////////////////////////////////////////////
  1293 struct CollectorData {
  1294   nsRefPtr<nsCycleCollector> mCollector;
  1295   CycleCollectedJSRuntime* mRuntime;
  1296 };
  1298 static mozilla::ThreadLocal<CollectorData*> sCollectorData;
  1300 ////////////////////////////////////////////////////////////////////////
  1301 // Utility functions
  1302 ////////////////////////////////////////////////////////////////////////
  1304 MOZ_NEVER_INLINE static void
   1305 Fault(const char *msg, const void *ptr=nullptr)
   1306 {
   1307     if (ptr)
   1308         printf("Fault in cycle collector: %s (ptr: %p)\n", msg, ptr);
   1309     else
   1310         printf("Fault in cycle collector: %s\n", msg);
   1312     NS_RUNTIMEABORT("cycle collector fault");
   1313 }
  1315 static void
   1316 Fault(const char *msg, PtrInfo *pi)
   1317 {
   1318     Fault(msg, pi->mPointer);
   1319 }
  1321 static inline void
   1322 ToParticipant(nsISupports *s, nsXPCOMCycleCollectionParticipant **cp)
   1323 {
   1324     // We use QI to move from an nsISupports to an
   1325     // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
   1326     // object that implements traversal and unlinking logic for the nsISupports
   1327     // in question.
   1328     CallQueryInterface(s, cp);
   1329 }
  1331 template <class Visitor>
  1332 MOZ_NEVER_INLINE void
   1333 GraphWalker<Visitor>::Walk(PtrInfo *s0)
   1334 {
   1335     nsDeque queue;
   1336     CheckedPush(queue, s0);
   1337     DoWalk(queue);
   1338 }
  1340 template <class Visitor>
  1341 MOZ_NEVER_INLINE void
   1342 GraphWalker<Visitor>::WalkFromRoots(GCGraph& aGraph)
   1343 {
   1344     nsDeque queue;
   1345     NodePool::Enumerator etor(aGraph.mNodes);
   1346     for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
   1347         CheckedPush(queue, etor.GetNext());
   1348     }
   1349     DoWalk(queue);
   1350 }
  1352 template <class Visitor>
  1353 MOZ_NEVER_INLINE void
   1354 GraphWalker<Visitor>::DoWalk(nsDeque &aQueue)
   1355 {
   1356     // Use aQueue to match the breadth-first traversal used when we
   1357     // built the graph, for hopefully-better locality.
   1358     while (aQueue.GetSize() > 0) {
   1359         PtrInfo *pi = static_cast<PtrInfo*>(aQueue.PopFront());
   1361         if (pi->mParticipant && mVisitor.ShouldVisitNode(pi)) {
   1362             mVisitor.VisitNode(pi);
   1363             for (EdgePool::Iterator child = pi->FirstChild(),
   1364                                 child_end = pi->LastChild();
   1365                  child != child_end; ++child) {
   1366                 CheckedPush(aQueue, *child);
   1367             }
   1368         }
   1369     }
   1370 }
   1372 struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber>
   1373 {
  1374   CCGraphDescriber()
  1375   : mAddress("0x"), mCnt(0), mType(eUnknown) {}
   1377   enum Type
   1378   {
  1379     eRefCountedObject,
  1380     eGCedObject,
  1381     eGCMarkedObject,
  1382     eEdge,
  1383     eRoot,
  1384     eGarbage,
  1385     eUnknown
  1386   };
  1388   nsCString mAddress;
  1389   nsCString mName;
  1390   nsCString mCompartmentOrToAddress;
  1391   uint32_t mCnt;
  1392   Type mType;
  1393 };
   1395 class nsCycleCollectorLogger MOZ_FINAL : public nsICycleCollectorListener
   1396 {
  1397 public:
   1398     nsCycleCollectorLogger() :
   1399       mStream(nullptr), mWantAllTraces(false),
   1400       mDisableLog(false), mWantAfterProcessing(false)
   1401     {
   1402     }
   1403     ~nsCycleCollectorLogger()
   1404     {
   1405         ClearDescribers();
   1406         if (mStream) {
   1407             MozillaUnRegisterDebugFILE(mStream);
   1408             fclose(mStream);
   1409         }
   1410     }
  1411     NS_DECL_ISUPPORTS
   1413     void SetAllTraces()
   1414     {
   1415         mWantAllTraces = true;
   1416     }
   1418     NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener)
   1419     {
   1420         SetAllTraces();
   1421         NS_ADDREF(*aListener = this);
   1422         return NS_OK;
   1423     }
   1425     NS_IMETHOD GetWantAllTraces(bool* aAllTraces)
   1426     {
   1427         *aAllTraces = mWantAllTraces;
   1428         return NS_OK;
   1429     }
   1431     NS_IMETHOD GetDisableLog(bool* aDisableLog)
   1432     {
   1433         *aDisableLog = mDisableLog;
   1434         return NS_OK;
   1435     }
   1437     NS_IMETHOD SetDisableLog(bool aDisableLog)
   1438     {
   1439         mDisableLog = aDisableLog;
   1440         return NS_OK;
   1441     }
   1443     NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing)
   1444     {
   1445         *aWantAfterProcessing = mWantAfterProcessing;
   1446         return NS_OK;
   1447     }
   1449     NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing)
   1450     {
   1451         mWantAfterProcessing = aWantAfterProcessing;
   1452         return NS_OK;
   1453     }
   1455     NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier)
   1456     {
   1457         aIdentifier = mFilenameIdentifier;
   1458         return NS_OK;
   1459     }
   1461     NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier)
   1462     {
   1463         mFilenameIdentifier = aIdentifier;
   1464         return NS_OK;
   1465     }
   1467     NS_IMETHOD GetGcLogPath(nsAString &aPath)
   1468     {
   1469       aPath = mGCLogPath;
   1470       return NS_OK;
   1471     }
   1473     NS_IMETHOD GetCcLogPath(nsAString &aPath)
   1474     {
   1475       aPath = mCCLogPath;
   1476       return NS_OK;
   1477     }
   1479     NS_IMETHOD Begin()
   1480     {
   1481         mCurrentAddress.AssignLiteral("0x");
   1482         ClearDescribers();
   1483         if (mDisableLog) {
   1484             return NS_OK;
   1485         }
   1487         // Initially create the log in a file starting with
   1488         // "incomplete-gc-edges".  We'll move the file and strip off the
   1489         // "incomplete-" once the dump completes.  (We do this because we don't
   1490         // want scripts which poll the filesystem looking for gc/cc dumps to
   1491         // grab a file before we're finished writing to it.)
   1492         nsCOMPtr<nsIFile> gcLogFile = CreateTempFile("incomplete-gc-edges");
   1493         if (NS_WARN_IF(!gcLogFile))
   1494             return NS_ERROR_UNEXPECTED;
   1496         // Dump the JS heap.
   1497         FILE* gcLogANSIFile = nullptr;
   1498         gcLogFile->OpenANSIFileDesc("w", &gcLogANSIFile);
   1499         if (NS_WARN_IF(!gcLogANSIFile))
   1500             return NS_ERROR_UNEXPECTED;
   1501         MozillaRegisterDebugFILE(gcLogANSIFile);
   1502         CollectorData *data = sCollectorData.get();
   1503         if (data && data->mRuntime)
   1504             data->mRuntime->DumpJSHeap(gcLogANSIFile);
   1505         MozillaUnRegisterDebugFILE(gcLogANSIFile);
   1506         fclose(gcLogANSIFile);
   1508         // Strip off "incomplete-".
   1509         nsCOMPtr<nsIFile> gcLogFileFinalDestination =
   1510             CreateTempFile("gc-edges");
   1511         if (NS_WARN_IF(!gcLogFileFinalDestination))
   1512             return NS_ERROR_UNEXPECTED;
   1514         nsAutoString gcLogFileFinalDestinationName;
   1515         gcLogFileFinalDestination->GetLeafName(gcLogFileFinalDestinationName);
   1516         if (NS_WARN_IF(gcLogFileFinalDestinationName.IsEmpty()))
   1517             return NS_ERROR_UNEXPECTED;
   1519         gcLogFile->MoveTo(/* directory */ nullptr, gcLogFileFinalDestinationName);
   1521         // Log to the error console.
   1522         nsCOMPtr<nsIConsoleService> cs =
   1523             do_GetService(NS_CONSOLESERVICE_CONTRACTID);
   1524         if (cs) {
   1525             nsAutoString gcLogPath;
   1526             gcLogFileFinalDestination->GetPath(gcLogPath);
   1528             nsString msg = NS_LITERAL_STRING("Garbage Collector log dumped to ") +
   1529                            gcLogPath;
   1530             cs->LogStringMessage(msg.get());
   1532             mGCLogPath = gcLogPath;
   1533         }
   1535         // Open a file for dumping the CC graph.  We again prefix with
   1536         // "incomplete-".
   1537         mOutFile = CreateTempFile("incomplete-cc-edges");
   1538         if (NS_WARN_IF(!mOutFile))
   1539             return NS_ERROR_UNEXPECTED;
   1540         MOZ_ASSERT(!mStream);
   1541         mOutFile->OpenANSIFileDesc("w", &mStream);
   1542         if (NS_WARN_IF(!mStream))
   1543             return NS_ERROR_UNEXPECTED;
   1544         MozillaRegisterDebugFILE(mStream);
   1546         fprintf(mStream, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
   1548         return NS_OK;
   1549     }
   1550     NS_IMETHOD NoteRefCountedObject(uint64_t aAddress, uint32_t refCount,
   1551                                     const char *aObjectDescription)
   1552     {
   1553         if (!mDisableLog) {
   1554             fprintf(mStream, "%p [rc=%u] %s\n", (void*)aAddress, refCount,
   1555                     aObjectDescription);
   1556         }
   1557         if (mWantAfterProcessing) {
   1558             CCGraphDescriber* d = new CCGraphDescriber();
   1559             mDescribers.insertBack(d);
   1560             mCurrentAddress.AssignLiteral("0x");
   1561             mCurrentAddress.AppendInt(aAddress, 16);
   1562             d->mType = CCGraphDescriber::eRefCountedObject;
   1563             d->mAddress = mCurrentAddress;
   1564             d->mCnt = refCount;
   1565             d->mName.Append(aObjectDescription);
   1566         }
   1567         return NS_OK;
   1568     }
   1569     NS_IMETHOD NoteGCedObject(uint64_t aAddress, bool aMarked,
   1570                               const char *aObjectDescription,
   1571                               uint64_t aCompartmentAddress)
   1572     {
   1573         if (!mDisableLog) {
   1574             fprintf(mStream, "%p [gc%s] %s\n", (void*)aAddress,
   1575                     aMarked ? ".marked" : "", aObjectDescription);
   1576         }
   1577         if (mWantAfterProcessing) {
   1578             CCGraphDescriber* d = new CCGraphDescriber();
   1579             mDescribers.insertBack(d);
   1580             mCurrentAddress.AssignLiteral("0x");
   1581             mCurrentAddress.AppendInt(aAddress, 16);
   1582             d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject :
   1583                                  CCGraphDescriber::eGCedObject;
   1584             d->mAddress = mCurrentAddress;
   1585             d->mName.Append(aObjectDescription);
   1586             if (aCompartmentAddress) {
   1587                 d->mCompartmentOrToAddress.AssignLiteral("0x");
   1588                 d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
   1589             } else {
   1590                 d->mCompartmentOrToAddress.SetIsVoid(true);
   1591             }
   1592         }
   1593         return NS_OK;
   1594     }
   1595     NS_IMETHOD NoteEdge(uint64_t aToAddress, const char *aEdgeName)
   1596     {
   1597         if (!mDisableLog) {
   1598             fprintf(mStream, "> %p %s\n", (void*)aToAddress, aEdgeName);
   1599         }
   1600         if (mWantAfterProcessing) {
   1601             CCGraphDescriber* d = new CCGraphDescriber();
   1602             mDescribers.insertBack(d);
   1603             d->mType = CCGraphDescriber::eEdge;
   1604             d->mAddress = mCurrentAddress;
   1605             d->mCompartmentOrToAddress.AssignLiteral("0x");
   1606             d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
   1607             d->mName.Append(aEdgeName);
   1608         }
   1609         return NS_OK;
   1610     }
   1611     NS_IMETHOD NoteWeakMapEntry(uint64_t aMap, uint64_t aKey,
   1612                                 uint64_t aKeyDelegate, uint64_t aValue)
   1613     {
   1614         if (!mDisableLog) {
   1615             fprintf(mStream, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
   1616                     (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
   1617         }
   1618         // We don't support after-processing for weak map entries.
   1619         return NS_OK;
   1620     }
   1621     NS_IMETHOD NoteIncrementalRoot(uint64_t aAddress)
   1622     {
   1623         if (!mDisableLog) {
   1624             fprintf(mStream, "IncrementalRoot %p\n", (void*)aAddress);
   1625         }
   1626         // We don't support after-processing for incremental roots.
   1627         return NS_OK;
   1628     }
   1629     NS_IMETHOD BeginResults()
   1630     {
   1631         if (!mDisableLog) {
   1632             fputs("==========\n", mStream);
   1633         }
   1634         return NS_OK;
   1635     }
   1636     NS_IMETHOD DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges)
   1637     {
   1638         if (!mDisableLog) {
   1639             fprintf(mStream, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
   1640         }
   1641         if (mWantAfterProcessing) {
   1642             CCGraphDescriber* d = new CCGraphDescriber();
   1643             mDescribers.insertBack(d);
   1644             d->mType = CCGraphDescriber::eRoot;
   1645             d->mAddress.AppendInt(aAddress, 16);
   1646             d->mCnt = aKnownEdges;
   1647         }
   1648         return NS_OK;
   1649     }
   1650     NS_IMETHOD DescribeGarbage(uint64_t aAddress)
   1651     {
   1652         if (!mDisableLog) {
   1653             fprintf(mStream, "%p [garbage]\n", (void*)aAddress);
   1654         }
   1655         if (mWantAfterProcessing) {
   1656             CCGraphDescriber* d = new CCGraphDescriber();
   1657             mDescribers.insertBack(d);
   1658             d->mType = CCGraphDescriber::eGarbage;
   1659             d->mAddress.AppendInt(aAddress, 16);
   1660         }
   1661         return NS_OK;
   1662     }
   1663     NS_IMETHOD End()
   1664     {
   1665         if (!mDisableLog) {
   1666             MOZ_ASSERT(mStream);
   1667             MOZ_ASSERT(mOutFile);
   1669             MozillaUnRegisterDebugFILE(mStream);
   1670             fclose(mStream);
   1671             mStream = nullptr;
   1673             // Strip off "incomplete-" from the log file's name.
   1674             nsCOMPtr<nsIFile> logFileFinalDestination =
   1675                 CreateTempFile("cc-edges");
   1676             if (NS_WARN_IF(!logFileFinalDestination))
   1677                 return NS_ERROR_UNEXPECTED;
   1679             nsAutoString logFileFinalDestinationName;
   1680             logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
   1681             if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty()))
   1682                 return NS_ERROR_UNEXPECTED;
   1684             mOutFile->MoveTo(/* directory = */ nullptr,
   1685                              logFileFinalDestinationName);
   1686             mOutFile = nullptr;
   1688             // Log to the error console.
   1689             nsCOMPtr<nsIConsoleService> cs =
   1690                 do_GetService(NS_CONSOLESERVICE_CONTRACTID);
   1691             if (cs) {
   1692                 nsAutoString ccLogPath;
   1693                 logFileFinalDestination->GetPath(ccLogPath);
   1695                 nsString msg = NS_LITERAL_STRING("Cycle Collector log dumped to ") +
   1696                                ccLogPath;
   1697                 cs->LogStringMessage(msg.get());
   1699                 mCCLogPath = ccLogPath;
   1700             }
   1701         }
   1702         return NS_OK;
   1703     }
   1704     NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
   1705                            bool* aCanContinue)
   1706     {
   1707         if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing))
   1708             return NS_ERROR_UNEXPECTED;
   1709         CCGraphDescriber* d = mDescribers.popFirst();
   1710         if (d) {
   1711             switch (d->mType) {
   1712                 case CCGraphDescriber::eRefCountedObject:
   1713                     aHandler->NoteRefCountedObject(d->mAddress,
   1714                                                    d->mCnt,
   1715                                                    d->mName);
   1716                     break;
   1717                 case CCGraphDescriber::eGCedObject:
   1718                 case CCGraphDescriber::eGCMarkedObject:
   1719                     aHandler->NoteGCedObject(d->mAddress,
   1720                                              d->mType ==
   1721                                                CCGraphDescriber::eGCMarkedObject,
   1722                                              d->mName,
   1723                                              d->mCompartmentOrToAddress);
   1724                     break;
   1725                 case CCGraphDescriber::eEdge:
   1726                     aHandler->NoteEdge(d->mAddress,
   1727                                        d->mCompartmentOrToAddress,
   1728                                        d->mName);
   1729                     break;
   1730                 case CCGraphDescriber::eRoot:
   1731                     aHandler->DescribeRoot(d->mAddress,
   1732                                            d->mCnt);
   1733                     break;
   1734                 case CCGraphDescriber::eGarbage:
   1735                     aHandler->DescribeGarbage(d->mAddress);
   1736                     break;
   1737                 case CCGraphDescriber::eUnknown:
   1738                     NS_NOTREACHED("CCGraphDescriber::eUnknown");
   1739                     break;
   1740             }
   1741             delete d;
   1742         }
   1743         if (!(*aCanContinue = !mDescribers.isEmpty())) {
   1744             mCurrentAddress.AssignLiteral("0x");
   1745         }
   1746         return NS_OK;
   1747     }
  1748 private:
  1749     /**
  1750      * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
  1751      * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory.  No existing
  1752      * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
  1753      * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
  1754      * on.
  1755      */
   1756     already_AddRefed<nsIFile>
   1757     CreateTempFile(const char* aPrefix)
   1758     {
   1759         nsPrintfCString filename("%s.%d%s%s.log",
   1760             aPrefix,
   1761             base::GetCurrentProcId(),
   1762             mFilenameIdentifier.IsEmpty() ? "" : ".",
   1763             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());
   1765         // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
   1766         // the fallback directories in OpenTempFile.  We don't use an nsCOMPtr
   1767         // here because OpenTempFile uses an in/out param and getter_AddRefs
   1768         // wouldn't work.
   1769         nsIFile* logFile = nullptr;
   1770         if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
   1771             NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true,
   1772                                   &logFile);
   1773         }
   1775         // On Android, OpenTempFile opens the named file under a specific
   1776         // folder (/data/local/tmp/memory-reports). Otherwise, it opens the
   1777         // named file under NS_OS_TEMP_DIR.
   1778         nsresult rv = nsDumpUtils::OpenTempFile(
   1779                                      filename,
   1780                                      &logFile,
   1781                                      NS_LITERAL_CSTRING("memory-reports"));
   1782         if (NS_FAILED(rv)) {
   1783           NS_IF_RELEASE(logFile);
   1784           return nullptr;
   1785         }
   1787         return dont_AddRef(logFile);
   1788     }
  1790     void ClearDescribers()
  1792       CCGraphDescriber* d;
  1793       while((d = mDescribers.popFirst())) {
  1794         delete d;
  1798     FILE *mStream;
  1799     nsCOMPtr<nsIFile> mOutFile;
  1800     bool mWantAllTraces;
  1801     bool mDisableLog;
  1802     bool mWantAfterProcessing;
  1803     nsString mFilenameIdentifier;
  1804     nsString mGCLogPath;
  1805     nsString mCCLogPath;
  1806     nsCString mCurrentAddress;
  1807     mozilla::LinkedList<CCGraphDescriber> mDescribers;
  1808 };
NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)

nsresult
nsCycleCollectorLoggerConstructor(nsISupports* aOuter,
                                  const nsIID& aIID,
                                  void** aInstancePtr)
{
    if (NS_WARN_IF(aOuter))
        return NS_ERROR_NO_AGGREGATION;

    nsISupports* logger = new nsCycleCollectorLogger();

    return logger->QueryInterface(aIID, aInstancePtr);
}
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |MarkRoots| routine.
////////////////////////////////////////////////////////////////////////

class GCGraphBuilder : public nsCycleCollectionTraversalCallback,
                       public nsCycleCollectionNoteRootCallback
{
private:
    GCGraph &mGraph;
    CycleCollectorResults &mResults;
    NodePool::Builder mNodeBuilder;
    EdgePool::Builder mEdgeBuilder;
    PtrInfo *mCurrPi;
    nsCycleCollectionParticipant *mJSParticipant;
    nsCycleCollectionParticipant *mJSZoneParticipant;
    nsCString mNextEdgeName;
    nsICycleCollectorListener *mListener;
    bool mMergeZones;
    bool mRanOutOfMemory;

public:
    GCGraphBuilder(GCGraph &aGraph,
                   CycleCollectorResults &aResults,
                   CycleCollectedJSRuntime *aJSRuntime,
                   nsICycleCollectorListener *aListener,
                   bool aMergeZones);
    virtual ~GCGraphBuilder();

    bool WantAllTraces() const
    {
        return nsCycleCollectionNoteRootCallback::WantAllTraces();
    }

    PtrInfo* AddNode(void *aPtr, nsCycleCollectionParticipant *aParticipant);
    PtrInfo* AddWeakMapNode(void* node);
    void Traverse(PtrInfo* aPtrInfo);
    void SetLastChild();

    bool RanOutOfMemory() const { return mRanOutOfMemory; }

private:
    void DescribeNode(uint32_t refCount, const char *objName)
    {
        mCurrPi->mRefCount = refCount;
    }

public:
    // nsCycleCollectionNoteRootCallback methods.
    NS_IMETHOD_(void) NoteXPCOMRoot(nsISupports *root);
    NS_IMETHOD_(void) NoteJSRoot(void *root);
    NS_IMETHOD_(void) NoteNativeRoot(void *root, nsCycleCollectionParticipant *participant);
    NS_IMETHOD_(void) NoteWeakMapping(void *map, void *key, void *kdelegate, void *val);

    // nsCycleCollectionTraversalCallback methods.
    NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt refCount,
                                             const char *objName);
    NS_IMETHOD_(void) DescribeGCedNode(bool isMarked, const char *objName,
                                       uint64_t aCompartmentAddress);

    NS_IMETHOD_(void) NoteXPCOMChild(nsISupports *child);
    NS_IMETHOD_(void) NoteJSChild(void *child);
    NS_IMETHOD_(void) NoteNativeChild(void *child,
                                      nsCycleCollectionParticipant *participant);
    NS_IMETHOD_(void) NoteNextEdgeName(const char* name);

private:
    NS_IMETHOD_(void) NoteRoot(void *root,
                               nsCycleCollectionParticipant *participant)
    {
        MOZ_ASSERT(root);
        MOZ_ASSERT(participant);

        if (!participant->CanSkipInCC(root) || MOZ_UNLIKELY(WantAllTraces())) {
            AddNode(root, participant);
        }
    }

    NS_IMETHOD_(void) NoteChild(void *child, nsCycleCollectionParticipant *cp,
                                nsCString edgeName)
    {
        PtrInfo *childPi = AddNode(child, cp);
        if (!childPi)
            return;
        mEdgeBuilder.Add(childPi);
        if (mListener) {
            mListener->NoteEdge((uint64_t)child, edgeName.get());
        }
        ++childPi->mInternalRefs;
    }

    JS::Zone *MergeZone(void *gcthing) {
        if (!mMergeZones) {
            return nullptr;
        }
        JS::Zone *zone = JS::GetGCThingZone(gcthing);
        if (js::IsSystemZone(zone)) {
            return nullptr;
        }
        return zone;
    }
};
GCGraphBuilder::GCGraphBuilder(GCGraph &aGraph,
                               CycleCollectorResults &aResults,
                               CycleCollectedJSRuntime *aJSRuntime,
                               nsICycleCollectorListener *aListener,
                               bool aMergeZones)
    : mGraph(aGraph),
      mResults(aResults),
      mNodeBuilder(aGraph.mNodes),
      mEdgeBuilder(aGraph.mEdges),
      mJSParticipant(nullptr),
      mJSZoneParticipant(nullptr),
      mListener(aListener),
      mMergeZones(aMergeZones),
      mRanOutOfMemory(false)
{
    if (aJSRuntime) {
        mJSParticipant = aJSRuntime->GCThingParticipant();
        mJSZoneParticipant = aJSRuntime->ZoneParticipant();
    }

    uint32_t flags = 0;
    if (!flags && mListener) {
        flags = nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
        bool all = false;
        mListener->GetWantAllTraces(&all);
        if (all) {
            flags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
            mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
        }
    }

    mFlags |= flags;

    mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());

    MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
               nsCycleCollectionTraversalCallback::WantAllTraces());
}

GCGraphBuilder::~GCGraphBuilder()
{
}
PtrInfo*
GCGraphBuilder::AddNode(void *aPtr, nsCycleCollectionParticipant *aParticipant)
{
    PtrToNodeEntry *e = mGraph.AddNodeToMap(aPtr);
    if (!e) {
        mRanOutOfMemory = true;
        return nullptr;
    }

    PtrInfo *result;
    if (!e->mNode) {
        // New entry.
        result = mNodeBuilder.Add(aPtr, aParticipant);
        e->mNode = result;
        NS_ASSERTION(result, "mNodeBuilder.Add returned null");
    } else {
        result = e->mNode;
        MOZ_ASSERT(result->mParticipant == aParticipant,
                   "nsCycleCollectionParticipant shouldn't change!");
    }
    return result;
}
MOZ_NEVER_INLINE void
GCGraphBuilder::Traverse(PtrInfo* aPtrInfo)
{
    mCurrPi = aPtrInfo;

    mCurrPi->SetFirstChild(mEdgeBuilder.Mark());

    if (!aPtrInfo->mParticipant) {
        return;
    }

    nsresult rv = aPtrInfo->mParticipant->Traverse(aPtrInfo->mPointer, *this);
    if (NS_FAILED(rv)) {
        Fault("script pointer traversal failed", aPtrInfo);
    }
}

void
GCGraphBuilder::SetLastChild()
{
    mCurrPi->SetLastChild(mEdgeBuilder.Mark());
}
NS_IMETHODIMP_(void)
GCGraphBuilder::NoteXPCOMRoot(nsISupports *root)
{
    root = CanonicalizeXPCOMParticipant(root);
    NS_ASSERTION(root,
                 "Don't add objects that don't participate in collection!");

    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(root, &cp);

    NoteRoot(root, cp);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteJSRoot(void *root)
{
    if (JS::Zone *zone = MergeZone(root)) {
        NoteRoot(zone, mJSZoneParticipant);
    } else {
        NoteRoot(root, mJSParticipant);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNativeRoot(void *root, nsCycleCollectionParticipant *participant)
{
    NoteRoot(root, participant);
}
NS_IMETHODIMP_(void)
GCGraphBuilder::DescribeRefCountedNode(nsrefcnt refCount, const char *objName)
{
    if (refCount == 0)
        Fault("zero refcount", mCurrPi);
    if (refCount == UINT32_MAX)
        Fault("overflowing refcount", mCurrPi);
    mResults.mVisitedRefCounted++;

    if (mListener) {
        mListener->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, refCount,
                                        objName);
    }

    DescribeNode(refCount, objName);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::DescribeGCedNode(bool isMarked, const char *objName,
                                 uint64_t aCompartmentAddress)
{
    uint32_t refCount = isMarked ? UINT32_MAX : 0;
    mResults.mVisitedGCed++;

    if (mListener) {
        mListener->NoteGCedObject((uint64_t)mCurrPi->mPointer, isMarked,
                                  objName, aCompartmentAddress);
    }

    DescribeNode(refCount, objName);
}
NS_IMETHODIMP_(void)
GCGraphBuilder::NoteXPCOMChild(nsISupports *child)
{
    nsCString edgeName;
    if (WantDebugInfo()) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }
    if (!child || !(child = CanonicalizeXPCOMParticipant(child)))
        return;

    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(child, &cp);
    if (cp && (!cp->CanSkipThis(child) || WantAllTraces())) {
        NoteChild(child, cp, edgeName);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNativeChild(void *child,
                                nsCycleCollectionParticipant *participant)
{
    nsCString edgeName;
    if (WantDebugInfo()) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }
    if (!child)
        return;

    MOZ_ASSERT(participant, "Need a nsCycleCollectionParticipant!");
    NoteChild(child, participant, edgeName);
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteJSChild(void *child)
{
    if (!child) {
        return;
    }

    nsCString edgeName;
    if (MOZ_UNLIKELY(WantDebugInfo())) {
        edgeName.Assign(mNextEdgeName);
        mNextEdgeName.Truncate();
    }

    if (xpc_GCThingIsGrayCCThing(child) || MOZ_UNLIKELY(WantAllTraces())) {
        if (JS::Zone *zone = MergeZone(child)) {
            NoteChild(zone, mJSZoneParticipant, edgeName);
        } else {
            NoteChild(child, mJSParticipant, edgeName);
        }
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteNextEdgeName(const char* name)
{
    if (WantDebugInfo()) {
        mNextEdgeName = name;
    }
}
PtrInfo*
GCGraphBuilder::AddWeakMapNode(void *node)
{
    MOZ_ASSERT(node, "Weak map node should be non-null.");

    if (!xpc_GCThingIsGrayCCThing(node) && !WantAllTraces())
        return nullptr;

    if (JS::Zone *zone = MergeZone(node)) {
        return AddNode(zone, mJSZoneParticipant);
    } else {
        return AddNode(node, mJSParticipant);
    }
}

NS_IMETHODIMP_(void)
GCGraphBuilder::NoteWeakMapping(void *map, void *key, void *kdelegate, void *val)
{
    // Don't try to optimize away the entry here, as we've already attempted to
    // do that in TraceWeakMapping in nsXPConnect.
    WeakMapping *mapping = mGraph.mWeakMaps.AppendElement();
    mapping->mMap = map ? AddWeakMapNode(map) : nullptr;
    mapping->mKey = key ? AddWeakMapNode(key) : nullptr;
    mapping->mKeyDelegate = kdelegate ? AddWeakMapNode(kdelegate) : mapping->mKey;
    mapping->mVal = val ? AddWeakMapNode(val) : nullptr;

    if (mListener) {
        mListener->NoteWeakMapEntry((uint64_t)map, (uint64_t)key,
                                    (uint64_t)kdelegate, (uint64_t)val);
    }
}
static bool
AddPurpleRoot(GCGraphBuilder &aBuilder, void *aRoot, nsCycleCollectionParticipant *aParti)
{
    CanonicalizeParticipant(&aRoot, &aParti);

    if (aBuilder.WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
        PtrInfo *pinfo = aBuilder.AddNode(aRoot, aParti);
        if (!pinfo) {
            return false;
        }
    }

    return true;
}
// MayHaveChild() will be false after a Traverse if the object does
// not have any children the CC will visit.
class ChildFinder : public nsCycleCollectionTraversalCallback
{
public:
    ChildFinder() : mMayHaveChild(false) {}

    // The logic of the Note*Child functions must mirror that of their
    // respective functions in GCGraphBuilder.
    NS_IMETHOD_(void) NoteXPCOMChild(nsISupports *child);
    NS_IMETHOD_(void) NoteNativeChild(void *child,
                                      nsCycleCollectionParticipant *helper);
    NS_IMETHOD_(void) NoteJSChild(void *child);

    NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt refcount,
                                             const char *objname) {}
    NS_IMETHOD_(void) DescribeGCedNode(bool ismarked,
                                       const char *objname,
                                       uint64_t aCompartmentAddress) {}
    NS_IMETHOD_(void) NoteNextEdgeName(const char* name) {}
    bool MayHaveChild() {
        return mMayHaveChild;
    }
private:
    bool mMayHaveChild;
};
NS_IMETHODIMP_(void)
ChildFinder::NoteXPCOMChild(nsISupports *child)
{
    if (!child || !(child = CanonicalizeXPCOMParticipant(child)))
        return;
    nsXPCOMCycleCollectionParticipant *cp;
    ToParticipant(child, &cp);
    if (cp && !cp->CanSkip(child, true))
        mMayHaveChild = true;
}

NS_IMETHODIMP_(void)
ChildFinder::NoteNativeChild(void *child,
                             nsCycleCollectionParticipant *helper)
{
    if (child)
        mMayHaveChild = true;
}

NS_IMETHODIMP_(void)
ChildFinder::NoteJSChild(void *child)
{
    if (child && xpc_GCThingIsGrayCCThing(child)) {
        mMayHaveChild = true;
    }
}

static bool
MayHaveChild(void *o, nsCycleCollectionParticipant* cp)
{
    ChildFinder cf;
    cp->Traverse(o, cf);
    return cf.MayHaveChild();
}
template<class T>
class SegmentedArrayElement : public LinkedListElement<SegmentedArrayElement<T>>
                            , public AutoFallibleTArray<T, 60>
{
};

template<class T>
class SegmentedArray
{
public:
    ~SegmentedArray()
    {
        MOZ_ASSERT(IsEmpty());
    }

    void AppendElement(T& aElement)
    {
        SegmentedArrayElement<T>* last = mSegments.getLast();
        if (!last || last->Length() == last->Capacity()) {
            last = new SegmentedArrayElement<T>();
            mSegments.insertBack(last);
        }
        last->AppendElement(aElement);
    }

    void Clear()
    {
        SegmentedArrayElement<T>* first;
        while ((first = mSegments.popFirst())) {
            delete first;
        }
    }

    SegmentedArrayElement<T>* GetFirstSegment()
    {
        return mSegments.getFirst();
    }

    bool IsEmpty()
    {
        return !GetFirstSegment();
    }

private:
    mozilla::LinkedList<SegmentedArrayElement<T>> mSegments;
};
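The segmented-append idea above can be sketched as a standalone toy (hypothetical `SegmentedVector`/`Segment` names, not the Mozilla classes): elements live in fixed-size heap segments chained in a list, so appending never reallocates or moves existing elements, and Clear() frees whole segments at once.

```cpp
#include <cassert>
#include <cstddef>

// Minimal sketch of a segmented array: a singly linked list of fixed-size
// segments. Appending only ever touches the last segment, so pointers into
// earlier segments stay valid for the structure's lifetime.
template <typename T, size_t N = 60>
class SegmentedVector {
    struct Segment {
        T items[N];
        size_t len = 0;
        Segment* next = nullptr;
    };
    Segment* first = nullptr;
    Segment* last = nullptr;

public:
    ~SegmentedVector() { Clear(); }

    void AppendElement(const T& aElement) {
        if (!last || last->len == N) {
            // Current segment full (or none yet): chain a fresh one.
            Segment* s = new Segment();
            if (last) last->next = s; else first = s;
            last = s;
        }
        last->items[last->len++] = aElement;
    }

    void Clear() {
        // Free segment by segment; no per-element shifting.
        while (first) {
            Segment* s = first;
            first = first->next;
            delete s;
        }
        last = nullptr;
    }

    bool IsEmpty() const { return !first; }

    size_t Length() const {
        size_t n = 0;
        for (Segment* s = first; s; s = s->next) n += s->len;
        return n;
    }
};
```

The real class reuses `mozilla::LinkedList` and a fallible array per segment; the sketch only shows the shape of the append/clear logic.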
// JSPurpleBuffer keeps references to GCThings which might affect the
// next cycle collection. It is owned only by itself: during unlink its
// self-reference is broken and the object ends up deleting itself.
// If GC happens before CC, references to GCThings and the self-reference
// are removed.
class JSPurpleBuffer
{
public:
    JSPurpleBuffer(JSPurpleBuffer*& aReferenceToThis)
      : mReferenceToThis(aReferenceToThis)
    {
        mReferenceToThis = this;
        NS_ADDREF_THIS();
        mozilla::HoldJSObjects(this);
    }

    ~JSPurpleBuffer()
    {
        MOZ_ASSERT(mValues.IsEmpty());
        MOZ_ASSERT(mObjects.IsEmpty());
        MOZ_ASSERT(mTenuredObjects.IsEmpty());
    }

    void Destroy()
    {
        mReferenceToThis = nullptr;
        mValues.Clear();
        mObjects.Clear();
        mTenuredObjects.Clear();
        mozilla::DropJSObjects(this);
        NS_RELEASE_THIS();
    }

    NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
    NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)

    JSPurpleBuffer*& mReferenceToThis;
    SegmentedArray<JS::Heap<JS::Value>> mValues;
    SegmentedArray<JS::Heap<JSObject*>> mObjects;
    SegmentedArray<JS::TenuredHeap<JSObject*>> mTenuredObjects;
};
NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
    tmp->Destroy();
NS_IMPL_CYCLE_COLLECTION_UNLINK_END

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
    CycleCollectionNoteChild(cb, tmp, "self");
    NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

#define NS_TRACE_SEGMENTED_ARRAY(_field)                                       \
    {                                                                          \
        auto segment = tmp->_field.GetFirstSegment();                          \
        while (segment) {                                                      \
            for (uint32_t i = segment->Length(); i > 0;) {                     \
                aCallbacks.Trace(&segment->ElementAt(--i), #_field, aClosure); \
            }                                                                  \
            segment = segment->getNext();                                      \
        }                                                                      \
    }

NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
    NS_TRACE_SEGMENTED_ARRAY(mValues)
    NS_TRACE_SEGMENTED_ARRAY(mObjects)
    NS_TRACE_SEGMENTED_ARRAY(mTenuredObjects)
NS_IMPL_CYCLE_COLLECTION_TRACE_END

NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)
struct SnowWhiteObject
{
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  nsCycleCollectingAutoRefCnt* mRefCnt;
};
class SnowWhiteKiller : public TraceCallbacks
{
public:
    SnowWhiteKiller(nsCycleCollector *aCollector, uint32_t aMaxCount)
        : mCollector(aCollector)
    {
        MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
        while (true) {
            if (mObjects.SetCapacity(aMaxCount)) {
                break;
            }
            if (aMaxCount == 1) {
                NS_RUNTIMEABORT("Not enough memory to even delete objects!");
            }
            aMaxCount /= 2;
        }
    }

    ~SnowWhiteKiller()
    {
        for (uint32_t i = 0; i < mObjects.Length(); ++i) {
            SnowWhiteObject& o = mObjects[i];
            if (!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()) {
                mCollector->RemoveObjectFromGraph(o.mPointer);
                o.mRefCnt->stabilizeForDeletion();
                o.mParticipant->Trace(o.mPointer, *this, nullptr);
                o.mParticipant->DeleteCycleCollectable(o.mPointer);
            }
        }
    }

    void
    Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
        if (!aEntry->mRefCnt->get()) {
            void *o = aEntry->mObject;
            nsCycleCollectionParticipant *cp = aEntry->mParticipant;
            CanonicalizeParticipant(&o, &cp);
            SnowWhiteObject swo = { o, cp, aEntry->mRefCnt };
            if (mObjects.AppendElement(swo)) {
                aBuffer.Remove(aEntry);
            }
        }
    }

    bool HasSnowWhiteObjects() const
    {
      return mObjects.Length() > 0;
    }

    virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
                       void* aClosure) const
    {
        if (aValue->isMarkable()) {
            void* thing = aValue->toGCThing();
            if (thing && xpc_GCThingIsGrayCCThing(thing)) {
                mCollector->GetJSPurpleBuffer()->mValues.AppendElement(*aValue);
            }
        }
    }

    virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
                       void* aClosure) const
    {
        if (*aObject && xpc_GCThingIsGrayCCThing(*aObject)) {
            mCollector->GetJSPurpleBuffer()->mObjects.AppendElement(*aObject);
        }
    }

    virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
                       void* aClosure) const
    {
        if (*aObject && xpc_GCThingIsGrayCCThing(*aObject)) {
            mCollector->GetJSPurpleBuffer()->mTenuredObjects.AppendElement(*aObject);
        }
    }

    virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
                       void* aClosure) const
    {
    }

    virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
                       void* aClosure) const
    {
    }

private:
    nsCycleCollector *mCollector;
    FallibleTArray<SnowWhiteObject> mObjects;
};
class RemoveSkippableVisitor : public SnowWhiteKiller
{
public:
    RemoveSkippableVisitor(nsCycleCollector* aCollector,
                           uint32_t aMaxCount, bool aRemoveChildlessNodes,
                           bool aAsyncSnowWhiteFreeing,
                           CC_ForgetSkippableCallback aCb)
        : SnowWhiteKiller(aCollector, aAsyncSnowWhiteFreeing ? 0 : aMaxCount),
          mRemoveChildlessNodes(aRemoveChildlessNodes),
          mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing),
          mDispatchedDeferredDeletion(false),
          mCallback(aCb)
    {}

    ~RemoveSkippableVisitor()
    {
        // Note, we must call the callback before SnowWhiteKiller calls
        // DeleteCycleCollectable!
        if (mCallback) {
            mCallback();
        }
        if (HasSnowWhiteObjects()) {
            // Effectively a continuation.
            nsCycleCollector_dispatchDeferredDeletion(true);
        }
    }

    void
    Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
    {
        MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
        if (!aEntry->mRefCnt->get()) {
            if (!mAsyncSnowWhiteFreeing) {
                SnowWhiteKiller::Visit(aBuffer, aEntry);
            } else if (!mDispatchedDeferredDeletion) {
                mDispatchedDeferredDeletion = true;
                nsCycleCollector_dispatchDeferredDeletion(false);
            }
            return;
        }
        void *o = aEntry->mObject;
        nsCycleCollectionParticipant *cp = aEntry->mParticipant;
        CanonicalizeParticipant(&o, &cp);
        if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
            (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
            return;
        }
        aBuffer.Remove(aEntry);
    }

private:
    bool mRemoveChildlessNodes;
    bool mAsyncSnowWhiteFreeing;
    bool mDispatchedDeferredDeletion;
    CC_ForgetSkippableCallback mCallback;
};
void
nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
                                bool aRemoveChildlessNodes,
                                bool aAsyncSnowWhiteFreeing,
                                CC_ForgetSkippableCallback aCb)
{
    RemoveSkippableVisitor visitor(aCollector, Count(), aRemoveChildlessNodes,
                                   aAsyncSnowWhiteFreeing, aCb);
    VisitEntries(visitor);
}
bool
nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer)
{
    CheckThreadSafety();

    if (mFreeingSnowWhite) {
        return false;
    }

    AutoRestore<bool> ar(mFreeingSnowWhite);
    mFreeingSnowWhite = true;

    bool hadSnowWhiteObjects = false;
    do {
        SnowWhiteKiller visitor(this, mPurpleBuf.Count());
        mPurpleBuf.VisitEntries(visitor);
        hadSnowWhiteObjects = hadSnowWhiteObjects ||
                              visitor.HasSnowWhiteObjects();
        if (!visitor.HasSnowWhiteObjects()) {
            break;
        }
    } while (aUntilNoSWInPurpleBuffer);
    return hadSnowWhiteObjects;
}
void
nsCycleCollector::ForgetSkippable(bool aRemoveChildlessNodes,
                                  bool aAsyncSnowWhiteFreeing)
{
    CheckThreadSafety();

    // If we remove things from the purple buffer during graph building, we may
    // lose track of an object that was mutated during graph building.
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);

    if (mJSRuntime) {
        mJSRuntime->PrepareForForgetSkippable();
    }
    MOZ_ASSERT(!mScanInProgress,
               "Don't forget skippable or free snow-white while scan is in progress.");
    mPurpleBuf.RemoveSkippable(this, aRemoveChildlessNodes,
                               aAsyncSnowWhiteFreeing, mForgetSkippableCB);
}
MOZ_NEVER_INLINE void
nsCycleCollector::MarkRoots(SliceBudget &aBudget)
{
    const intptr_t kNumNodesBetweenTimeChecks = 1000;
    const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;

    TimeLog timeLog;
    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);
    MOZ_ASSERT(mCurrNode);

    while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
        PtrInfo *pi = mCurrNode->GetNext();
        if (!pi) {
            MOZ_CRASH();
        }

        // We need to call the builder's Traverse() method on deleted nodes,
        // to set their firstChild(), which may be read by a prior non-deleted
        // neighbor.
        mBuilder->Traverse(pi);
        if (mCurrNode->AtBlockEnd()) {
            mBuilder->SetLastChild();
        }
        aBudget.step(kStep);
    }

    if (!mCurrNode->IsDone()) {
        timeLog.Checkpoint("MarkRoots()");
        return;
    }

    if (mGraph.mRootCount > 0) {
        mBuilder->SetLastChild();
    }

    if (mBuilder->RanOutOfMemory()) {
        MOZ_ASSERT(false, "Ran out of memory while building cycle collector graph");
        CC_TELEMETRY(_OOM, true);
    }

    mBuilder = nullptr;
    mCurrNode = nullptr;
    mIncrementalPhase = ScanAndCollectWhitePhase;
    timeLog.Checkpoint("MarkRoots()");
}
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |ScanRoots| routine.
////////////////////////////////////////////////////////////////////////


struct ScanBlackVisitor
{
    ScanBlackVisitor(uint32_t &aWhiteNodeCount, bool &aFailed)
        : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed)
    {
    }

    bool ShouldVisitNode(PtrInfo const *pi)
    {
        return pi->mColor != black;
    }

    MOZ_NEVER_INLINE void VisitNode(PtrInfo *pi)
    {
        if (pi->mColor == white)
            --mWhiteNodeCount;
        pi->mColor = black;
    }

    void Failed()
    {
        mFailed = true;
    }

private:
    uint32_t &mWhiteNodeCount;
    bool &mFailed;
};
struct scanVisitor
{
    scanVisitor(uint32_t &aWhiteNodeCount, bool &aFailed, bool aWasIncremental)
        : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed),
          mWasIncremental(aWasIncremental)
    {
    }

    bool ShouldVisitNode(PtrInfo const *pi)
    {
        return pi->mColor == grey;
    }

    MOZ_NEVER_INLINE void VisitNode(PtrInfo *pi)
    {
        if (pi->mInternalRefs > pi->mRefCount && pi->mRefCount > 0) {
            // If we found more references to an object than its ref count, then
            // the object should have already been marked as an incremental
            // root. Note that this is imprecise, because pi could have been
            // marked black for other reasons. Always fault if we weren't
            // incremental, as there were no incremental roots in that case.
            if (!mWasIncremental || pi->mColor != black) {
                Fault("traversed refs exceed refcount", pi);
            }
        }
        if (pi->mInternalRefs == pi->mRefCount || pi->mRefCount == 0) {
            pi->mColor = white;
            ++mWhiteNodeCount;
        } else {
            GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, mFailed)).Walk(pi);
            MOZ_ASSERT(pi->mColor == black,
                       "Why didn't ScanBlackVisitor make pi black?");
        }
    }

    void Failed() {
        mFailed = true;
    }

private:
    uint32_t &mWhiteNodeCount;
    bool &mFailed;
    bool mWasIncremental;
};
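The scan step above can be illustrated on a toy graph (hypothetical `Node`/`Scan`/`ScanBlack` names, not the GraphWalker machinery): a grey node whose traversed internal references account for its whole reference count is tentatively white (garbage cycle); any node with an excess, external reference is live, and liveness floods black through its children, un-whitening anything it reaches, exactly as ScanBlackVisitor decrements the white count.

```cpp
#include <cassert>
#include <vector>

enum Color { grey, black, white };

struct Node {
    unsigned refCount = 0;      // total references held to this node
    unsigned internalRefs = 0;  // references found by graph traversal
    Color color = grey;
    std::vector<Node*> children;
};

// Flood black from a live node; whites reached this way were wrongly
// suspected, so the white count is corrected (cf. ScanBlackVisitor).
static void ScanBlack(Node* n, unsigned& whiteCount) {
    if (n->color == black) return;
    if (n->color == white) --whiteCount;
    n->color = black;
    for (Node* c : n->children) ScanBlack(c, whiteCount);
}

// Returns the number of nodes left white (definite garbage cycles).
unsigned Scan(std::vector<Node*>& nodes) {
    unsigned whiteCount = 0;
    for (Node* n : nodes) {
        if (n->color != grey) continue;
        if (n->internalRefs == n->refCount) {
            n->color = white;           // all refs internal: garbage cycle
            ++whiteCount;
        } else {
            ScanBlack(n, whiteCount);   // external ref exists: node is live
        }
    }
    return whiteCount;
}
```

A pure two-node cycle ends up all white; give one node a single external reference and the black flood reclaims both.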
// Iterate over the WeakMaps.  If we mark anything while iterating
// over the WeakMaps, we must iterate over all of the WeakMaps again.
void
nsCycleCollector::ScanWeakMaps()
{
    bool anyChanged;
    bool failed = false;
    do {
        anyChanged = false;
        for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
            WeakMapping *wm = &mGraph.mWeakMaps[i];

            // If any of these are null, the original object was marked black.
            uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
            uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
            uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
            uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;

            // All non-null weak mapping maps, keys and values are
            // roots (in the sense of WalkFromRoots) in the cycle
            // collector graph, and thus should have been colored
            // either black or white in ScanRoots().
            MOZ_ASSERT(mColor != grey, "Uncolored weak map");
            MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
            MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
            MOZ_ASSERT(vColor != grey, "Uncolored weak map value");

            if (mColor == black && kColor != black && kdColor == black) {
                GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(wm->mKey);
                anyChanged = true;
            }

            if (mColor == black && kColor == black && vColor != black) {
                GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(wm->mVal);
                anyChanged = true;
            }
        }
    } while (anyChanged);

    if (failed) {
        MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
        CC_TELEMETRY(_OOM, true);
    }
}
  2767 // Flood black from any objects in the purple buffer that are in the CC graph.
  2768 class PurpleScanBlackVisitor
  2770 public:
  2771     PurpleScanBlackVisitor(GCGraph &aGraph, nsICycleCollectorListener *aListener,
  2772                            uint32_t &aCount, bool &aFailed)
  2773         : mGraph(aGraph), mListener(aListener), mCount(aCount), mFailed(aFailed)
  2777     void
  2778     Visit(nsPurpleBuffer &aBuffer, nsPurpleBufferEntry *aEntry)
  2780         MOZ_ASSERT(aEntry->mObject, "Entries with null mObject shouldn't be in the purple buffer.");
  2781         MOZ_ASSERT(aEntry->mRefCnt->get() != 0, "Snow-white objects shouldn't be in the purple buffer.");
  2783         void *obj = aEntry->mObject;
  2784         if (!aEntry->mParticipant) {
  2785             obj = CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj));
  2786             MOZ_ASSERT(obj, "Don't add objects that don't participate in collection!");
  2789         PtrInfo *pi = mGraph.FindNode(obj);
  2790         if (!pi) {
  2791             return;
  2793         MOZ_ASSERT(pi->mParticipant, "No dead objects should be in the purple buffer.");
  2794         if (MOZ_UNLIKELY(mListener)) {
  2795             mListener->NoteIncrementalRoot((uint64_t)pi->mPointer);
  2797         if (pi->mColor == black) {
  2798             return;
  2800         GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mCount, mFailed)).Walk(pi);
  2803 private:
  2804     GCGraph &mGraph;
  2805     nsICycleCollectorListener *mListener;
  2806     uint32_t &mCount;
  2807     bool &mFailed;
  2808 };
// Objects that have been stored somewhere since the start of incremental graph building must
// be treated as live for this cycle collection, because we may not have accurate information
// about who holds references to them.
void
nsCycleCollector::ScanIncrementalRoots()
{
    TimeLog timeLog;

    // Reference counted objects:
    // We cleared the purple buffer at the start of the current ICC, so if a
    // refcounted object is purple, it may have been AddRef'd during the current
    // ICC. (It may also have only been released.) If that is the case, we cannot
    // be sure that the set of things pointing to the object in the CC graph
    // is accurate. Therefore, for safety, we treat any purple objects as being
    // live during the current CC. We don't remove anything from the purple
    // buffer here, so these objects will be suspected and freed in the next CC
    // if they are garbage.
    bool failed = false;
    PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mListener, mWhiteNodeCount, failed);
    mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
    timeLog.Checkpoint("ScanIncrementalRoots::fix purple");

    // Garbage collected objects:
    // If a GCed object was added to the graph with a refcount of zero, and is
    // now marked black by the GC, it was probably gray before and was exposed
    // to active JS, so it may have been stored somewhere, so it needs to be
    // treated as live.
    if (mJSRuntime) {
        nsCycleCollectionParticipant *jsParticipant = mJSRuntime->GCThingParticipant();
        nsCycleCollectionParticipant *zoneParticipant = mJSRuntime->ZoneParticipant();
        NodePool::Enumerator etor(mGraph.mNodes);

        while (!etor.IsDone()) {
            PtrInfo *pi = etor.GetNext();

            // If the refcount is non-zero, pi can't have been a gray JS object.
            if (pi->mRefCount != 0) {
                continue;
            }

            // As an optimization, if an object has already been determined to be live,
            // don't consider it further.  We can't do this if there is a listener,
            // because the listener wants to know the complete set of incremental roots.
            if (pi->mColor == black && MOZ_LIKELY(!mListener)) {
                continue;
            }

            // If the object is still marked gray by the GC, nothing could have gotten
            // hold of it, so it isn't an incremental root.
            if (pi->mParticipant == jsParticipant) {
                if (xpc_GCThingIsGrayCCThing(pi->mPointer)) {
                    continue;
                }
            } else if (pi->mParticipant == zoneParticipant) {
                JS::Zone *zone = static_cast<JS::Zone*>(pi->mPointer);
                if (js::ZoneGlobalsAreAllGray(zone)) {
                    continue;
                }
            } else {
                MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
            }

            // At this point, pi must be an incremental root.

            // If there's a listener, tell it about this root. We don't bother with the
            // optimization of skipping the Walk() if pi is black: it will just return
            // without doing anything and there's no need to make this case faster.
            if (MOZ_UNLIKELY(mListener)) {
                mListener->NoteIncrementalRoot((uint64_t)pi->mPointer);
            }

            GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(mWhiteNodeCount, failed)).Walk(pi);
        }

        timeLog.Checkpoint("ScanIncrementalRoots::fix JS");
    }

    if (failed) {
        NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
        CC_TELEMETRY(_OOM, true);
    }
}
void
nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild)
{
    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    mWhiteNodeCount = 0;
    MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

    if (!aFullySynchGraphBuild) {
        ScanIncrementalRoots();
    }

    TimeLog timeLog;

    // On the assumption that most nodes will be black, it's
    // probably faster to use a GraphWalker than a
    // NodePool::Enumerator.
    bool failed = false;
    scanVisitor sv(mWhiteNodeCount, failed, !aFullySynchGraphBuild);
    GraphWalker<scanVisitor>(sv).WalkFromRoots(mGraph);
    timeLog.Checkpoint("ScanRoots::WalkFromRoots");

    if (failed) {
        NS_ASSERTION(false, "Ran out of memory in ScanRoots");
        CC_TELEMETRY(_OOM, true);
    }

    // Scanning weak maps must be done last.
    ScanWeakMaps();
    timeLog.Checkpoint("ScanRoots::ScanWeakMaps");

    if (mListener) {
        mListener->BeginResults();

        NodePool::Enumerator etor(mGraph.mNodes);
        while (!etor.IsDone()) {
            PtrInfo *pi = etor.GetNext();
            if (!pi->mParticipant) {
                continue;
            }
            switch (pi->mColor) {
            case black:
                if (pi->mRefCount > 0 && pi->mRefCount < UINT32_MAX &&
                    pi->mInternalRefs != pi->mRefCount) {
                    mListener->DescribeRoot((uint64_t)pi->mPointer,
                                            pi->mInternalRefs);
                }
                break;
            case white:
                mListener->DescribeGarbage((uint64_t)pi->mPointer);
                break;
            case grey:
                // With incremental CC, we can end up with a grey object after
                // scanning if it is only reachable from an object that gets freed.
                break;
            }
        }

        mListener->End();
        mListener = nullptr;
        timeLog.Checkpoint("ScanRoots::listener");
    }
}
////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |CollectWhite| routine, somewhat modified.
////////////////////////////////////////////////////////////////////////

bool
nsCycleCollector::CollectWhite()
{
    // Explanation of "somewhat modified": we have no way to collect the
    // set of whites "all at once", we have to ask each of them to drop
    // their outgoing links and assume this will cause the garbage cycle
    // to *mostly* self-destruct (except for the reference we continue
    // to hold).
    //
    // To do this "safely" we must make sure that the white nodes we're
    // operating on are stable for the duration of our operation. So we
    // make 3 sets of calls to language runtimes:
    //
    //   - Root(whites), which should pin the whites in memory.
    //   - Unlink(whites), which drops outgoing links on each white.
    //   - Unroot(whites), which returns the whites to normal GC.

    TimeLog timeLog;
    nsAutoTArray<PtrInfo*, 4000> whiteNodes;

    MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

    whiteNodes.SetCapacity(mWhiteNodeCount);
    uint32_t numWhiteGCed = 0;

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone())
    {
        PtrInfo *pinfo = etor.GetNext();
        if (pinfo->mColor == white && pinfo->mParticipant) {
            whiteNodes.AppendElement(pinfo);
            pinfo->mParticipant->Root(pinfo->mPointer);
            if (pinfo->mRefCount == 0) {
                // Only JS objects have a refcount of 0.
                ++numWhiteGCed;
            }
        }
    }

    uint32_t count = whiteNodes.Length();
    MOZ_ASSERT(numWhiteGCed <= count,
               "More freed GCed nodes than total freed nodes.");
    mResults.mFreedRefCounted += count - numWhiteGCed;
    mResults.mFreedGCed += numWhiteGCed;

    timeLog.Checkpoint("CollectWhite::Root");

    if (mBeforeUnlinkCB) {
        mBeforeUnlinkCB();
        timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
    }

    for (uint32_t i = 0; i < count; ++i) {
        PtrInfo *pinfo = whiteNodes.ElementAt(i);
        MOZ_ASSERT(pinfo->mParticipant, "Unlink shouldn't see objects removed from graph.");
        pinfo->mParticipant->Unlink(pinfo->mPointer);
#ifdef DEBUG
        if (mJSRuntime) {
            mJSRuntime->AssertNoObjectsToTrace(pinfo->mPointer);
        }
#endif
    }
    timeLog.Checkpoint("CollectWhite::Unlink");

    for (uint32_t i = 0; i < count; ++i) {
        PtrInfo *pinfo = whiteNodes.ElementAt(i);
        MOZ_ASSERT(pinfo->mParticipant, "Unroot shouldn't see objects removed from graph.");
        pinfo->mParticipant->Unroot(pinfo->mPointer);
    }
    timeLog.Checkpoint("CollectWhite::Unroot");

    nsCycleCollector_dispatchDeferredDeletion(false);
    mIncrementalPhase = CleanupPhase;

    return count > 0;
}
////////////////////////
// Memory reporting
////////////////////////

MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)

NS_IMETHODIMP
nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
                                 nsISupports* aData)
{
    size_t objectSize, graphNodesSize, graphEdgesSize, weakMapsSize,
           purpleBufferSize;
    SizeOfIncludingThis(CycleCollectorMallocSizeOf,
                        &objectSize,
                        &graphNodesSize, &graphEdgesSize,
                        &weakMapsSize,
                        &purpleBufferSize);

#define REPORT(_path, _amount, _desc)                                     \
    do {                                                                  \
        size_t amount = _amount;  /* evaluate |_amount| only once */      \
        if (amount > 0) {                                                 \
            nsresult rv;                                                  \
            rv = aHandleReport->Callback(EmptyCString(),                  \
                                         NS_LITERAL_CSTRING(_path),       \
                                         KIND_HEAP, UNITS_BYTES, amount,  \
                                         NS_LITERAL_CSTRING(_desc),       \
                                         aData);                          \
            if (NS_WARN_IF(NS_FAILED(rv)))                                \
                return rv;                                                \
        }                                                                 \
    } while (0)

    REPORT("explicit/cycle-collector/collector-object", objectSize,
           "Memory used for the cycle collector object itself.");

    REPORT("explicit/cycle-collector/graph-nodes", graphNodesSize,
           "Memory used for the nodes of the cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/graph-edges", graphEdgesSize,
           "Memory used for the edges of the cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/weak-maps", weakMapsSize,
           "Memory used for the representation of weak maps in the "
           "cycle collector's graph. "
           "This should be zero when the collector is idle.");

    REPORT("explicit/cycle-collector/purple-buffer", purpleBufferSize,
           "Memory used for the cycle collector's purple buffer.");

#undef REPORT

    return NS_OK;
}
////////////////////////////////////////////////////////////////////////
// Collector implementation
////////////////////////////////////////////////////////////////////////

nsCycleCollector::nsCycleCollector() :
    mActivelyCollecting(false),
    mFreeingSnowWhite(false),
    mScanInProgress(false),
    mJSRuntime(nullptr),
    mIncrementalPhase(IdlePhase),
    mThread(NS_GetCurrentThread()),
    mWhiteNodeCount(0),
    mBeforeUnlinkCB(nullptr),
    mForgetSkippableCB(nullptr),
    mUnmergedNeeded(0),
    mMergedInARow(0),
    mJSPurpleBuffer(nullptr)
{
}

nsCycleCollector::~nsCycleCollector()
{
    UnregisterWeakMemoryReporter(this);
}

void
nsCycleCollector::RegisterJSRuntime(CycleCollectedJSRuntime *aJSRuntime)
{
    if (mJSRuntime)
        Fault("multiple registrations of cycle collector JS runtime", aJSRuntime);

    mJSRuntime = aJSRuntime;

    // We can't register as a reporter in nsCycleCollector() because that runs
    // before the memory reporter manager is initialized.  So we do it here
    // instead.
    static bool registered = false;
    if (!registered) {
        RegisterWeakMemoryReporter(this);
        registered = true;
    }
}
void
nsCycleCollector::ForgetJSRuntime()
{
    if (!mJSRuntime)
        Fault("forgetting non-registered cycle collector JS runtime");

    mJSRuntime = nullptr;
}

#ifdef DEBUG
static bool
HasParticipant(void *aPtr, nsCycleCollectionParticipant *aParti)
{
    if (aParti) {
        return true;
    }

    nsXPCOMCycleCollectionParticipant *xcp;
    ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
    return xcp != nullptr;
}
#endif
MOZ_ALWAYS_INLINE void
nsCycleCollector::Suspect(void *aPtr, nsCycleCollectionParticipant *aParti,
                          nsCycleCollectingAutoRefCnt *aRefCnt)
{
    CheckThreadSafety();

    // Re-entering ::Suspect during collection used to be a fault, but
    // we are canonicalizing nsISupports pointers using QI, so we will
    // see some spurious refcount traffic here.

    if (MOZ_UNLIKELY(mScanInProgress)) {
        return;
    }

    MOZ_ASSERT(aPtr, "Don't suspect null pointers");

    MOZ_ASSERT(HasParticipant(aPtr, aParti),
               "Suspected nsISupports pointer must QI to nsXPCOMCycleCollectionParticipant");

    mPurpleBuf.Put(aPtr, aParti, aRefCnt);
}

void
nsCycleCollector::CheckThreadSafety()
{
#ifdef DEBUG
    nsIThread* currentThread = NS_GetCurrentThread();
    // XXXkhuey we can be called so late in shutdown that NS_GetCurrentThread
    // returns null (after the thread manager has shut down).
    MOZ_ASSERT(mThread == currentThread || !currentThread);
#endif
}
// The cycle collector uses the mark bitmap to discover what JS objects
// were reachable only from XPConnect roots that might participate in
// cycles. We ask the JS runtime whether we need to force a GC before
// this CC. It returns true on startup (before the mark bits have been set),
// and also when UnmarkGray has run out of stack.  We also force GCs on
// shutdown to collect cycles involving both DOM and JS.
void
nsCycleCollector::FixGrayBits(bool aForceGC)
{
    CheckThreadSafety();

    if (!mJSRuntime)
        return;

    if (!aForceGC) {
        mJSRuntime->FixWeakMappingGrayBits();

        bool needGC = mJSRuntime->NeedCollect();
        // Only do a telemetry ping for non-shutdown CCs.
        CC_TELEMETRY(_NEED_GC, needGC);
        if (!needGC)
            return;
        mResults.mForcedGC = true;
    }

    TimeLog timeLog;
    mJSRuntime->Collect(aForceGC ? JS::gcreason::SHUTDOWN_CC : JS::gcreason::CC_FORCED);
    timeLog.Checkpoint("GC()");
}
void
nsCycleCollector::CleanupAfterCollection()
{
    MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
    mGraph.Clear();

    uint32_t interval = (uint32_t) ((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
#ifdef COLLECT_TIME_DEBUG
    printf("cc: total cycle collector time was %ums\n", interval);
    printf("cc: visited %u ref counted and %u GCed objects, freed %u ref counted and %u GCed objects",
           mResults.mVisitedRefCounted, mResults.mVisitedGCed,
           mResults.mFreedRefCounted, mResults.mFreedGCed);
    uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
    if (numVisited > 1000) {
        uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
        printf(" (%u%%)", 100 * numFreed / numVisited);
    }
    printf(".\ncc: \n");
#endif
    CC_TELEMETRY( , interval);
    CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
    CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
    CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);

    if (mJSRuntime) {
        mJSRuntime->EndCycleCollectionCallback(mResults);
    }
    mIncrementalPhase = IdlePhase;
}
void
nsCycleCollector::ShutdownCollect()
{
    SliceBudget unlimitedBudget;
    uint32_t i;
    for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
        if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
            break;
        }
    }
    NS_WARN_IF_FALSE(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
}

static void
PrintPhase(const char *aPhase)
{
#ifdef DEBUG_PHASES
    printf("cc: begin %s on %s\n", aPhase,
           NS_IsMainThread() ? "mainthread" : "worker");
#endif
}
bool
nsCycleCollector::Collect(ccType aCCType,
                          SliceBudget &aBudget,
                          nsICycleCollectorListener *aManualListener)
{
    CheckThreadSafety();

    // This can legitimately happen in a few cases. See bug 383651.
    if (mActivelyCollecting || mFreeingSnowWhite) {
        return false;
    }
    mActivelyCollecting = true;

    bool startedIdle = (mIncrementalPhase == IdlePhase);
    bool collectedAny = false;

    // If the CC started idle, it will call BeginCollection, which
    // will do FreeSnowWhite, so it doesn't need to be done here.
    if (!startedIdle) {
        FreeSnowWhite(true);
    }

    bool finished = false;
    do {
        switch (mIncrementalPhase) {
        case IdlePhase:
            PrintPhase("BeginCollection");
            BeginCollection(aCCType, aManualListener);
            break;
        case GraphBuildingPhase:
            PrintPhase("MarkRoots");
            MarkRoots(aBudget);
            break;
        case ScanAndCollectWhitePhase:
            // We do ScanRoots and CollectWhite in a single slice to ensure
            // that we won't unlink a live object if a weak reference is
            // promoted to a strong reference after ScanRoots has finished.
            // See bug 926533.
            PrintPhase("ScanRoots");
            ScanRoots(startedIdle);
            PrintPhase("CollectWhite");
            collectedAny = CollectWhite();
            break;
        case CleanupPhase:
            PrintPhase("CleanupAfterCollection");
            CleanupAfterCollection();
            finished = true;
            break;
        }
    } while (!aBudget.checkOverBudget() && !finished);

    // Clear mActivelyCollecting here to ensure that a recursive call to
    // Collect() does something.
    mActivelyCollecting = false;

    if (aCCType != SliceCC && !startedIdle) {
        // We were in the middle of an incremental CC (using its own listener).
        // Somebody has forced a CC, so after having finished out the current CC,
        // run the CC again using the new listener.
        MOZ_ASSERT(mIncrementalPhase == IdlePhase);
        if (Collect(aCCType, aBudget, aManualListener)) {
            collectedAny = true;
        }
    }

    MOZ_ASSERT_IF(aCCType != SliceCC, mIncrementalPhase == IdlePhase);

    return collectedAny;
}
// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
void
nsCycleCollector::PrepareForGarbageCollection()
{
    if (mIncrementalPhase == IdlePhase) {
        MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
        MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
        if (mJSPurpleBuffer) {
            mJSPurpleBuffer->Destroy();
        }
        return;
    }

    SliceBudget unlimitedBudget;
    PrintPhase("PrepareForGarbageCollection");
    // Use SliceCC because we only want to finish the CC in progress.
    Collect(SliceCC, unlimitedBudget, nullptr);
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);
}
// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

bool
nsCycleCollector::ShouldMergeZones(ccType aCCType)
{
    if (!mJSRuntime) {
        return false;
    }

    MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
    MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

    if (mMergedInARow == kMaxConsecutiveMerged) {
        MOZ_ASSERT(mUnmergedNeeded == 0);
        mUnmergedNeeded = kMinConsecutiveUnmerged;
    }

    if (mUnmergedNeeded > 0) {
        mUnmergedNeeded--;
        mMergedInARow = 0;
        return false;
    }

    if (aCCType == SliceCC && mJSRuntime->UsefulToMergeZones()) {
        mMergedInARow++;
        return true;
    } else {
        mMergedInARow = 0;
        return false;
    }
}
void
nsCycleCollector::BeginCollection(ccType aCCType,
                                  nsICycleCollectorListener *aManualListener)
{
    TimeLog timeLog;
    MOZ_ASSERT(mIncrementalPhase == IdlePhase);

    mCollectionStart = TimeStamp::Now();

    if (mJSRuntime) {
        mJSRuntime->BeginCycleCollectionCallback();
        timeLog.Checkpoint("BeginCycleCollectionCallback()");
    }

    bool isShutdown = (aCCType == ShutdownCC);

    // Set up the listener for this CC.
    MOZ_ASSERT_IF(isShutdown, !aManualListener);
    MOZ_ASSERT(!mListener, "Forgot to clear a previous listener?");
    mListener = aManualListener;
    aManualListener = nullptr;
    if (!mListener && mParams.LogThisCC(isShutdown)) {
        nsRefPtr<nsCycleCollectorLogger> logger = new nsCycleCollectorLogger();
        if (mParams.AllTracesThisCC(isShutdown)) {
            logger->SetAllTraces();
        }
        mListener = logger.forget();
    }

    bool forceGC = isShutdown;
    if (!forceGC && mListener) {
        // On a WantAllTraces CC, force a synchronous global GC to prevent
        // hijinks from ForgetSkippable and compartmental GCs.
        mListener->GetWantAllTraces(&forceGC);
    }
    FixGrayBits(forceGC);

    FreeSnowWhite(true);

    if (mListener && NS_FAILED(mListener->Begin())) {
        mListener = nullptr;
    }

    // Set up the data structures for building the graph.
    mGraph.Init();
    mResults.Init();
    bool mergeZones = ShouldMergeZones(aCCType);
    mResults.mMergedZones = mergeZones;

    MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
    mBuilder = new GCGraphBuilder(mGraph, mResults, mJSRuntime, mListener, mergeZones);

    if (mJSRuntime) {
        mJSRuntime->TraverseRoots(*mBuilder);
        timeLog.Checkpoint("mJSRuntime->TraverseRoots()");
    }

    AutoRestore<bool> ar(mScanInProgress);
    MOZ_ASSERT(!mScanInProgress);
    mScanInProgress = true;
    mPurpleBuf.SelectPointers(*mBuilder);
    timeLog.Checkpoint("SelectPointers()");

    // We've finished adding roots, and everything in the graph is a root.
    mGraph.mRootCount = mGraph.MapCount();

    mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
    mIncrementalPhase = GraphBuildingPhase;
}
uint32_t
nsCycleCollector::SuspectedCount()
{
    CheckThreadSafety();
    return mPurpleBuf.Count();
}

void
nsCycleCollector::Shutdown()
{
    CheckThreadSafety();

    // Always delete snow white objects.
    FreeSnowWhite(true);

#ifndef DEBUG
    if (PR_GetEnv("MOZ_CC_RUN_DURING_SHUTDOWN"))
#endif
    {
        ShutdownCollect();
    }
}

void
nsCycleCollector::RemoveObjectFromGraph(void *aObj)
{
    if (mIncrementalPhase == IdlePhase) {
        return;
    }

    if (PtrInfo *pinfo = mGraph.FindNode(aObj)) {
        mGraph.RemoveNodeFromMap(aObj);

        pinfo->mPointer = nullptr;
        pinfo->mParticipant = nullptr;
    }
}
void
nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                      size_t *aObjectSize,
                                      size_t *aGraphNodesSize,
                                      size_t *aGraphEdgesSize,
                                      size_t *aWeakMapsSize,
                                      size_t *aPurpleBufferSize) const
{
    *aObjectSize = aMallocSizeOf(this);

    mGraph.SizeOfExcludingThis(aMallocSizeOf, aGraphNodesSize, aGraphEdgesSize,
                               aWeakMapsSize);

    *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

    // These fields are deliberately not measured:
    // - mJSRuntime: because it's non-owning and measured by JS reporters.
    // - mParams: because it only contains scalars.
}

JSPurpleBuffer*
nsCycleCollector::GetJSPurpleBuffer()
{
  if (!mJSPurpleBuffer) {
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when the
    // nsRefPtr goes out of scope and calls Release.
    nsRefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  }
  return mJSPurpleBuffer;
}
  3548 ////////////////////////////////////////////////////////////////////////
  3549 // Module public API (exported in nsCycleCollector.h)
  3550 // Just functions that redirect into the singleton, once it's built.
  3551 ////////////////////////////////////////////////////////////////////////
  3553 void
  3554 nsCycleCollector_registerJSRuntime(CycleCollectedJSRuntime *rt)
  3556     CollectorData *data = sCollectorData.get();
  3558     // We should have started the cycle collector by now.
  3559     MOZ_ASSERT(data);
  3560     MOZ_ASSERT(data->mCollector);
  3561     // But we shouldn't already have a runtime.
  3562     MOZ_ASSERT(!data->mRuntime);
  3564     data->mRuntime = rt;
  3565     data->mCollector->RegisterJSRuntime(rt);
void
nsCycleCollector_forgetJSRuntime()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    // And we shouldn't have already forgotten our runtime.
    MOZ_ASSERT(data->mRuntime);

    // But it may have shut down already.
    if (data->mCollector) {
        data->mCollector->ForgetJSRuntime();
        data->mRuntime = nullptr;
    } else {
        data->mRuntime = nullptr;
        delete data;
        sCollectorData.set(nullptr);
    }
}

/* static */ CycleCollectedJSRuntime*
CycleCollectedJSRuntime::Get()
{
    CollectorData* data = sCollectorData.get();
    if (data) {
        return data->mRuntime;
    }
    return nullptr;
}


namespace mozilla {
namespace cyclecollector {

void
HoldJSObjectsImpl(void* aHolder, nsScriptObjectTracer* aTracer)
{
    CollectorData* data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->AddJSHolder(aHolder, aTracer);
}

void
HoldJSObjectsImpl(nsISupports* aHolder)
{
    nsXPCOMCycleCollectionParticipant* participant;
    CallQueryInterface(aHolder, &participant);
    MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
    MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
               "The result of QIing a JS holder should be the same as ToSupports");

    HoldJSObjectsImpl(aHolder, participant);
}

void
DropJSObjectsImpl(void* aHolder)
{
    CollectorData* data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->RemoveJSHolder(aHolder);
}

void
DropJSObjectsImpl(nsISupports* aHolder)
{
#ifdef DEBUG
    nsXPCOMCycleCollectionParticipant* participant;
    CallQueryInterface(aHolder, &participant);
    MOZ_ASSERT(participant, "Failed to QI to nsXPCOMCycleCollectionParticipant!");
    MOZ_ASSERT(participant->CheckForRightISupports(aHolder),
               "The result of QIing a JS holder should be the same as ToSupports");
#endif
    DropJSObjectsImpl(static_cast<void*>(aHolder));
}

#ifdef DEBUG
bool
IsJSHolder(void* aHolder)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    return data->mRuntime->IsJSHolder(aHolder);
}
#endif

void
DeferredFinalize(nsISupports* aSupports)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->DeferredFinalize(aSupports);
}

void
DeferredFinalize(DeferredFinalizeAppendFunction aAppendFunc,
                 DeferredFinalizeFunction aFunc,
                 void* aThing)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now, and not completely
    // shut down.
    MOZ_ASSERT(data);
    // And we should have a runtime.
    MOZ_ASSERT(data->mRuntime);

    data->mRuntime->DeferredFinalize(aAppendFunc, aFunc, aThing);
}

} // namespace cyclecollector
} // namespace mozilla

MOZ_NEVER_INLINE static void
SuspectAfterShutdown(void* n, nsCycleCollectionParticipant* cp,
                     nsCycleCollectingAutoRefCnt* aRefCnt,
                     bool* aShouldDelete)
{
    if (aRefCnt->get() == 0) {
        if (!aShouldDelete) {
            // The CC is shut down, so we can't be in the middle of an ICC.
            CanonicalizeParticipant(&n, &cp);
            aRefCnt->stabilizeForDeletion();
            cp->DeleteCycleCollectable(n);
        } else {
            *aShouldDelete = true;
        }
    } else {
        // Make sure we'll get called again.
        aRefCnt->RemoveFromPurpleBuffer();
    }
}

void
NS_CycleCollectorSuspect3(void *n, nsCycleCollectionParticipant *cp,
                          nsCycleCollectingAutoRefCnt *aRefCnt,
                          bool* aShouldDelete)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);

    if (MOZ_LIKELY(data->mCollector)) {
        data->mCollector->Suspect(n, cp, aRefCnt);
        return;
    }

    SuspectAfterShutdown(n, cp, aRefCnt, aShouldDelete);
}

uint32_t
nsCycleCollector_suspectedCount()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);

    if (!data->mCollector) {
        return 0;
    }

    return data->mCollector->SuspectedCount();
}

bool
nsCycleCollector_init()
{
    MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
    MOZ_ASSERT(!sCollectorData.initialized(), "Called twice!?");

    return sCollectorData.init();
}

void
nsCycleCollector_startup()
{
    MOZ_ASSERT(sCollectorData.initialized(),
               "Forgot to call nsCycleCollector_init!");
    if (sCollectorData.get()) {
        MOZ_CRASH();
    }

    CollectorData* data = new CollectorData;
    data->mCollector = new nsCycleCollector();
    data->mRuntime = nullptr;

    sCollectorData.set(data);
}

void
nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void
nsCycleCollector_setForgetSkippableCallback(CC_ForgetSkippableCallback aCB)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    data->mCollector->SetForgetSkippableCallback(aCB);
}

void
nsCycleCollector_forgetSkippable(bool aRemoveChildlessNodes,
                                 bool aAsyncSnowWhiteFreeing)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_forgetSkippable");
    TimeLog timeLog;
    data->mCollector->ForgetSkippable(aRemoveChildlessNodes,
                                      aAsyncSnowWhiteFreeing);
    timeLog.Checkpoint("ForgetSkippable()");
}

void
nsCycleCollector_dispatchDeferredDeletion(bool aContinuation)
{
    CollectorData *data = sCollectorData.get();

    if (!data || !data->mRuntime) {
        return;
    }

    data->mRuntime->DispatchDeferredDeletion(aContinuation);
}

bool
nsCycleCollector_doDeferredDeletion()
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);
    MOZ_ASSERT(data->mRuntime);

    return data->mCollector->FreeSnowWhite(false);
}

void
nsCycleCollector_collect(nsICycleCollectorListener *aManualListener)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_collect");
    SliceBudget unlimitedBudget;
    data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
}

void
nsCycleCollector_collectSlice(int64_t aSliceTime)
{
    CollectorData *data = sCollectorData.get();

    // We should have started the cycle collector by now.
    MOZ_ASSERT(data);
    MOZ_ASSERT(data->mCollector);

    PROFILER_LABEL("CC", "nsCycleCollector_collectSlice");
    SliceBudget budget;
    if (aSliceTime > 0) {
        budget = SliceBudget::TimeBudget(aSliceTime);
    } else if (aSliceTime == 0) {
        budget = SliceBudget::WorkBudget(1);
    }
    data->mCollector->Collect(SliceCC, budget, nullptr);
}
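
// Budget selection above (illustrative summary):
//   aSliceTime >  0 -> time-limited slice of aSliceTime
//   aSliceTime == 0 -> minimal work budget (a single unit of work)
//   aSliceTime <  0 -> default-constructed SliceBudget, i.e. unlimited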

void
nsCycleCollector_prepareForGarbageCollection()
{
    CollectorData *data = sCollectorData.get();

    MOZ_ASSERT(data);

    if (!data->mCollector) {
        return;
    }

    data->mCollector->PrepareForGarbageCollection();
}

void
nsCycleCollector_shutdown()
{
    CollectorData *data = sCollectorData.get();

    if (data) {
        MOZ_ASSERT(data->mCollector);
        PROFILER_LABEL("CC", "nsCycleCollector_shutdown");
        data->mCollector->Shutdown();
        data->mCollector = nullptr;
        if (!data->mRuntime) {
            delete data;
            sCollectorData.set(nullptr);
        }
    }
}
