js/src/jsgc.cpp

author       Michael Schloh von Bennewitz <michael@schloh.com>
date         Sat, 03 Jan 2015 20:18:00 +0100
branch       TOR_BUG_3246
changeset    7:129ffea94266
permissions  -rw-r--r--

Conditionally enable double key logic according to private browsing mode or the
privacy.thirdparty.isolate preference, and implement it in GetCookieStringCommon
and FindCookie where it counts...
With some reservations about how to convince FindCookie callers to test the
condition and pass a nullptr when double key logic is disabled.
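
For illustration only, a minimal sketch of the gating described above, using a
hypothetical helper and parameter names (the actual change presumably lives in
the cookie service code, not in the file below):

    // Hypothetical helper, for illustration: decide whether cookie lookups
    // should be double-keyed on the first party. FindCookie callers would pass
    // the result straight through, so returning nullptr disables the double
    // key logic.
    static const char *
    MaybeFirstPartyKey(bool isPrivateBrowsing, bool isolateThirdParty,
                       const char *firstPartyHost)
    {
        // Double keying applies in private browsing mode or when the
        // privacy.thirdparty.isolate preference is set.
        if (isPrivateBrowsing || isolateThirdParty)
            return firstPartyHost;
        return nullptr;
    }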

     1 /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
     2  * vim: set ts=8 sts=4 et sw=4 tw=99:
     3  * This Source Code Form is subject to the terms of the Mozilla Public
     4  * License, v. 2.0. If a copy of the MPL was not distributed with this
     5  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
     7 /*
     8  * This code implements an incremental mark-and-sweep garbage collector, with
     9  * most sweeping carried out in the background on a parallel thread.
    10  *
    11  * Full vs. zone GC
    12  * ----------------
    13  *
    14  * The collector can collect all zones at once, or a subset. These types of
    15  * collection are referred to as a full GC and a zone GC respectively.
    16  *
    17  * The atoms zone is only collected in a full GC since objects in any zone may
    18  * have pointers to atoms, and these are not recorded in the cross compartment
    19  * pointer map. Also, the atoms zone is not collected if any thread has an
    20  * AutoKeepAtoms instance on the stack, or there are any exclusive threads using
    21  * the runtime.
    22  *
    23  * It is possible for an incremental collection that started out as a full GC to
    24  * become a zone GC if new zones are created during the course of the
    25  * collection.
    26  *
    27  * Incremental collection
    28  * ----------------------
    29  *
    30  * For a collection to be carried out incrementally the following conditions
    31  * must be met:
    32  *  - the collection must be run by calling js::GCSlice() rather than js::GC()
    33  *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
    34  *    JS_SetGCParameter()
    35  *  - no thread may have an AutoKeepAtoms instance on the stack
    36  *  - all native objects that have their own trace hook must indicate that they
    37  *    implement read and write barriers with the JSCLASS_IMPLEMENTS_BARRIERS
    38  *    flag
    39  *
    40  * The last condition is an engine-internal mechanism to ensure that incremental
    41  * collection is not carried out without the correct barriers being implemented.
    42  * For more information see 'Incremental marking' below.
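 * For example, an embedder typically opts in with something along these lines
 * (illustrative call, not a requirement imposed by this file):
 *
 *     JS_SetGCParameter(rt, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *
 * and then drives the collection with js::GCSlice() as described above.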
    43  *
    44  * If the collection is not incremental, all foreground activity happens inside
    45  * a single call to GC() or GCSlice(). However the collection is not complete
    46  * until the background sweeping activity has finished.
    47  *
    48  * An incremental collection proceeds as a series of slices, interleaved with
    49  * mutator activity, i.e. running JavaScript code. Slices are limited by a time
    50  * budget. The slice finishes as soon as possible after the requested time has
    51  * passed.
    52  *
    53  * Collector states
    54  * ----------------
    55  *
    56  * The collector proceeds through the following states, the current state being
    57  * held in JSRuntime::gcIncrementalState:
    58  *
    59  *  - MARK_ROOTS - marks the stack and other roots
    60  *  - MARK       - incrementally marks reachable things
    61  *  - SWEEP      - sweeps zones in groups and continues marking unswept zones
    62  *
    63  * The MARK_ROOTS activity always takes place in the first slice. The next two
    64  * states can take place over one or more slices.
    65  *
    66  * In other words an incremental collection proceeds like this:
    67  *
    68  * Slice 1:   MARK_ROOTS: Roots pushed onto the mark stack.
    69  *            MARK:       The mark stack is processed by popping an element,
    70  *                        marking it, and pushing its children.
    71  *
    72  *          ... JS code runs ...
    73  *
    74  * Slice 2:   MARK:       More mark stack processing.
    75  *
    76  *          ... JS code runs ...
    77  *
    78  * Slice n-1: MARK:       More mark stack processing.
    79  *
    80  *          ... JS code runs ...
    81  *
    82  * Slice n:   MARK:       Mark stack is completely drained.
    83  *            SWEEP:      Select first group of zones to sweep and sweep them.
    84  *
    85  *          ... JS code runs ...
    86  *
    87  * Slice n+1: SWEEP:      Mark objects in unswept zones that were newly
    88  *                        identified as alive (see below). Then sweep more zone
    89  *                        groups.
    90  *
    91  *          ... JS code runs ...
    92  *
    93  * Slice n+2: SWEEP:      Mark objects in unswept zones that were newly
    94  *                        identified as alive. Then sweep more zone groups.
    95  *
    96  *          ... JS code runs ...
    97  *
    98  * Slice m:   SWEEP:      Sweeping is finished, and background sweeping
    99  *                        started on the helper thread.
   100  *
   101  *          ... JS code runs, remaining sweeping done on background thread ...
   102  *
   103  * When background sweeping finishes the GC is complete.
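 * As an illustration only (a sketch of one possible embedding, not an API
 * contract), a caller could drive such a collection to completion with a
 * 10 ms budget per slice roughly as follows, letting JS run between calls:
 *
 *     js::GCSlice(rt, GC_NORMAL, JS::gcreason::API, 10);
 *     while (rt->gcIncrementalState != NO_INCREMENTAL)
 *         js::GCSlice(rt, GC_NORMAL, JS::gcreason::API, 10);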
   104  *
   105  * Incremental marking
   106  * -------------------
   107  *
   108  * Incremental collection requires close collaboration with the mutator (i.e.,
   109  * JS code) to guarantee correctness.
   110  *
   111  *  - During an incremental GC, if a memory location (except a root) is written
   112  *    to, then the value it previously held must be marked. Write barriers
   113  *    ensure this.
   114  *
   115  *  - Any object that is allocated during incremental GC must start out marked.
   116  *
   117  *  - Roots are marked in the first slice and hence don't need write barriers.
   118  *    Roots are things like the C stack and the VM stack.
   119  *
   120  * The problem that write barriers solve is that between slices the mutator can
   121  * change the object graph. We must ensure that it cannot do this in such a way
   122  * that makes us fail to mark a reachable object (marking an unreachable object
   123  * is tolerable).
   124  *
   125  * We use a snapshot-at-the-beginning algorithm to do this. This means that we
   126  * promise to mark at least everything that is reachable at the beginning of
   127  * collection. To implement it we mark the old contents of every non-root memory
   128  * location written to by the mutator while the collection is in progress, using
   129  * write barriers. This is described in gc/Barrier.h.
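 * As a minimal sketch (hypothetical names, not the code in gc/Barrier.h), a
 * pre-barriered store amounts to:
 *
 *     void ExampleBarrieredSlot::set(JSObject *newValue) {
 *         if (zone->needsBarrier() && value)
 *             MarkOldValue(value);   // hypothetical helper: push the old
 *                                    // referent onto the mark stack
 *         value = newValue;
 *     }
 *
 * so everything reachable at the start of the collection still gets marked.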
   130  *
   131  * Incremental sweeping
   132  * --------------------
   133  *
   134  * Sweeping is difficult to do incrementally because object finalizers must be
   135  * run at the start of sweeping, before any mutator code runs. The reason is
   136  * that some objects use their finalizers to remove themselves from caches. If
   137  * mutator code was allowed to run after the start of sweeping, it could observe
   138  * the state of the cache and create a new reference to an object that was just
   139  * about to be destroyed.
   140  *
   141  * Sweeping all finalizable objects in one go would introduce long pauses, so
    142  * instead sweeping is broken up into groups of zones. Zones which are not yet
   143  * being swept are still marked, so the issue above does not apply.
   144  *
   145  * The order of sweeping is restricted by cross compartment pointers - for
   146  * example say that object |a| from zone A points to object |b| in zone B and
   147  * neither object was marked when we transitioned to the SWEEP phase. Imagine we
   148  * sweep B first and then return to the mutator. It's possible that the mutator
   149  * could cause |a| to become alive through a read barrier (perhaps it was a
   150  * shape that was accessed via a shape table). Then we would need to mark |b|,
   151  * which |a| points to, but |b| has already been swept.
   152  *
   153  * So if there is such a pointer then marking of zone B must not finish before
   154  * marking of zone A.  Pointers which form a cycle between zones therefore
   155  * restrict those zones to being swept at the same time, and these are found
   156  * using Tarjan's algorithm for finding the strongly connected components of a
    157  * graph. (An illustrative sketch of this grouping follows this comment.)
   158  *
   159  * GC things without finalizers, and things with finalizers that are able to run
   160  * in the background, are swept on the background thread. This accounts for most
   161  * of the sweeping work.
   162  *
   163  * Reset
   164  * -----
   165  *
   166  * During incremental collection it is possible, although unlikely, for
   167  * conditions to change such that incremental collection is no longer safe. In
   168  * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
   169  * the mark state, this just stops marking, but if we have started sweeping
   170  * already, we continue until we have swept the current zone group. Following a
   171  * reset, a new non-incremental collection is started.
   172  */
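/*
 * Editor's illustration, not part of the original file: the sweep groups
 * described above are the strongly connected components of the cross-zone
 * pointer graph. The real collector uses the helpers in gc/FindSCCs.h; the
 * self-contained sketch below shows the same idea with Tarjan's algorithm
 * over a hypothetical adjacency-list representation of zones.
 */
#include <algorithm>
#include <vector>

namespace sweep_group_example {

struct ZoneNode
{
    std::vector<int> edges;   // zones this zone has pointers into
    int index = -1;           // Tarjan bookkeeping: visitation order
    int lowlink = -1;         // smallest index reachable from this node
    bool onStack = false;
};

static void
FindGroupFrom(std::vector<ZoneNode> &zones, int v, int &nextIndex,
              std::vector<int> &stack, std::vector<std::vector<int>> &groups)
{
    ZoneNode &node = zones[v];
    node.index = node.lowlink = nextIndex++;
    stack.push_back(v);
    node.onStack = true;

    for (int w : node.edges) {
        if (zones[w].index == -1) {
            FindGroupFrom(zones, w, nextIndex, stack, groups);
            node.lowlink = std::min(node.lowlink, zones[w].lowlink);
        } else if (zones[w].onStack) {
            node.lowlink = std::min(node.lowlink, zones[w].index);
        }
    }

    if (node.lowlink == node.index) {
        // v roots a strongly connected component: every zone still on the
        // stack above it forms one sweep group and must be swept together.
        groups.emplace_back();
        int w;
        do {
            w = stack.back();
            stack.pop_back();
            zones[w].onStack = false;
            groups.back().push_back(w);
        } while (w != v);
    }
}

// Unused in this file; shown only to make the sketch self-contained.
static std::vector<std::vector<int>>
ComputeSweepGroups(std::vector<ZoneNode> &zones)
{
    std::vector<std::vector<int>> groups;
    std::vector<int> stack;
    int nextIndex = 0;
    for (int v = 0; v < int(zones.size()); v++) {
        if (zones[v].index == -1)
            FindGroupFrom(zones, v, nextIndex, stack, groups);
    }
    return groups;
}

} // namespace sweep_group_example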
   174 #include "jsgcinlines.h"
   176 #include "mozilla/ArrayUtils.h"
   177 #include "mozilla/DebugOnly.h"
   178 #include "mozilla/MemoryReporting.h"
   179 #include "mozilla/Move.h"
   181 #include <string.h>     /* for memset used when DEBUG */
   182 #ifndef XP_WIN
   183 # include <unistd.h>
   184 #endif
   186 #include "jsapi.h"
   187 #include "jsatom.h"
   188 #include "jscntxt.h"
   189 #include "jscompartment.h"
   190 #include "jsobj.h"
   191 #include "jsscript.h"
   192 #include "jstypes.h"
   193 #include "jsutil.h"
   194 #include "jswatchpoint.h"
   195 #include "jsweakmap.h"
   196 #ifdef XP_WIN
   197 # include "jswin.h"
   198 #endif
   199 #include "prmjtime.h"
   201 #include "gc/FindSCCs.h"
   202 #include "gc/GCInternals.h"
   203 #include "gc/Marking.h"
   204 #include "gc/Memory.h"
   205 #ifdef JS_ION
   206 # include "jit/BaselineJIT.h"
   207 #endif
   208 #include "jit/IonCode.h"
   209 #include "js/SliceBudget.h"
   210 #include "vm/Debugger.h"
   211 #include "vm/ForkJoin.h"
   212 #include "vm/ProxyObject.h"
   213 #include "vm/Shape.h"
   214 #include "vm/String.h"
   215 #include "vm/TraceLogging.h"
   216 #include "vm/WrapperObject.h"
   218 #include "jsobjinlines.h"
   219 #include "jsscriptinlines.h"
   221 #include "vm/Stack-inl.h"
   222 #include "vm/String-inl.h"
   224 using namespace js;
   225 using namespace js::gc;
   227 using mozilla::ArrayEnd;
   228 using mozilla::DebugOnly;
   229 using mozilla::Maybe;
   230 using mozilla::Swap;
   232 /* Perform a Full GC every 20 seconds if MaybeGC is called */
   233 static const uint64_t GC_IDLE_FULL_SPAN = 20 * 1000 * 1000;
   235 /* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
   236 static const int IGC_MARK_SLICE_MULTIPLIER = 2;
   238 #if defined(ANDROID) || defined(MOZ_B2G)
   239 static const int MAX_EMPTY_CHUNK_COUNT = 2;
   240 #else
   241 static const int MAX_EMPTY_CHUNK_COUNT = 30;
   242 #endif
   244 /* This array should be const, but that doesn't link right under GCC. */
   245 const AllocKind gc::slotsToThingKind[] = {
   246     /* 0 */  FINALIZE_OBJECT0,  FINALIZE_OBJECT2,  FINALIZE_OBJECT2,  FINALIZE_OBJECT4,
   247     /* 4 */  FINALIZE_OBJECT4,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,
   248     /* 8 */  FINALIZE_OBJECT8,  FINALIZE_OBJECT12, FINALIZE_OBJECT12, FINALIZE_OBJECT12,
   249     /* 12 */ FINALIZE_OBJECT12, FINALIZE_OBJECT16, FINALIZE_OBJECT16, FINALIZE_OBJECT16,
   250     /* 16 */ FINALIZE_OBJECT16
   251 };
   253 static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
   254               "We have defined a slot count for each kind.");
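/*
 * Editor's note (illustration, not part of the original file): the table above
 * rounds a fixed-slot count up to the next supported bucket, e.g. an object
 * needing 5 slots is allocated as FINALIZE_OBJECT8. A hypothetical lookup
 * helper is simply:
 */
static inline AllocKind
ExampleThingKindForSlotCount(size_t nslots)
{
    // Callers of the real table are expected to stay below
    // SLOTS_TO_THING_KIND_LIMIT; clamp here so the sketch is total.
    if (nslots >= SLOTS_TO_THING_KIND_LIMIT)
        return FINALIZE_OBJECT16;
    return slotsToThingKind[nslots];
}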
   256 const uint32_t Arena::ThingSizes[] = {
   257     sizeof(JSObject),           /* FINALIZE_OBJECT0             */
   258     sizeof(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
   259     sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
   260     sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
   261     sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
   262     sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
   263     sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
   264     sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
   265     sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
   266     sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
   267     sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
   268     sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
   269     sizeof(JSScript),           /* FINALIZE_SCRIPT              */
   270     sizeof(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
   271     sizeof(Shape),              /* FINALIZE_SHAPE               */
   272     sizeof(BaseShape),          /* FINALIZE_BASE_SHAPE          */
   273     sizeof(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
   274     sizeof(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
   275     sizeof(JSString),           /* FINALIZE_STRING              */
   276     sizeof(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
   277     sizeof(jit::JitCode),       /* FINALIZE_JITCODE             */
   278 };
   280 #define OFFSET(type) uint32_t(sizeof(ArenaHeader) + (ArenaSize - sizeof(ArenaHeader)) % sizeof(type))
   282 const uint32_t Arena::FirstThingOffsets[] = {
   283     OFFSET(JSObject),           /* FINALIZE_OBJECT0             */
   284     OFFSET(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
   285     OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
   286     OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
   287     OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
   288     OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
   289     OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
   290     OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
   291     OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
   292     OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
   293     OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
   294     OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
   295     OFFSET(JSScript),           /* FINALIZE_SCRIPT              */
   296     OFFSET(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
   297     OFFSET(Shape),              /* FINALIZE_SHAPE               */
   298     OFFSET(BaseShape),          /* FINALIZE_BASE_SHAPE          */
   299     OFFSET(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
   300     OFFSET(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
   301     OFFSET(JSString),           /* FINALIZE_STRING              */
   302     OFFSET(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
   303     OFFSET(jit::JitCode),       /* FINALIZE_JITCODE             */
   304 };
   306 #undef OFFSET
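/*
 * Editor's note with illustrative arithmetic (made-up sizes, not the real
 * ones): OFFSET(T) pushes the first thing forward so that the space after it
 * divides evenly into things. For a 4096-byte arena, a hypothetical 40-byte
 * ArenaHeader and a 32-byte thing:
 *
 *     offset = 40 + (4096 - 40) % 32 = 40 + 24 = 64
 *
 * leaving (4096 - 64) / 32 = 126 whole things and no wasted tail space.
 */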
   308 /*
   309  * Finalization order for incrementally swept things.
   310  */
   312 static const AllocKind FinalizePhaseStrings[] = {
   313     FINALIZE_EXTERNAL_STRING
   314 };
   316 static const AllocKind FinalizePhaseScripts[] = {
   317     FINALIZE_SCRIPT,
   318     FINALIZE_LAZY_SCRIPT
   319 };
   321 static const AllocKind FinalizePhaseJitCode[] = {
   322     FINALIZE_JITCODE
   323 };
   325 static const AllocKind * const FinalizePhases[] = {
   326     FinalizePhaseStrings,
   327     FinalizePhaseScripts,
   328     FinalizePhaseJitCode
   329 };
   330 static const int FinalizePhaseCount = sizeof(FinalizePhases) / sizeof(AllocKind*);
   332 static const int FinalizePhaseLength[] = {
   333     sizeof(FinalizePhaseStrings) / sizeof(AllocKind),
   334     sizeof(FinalizePhaseScripts) / sizeof(AllocKind),
   335     sizeof(FinalizePhaseJitCode) / sizeof(AllocKind)
   336 };
   338 static const gcstats::Phase FinalizePhaseStatsPhase[] = {
   339     gcstats::PHASE_SWEEP_STRING,
   340     gcstats::PHASE_SWEEP_SCRIPT,
   341     gcstats::PHASE_SWEEP_JITCODE
   342 };
   344 /*
   345  * Finalization order for things swept in the background.
   346  */
   348 static const AllocKind BackgroundPhaseObjects[] = {
   349     FINALIZE_OBJECT0_BACKGROUND,
   350     FINALIZE_OBJECT2_BACKGROUND,
   351     FINALIZE_OBJECT4_BACKGROUND,
   352     FINALIZE_OBJECT8_BACKGROUND,
   353     FINALIZE_OBJECT12_BACKGROUND,
   354     FINALIZE_OBJECT16_BACKGROUND
   355 };
   357 static const AllocKind BackgroundPhaseStrings[] = {
   358     FINALIZE_FAT_INLINE_STRING,
   359     FINALIZE_STRING
   360 };
   362 static const AllocKind BackgroundPhaseShapes[] = {
   363     FINALIZE_SHAPE,
   364     FINALIZE_BASE_SHAPE,
   365     FINALIZE_TYPE_OBJECT
   366 };
   368 static const AllocKind * const BackgroundPhases[] = {
   369     BackgroundPhaseObjects,
   370     BackgroundPhaseStrings,
   371     BackgroundPhaseShapes
   372 };
   373 static const int BackgroundPhaseCount = sizeof(BackgroundPhases) / sizeof(AllocKind*);
   375 static const int BackgroundPhaseLength[] = {
   376     sizeof(BackgroundPhaseObjects) / sizeof(AllocKind),
   377     sizeof(BackgroundPhaseStrings) / sizeof(AllocKind),
   378     sizeof(BackgroundPhaseShapes) / sizeof(AllocKind)
   379 };
   381 #ifdef DEBUG
   382 void
   383 ArenaHeader::checkSynchronizedWithFreeList() const
   384 {
   385     /*
    386      * Do not allow access to the free list when its real head is still stored
   387      * in FreeLists and is not synchronized with this one.
   388      */
   389     JS_ASSERT(allocated());
   391     /*
   392      * We can be called from the background finalization thread when the free
   393      * list in the zone can mutate at any moment. We cannot do any
   394      * checks in this case.
   395      */
   396     if (IsBackgroundFinalized(getAllocKind()) && zone->runtimeFromAnyThread()->gcHelperThread.onBackgroundThread())
   397         return;
   399     FreeSpan firstSpan = FreeSpan::decodeOffsets(arenaAddress(), firstFreeSpanOffsets);
   400     if (firstSpan.isEmpty())
   401         return;
   402     const FreeSpan *list = zone->allocator.arenas.getFreeList(getAllocKind());
   403     if (list->isEmpty() || firstSpan.arenaAddress() != list->arenaAddress())
   404         return;
   406     /*
   407      * Here this arena has free things, FreeList::lists[thingKind] is not
    408      * empty and also points to this arena. Thus they must be the same.
   409      */
   410     JS_ASSERT(firstSpan.isSameNonEmptySpan(list));
   411 }
   412 #endif
   414 /* static */ void
   415 Arena::staticAsserts()
   416 {
   417     static_assert(JS_ARRAY_LENGTH(ThingSizes) == FINALIZE_LIMIT, "We have defined all thing sizes.");
   418     static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == FINALIZE_LIMIT, "We have defined all offsets.");
   419 }
   421 void
   422 Arena::setAsFullyUnused(AllocKind thingKind)
   423 {
   424     FreeSpan entireList;
   425     entireList.first = thingsStart(thingKind);
   426     uintptr_t arenaAddr = aheader.arenaAddress();
   427     entireList.last = arenaAddr | ArenaMask;
   428     aheader.setFirstFreeSpan(&entireList);
   429 }
   431 template<typename T>
   432 inline bool
   433 Arena::finalize(FreeOp *fop, AllocKind thingKind, size_t thingSize)
   434 {
   435     /* Enforce requirements on size of T. */
   436     JS_ASSERT(thingSize % CellSize == 0);
   437     JS_ASSERT(thingSize <= 255);
   439     JS_ASSERT(aheader.allocated());
   440     JS_ASSERT(thingKind == aheader.getAllocKind());
   441     JS_ASSERT(thingSize == aheader.getThingSize());
   442     JS_ASSERT(!aheader.hasDelayedMarking);
   443     JS_ASSERT(!aheader.markOverflow);
   444     JS_ASSERT(!aheader.allocatedDuringIncremental);
   446     uintptr_t thing = thingsStart(thingKind);
   447     uintptr_t lastByte = thingsEnd() - 1;
   449     FreeSpan nextFree(aheader.getFirstFreeSpan());
   450     nextFree.checkSpan();
   452     FreeSpan newListHead;
   453     FreeSpan *newListTail = &newListHead;
   454     uintptr_t newFreeSpanStart = 0;
   455     bool allClear = true;
   456     DebugOnly<size_t> nmarked = 0;
   457     for (;; thing += thingSize) {
   458         JS_ASSERT(thing <= lastByte + 1);
   459         if (thing == nextFree.first) {
   460             JS_ASSERT(nextFree.last <= lastByte);
   461             if (nextFree.last == lastByte)
   462                 break;
   463             JS_ASSERT(Arena::isAligned(nextFree.last, thingSize));
   464             if (!newFreeSpanStart)
   465                 newFreeSpanStart = thing;
   466             thing = nextFree.last;
   467             nextFree = *nextFree.nextSpan();
   468             nextFree.checkSpan();
   469         } else {
   470             T *t = reinterpret_cast<T *>(thing);
   471             if (t->isMarked()) {
   472                 allClear = false;
   473                 nmarked++;
   474                 if (newFreeSpanStart) {
   475                     JS_ASSERT(thing >= thingsStart(thingKind) + thingSize);
   476                     newListTail->first = newFreeSpanStart;
   477                     newListTail->last = thing - thingSize;
   478                     newListTail = newListTail->nextSpanUnchecked(thingSize);
   479                     newFreeSpanStart = 0;
   480                 }
   481             } else {
   482                 if (!newFreeSpanStart)
   483                     newFreeSpanStart = thing;
   484                 t->finalize(fop);
   485                 JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize);
   486             }
   487         }
   488     }
   490     if (allClear) {
   491         JS_ASSERT(newListTail == &newListHead);
   492         JS_ASSERT(!newFreeSpanStart ||
   493                   newFreeSpanStart == thingsStart(thingKind));
   494         JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data));
   495         return true;
   496     }
   498     newListTail->first = newFreeSpanStart ? newFreeSpanStart : nextFree.first;
   499     JS_ASSERT(Arena::isAligned(newListTail->first, thingSize));
   500     newListTail->last = lastByte;
   502 #ifdef DEBUG
   503     size_t nfree = 0;
   504     for (const FreeSpan *span = &newListHead; span != newListTail; span = span->nextSpan()) {
   505         span->checkSpan();
   506         JS_ASSERT(Arena::isAligned(span->first, thingSize));
   507         JS_ASSERT(Arena::isAligned(span->last, thingSize));
   508         nfree += (span->last - span->first) / thingSize + 1;
   509         JS_ASSERT(nfree + nmarked <= thingsPerArena(thingSize));
   510     }
   511     nfree += (newListTail->last + 1 - newListTail->first) / thingSize;
   512     JS_ASSERT(nfree + nmarked == thingsPerArena(thingSize));
   513 #endif
   514     aheader.setFirstFreeSpan(&newListHead);
   516     return false;
   517 }
   519 /*
    520  * Insert an arena into the list in the appropriate position and update the cursor
   521  * to ensure that any arena before the cursor is full.
   522  */
   523 void ArenaList::insert(ArenaHeader *a)
   524 {
   525     JS_ASSERT(a);
   526     JS_ASSERT_IF(!head, cursor == &head);
   527     a->next = *cursor;
   528     *cursor = a;
   529     if (!a->hasFreeThings())
   530         cursor = &a->next;
   531 }
   533 template<typename T>
   534 static inline bool
   535 FinalizeTypedArenas(FreeOp *fop,
   536                     ArenaHeader **src,
   537                     ArenaList &dest,
   538                     AllocKind thingKind,
   539                     SliceBudget &budget)
   540 {
   541     /*
   542      * Finalize arenas from src list, releasing empty arenas and inserting the
   543      * others into dest in an appropriate position.
   544      */
   546     /*
   547      * During parallel sections, we sometimes finalize the parallel arenas,
   548      * but in that case, we want to hold on to the memory in our arena
   549      * lists, not offer it up for reuse.
   550      */
   551     bool releaseArenas = !InParallelSection();
   553     size_t thingSize = Arena::thingSize(thingKind);
   555     while (ArenaHeader *aheader = *src) {
   556         *src = aheader->next;
   557         bool allClear = aheader->getArena()->finalize<T>(fop, thingKind, thingSize);
   558         if (!allClear)
   559             dest.insert(aheader);
   560         else if (releaseArenas)
   561             aheader->chunk()->releaseArena(aheader);
   562         else
   563             aheader->chunk()->recycleArena(aheader, dest, thingKind);
   565         budget.step(Arena::thingsPerArena(thingSize));
   566         if (budget.isOverBudget())
   567             return false;
   568     }
   570     return true;
   571 }
   573 /*
   574  * Finalize the list. On return al->cursor points to the first non-empty arena
   575  * after the al->head.
   576  */
   577 static bool
   578 FinalizeArenas(FreeOp *fop,
   579                ArenaHeader **src,
   580                ArenaList &dest,
   581                AllocKind thingKind,
   582                SliceBudget &budget)
   583 {
   584     switch(thingKind) {
   585       case FINALIZE_OBJECT0:
   586       case FINALIZE_OBJECT0_BACKGROUND:
   587       case FINALIZE_OBJECT2:
   588       case FINALIZE_OBJECT2_BACKGROUND:
   589       case FINALIZE_OBJECT4:
   590       case FINALIZE_OBJECT4_BACKGROUND:
   591       case FINALIZE_OBJECT8:
   592       case FINALIZE_OBJECT8_BACKGROUND:
   593       case FINALIZE_OBJECT12:
   594       case FINALIZE_OBJECT12_BACKGROUND:
   595       case FINALIZE_OBJECT16:
   596       case FINALIZE_OBJECT16_BACKGROUND:
   597         return FinalizeTypedArenas<JSObject>(fop, src, dest, thingKind, budget);
   598       case FINALIZE_SCRIPT:
   599         return FinalizeTypedArenas<JSScript>(fop, src, dest, thingKind, budget);
   600       case FINALIZE_LAZY_SCRIPT:
   601         return FinalizeTypedArenas<LazyScript>(fop, src, dest, thingKind, budget);
   602       case FINALIZE_SHAPE:
   603         return FinalizeTypedArenas<Shape>(fop, src, dest, thingKind, budget);
   604       case FINALIZE_BASE_SHAPE:
   605         return FinalizeTypedArenas<BaseShape>(fop, src, dest, thingKind, budget);
   606       case FINALIZE_TYPE_OBJECT:
   607         return FinalizeTypedArenas<types::TypeObject>(fop, src, dest, thingKind, budget);
   608       case FINALIZE_STRING:
   609         return FinalizeTypedArenas<JSString>(fop, src, dest, thingKind, budget);
   610       case FINALIZE_FAT_INLINE_STRING:
   611         return FinalizeTypedArenas<JSFatInlineString>(fop, src, dest, thingKind, budget);
   612       case FINALIZE_EXTERNAL_STRING:
   613         return FinalizeTypedArenas<JSExternalString>(fop, src, dest, thingKind, budget);
   614       case FINALIZE_JITCODE:
   615 #ifdef JS_ION
   616       {
   617         // JitCode finalization may release references on an executable
   618         // allocator that is accessed when requesting interrupts.
   619         JSRuntime::AutoLockForInterrupt lock(fop->runtime());
   620         return FinalizeTypedArenas<jit::JitCode>(fop, src, dest, thingKind, budget);
   621       }
   622 #endif
   623       default:
   624         MOZ_ASSUME_UNREACHABLE("Invalid alloc kind");
   625     }
   626 }
   628 static inline Chunk *
   629 AllocChunk(JSRuntime *rt)
   630 {
   631     return static_cast<Chunk *>(MapAlignedPages(rt, ChunkSize, ChunkSize));
   632 }
   634 static inline void
   635 FreeChunk(JSRuntime *rt, Chunk *p)
   636 {
   637     UnmapPages(rt, static_cast<void *>(p), ChunkSize);
   638 }
   640 inline bool
   641 ChunkPool::wantBackgroundAllocation(JSRuntime *rt) const
   642 {
   643     /*
   644      * To minimize memory waste we do not want to run the background chunk
    645      * allocation if we have empty chunks or when the runtime needs just a few
   646      * of them.
   647      */
   648     return rt->gcHelperThread.canBackgroundAllocate() &&
   649            emptyCount == 0 &&
   650            rt->gcChunkSet.count() >= 4;
   651 }
   653 /* Must be called with the GC lock taken. */
   654 inline Chunk *
   655 ChunkPool::get(JSRuntime *rt)
   656 {
   657     JS_ASSERT(this == &rt->gcChunkPool);
   659     Chunk *chunk = emptyChunkListHead;
   660     if (chunk) {
   661         JS_ASSERT(emptyCount);
   662         emptyChunkListHead = chunk->info.next;
   663         --emptyCount;
   664     } else {
   665         JS_ASSERT(!emptyCount);
   666         chunk = Chunk::allocate(rt);
   667         if (!chunk)
   668             return nullptr;
   669         JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
   670     }
   671     JS_ASSERT(chunk->unused());
   672     JS_ASSERT(!rt->gcChunkSet.has(chunk));
   674     if (wantBackgroundAllocation(rt))
   675         rt->gcHelperThread.startBackgroundAllocationIfIdle();
   677     return chunk;
   678 }
   680 /* Must be called either during the GC or with the GC lock taken. */
   681 inline void
   682 ChunkPool::put(Chunk *chunk)
   683 {
   684     chunk->info.age = 0;
   685     chunk->info.next = emptyChunkListHead;
   686     emptyChunkListHead = chunk;
   687     emptyCount++;
   688 }
   690 /* Must be called either during the GC or with the GC lock taken. */
   691 Chunk *
   692 ChunkPool::expire(JSRuntime *rt, bool releaseAll)
   693 {
   694     JS_ASSERT(this == &rt->gcChunkPool);
   696     /*
   697      * Return old empty chunks to the system while preserving the order of
   698      * other chunks in the list. This way, if the GC runs several times
   699      * without emptying the list, the older chunks will stay at the tail
   700      * and are more likely to reach the max age.
   701      */
   702     Chunk *freeList = nullptr;
   703     int freeChunkCount = 0;
   704     for (Chunk **chunkp = &emptyChunkListHead; *chunkp; ) {
   705         JS_ASSERT(emptyCount);
   706         Chunk *chunk = *chunkp;
   707         JS_ASSERT(chunk->unused());
   708         JS_ASSERT(!rt->gcChunkSet.has(chunk));
   709         JS_ASSERT(chunk->info.age <= MAX_EMPTY_CHUNK_AGE);
   710         if (releaseAll || chunk->info.age == MAX_EMPTY_CHUNK_AGE ||
   711             freeChunkCount++ > MAX_EMPTY_CHUNK_COUNT)
   712         {
   713             *chunkp = chunk->info.next;
   714             --emptyCount;
   715             chunk->prepareToBeFreed(rt);
   716             chunk->info.next = freeList;
   717             freeList = chunk;
   718         } else {
   719             /* Keep the chunk but increase its age. */
   720             ++chunk->info.age;
   721             chunkp = &chunk->info.next;
   722         }
   723     }
   724     JS_ASSERT_IF(releaseAll, !emptyCount);
   725     return freeList;
   726 }
   728 static void
   729 FreeChunkList(JSRuntime *rt, Chunk *chunkListHead)
   730 {
   731     while (Chunk *chunk = chunkListHead) {
   732         JS_ASSERT(!chunk->info.numArenasFreeCommitted);
   733         chunkListHead = chunk->info.next;
   734         FreeChunk(rt, chunk);
   735     }
   736 }
   738 void
   739 ChunkPool::expireAndFree(JSRuntime *rt, bool releaseAll)
   740 {
   741     FreeChunkList(rt, expire(rt, releaseAll));
   742 }
   744 /* static */ Chunk *
   745 Chunk::allocate(JSRuntime *rt)
   746 {
   747     Chunk *chunk = AllocChunk(rt);
   748     if (!chunk)
   749         return nullptr;
   750     chunk->init(rt);
   751     rt->gcStats.count(gcstats::STAT_NEW_CHUNK);
   752     return chunk;
   753 }
   755 /* Must be called with the GC lock taken. */
   756 /* static */ inline void
   757 Chunk::release(JSRuntime *rt, Chunk *chunk)
   758 {
   759     JS_ASSERT(chunk);
   760     chunk->prepareToBeFreed(rt);
   761     FreeChunk(rt, chunk);
   762 }
   764 inline void
   765 Chunk::prepareToBeFreed(JSRuntime *rt)
   766 {
   767     JS_ASSERT(rt->gcNumArenasFreeCommitted >= info.numArenasFreeCommitted);
   768     rt->gcNumArenasFreeCommitted -= info.numArenasFreeCommitted;
   769     rt->gcStats.count(gcstats::STAT_DESTROY_CHUNK);
   771 #ifdef DEBUG
   772     /*
   773      * Let FreeChunkList detect a missing prepareToBeFreed call before it
   774      * frees chunk.
   775      */
   776     info.numArenasFreeCommitted = 0;
   777 #endif
   778 }
   780 void
   781 Chunk::init(JSRuntime *rt)
   782 {
   783     JS_POISON(this, JS_FRESH_TENURED_PATTERN, ChunkSize);
   785     /*
   786      * We clear the bitmap to guard against xpc_IsGrayGCThing being called on
   787      * uninitialized data, which would happen before the first GC cycle.
   788      */
   789     bitmap.clear();
   791     /*
   792      * Decommit the arenas. We do this after poisoning so that if the OS does
   793      * not have to recycle the pages, we still get the benefit of poisoning.
   794      */
   795     decommitAllArenas(rt);
   797     /* Initialize the chunk info. */
   798     info.age = 0;
   799     info.trailer.location = ChunkLocationTenuredHeap;
   800     info.trailer.runtime = rt;
   802     /* The rest of info fields are initialized in PickChunk. */
   803 }
   805 static inline Chunk **
   806 GetAvailableChunkList(Zone *zone)
   807 {
   808     JSRuntime *rt = zone->runtimeFromAnyThread();
   809     return zone->isSystem
   810            ? &rt->gcSystemAvailableChunkListHead
   811            : &rt->gcUserAvailableChunkListHead;
   812 }
   814 inline void
   815 Chunk::addToAvailableList(Zone *zone)
   816 {
   817     insertToAvailableList(GetAvailableChunkList(zone));
   818 }
   820 inline void
   821 Chunk::insertToAvailableList(Chunk **insertPoint)
   822 {
   823     JS_ASSERT(hasAvailableArenas());
   824     JS_ASSERT(!info.prevp);
   825     JS_ASSERT(!info.next);
   826     info.prevp = insertPoint;
   827     Chunk *insertBefore = *insertPoint;
   828     if (insertBefore) {
   829         JS_ASSERT(insertBefore->info.prevp == insertPoint);
   830         insertBefore->info.prevp = &info.next;
   831     }
   832     info.next = insertBefore;
   833     *insertPoint = this;
   834 }
   836 inline void
   837 Chunk::removeFromAvailableList()
   838 {
   839     JS_ASSERT(info.prevp);
   840     *info.prevp = info.next;
   841     if (info.next) {
   842         JS_ASSERT(info.next->info.prevp == &info.next);
   843         info.next->info.prevp = info.prevp;
   844     }
   845     info.prevp = nullptr;
   846     info.next = nullptr;
   847 }
   849 /*
   850  * Search for and return the next decommitted Arena. Our goal is to keep
   851  * lastDecommittedArenaOffset "close" to a free arena. We do this by setting
   852  * it to the most recently freed arena when we free, and forcing it to
   853  * the last alloc + 1 when we allocate.
   854  */
   855 uint32_t
   856 Chunk::findDecommittedArenaOffset()
   857 {
    858     /* Note: lastDecommittedArenaOffset can be past the end of the list. */
   859     for (unsigned i = info.lastDecommittedArenaOffset; i < ArenasPerChunk; i++)
   860         if (decommittedArenas.get(i))
   861             return i;
   862     for (unsigned i = 0; i < info.lastDecommittedArenaOffset; i++)
   863         if (decommittedArenas.get(i))
   864             return i;
   865     MOZ_ASSUME_UNREACHABLE("No decommitted arenas found.");
   866 }
   868 ArenaHeader *
   869 Chunk::fetchNextDecommittedArena()
   870 {
   871     JS_ASSERT(info.numArenasFreeCommitted == 0);
   872     JS_ASSERT(info.numArenasFree > 0);
   874     unsigned offset = findDecommittedArenaOffset();
   875     info.lastDecommittedArenaOffset = offset + 1;
   876     --info.numArenasFree;
   877     decommittedArenas.unset(offset);
   879     Arena *arena = &arenas[offset];
   880     MarkPagesInUse(info.trailer.runtime, arena, ArenaSize);
   881     arena->aheader.setAsNotAllocated();
   883     return &arena->aheader;
   884 }
   886 inline ArenaHeader *
   887 Chunk::fetchNextFreeArena(JSRuntime *rt)
   888 {
   889     JS_ASSERT(info.numArenasFreeCommitted > 0);
   890     JS_ASSERT(info.numArenasFreeCommitted <= info.numArenasFree);
   891     JS_ASSERT(info.numArenasFreeCommitted <= rt->gcNumArenasFreeCommitted);
   893     ArenaHeader *aheader = info.freeArenasHead;
   894     info.freeArenasHead = aheader->next;
   895     --info.numArenasFreeCommitted;
   896     --info.numArenasFree;
   897     --rt->gcNumArenasFreeCommitted;
   899     return aheader;
   900 }
   902 ArenaHeader *
   903 Chunk::allocateArena(Zone *zone, AllocKind thingKind)
   904 {
   905     JS_ASSERT(hasAvailableArenas());
   907     JSRuntime *rt = zone->runtimeFromAnyThread();
   908     if (!rt->isHeapMinorCollecting() && rt->gcBytes >= rt->gcMaxBytes)
   909         return nullptr;
   911     ArenaHeader *aheader = MOZ_LIKELY(info.numArenasFreeCommitted > 0)
   912                            ? fetchNextFreeArena(rt)
   913                            : fetchNextDecommittedArena();
   914     aheader->init(zone, thingKind);
   915     if (MOZ_UNLIKELY(!hasAvailableArenas()))
   916         removeFromAvailableList();
   918     rt->gcBytes += ArenaSize;
   919     zone->gcBytes += ArenaSize;
   921     if (zone->gcBytes >= zone->gcTriggerBytes) {
   922         AutoUnlockGC unlock(rt);
   923         TriggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER);
   924     }
   926     return aheader;
   927 }
   929 inline void
   930 Chunk::addArenaToFreeList(JSRuntime *rt, ArenaHeader *aheader)
   931 {
   932     JS_ASSERT(!aheader->allocated());
   933     aheader->next = info.freeArenasHead;
   934     info.freeArenasHead = aheader;
   935     ++info.numArenasFreeCommitted;
   936     ++info.numArenasFree;
   937     ++rt->gcNumArenasFreeCommitted;
   938 }
   940 void
   941 Chunk::recycleArena(ArenaHeader *aheader, ArenaList &dest, AllocKind thingKind)
   942 {
   943     aheader->getArena()->setAsFullyUnused(thingKind);
   944     dest.insert(aheader);
   945 }
   947 void
   948 Chunk::releaseArena(ArenaHeader *aheader)
   949 {
   950     JS_ASSERT(aheader->allocated());
   951     JS_ASSERT(!aheader->hasDelayedMarking);
   952     Zone *zone = aheader->zone;
   953     JSRuntime *rt = zone->runtimeFromAnyThread();
   954     AutoLockGC maybeLock;
   955     if (rt->gcHelperThread.sweeping())
   956         maybeLock.lock(rt);
   958     JS_ASSERT(rt->gcBytes >= ArenaSize);
   959     JS_ASSERT(zone->gcBytes >= ArenaSize);
   960     if (rt->gcHelperThread.sweeping())
   961         zone->reduceGCTriggerBytes(zone->gcHeapGrowthFactor * ArenaSize);
   962     rt->gcBytes -= ArenaSize;
   963     zone->gcBytes -= ArenaSize;
   965     aheader->setAsNotAllocated();
   966     addArenaToFreeList(rt, aheader);
   968     if (info.numArenasFree == 1) {
   969         JS_ASSERT(!info.prevp);
   970         JS_ASSERT(!info.next);
   971         addToAvailableList(zone);
   972     } else if (!unused()) {
   973         JS_ASSERT(info.prevp);
   974     } else {
   975         rt->gcChunkSet.remove(this);
   976         removeFromAvailableList();
   977         JS_ASSERT(info.numArenasFree == ArenasPerChunk);
   978         decommitAllArenas(rt);
   979         rt->gcChunkPool.put(this);
   980     }
   981 }
   983 /* The caller must hold the GC lock. */
   984 static Chunk *
   985 PickChunk(Zone *zone)
   986 {
   987     JSRuntime *rt = zone->runtimeFromAnyThread();
   988     Chunk **listHeadp = GetAvailableChunkList(zone);
   989     Chunk *chunk = *listHeadp;
   990     if (chunk)
   991         return chunk;
   993     chunk = rt->gcChunkPool.get(rt);
   994     if (!chunk)
   995         return nullptr;
   997     rt->gcChunkAllocationSinceLastGC = true;
   999     /*
  1000      * FIXME bug 583732 - chunk is newly allocated and cannot be present in
  1001      * the table so using ordinary lookupForAdd is suboptimal here.
  1002      */
  1003     GCChunkSet::AddPtr p = rt->gcChunkSet.lookupForAdd(chunk);
  1004     JS_ASSERT(!p);
  1005     if (!rt->gcChunkSet.add(p, chunk)) {
  1006         Chunk::release(rt, chunk);
   1007         return nullptr;
   1008     }
  1010     chunk->info.prevp = nullptr;
  1011     chunk->info.next = nullptr;
  1012     chunk->addToAvailableList(zone);
   1014     return chunk;
   1015 }
  1017 #ifdef JS_GC_ZEAL
  1019 extern void
   1020 js::SetGCZeal(JSRuntime *rt, uint8_t zeal, uint32_t frequency)
   1021 {
  1022     if (rt->gcVerifyPreData)
  1023         VerifyBarriers(rt, PreBarrierVerifier);
  1024     if (rt->gcVerifyPostData)
  1025         VerifyBarriers(rt, PostBarrierVerifier);
  1027 #ifdef JSGC_GENERATIONAL
  1028     if (rt->gcZeal_ == ZealGenerationalGCValue) {
  1029         MinorGC(rt, JS::gcreason::DEBUG_GC);
   1030         rt->gcNursery.leaveZealMode();
   1031     }
  1033     if (zeal == ZealGenerationalGCValue)
  1034         rt->gcNursery.enterZealMode();
  1035 #endif
  1037     bool schedule = zeal >= js::gc::ZealAllocValue;
  1038     rt->gcZeal_ = zeal;
  1039     rt->gcZealFrequency = frequency;
   1040     rt->gcNextScheduled = schedule ? frequency : 0;
   1041 }
   1043 static bool
   1044 InitGCZeal(JSRuntime *rt)
   1045 {
  1046     const char *env = getenv("JS_GC_ZEAL");
  1047     if (!env)
  1048         return true;
  1050     int zeal = -1;
  1051     int frequency = JS_DEFAULT_ZEAL_FREQ;
  1052     if (strcmp(env, "help") != 0) {
  1053         zeal = atoi(env);
  1054         const char *p = strchr(env, ',');
  1055         if (p)
   1056             frequency = atoi(p + 1);
   1057     }
  1059     if (zeal < 0 || zeal > ZealLimit || frequency < 0) {
  1060         fprintf(stderr,
  1061                 "Format: JS_GC_ZEAL=N[,F]\n"
  1062                 "N indicates \"zealousness\":\n"
  1063                 "  0: no additional GCs\n"
  1064                 "  1: additional GCs at common danger points\n"
  1065                 "  2: GC every F allocations (default: 100)\n"
  1066                 "  3: GC when the window paints (browser only)\n"
  1067                 "  4: Verify pre write barriers between instructions\n"
  1068                 "  5: Verify pre write barriers between paints\n"
  1069                 "  6: Verify stack rooting\n"
  1070                 "  7: Collect the nursery every N nursery allocations\n"
  1071                 "  8: Incremental GC in two slices: 1) mark roots 2) finish collection\n"
  1072                 "  9: Incremental GC in two slices: 1) mark all 2) new marking and finish\n"
  1073                 " 10: Incremental GC in multiple slices\n"
  1074                 " 11: Verify post write barriers between instructions\n"
  1075                 " 12: Verify post write barriers between paints\n"
  1076                 " 13: Purge analysis state every F allocations (default: 100)\n");
   1077         return false;
   1078     }
   1080     SetGCZeal(rt, zeal, frequency);
   1081     return true;
   1082 }
  1084 #endif
  1086 /* Lifetime for type sets attached to scripts containing observed types. */
  1087 static const int64_t JIT_SCRIPT_RELEASE_TYPES_INTERVAL = 60 * 1000 * 1000;
  1089 bool
   1090 js_InitGC(JSRuntime *rt, uint32_t maxbytes)
   1091 {
  1092     InitMemorySubsystem(rt);
  1094     if (!rt->gcChunkSet.init(INITIAL_CHUNK_CAPACITY))
  1095         return false;
  1097     if (!rt->gcRootsHash.init(256))
  1098         return false;
  1100     if (!rt->gcHelperThread.init())
  1101         return false;
  1103     /*
  1104      * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes
  1105      * for default backward API compatibility.
  1106      */
  1107     rt->gcMaxBytes = maxbytes;
  1108     rt->setGCMaxMallocBytes(maxbytes);
  1110 #ifndef JS_MORE_DETERMINISTIC
  1111     rt->gcJitReleaseTime = PRMJ_Now() + JIT_SCRIPT_RELEASE_TYPES_INTERVAL;
  1112 #endif
  1114 #ifdef JSGC_GENERATIONAL
  1115     if (!rt->gcNursery.init())
  1116         return false;
  1118     if (!rt->gcStoreBuffer.enable())
  1119         return false;
  1120 #endif
  1122 #ifdef JS_GC_ZEAL
  1123     if (!InitGCZeal(rt))
  1124         return false;
  1125 #endif
   1127     return true;
   1128 }
   1130 static void
   1131 RecordNativeStackTopForGC(JSRuntime *rt)
   1132 {
  1133     ConservativeGCData *cgcd = &rt->conservativeGC;
  1135 #ifdef JS_THREADSAFE
  1136     /* Record the stack top here only if we are called from a request. */
  1137     if (!rt->requestDepth)
  1138         return;
  1139 #endif
   1140     cgcd->recordStackTop();
   1141 }
   1143 void
   1144 js_FinishGC(JSRuntime *rt)
   1145 {
  1146     /*
  1147      * Wait until the background finalization stops and the helper thread
  1148      * shuts down before we forcefully release any remaining GC memory.
  1149      */
  1150     rt->gcHelperThread.finish();
  1152 #ifdef JS_GC_ZEAL
  1153     /* Free memory associated with GC verification. */
  1154     FinishVerifier(rt);
  1155 #endif
  1157     /* Delete all remaining zones. */
  1158     if (rt->gcInitialized) {
  1159         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  1160             for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next())
  1161                 js_delete(comp.get());
   1162             js_delete(zone.get());
   1163         }
   1164     }
  1166     rt->zones.clear();
  1168     rt->gcSystemAvailableChunkListHead = nullptr;
  1169     rt->gcUserAvailableChunkListHead = nullptr;
  1170     if (rt->gcChunkSet.initialized()) {
  1171         for (GCChunkSet::Range r(rt->gcChunkSet.all()); !r.empty(); r.popFront())
  1172             Chunk::release(rt, r.front());
   1173         rt->gcChunkSet.clear();
   1174     }
  1176     rt->gcChunkPool.expireAndFree(rt, true);
  1178     if (rt->gcRootsHash.initialized())
  1179         rt->gcRootsHash.clear();
  1181     rt->functionPersistentRooteds.clear();
  1182     rt->idPersistentRooteds.clear();
  1183     rt->objectPersistentRooteds.clear();
  1184     rt->scriptPersistentRooteds.clear();
  1185     rt->stringPersistentRooteds.clear();
   1186     rt->valuePersistentRooteds.clear();
   1187 }
  1189 template <typename T> struct BarrierOwner {};
  1190 template <typename T> struct BarrierOwner<T *> { typedef T result; };
  1191 template <> struct BarrierOwner<Value> { typedef HeapValue result; };
  1193 template <typename T>
  1194 static bool
   1195 AddRoot(JSRuntime *rt, T *rp, const char *name, JSGCRootType rootType)
   1196 {
  1197     /*
  1198      * Sometimes Firefox will hold weak references to objects and then convert
  1199      * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
  1200      * or ModifyBusyCount in workers). We need a read barrier to cover these
  1201      * cases.
  1202      */
  1203     if (rt->gcIncrementalState != NO_INCREMENTAL)
  1204         BarrierOwner<T>::result::writeBarrierPre(*rp);
   1206     return rt->gcRootsHash.put((void *)rp, RootInfo(name, rootType));
   1207 }
  1209 template <typename T>
  1210 static bool
   1211 AddRoot(JSContext *cx, T *rp, const char *name, JSGCRootType rootType)
   1212 {
  1213     bool ok = AddRoot(cx->runtime(), rp, name, rootType);
  1214     if (!ok)
  1215         JS_ReportOutOfMemory(cx);
   1216     return ok;
   1217 }
  1219 bool
   1220 js::AddValueRoot(JSContext *cx, Value *vp, const char *name)
   1221 {
   1222     return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
   1223 }
  1225 extern bool
   1226 js::AddValueRootRT(JSRuntime *rt, js::Value *vp, const char *name)
   1227 {
   1228     return AddRoot(rt, vp, name, JS_GC_ROOT_VALUE_PTR);
   1229 }
  1231 extern bool
   1232 js::AddStringRoot(JSContext *cx, JSString **rp, const char *name)
   1233 {
   1234     return AddRoot(cx, rp, name, JS_GC_ROOT_STRING_PTR);
   1235 }
  1237 extern bool
   1238 js::AddObjectRoot(JSContext *cx, JSObject **rp, const char *name)
   1239 {
   1240     return AddRoot(cx, rp, name, JS_GC_ROOT_OBJECT_PTR);
   1241 }
  1243 extern bool
   1244 js::AddObjectRoot(JSRuntime *rt, JSObject **rp, const char *name)
   1245 {
   1246     return AddRoot(rt, rp, name, JS_GC_ROOT_OBJECT_PTR);
   1247 }
  1249 extern bool
   1250 js::AddScriptRoot(JSContext *cx, JSScript **rp, const char *name)
   1251 {
   1252     return AddRoot(cx, rp, name, JS_GC_ROOT_SCRIPT_PTR);
   1253 }
  1255 extern JS_FRIEND_API(bool)
   1256 js::AddRawValueRoot(JSContext *cx, Value *vp, const char *name)
   1257 {
   1258     return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR);
   1259 }
  1261 extern JS_FRIEND_API(void)
   1262 js::RemoveRawValueRoot(JSContext *cx, Value *vp)
   1263 {
   1264     RemoveRoot(cx->runtime(), vp);
   1265 }
  1267 void
   1268 js::RemoveRoot(JSRuntime *rt, void *rp)
   1269 {
   1270     rt->gcRootsHash.remove(rp);
   1271     rt->gcPoke = true;
   1272 }
  1274 typedef RootedValueMap::Range RootRange;
  1275 typedef RootedValueMap::Entry RootEntry;
  1276 typedef RootedValueMap::Enum RootEnum;
  1278 static size_t
   1279 ComputeTriggerBytes(Zone *zone, size_t lastBytes, size_t maxBytes, JSGCInvocationKind gckind)
   1280 {
  1281     size_t base = gckind == GC_SHRINK ? lastBytes : Max(lastBytes, zone->runtimeFromMainThread()->gcAllocationThreshold);
  1282     double trigger = double(base) * zone->gcHeapGrowthFactor;
   1283     return size_t(Min(double(maxBytes), trigger));
   1284 }
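/*
 * Editor's note (illustrative numbers only): with lastBytes = 40 MB, an
 * allocation threshold below that, and gcHeapGrowthFactor = 1.5, the next
 * trigger computed above is 40 MB * 1.5 = 60 MB, clamped to maxBytes.
 */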
  1286 void
   1287 Zone::setGCLastBytes(size_t lastBytes, JSGCInvocationKind gckind)
   1288 {
  1289     /*
  1290      * The heap growth factor depends on the heap size after a GC and the GC frequency.
  1291      * For low frequency GCs (more than 1sec between GCs) we let the heap grow to 150%.
  1292      * For high frequency GCs we let the heap grow depending on the heap size:
  1293      *   lastBytes < highFrequencyLowLimit: 300%
  1294      *   lastBytes > highFrequencyHighLimit: 150%
  1295      *   otherwise: linear interpolation between 150% and 300% based on lastBytes
  1296      */
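    /*
     * Editor's note with illustrative parameter values (not necessarily the
     * defaults): if highFrequencyLowLimit = 100 MB grows at 300% and
     * highFrequencyHighLimit = 500 MB grows at 150%, a high frequency GC that
     * leaves lastBytes = 300 MB interpolates to
     *
     *     3.0 + (1.5 - 3.0) * (300 - 100) / (500 - 100) = 2.25
     *
     * so the heap may grow to 225% of its post-GC size before the next
     * trigger.
     */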
  1297     JSRuntime *rt = runtimeFromMainThread();
  1299     if (!rt->gcDynamicHeapGrowth) {
  1300         gcHeapGrowthFactor = 3.0;
  1301     } else if (lastBytes < 1 * 1024 * 1024) {
  1302         gcHeapGrowthFactor = rt->gcLowFrequencyHeapGrowth;
  1303     } else {
  1304         JS_ASSERT(rt->gcHighFrequencyHighLimitBytes > rt->gcHighFrequencyLowLimitBytes);
  1305         uint64_t now = PRMJ_Now();
  1306         if (rt->gcLastGCTime && rt->gcLastGCTime + rt->gcHighFrequencyTimeThreshold * PRMJ_USEC_PER_MSEC > now) {
  1307             if (lastBytes <= rt->gcHighFrequencyLowLimitBytes) {
  1308                 gcHeapGrowthFactor = rt->gcHighFrequencyHeapGrowthMax;
  1309             } else if (lastBytes >= rt->gcHighFrequencyHighLimitBytes) {
  1310                 gcHeapGrowthFactor = rt->gcHighFrequencyHeapGrowthMin;
  1311             } else {
  1312                 double k = (rt->gcHighFrequencyHeapGrowthMin - rt->gcHighFrequencyHeapGrowthMax)
  1313                            / (double)(rt->gcHighFrequencyHighLimitBytes - rt->gcHighFrequencyLowLimitBytes);
  1314                 gcHeapGrowthFactor = (k * (lastBytes - rt->gcHighFrequencyLowLimitBytes)
  1315                                      + rt->gcHighFrequencyHeapGrowthMax);
  1316                 JS_ASSERT(gcHeapGrowthFactor <= rt->gcHighFrequencyHeapGrowthMax
   1317                           && gcHeapGrowthFactor >= rt->gcHighFrequencyHeapGrowthMin);
   1318             }
  1319             rt->gcHighFrequencyGC = true;
  1320         } else {
  1321             gcHeapGrowthFactor = rt->gcLowFrequencyHeapGrowth;
   1322             rt->gcHighFrequencyGC = false;
   1323         }
   1324     }
   1325     gcTriggerBytes = ComputeTriggerBytes(this, lastBytes, rt->gcMaxBytes, gckind);
   1326 }
  1328 void
   1329 Zone::reduceGCTriggerBytes(size_t amount)
   1330 {
  1331     JS_ASSERT(amount > 0);
  1332     JS_ASSERT(gcTriggerBytes >= amount);
  1333     if (gcTriggerBytes - amount < runtimeFromAnyThread()->gcAllocationThreshold * gcHeapGrowthFactor)
  1334         return;
   1335     gcTriggerBytes -= amount;
   1336 }
  1338 Allocator::Allocator(Zone *zone)
  1339   : zone_(zone)
  1340 {}
  1342 inline void
   1343 GCMarker::delayMarkingArena(ArenaHeader *aheader)
   1344 {
  1345     if (aheader->hasDelayedMarking) {
  1346         /* Arena already scheduled to be marked later */
   1347         return;
   1348     }
  1349     aheader->setNextDelayedMarking(unmarkedArenaStackTop);
  1350     unmarkedArenaStackTop = aheader;
   1351     markLaterArenas++;
   1352 }
   1354 void
   1355 GCMarker::delayMarkingChildren(const void *thing)
   1356 {
  1357     const Cell *cell = reinterpret_cast<const Cell *>(thing);
  1358     cell->arenaHeader()->markOverflow = 1;
   1359     delayMarkingArena(cell->arenaHeader());
   1360 }
   1362 inline void
   1363 ArenaLists::prepareForIncrementalGC(JSRuntime *rt)
   1364 {
  1365     for (size_t i = 0; i != FINALIZE_LIMIT; ++i) {
  1366         FreeSpan *headSpan = &freeLists[i];
  1367         if (!headSpan->isEmpty()) {
  1368             ArenaHeader *aheader = headSpan->arenaHeader();
  1369             aheader->allocatedDuringIncremental = true;
   1370             rt->gcMarker.delayMarkingArena(aheader);
   1371         }
   1372     }
   1373 }
   1375 static inline void
   1376 PushArenaAllocatedDuringSweep(JSRuntime *runtime, ArenaHeader *arena)
   1377 {
  1378     arena->setNextAllocDuringSweep(runtime->gcArenasAllocatedDuringSweep);
   1379     runtime->gcArenasAllocatedDuringSweep = arena;
   1380 }
   1382 inline void *
   1383 ArenaLists::allocateFromArenaInline(Zone *zone, AllocKind thingKind)
   1384 {
  1385     /*
  1386      * Parallel JS Note:
  1388      * This function can be called from parallel threads all of which
  1389      * are associated with the same compartment. In that case, each
  1390      * thread will have a distinct ArenaLists.  Therefore, whenever we
  1391      * fall through to PickChunk() we must be sure that we are holding
  1392      * a lock.
  1393      */
  1395     Chunk *chunk = nullptr;
  1397     ArenaList *al = &arenaLists[thingKind];
  1398     AutoLockGC maybeLock;
  1400 #ifdef JS_THREADSAFE
  1401     volatile uintptr_t *bfs = &backgroundFinalizeState[thingKind];
  1402     if (*bfs != BFS_DONE) {
  1403         /*
  1404          * We cannot search the arena list for free things while the
  1405          * background finalization runs and can modify head or cursor at any
  1406          * moment. So we always allocate a new arena in that case.
  1407          */
  1408         maybeLock.lock(zone->runtimeFromAnyThread());
  1409         if (*bfs == BFS_RUN) {
  1410             JS_ASSERT(!*al->cursor);
  1411             chunk = PickChunk(zone);
  1412             if (!chunk) {
  1413                 /*
   1414                  * Let the caller wait for the background allocation to
  1415                  * finish and restart the allocation attempt.
  1416                  */
   1417                 return nullptr;
   1418             }
  1419         } else if (*bfs == BFS_JUST_FINISHED) {
  1420             /* See comments before BackgroundFinalizeState definition. */
  1421             *bfs = BFS_DONE;
  1422         } else {
   1423             JS_ASSERT(*bfs == BFS_DONE);
   1424         }
   1425     }
  1426 #endif /* JS_THREADSAFE */
  1428     if (!chunk) {
  1429         if (ArenaHeader *aheader = *al->cursor) {
  1430             JS_ASSERT(aheader->hasFreeThings());
  1432             /*
  1433              * Normally, the empty arenas are returned to the chunk
   1434              * and should not be present on the list. In parallel
  1435              * execution, however, we keep empty arenas in the arena
  1436              * list to avoid synchronizing on the chunk.
  1437              */
  1438             JS_ASSERT(!aheader->isEmpty() || InParallelSection());
  1439             al->cursor = &aheader->next;
  1441             /*
  1442              * Move the free span stored in the arena to the free list and
  1443              * allocate from it.
  1444              */
  1445             freeLists[thingKind] = aheader->getFirstFreeSpan();
  1446             aheader->setAsFullyUsed();
  1447             if (MOZ_UNLIKELY(zone->wasGCStarted())) {
  1448                 if (zone->needsBarrier()) {
  1449                     aheader->allocatedDuringIncremental = true;
  1450                     zone->runtimeFromMainThread()->gcMarker.delayMarkingArena(aheader);
  1451                 } else if (zone->isGCSweeping()) {
   1452                     PushArenaAllocatedDuringSweep(zone->runtimeFromMainThread(), aheader);
   1453                 }
   1454             }
   1455             return freeLists[thingKind].infallibleAllocate(Arena::thingSize(thingKind));
   1456         }
  1458         /* Make sure we hold the GC lock before we call PickChunk. */
  1459         if (!maybeLock.locked())
  1460             maybeLock.lock(zone->runtimeFromAnyThread());
  1461         chunk = PickChunk(zone);
  1462         if (!chunk)
   1463             return nullptr;
   1464     }
  1466     /*
   1467      * While we still hold the GC lock, get an arena from some chunk, mark it
   1468      * as full since its single free span is moved to the free lists, and insert
   1469      * it into the list as a fully allocated arena.
   1471      * We add the arena before the head, not after the tail pointed to by the
   1472      * cursor, so after the GC the most recently added arena will be used first
   1473      * for allocations, improving cache locality.
  1474      */
  1475     JS_ASSERT(!*al->cursor);
  1476     ArenaHeader *aheader = chunk->allocateArena(zone, thingKind);
  1477     if (!aheader)
  1478         return nullptr;
  1480     if (MOZ_UNLIKELY(zone->wasGCStarted())) {
  1481         if (zone->needsBarrier()) {
  1482             aheader->allocatedDuringIncremental = true;
  1483             zone->runtimeFromMainThread()->gcMarker.delayMarkingArena(aheader);
  1484         } else if (zone->isGCSweeping()) {
  1485             PushArenaAllocatedDuringSweep(zone->runtimeFromMainThread(), aheader);
  1488     aheader->next = al->head;
  1489     if (!al->head) {
  1490         JS_ASSERT(al->cursor == &al->head);
  1491         al->cursor = &aheader->next;
  1493     al->head = aheader;
  1495     /* See comments before allocateFromNewArena about this assert. */
  1496     JS_ASSERT(!aheader->hasFreeThings());
  1497     uintptr_t arenaAddr = aheader->arenaAddress();
  1498     return freeLists[thingKind].allocateFromNewArena(arenaAddr,
  1499                                                      Arena::firstThingOffset(thingKind),
  1500                                                      Arena::thingSize(thingKind));
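    /*
     * Recap of the fallbacks above (a descriptive sketch, not additional
     * logic): with the free list for |thingKind| already empty, we allocate
     * from the arena at the list cursor if it still has a free span, and
     * otherwise from a brand new arena of a chunk obtained via PickChunk()
     * under the GC lock. A nullptr return is not yet OOM; it tells
     * refillFreeList() below to wait for background finalization or run a
     * last ditch GC and then retry the allocation.
     */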
  1503 void *
  1504 ArenaLists::allocateFromArena(JS::Zone *zone, AllocKind thingKind)
  1506     return allocateFromArenaInline(zone, thingKind);
  1509 void
  1510 ArenaLists::wipeDuringParallelExecution(JSRuntime *rt)
  1512     JS_ASSERT(InParallelSection());
  1514     // First, check that all objects we have allocated are eligible
  1515     // for background finalization. The idea is that we will free
  1516     // (below) ALL background finalizable objects, because we know (by
  1517     // the rules of parallel execution) they are not reachable except
  1518     // by other thread-local objects. However, if any object were
  1519     // ineligible for background finalization, it might retain
  1520     // a reference to one of these background finalizable objects, and
  1521     // that'd be bad.
  1522     for (unsigned i = 0; i < FINALIZE_LAST; i++) {
  1523         AllocKind thingKind = AllocKind(i);
  1524         if (!IsBackgroundFinalized(thingKind) && arenaLists[thingKind].head)
  1525             return;
  1528     // Finalize all background finalizable objects immediately and
  1529     // return the (now empty) arenas back to the arena list.
  1530     FreeOp fop(rt, false);
  1531     for (unsigned i = 0; i < FINALIZE_OBJECT_LAST; i++) {
  1532         AllocKind thingKind = AllocKind(i);
  1534         if (!IsBackgroundFinalized(thingKind))
  1535             continue;
  1537         if (arenaLists[i].head) {
  1538             purge(thingKind);
  1539             forceFinalizeNow(&fop, thingKind);
  1544 void
  1545 ArenaLists::finalizeNow(FreeOp *fop, AllocKind thingKind)
  1547     JS_ASSERT(!IsBackgroundFinalized(thingKind));
  1548     forceFinalizeNow(fop, thingKind);
  1551 void
  1552 ArenaLists::forceFinalizeNow(FreeOp *fop, AllocKind thingKind)
  1554     JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
  1556     ArenaHeader *arenas = arenaLists[thingKind].head;
  1557     arenaLists[thingKind].clear();
  1559     SliceBudget budget;
  1560     FinalizeArenas(fop, &arenas, arenaLists[thingKind], thingKind, budget);
  1561     JS_ASSERT(!arenas);
  1564 void
  1565 ArenaLists::queueForForegroundSweep(FreeOp *fop, AllocKind thingKind)
  1567     JS_ASSERT(!IsBackgroundFinalized(thingKind));
  1568     JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
  1569     JS_ASSERT(!arenaListsToSweep[thingKind]);
  1571     arenaListsToSweep[thingKind] = arenaLists[thingKind].head;
  1572     arenaLists[thingKind].clear();
  1575 inline void
  1576 ArenaLists::queueForBackgroundSweep(FreeOp *fop, AllocKind thingKind)
  1578     JS_ASSERT(IsBackgroundFinalized(thingKind));
  1580 #ifdef JS_THREADSAFE
  1581     JS_ASSERT(!fop->runtime()->gcHelperThread.sweeping());
  1582 #endif
  1584     ArenaList *al = &arenaLists[thingKind];
  1585     if (!al->head) {
  1586         JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE);
  1587         JS_ASSERT(al->cursor == &al->head);
  1588         return;
  1591     /*
  1592      * The state can be done, or just-finished if we have not allocated any GC
  1593      * things from the arena list after the previous background finalization.
  1594      */
  1595     JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE ||
  1596               backgroundFinalizeState[thingKind] == BFS_JUST_FINISHED);
  1598     arenaListsToSweep[thingKind] = al->head;
  1599     al->clear();
  1600     backgroundFinalizeState[thingKind] = BFS_RUN;
  1603 /*static*/ void
  1604 ArenaLists::backgroundFinalize(FreeOp *fop, ArenaHeader *listHead, bool onBackgroundThread)
  1606     JS_ASSERT(listHead);
  1607     AllocKind thingKind = listHead->getAllocKind();
  1608     Zone *zone = listHead->zone;
  1610     ArenaList finalized;
  1611     SliceBudget budget;
  1612     FinalizeArenas(fop, &listHead, finalized, thingKind, budget);
  1613     JS_ASSERT(!listHead);
  1615     /*
  1616      * After we finish the finalization, al->cursor must point to the end of
  1617      * the list, as we emptied the list before the background finalization
  1618      * started and allocation adds new arenas before the cursor.
  1619      */
  1620     ArenaLists *lists = &zone->allocator.arenas;
  1621     ArenaList *al = &lists->arenaLists[thingKind];
  1623     AutoLockGC lock(fop->runtime());
  1624     JS_ASSERT(lists->backgroundFinalizeState[thingKind] == BFS_RUN);
  1625     JS_ASSERT(!*al->cursor);
  1627     if (finalized.head) {
  1628         *al->cursor = finalized.head;
  1629         if (finalized.cursor != &finalized.head)
  1630             al->cursor = finalized.cursor;
  1633     /*
  1634      * We must set the state to BFS_JUST_FINISHED if we are running on the
  1635      * background thread and have touched the arena list, even if we only
  1636      * add fully allocated arenas without any free things to it. This ensures
  1637      * that the allocation thread takes the GC lock and all writes to the free
  1638      * list elements are propagated. As we always take the GC lock when
  1639      * allocating new arenas from chunks, we can set the state to BFS_DONE if
  1640      * we have released all finalized arenas back to their chunks.
  1641      */
  1642     if (onBackgroundThread && finalized.head)
  1643         lists->backgroundFinalizeState[thingKind] = BFS_JUST_FINISHED;
  1644     else
  1645         lists->backgroundFinalizeState[thingKind] = BFS_DONE;
  1647     lists->arenaListsToSweep[thingKind] = nullptr;
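    /*
     * Transition sketch for backgroundFinalizeState[thingKind], pieced
     * together from the functions above (the authoritative description is the
     * comment at the BackgroundFinalizeState definition):
     *
     *   BFS_DONE          --queueForBackgroundSweep-->               BFS_RUN
     *   BFS_RUN           --backgroundFinalize on the helper thread,
     *                       finalized list non-empty-->              BFS_JUST_FINISHED
     *   BFS_RUN           --backgroundFinalize, otherwise-->         BFS_DONE
     *   BFS_JUST_FINISHED --next allocateFromArena, under GC lock--> BFS_DONE
     */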
  1650 void
  1651 ArenaLists::queueObjectsForSweep(FreeOp *fop)
  1653     gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_OBJECT);
  1655     finalizeNow(fop, FINALIZE_OBJECT0);
  1656     finalizeNow(fop, FINALIZE_OBJECT2);
  1657     finalizeNow(fop, FINALIZE_OBJECT4);
  1658     finalizeNow(fop, FINALIZE_OBJECT8);
  1659     finalizeNow(fop, FINALIZE_OBJECT12);
  1660     finalizeNow(fop, FINALIZE_OBJECT16);
  1662     queueForBackgroundSweep(fop, FINALIZE_OBJECT0_BACKGROUND);
  1663     queueForBackgroundSweep(fop, FINALIZE_OBJECT2_BACKGROUND);
  1664     queueForBackgroundSweep(fop, FINALIZE_OBJECT4_BACKGROUND);
  1665     queueForBackgroundSweep(fop, FINALIZE_OBJECT8_BACKGROUND);
  1666     queueForBackgroundSweep(fop, FINALIZE_OBJECT12_BACKGROUND);
  1667     queueForBackgroundSweep(fop, FINALIZE_OBJECT16_BACKGROUND);
  1670 void
  1671 ArenaLists::queueStringsForSweep(FreeOp *fop)
  1673     gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_STRING);
  1675     queueForBackgroundSweep(fop, FINALIZE_FAT_INLINE_STRING);
  1676     queueForBackgroundSweep(fop, FINALIZE_STRING);
  1678     queueForForegroundSweep(fop, FINALIZE_EXTERNAL_STRING);
  1681 void
  1682 ArenaLists::queueScriptsForSweep(FreeOp *fop)
  1684     gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_SCRIPT);
  1685     queueForForegroundSweep(fop, FINALIZE_SCRIPT);
  1686     queueForForegroundSweep(fop, FINALIZE_LAZY_SCRIPT);
  1689 void
  1690 ArenaLists::queueJitCodeForSweep(FreeOp *fop)
  1692     gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_JITCODE);
  1693     queueForForegroundSweep(fop, FINALIZE_JITCODE);
  1696 void
  1697 ArenaLists::queueShapesForSweep(FreeOp *fop)
  1699     gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_SHAPE);
  1701     queueForBackgroundSweep(fop, FINALIZE_SHAPE);
  1702     queueForBackgroundSweep(fop, FINALIZE_BASE_SHAPE);
  1703     queueForBackgroundSweep(fop, FINALIZE_TYPE_OBJECT);
  1706 static void *
  1707 RunLastDitchGC(JSContext *cx, JS::Zone *zone, AllocKind thingKind)
  1709     /*
  1710      * In parallel sections, we do not attempt to refill the free list
  1711      * and hence do not encounter last ditch GC.
  1712      */
  1713     JS_ASSERT(!InParallelSection());
  1715     PrepareZoneForGC(zone);
  1717     JSRuntime *rt = cx->runtime();
  1719     /* The last ditch GC preserves all atoms. */
  1720     AutoKeepAtoms keepAtoms(cx->perThreadData);
  1721     GC(rt, GC_NORMAL, JS::gcreason::LAST_DITCH);
  1723     /*
  1724      * The JSGC_END callback can legitimately allocate new GC
  1725      * things and populate the free list. If that happens, just
  1726      * return that list head.
  1727      */
  1728     size_t thingSize = Arena::thingSize(thingKind);
  1729     if (void *thing = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize))
  1730         return thing;
  1732     return nullptr;
  1735 template <AllowGC allowGC>
  1736 /* static */ void *
  1737 ArenaLists::refillFreeList(ThreadSafeContext *cx, AllocKind thingKind)
  1739     JS_ASSERT(cx->allocator()->arenas.freeLists[thingKind].isEmpty());
  1740     JS_ASSERT_IF(cx->isJSContext(), !cx->asJSContext()->runtime()->isHeapBusy());
  1742     Zone *zone = cx->allocator()->zone_;
  1744     bool runGC = cx->allowGC() && allowGC &&
  1745                  cx->asJSContext()->runtime()->gcIncrementalState != NO_INCREMENTAL &&
  1746                  zone->gcBytes > zone->gcTriggerBytes;
  1748 #ifdef JS_THREADSAFE
  1749     JS_ASSERT_IF(cx->isJSContext() && allowGC,
  1750                  !cx->asJSContext()->runtime()->currentThreadHasExclusiveAccess());
  1751 #endif
  1753     for (;;) {
  1754         if (MOZ_UNLIKELY(runGC)) {
  1755             if (void *thing = RunLastDitchGC(cx->asJSContext(), zone, thingKind))
  1756                 return thing;
  1759         if (cx->isJSContext()) {
  1760             /*
  1761              * allocateFromArena may fail while background finalization is
  1762              * still running. If we are on the main thread, we want to wait for
  1763              * it to finish and restart. However, checking for that is racy as
  1764              * the background finalization could free some things after
  1765              * allocateFromArena decided to fail but may have already stopped by
  1766              * this point. To avoid this race we always try to allocate twice.
  1767              */
  1768             for (bool secondAttempt = false; ; secondAttempt = true) {
  1769                 void *thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind);
  1770                 if (MOZ_LIKELY(!!thing))
  1771                     return thing;
  1772                 if (secondAttempt)
  1773                     break;
  1775                 cx->asJSContext()->runtime()->gcHelperThread.waitBackgroundSweepEnd();
  1777         } else {
  1778 #ifdef JS_THREADSAFE
  1779             /*
  1780              * If we're off the main thread, we try to allocate once and
  1781              * return whatever value we get. If we aren't in a ForkJoin
  1782              * session (i.e. we are in a worker thread running asynchronously
  1783              * with the main thread), we first need to ensure the main thread
  1784              * is not in a GC session.
  1785              */
  1786             mozilla::Maybe<AutoLockWorkerThreadState> lock;
  1787             JSRuntime *rt = zone->runtimeFromAnyThread();
  1788             if (rt->exclusiveThreadsPresent()) {
  1789                 lock.construct();
  1790                 while (rt->isHeapBusy())
  1791                     WorkerThreadState().wait(GlobalWorkerThreadState::PRODUCER);
  1794             void *thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind);
  1795             if (thing)
  1796                 return thing;
  1797 #else
  1798             MOZ_CRASH();
  1799 #endif
  1802         if (!cx->allowGC() || !allowGC)
  1803             return nullptr;
  1805         /*
  1806          * We failed to allocate. Run the GC if we haven't done it already.
  1807          * Otherwise report OOM.
  1808          */
  1809         if (runGC)
  1810             break;
  1811         runGC = true;
  1814     JS_ASSERT(allowGC);
  1815     js_ReportOutOfMemory(cx);
  1816     return nullptr;
  1819 template void *
  1820 ArenaLists::refillFreeList<NoGC>(ThreadSafeContext *cx, AllocKind thingKind);
  1822 template void *
  1823 ArenaLists::refillFreeList<CanGC>(ThreadSafeContext *cx, AllocKind thingKind);
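/*
 * A minimal, hypothetical call-site sketch for the two instantiations above
 * (the fallback pattern shown here is illustrative, not copied from a caller
 * in this file): an allocation path that prefers not to GC tries the NoGC
 * flavour first and only then retries with CanGC.
 *
 *   void *thing = ArenaLists::refillFreeList<NoGC>(cx, FINALIZE_OBJECT0);
 *   if (!thing)
 *       thing = ArenaLists::refillFreeList<CanGC>(cx, FINALIZE_OBJECT0);
 *   // With CanGC and a context that allows GC, a null result means OOM has
 *   // already been reported via js_ReportOutOfMemory().
 */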
  1825 JSGCTraceKind
  1826 js_GetGCThingTraceKind(void *thing)
  1828     return GetGCThingTraceKind(thing);
  1831 /* static */ int64_t
  1832 SliceBudget::TimeBudget(int64_t millis)
  1834     return millis * PRMJ_USEC_PER_MSEC;
  1837 /* static */ int64_t
  1838 SliceBudget::WorkBudget(int64_t work)
  1840     /* For work = 0 not to mean Unlimited, we subtract 1. */
  1841     return -work - 1;
  1844 SliceBudget::SliceBudget()
  1845   : deadline(INT64_MAX),
  1846     counter(INTPTR_MAX)
  1850 SliceBudget::SliceBudget(int64_t budget)
  1852     if (budget == Unlimited) {
  1853         deadline = INT64_MAX;
  1854         counter = INTPTR_MAX;
  1855     } else if (budget > 0) {
  1856         deadline = PRMJ_Now() + budget;
  1857         counter = CounterReset;
  1858     } else {
  1859         deadline = 0;
  1860         counter = -budget - 1;
  1864 bool
  1865 SliceBudget::checkOverBudget()
  1867     bool over = PRMJ_Now() > deadline;
  1868     if (!over)
  1869         counter = CounterReset;
  1870     return over;
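    /*
     * Usage sketch for the three budget flavours above (the numbers are
     * illustrative only):
     *
     *   SliceBudget unlimited;                               // never over budget
     *   SliceBudget tenMs(SliceBudget::TimeBudget(10));      // 10 ms deadline
     *   SliceBudget someWork(SliceBudget::WorkBudget(1000)); // counter == 1000
     *
     * TimeBudget() converts milliseconds into the microseconds that PRMJ_Now()
     * reports, and WorkBudget() encodes the count as -work - 1 so that a work
     * budget of zero is still distinguishable from Unlimited.
     */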
  1873 void
  1874 js::MarkCompartmentActive(InterpreterFrame *fp)
  1876     fp->script()->compartment()->zone()->active = true;
  1879 static void
  1880 RequestInterrupt(JSRuntime *rt, JS::gcreason::Reason reason)
  1882     if (rt->gcIsNeeded)
  1883         return;
  1885     rt->gcIsNeeded = true;
  1886     rt->gcTriggerReason = reason;
  1887     rt->requestInterrupt(JSRuntime::RequestInterruptMainThread);
  1890 bool
  1891 js::TriggerGC(JSRuntime *rt, JS::gcreason::Reason reason)
  1893     /* Wait till end of parallel section to trigger GC. */
  1894     if (InParallelSection()) {
  1895         ForkJoinContext::current()->requestGC(reason);
  1896         return true;
  1899     /* Don't trigger GCs when allocating under the interrupt callback lock. */
  1900     if (rt->currentThreadOwnsInterruptLock())
  1901         return false;
  1903     JS_ASSERT(CurrentThreadCanAccessRuntime(rt));
  1905     /* GC is already running. */
  1906     if (rt->isHeapCollecting())
  1907         return false;
  1909     JS::PrepareForFullGC(rt);
  1910     RequestInterrupt(rt, reason);
  1911     return true;
  1914 bool
  1915 js::TriggerZoneGC(Zone *zone, JS::gcreason::Reason reason)
  1917     /*
  1918      * If parallel threads are running, wait till they
  1919      * are stopped to trigger GC.
  1920      */
  1921     if (InParallelSection()) {
  1922         ForkJoinContext::current()->requestZoneGC(zone, reason);
  1923         return true;
  1926     /* Zones in use by a thread with an exclusive context can't be collected. */
  1927     if (zone->usedByExclusiveThread)
  1928         return false;
  1930     JSRuntime *rt = zone->runtimeFromMainThread();
  1932     /* Don't trigger GCs when allocating under the interrupt callback lock. */
  1933     if (rt->currentThreadOwnsInterruptLock())
  1934         return false;
  1936     /* GC is already running. */
  1937     if (rt->isHeapCollecting())
  1938         return false;
  1940     if (rt->gcZeal() == ZealAllocValue) {
  1941         TriggerGC(rt, reason);
  1942         return true;
  1945     if (rt->isAtomsZone(zone)) {
  1946         /* We can't do a zone GC of the atoms compartment. */
  1947         TriggerGC(rt, reason);
  1948         return true;
  1951     PrepareZoneForGC(zone);
  1952     RequestInterrupt(rt, reason);
  1953     return true;
  1956 void
  1957 js::MaybeGC(JSContext *cx)
  1959     JSRuntime *rt = cx->runtime();
  1960     JS_ASSERT(CurrentThreadCanAccessRuntime(rt));
  1962     if (rt->gcZeal() == ZealAllocValue || rt->gcZeal() == ZealPokeValue) {
  1963         JS::PrepareForFullGC(rt);
  1964         GC(rt, GC_NORMAL, JS::gcreason::MAYBEGC);
  1965         return;
  1968     if (rt->gcIsNeeded) {
  1969         GCSlice(rt, GC_NORMAL, JS::gcreason::MAYBEGC);
  1970         return;
  1973     double factor = rt->gcHighFrequencyGC ? 0.85 : 0.9;
  1974     Zone *zone = cx->zone();
  1975     if (zone->gcBytes > 1024 * 1024 &&
  1976         zone->gcBytes >= factor * zone->gcTriggerBytes &&
  1977         rt->gcIncrementalState == NO_INCREMENTAL &&
  1978         !rt->gcHelperThread.sweeping())
  1980         PrepareZoneForGC(zone);
  1981         GCSlice(rt, GC_NORMAL, JS::gcreason::MAYBEGC);
  1982         return;
  1985 #ifndef JS_MORE_DETERMINISTIC
  1986     /*
  1987      * Access to the counters and, on 32-bit systems, setting gcNextFullGCTime
  1988      * below is not atomic, and a race condition could trigger or suppress the
  1989      * GC. We tolerate this.
  1990      */
  1991     int64_t now = PRMJ_Now();
  1992     if (rt->gcNextFullGCTime && rt->gcNextFullGCTime <= now) {
  1993         if (rt->gcChunkAllocationSinceLastGC ||
  1994             rt->gcNumArenasFreeCommitted > rt->gcDecommitThreshold)
  1996             JS::PrepareForFullGC(rt);
  1997             GCSlice(rt, GC_SHRINK, JS::gcreason::MAYBEGC);
  1998         } else {
  1999             rt->gcNextFullGCTime = now + GC_IDLE_FULL_SPAN;
  2002 #endif
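    /*
     * Worked example for the zone heuristic above (numbers are illustrative):
     * with gcHighFrequencyGC set, factor is 0.85, so a zone whose
     * gcTriggerBytes is 20 MB gets a GC_NORMAL slice once its gcBytes reaches
     * 17 MB, provided gcBytes also exceeds 1 MB, no incremental GC is in
     * progress and the helper thread is not sweeping.
     */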
  2005 static void
  2006 DecommitArenasFromAvailableList(JSRuntime *rt, Chunk **availableListHeadp)
  2008     Chunk *chunk = *availableListHeadp;
  2009     if (!chunk)
  2010         return;
  2012     /*
  2013      * Decommit is expensive so we avoid holding the GC lock while calling it.
  2015      * We decommit from the tail of the list to minimize interference with the
  2016      * main thread that may start to allocate things at this point.
  2018      * The arena that is been decommitted outside the GC lock must not be
  2019      * available for allocations either via the free list or via the
  2020      * decommittedArenas bitmap. For that we just fetch the arena from the
  2021      * free list before the decommit pretending as it was allocated. If this
  2022      * arena also is the single free arena in the chunk, then we must remove
  2023      * from the available list before we release the lock so the allocation
  2024      * thread would not see chunks with no free arenas on the available list.
  2026      * After we retake the lock, we mark the arena as free and decommitted if
  2027      * the decommit was successful. We must also add the chunk back to the
  2028      * available list if we removed it previously or when the main thread
  2029      * have allocated all remaining free arenas in the chunk.
  2031      * We also must make sure that the aheader is not accessed again after we
  2032      * decommit the arena.
  2033      */
  2034     JS_ASSERT(chunk->info.prevp == availableListHeadp);
  2035     while (Chunk *next = chunk->info.next) {
  2036         JS_ASSERT(next->info.prevp == &chunk->info.next);
  2037         chunk = next;
  2040     for (;;) {
  2041         while (chunk->info.numArenasFreeCommitted != 0) {
  2042             ArenaHeader *aheader = chunk->fetchNextFreeArena(rt);
  2044             Chunk **savedPrevp = chunk->info.prevp;
  2045             if (!chunk->hasAvailableArenas())
  2046                 chunk->removeFromAvailableList();
  2048             size_t arenaIndex = Chunk::arenaIndex(aheader->arenaAddress());
  2049             bool ok;
  2051                 /*
  2052                  * If the main thread waits for the decommit to finish, skip the
  2053                  * potentially expensive unlock/lock pair on the contested
  2054                  * lock.
  2055                  */
  2056                 Maybe<AutoUnlockGC> maybeUnlock;
  2057                 if (!rt->isHeapBusy())
  2058                     maybeUnlock.construct(rt);
  2059                 ok = MarkPagesUnused(rt, aheader->getArena(), ArenaSize);
  2062             if (ok) {
  2063                 ++chunk->info.numArenasFree;
  2064                 chunk->decommittedArenas.set(arenaIndex);
  2065             } else {
  2066                 chunk->addArenaToFreeList(rt, aheader);
  2068             JS_ASSERT(chunk->hasAvailableArenas());
  2069             JS_ASSERT(!chunk->unused());
  2070             if (chunk->info.numArenasFree == 1) {
  2071                 /*
  2072                  * Put the chunk back to the available list either at the
  2073                  * point where it was before to preserve the available list
  2074                  * that we enumerate, or, when the allocation thread has fully
  2075                  * used all the previous chunks, at the beginning of the
  2076                  * available list.
  2077                  */
  2078                 Chunk **insertPoint = savedPrevp;
  2079                 if (savedPrevp != availableListHeadp) {
  2080                     Chunk *prev = Chunk::fromPointerToNext(savedPrevp);
  2081                     if (!prev->hasAvailableArenas())
  2082                         insertPoint = availableListHeadp;
  2084                 chunk->insertToAvailableList(insertPoint);
  2085             } else {
  2086                 JS_ASSERT(chunk->info.prevp);
  2089             if (rt->gcChunkAllocationSinceLastGC || !ok) {
  2090                 /*
  2091                  * The allocator thread has started to get new chunks. We should stop
  2092                  * to avoid decommitting arenas in just-allocated chunks.
  2093                  */
  2094                 return;
  2098         /*
  2099          * chunk->info.prevp becomes null when the allocator thread has consumed
  2100          * all chunks from the available list.
  2101          */
  2102         JS_ASSERT_IF(chunk->info.prevp, *chunk->info.prevp == chunk);
  2103         if (chunk->info.prevp == availableListHeadp || !chunk->info.prevp)
  2104             break;
  2106         /*
  2107          * prevp exists and is not the list head. It must point to the next
  2108          * field of the previous chunk.
  2109          */
  2110         chunk = chunk->getPrevious();
  2114 static void
  2115 DecommitArenas(JSRuntime *rt)
  2117     DecommitArenasFromAvailableList(rt, &rt->gcSystemAvailableChunkListHead);
  2118     DecommitArenasFromAvailableList(rt, &rt->gcUserAvailableChunkListHead);
  2121 /* Must be called with the GC lock taken. */
  2122 static void
  2123 ExpireChunksAndArenas(JSRuntime *rt, bool shouldShrink)
  2125     if (Chunk *toFree = rt->gcChunkPool.expire(rt, shouldShrink)) {
  2126         AutoUnlockGC unlock(rt);
  2127         FreeChunkList(rt, toFree);
  2130     if (shouldShrink)
  2131         DecommitArenas(rt);
  2134 static void
  2135 SweepBackgroundThings(JSRuntime* rt, bool onBackgroundThread)
  2137     /*
  2138      * We must finalize in the correct order; see comments in
  2139      * finalizeObjects.
  2140      */
  2141     FreeOp fop(rt, false);
  2142     for (int phase = 0 ; phase < BackgroundPhaseCount ; ++phase) {
  2143         for (Zone *zone = rt->gcSweepingZones; zone; zone = zone->gcNextGraphNode) {
  2144             for (int index = 0 ; index < BackgroundPhaseLength[phase] ; ++index) {
  2145                 AllocKind kind = BackgroundPhases[phase][index];
  2146                 ArenaHeader *arenas = zone->allocator.arenas.arenaListsToSweep[kind];
  2147                 if (arenas)
  2148                     ArenaLists::backgroundFinalize(&fop, arenas, onBackgroundThread);
  2153     rt->gcSweepingZones = nullptr;
  2156 #ifdef JS_THREADSAFE
  2157 static void
  2158 AssertBackgroundSweepingFinished(JSRuntime *rt)
  2160     JS_ASSERT(!rt->gcSweepingZones);
  2161     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  2162         for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) {
  2163             JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
  2164             JS_ASSERT(zone->allocator.arenas.doneBackgroundFinalize(AllocKind(i)));
  2169 unsigned
  2170 js::GetCPUCount()
  2172     static unsigned ncpus = 0;
  2173     if (ncpus == 0) {
  2174 # ifdef XP_WIN
  2175         SYSTEM_INFO sysinfo;
  2176         GetSystemInfo(&sysinfo);
  2177         ncpus = unsigned(sysinfo.dwNumberOfProcessors);
  2178 # else
  2179         long n = sysconf(_SC_NPROCESSORS_ONLN);
  2180         ncpus = (n > 0) ? unsigned(n) : 1;
  2181 # endif
  2183     return ncpus;
  2185 #endif /* JS_THREADSAFE */
  2187 bool
  2188 GCHelperThread::init()
  2190     if (!rt->useHelperThreads()) {
  2191         backgroundAllocation = false;
  2192         return true;
  2195 #ifdef JS_THREADSAFE
  2196     if (!(wakeup = PR_NewCondVar(rt->gcLock)))
  2197         return false;
  2198     if (!(done = PR_NewCondVar(rt->gcLock)))
  2199         return false;
  2201     thread = PR_CreateThread(PR_USER_THREAD, threadMain, this, PR_PRIORITY_NORMAL,
  2202                              PR_GLOBAL_THREAD, PR_JOINABLE_THREAD, 0);
  2203     if (!thread)
  2204         return false;
  2206     backgroundAllocation = (GetCPUCount() >= 2);
  2207 #endif /* JS_THREADSAFE */
  2208     return true;
  2211 void
  2212 GCHelperThread::finish()
  2214     if (!rt->useHelperThreads() || !rt->gcLock) {
  2215         JS_ASSERT(state == IDLE);
  2216         return;
  2219 #ifdef JS_THREADSAFE
  2220     PRThread *join = nullptr;
  2222         AutoLockGC lock(rt);
  2223         if (thread && state != SHUTDOWN) {
  2224             /*
  2225              * We cannot be in the ALLOCATING or CANCEL_ALLOCATION states as
  2226              * the allocations should have been stopped during the last GC.
  2227              */
  2228             JS_ASSERT(state == IDLE || state == SWEEPING);
  2229             if (state == IDLE)
  2230                 PR_NotifyCondVar(wakeup);
  2231             state = SHUTDOWN;
  2232             join = thread;
  2235     if (join) {
  2236         /* PR_DestroyThread is not necessary. */
  2237         PR_JoinThread(join);
  2239     if (wakeup)
  2240         PR_DestroyCondVar(wakeup);
  2241     if (done)
  2242         PR_DestroyCondVar(done);
  2243 #endif /* JS_THREADSAFE */
  2246 #ifdef JS_THREADSAFE
  2247 #ifdef MOZ_NUWA_PROCESS
  2248 extern "C" {
  2249 MFBT_API bool IsNuwaProcess();
  2250 MFBT_API void NuwaMarkCurrentThread(void (*recreate)(void *), void *arg);
  2252 #endif
  2254 /* static */
  2255 void
  2256 GCHelperThread::threadMain(void *arg)
  2258     PR_SetCurrentThreadName("JS GC Helper");
  2260 #ifdef MOZ_NUWA_PROCESS
  2261     if (IsNuwaProcess && IsNuwaProcess()) {
  2262         JS_ASSERT(NuwaMarkCurrentThread != nullptr);
  2263         NuwaMarkCurrentThread(nullptr, nullptr);
  2265 #endif
  2267     static_cast<GCHelperThread *>(arg)->threadLoop();
  2270 void
  2271 GCHelperThread::wait(PRCondVar *which)
  2273     rt->gcLockOwner = nullptr;
  2274     PR_WaitCondVar(which, PR_INTERVAL_NO_TIMEOUT);
  2275 #ifdef DEBUG
  2276     rt->gcLockOwner = PR_GetCurrentThread();
  2277 #endif
  2280 void
  2281 GCHelperThread::threadLoop()
  2283     AutoLockGC lock(rt);
  2285     TraceLogger *logger = TraceLoggerForCurrentThread();
  2287     /*
  2288      * Even on the first iteration the state can be SHUTDOWN or SWEEPING if
  2289      * the stop request, or a GC and its corresponding startBackgroundSweep call,
  2290      * happens before this thread has a chance to run.
  2291      */
  2292     for (;;) {
  2293         switch (state) {
  2294           case SHUTDOWN:
  2295             return;
  2296           case IDLE:
  2297             wait(wakeup);
  2298             break;
  2299           case SWEEPING: {
  2300             AutoTraceLog logSweeping(logger, TraceLogger::GCSweeping);
  2301             doSweep();
  2302             if (state == SWEEPING)
  2303                 state = IDLE;
  2304             PR_NotifyAllCondVar(done);
  2305             break;
  2307           case ALLOCATING: {
  2308             AutoTraceLog logAllocating(logger, TraceLogger::GCAllocation);
  2309             do {
  2310                 Chunk *chunk;
  2312                     AutoUnlockGC unlock(rt);
  2313                     chunk = Chunk::allocate(rt);
  2316                 /* OOM stops the background allocation. */
  2317                 if (!chunk)
  2318                     break;
  2319                 JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
  2320                 rt->gcChunkPool.put(chunk);
  2321             } while (state == ALLOCATING && rt->gcChunkPool.wantBackgroundAllocation(rt));
  2322             if (state == ALLOCATING)
  2323                 state = IDLE;
  2324             break;
  2326           case CANCEL_ALLOCATION:
  2327             state = IDLE;
  2328             PR_NotifyAllCondVar(done);
  2329             break;
  2333 #endif /* JS_THREADSAFE */
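/*
 * Helper thread state machine, as driven by threadLoop() above together with
 * the start/wait/finish entry points of this class (a transition sketch only):
 *
 *   IDLE       --startBackgroundSweep / startBackgroundShrink--> SWEEPING
 *   IDLE       --startBackgroundAllocationIfIdle-->              ALLOCATING
 *   ALLOCATING --waitBackgroundSweepOrAllocEnd-->                CANCEL_ALLOCATION
 *   SWEEPING / ALLOCATING / CANCEL_ALLOCATION --threadLoop-->    IDLE
 *   IDLE / SWEEPING --finish()-->                                SHUTDOWN
 *
 * All transitions happen with the GC lock held; |done| is notified when
 * sweeping completes or a cancelled allocation is acknowledged, which is what
 * the wait* functions block on.
 */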
  2335 void
  2336 GCHelperThread::startBackgroundSweep(bool shouldShrink)
  2338     JS_ASSERT(rt->useHelperThreads());
  2340 #ifdef JS_THREADSAFE
  2341     AutoLockGC lock(rt);
  2342     JS_ASSERT(state == IDLE);
  2343     JS_ASSERT(!sweepFlag);
  2344     sweepFlag = true;
  2345     shrinkFlag = shouldShrink;
  2346     state = SWEEPING;
  2347     PR_NotifyCondVar(wakeup);
  2348 #endif /* JS_THREADSAFE */
  2351 /* Must be called with the GC lock taken. */
  2352 void
  2353 GCHelperThread::startBackgroundShrink()
  2355     JS_ASSERT(rt->useHelperThreads());
  2357 #ifdef JS_THREADSAFE
  2358     switch (state) {
  2359       case IDLE:
  2360         JS_ASSERT(!sweepFlag);
  2361         shrinkFlag = true;
  2362         state = SWEEPING;
  2363         PR_NotifyCondVar(wakeup);
  2364         break;
  2365       case SWEEPING:
  2366         shrinkFlag = true;
  2367         break;
  2368       case ALLOCATING:
  2369       case CANCEL_ALLOCATION:
  2370         /*
  2371          * If we have started background allocation there is nothing to
  2372          * shrink.
  2373          */
  2374         break;
  2375       case SHUTDOWN:
  2376         MOZ_ASSUME_UNREACHABLE("No shrink on shutdown");
  2378 #endif /* JS_THREADSAFE */
  2381 void
  2382 GCHelperThread::waitBackgroundSweepEnd()
  2384     if (!rt->useHelperThreads()) {
  2385         JS_ASSERT(state == IDLE);
  2386         return;
  2389 #ifdef JS_THREADSAFE
  2390     AutoLockGC lock(rt);
  2391     while (state == SWEEPING)
  2392         wait(done);
  2393     if (rt->gcIncrementalState == NO_INCREMENTAL)
  2394         AssertBackgroundSweepingFinished(rt);
  2395 #endif /* JS_THREADSAFE */
  2398 void
  2399 GCHelperThread::waitBackgroundSweepOrAllocEnd()
  2401     if (!rt->useHelperThreads()) {
  2402         JS_ASSERT(state == IDLE);
  2403         return;
  2406 #ifdef JS_THREADSAFE
  2407     AutoLockGC lock(rt);
  2408     if (state == ALLOCATING)
  2409         state = CANCEL_ALLOCATION;
  2410     while (state == SWEEPING || state == CANCEL_ALLOCATION)
  2411         wait(done);
  2412     if (rt->gcIncrementalState == NO_INCREMENTAL)
  2413         AssertBackgroundSweepingFinished(rt);
  2414 #endif /* JS_THREADSAFE */
  2417 /* Must be called with the GC lock taken. */
  2418 inline void
  2419 GCHelperThread::startBackgroundAllocationIfIdle()
  2421     JS_ASSERT(rt->useHelperThreads());
  2423 #ifdef JS_THREADSAFE
  2424     if (state == IDLE) {
  2425         state = ALLOCATING;
  2426         PR_NotifyCondVar(wakeup);
  2428 #endif /* JS_THREADSAFE */
  2431 void
  2432 GCHelperThread::replenishAndFreeLater(void *ptr)
  2434     JS_ASSERT(freeCursor == freeCursorEnd);
  2435     do {
  2436         if (freeCursor && !freeVector.append(freeCursorEnd - FREE_ARRAY_LENGTH))
  2437             break;
  2438         freeCursor = (void **) js_malloc(FREE_ARRAY_SIZE);
  2439         if (!freeCursor) {
  2440             freeCursorEnd = nullptr;
  2441             break;
  2443         freeCursorEnd = freeCursor + FREE_ARRAY_LENGTH;
  2444         *freeCursor++ = ptr;
  2445         return;
  2446     } while (false);
  2447     js_free(ptr);
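    /*
     * Descriptive note on the batching above: pointers handed to
     * replenishAndFreeLater() are collected into malloc'ed arrays of
     * FREE_ARRAY_LENGTH slots; a full array is pushed onto freeVector and a
     * fresh one is started. doSweep() later releases every element and the
     * arrays themselves via freeElementsAndArray(). If any step here fails,
     * |ptr| is simply freed immediately instead of being deferred.
     */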
  2450 #ifdef JS_THREADSAFE
  2451 /* Must be called with the GC lock taken. */
  2452 void
  2453 GCHelperThread::doSweep()
  2455     if (sweepFlag) {
  2456         sweepFlag = false;
  2457         AutoUnlockGC unlock(rt);
  2459         SweepBackgroundThings(rt, true);
  2461         if (freeCursor) {
  2462             void **array = freeCursorEnd - FREE_ARRAY_LENGTH;
  2463             freeElementsAndArray(array, freeCursor);
  2464             freeCursor = freeCursorEnd = nullptr;
  2465         } else {
  2466             JS_ASSERT(!freeCursorEnd);
  2468         for (void ***iter = freeVector.begin(); iter != freeVector.end(); ++iter) {
  2469             void **array = *iter;
  2470             freeElementsAndArray(array, array + FREE_ARRAY_LENGTH);
  2472         freeVector.resize(0);
  2474         rt->freeLifoAlloc.freeAll();
  2477     bool shrinking = shrinkFlag;
  2478     ExpireChunksAndArenas(rt, shrinking);
  2480     /*
  2481      * The main thread may have called ShrinkGCBuffers while
  2482      * ExpireChunksAndArenas(rt, false) was running, so we recheck the flag
  2483      * afterwards.
  2484      */
  2485     if (!shrinking && shrinkFlag) {
  2486         shrinkFlag = false;
  2487         ExpireChunksAndArenas(rt, true);
  2490 #endif /* JS_THREADSAFE */
  2492 bool
  2493 GCHelperThread::onBackgroundThread()
  2495 #ifdef JS_THREADSAFE
  2496     return PR_GetCurrentThread() == getThread();
  2497 #else
  2498     return false;
  2499 #endif
  2502 static bool
  2503 ReleaseObservedTypes(JSRuntime *rt)
  2505     bool releaseTypes = rt->gcZeal() != 0;
  2507 #ifndef JS_MORE_DETERMINISTIC
  2508     int64_t now = PRMJ_Now();
  2509     if (now >= rt->gcJitReleaseTime)
  2510         releaseTypes = true;
  2511     if (releaseTypes)
  2512         rt->gcJitReleaseTime = now + JIT_SCRIPT_RELEASE_TYPES_INTERVAL;
  2513 #endif
  2515     return releaseTypes;
  2518 /*
  2519  * It's simpler if we preserve the invariant that every zone has at least one
  2520  * compartment. If we know we're deleting the entire zone, then
  2521  * SweepCompartments is allowed to delete all compartments. In this case,
  2522  * |keepAtleastOne| is false. If some objects remain in the zone so that it
  2523  * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits
  2524  * SweepCompartments from deleting every compartment. Instead, it preserves an
  2525  * arbitrary compartment in the zone.
  2526  */
  2527 static void
  2528 SweepCompartments(FreeOp *fop, Zone *zone, bool keepAtleastOne, bool lastGC)
  2530     JSRuntime *rt = zone->runtimeFromMainThread();
  2531     JSDestroyCompartmentCallback callback = rt->destroyCompartmentCallback;
  2533     JSCompartment **read = zone->compartments.begin();
  2534     JSCompartment **end = zone->compartments.end();
  2535     JSCompartment **write = read;
  2536     bool foundOne = false;
  2537     while (read < end) {
  2538         JSCompartment *comp = *read++;
  2539         JS_ASSERT(!rt->isAtomsCompartment(comp));
  2541         /*
  2542          * Don't delete the last compartment if all the ones before it were
  2543          * deleted and keepAtleastOne is true.
  2544          */
  2545         bool dontDelete = read == end && !foundOne && keepAtleastOne;
  2546         if ((!comp->marked && !dontDelete) || lastGC) {
  2547             if (callback)
  2548                 callback(fop, comp);
  2549             if (comp->principals)
  2550                 JS_DropPrincipals(rt, comp->principals);
  2551             js_delete(comp);
  2552         } else {
  2553             *write++ = comp;
  2554             foundOne = true;
  2557     zone->compartments.resize(write - zone->compartments.begin());
  2558     JS_ASSERT_IF(keepAtleastOne, !zone->compartments.empty());
  2561 static void
  2562 SweepZones(FreeOp *fop, bool lastGC)
  2564     JSRuntime *rt = fop->runtime();
  2565     JSZoneCallback callback = rt->destroyZoneCallback;
  2567     /* Skip the atomsCompartment zone. */
  2568     Zone **read = rt->zones.begin() + 1;
  2569     Zone **end = rt->zones.end();
  2570     Zone **write = read;
  2571     JS_ASSERT(rt->zones.length() >= 1);
  2572     JS_ASSERT(rt->isAtomsZone(rt->zones[0]));
  2574     while (read < end) {
  2575         Zone *zone = *read++;
  2577         if (zone->wasGCStarted()) {
  2578             if ((zone->allocator.arenas.arenaListsAreEmpty() && !zone->hasMarkedCompartments()) ||
  2579                 lastGC)
  2581                 zone->allocator.arenas.checkEmptyFreeLists();
  2582                 if (callback)
  2583                     callback(zone);
  2584                 SweepCompartments(fop, zone, false, lastGC);
  2585                 JS_ASSERT(zone->compartments.empty());
  2586                 fop->delete_(zone);
  2587                 continue;
  2589             SweepCompartments(fop, zone, true, lastGC);
  2591         *write++ = zone;
  2593     rt->zones.resize(write - rt->zones.begin());
  2596 static void
  2597 PurgeRuntime(JSRuntime *rt)
  2599     for (GCCompartmentsIter comp(rt); !comp.done(); comp.next())
  2600         comp->purge();
  2602     rt->freeLifoAlloc.transferUnusedFrom(&rt->tempLifoAlloc);
  2603     rt->interpreterStack().purge(rt);
  2605     rt->gsnCache.purge();
  2606     rt->scopeCoordinateNameCache.purge();
  2607     rt->newObjectCache.purge();
  2608     rt->nativeIterCache.purge();
  2609     rt->sourceDataCache.purge();
  2610     rt->evalCache.clear();
  2612     if (!rt->hasActiveCompilations())
  2613         rt->parseMapPool().purgeAll();
  2616 static bool
  2617 ShouldPreserveJITCode(JSCompartment *comp, int64_t currentTime)
  2619     JSRuntime *rt = comp->runtimeFromMainThread();
  2620     if (rt->gcShouldCleanUpEverything)
  2621         return false;
  2623     if (rt->alwaysPreserveCode)
  2624         return true;
  2625     if (comp->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime)
  2626         return true;
  2628     return false;
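    /*
     * In other words (illustrative numbers): unless gcShouldCleanUpEverything
     * is set, a compartment keeps its JIT code when alwaysPreserveCode is on
     * or when it has animated within the last second, e.g. a compartment whose
     * lastAnimationTime is 0.4 s old is preserved, while one that last
     * animated 1.5 s ago is not.
     */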
  2631 #ifdef DEBUG
  2632 class CompartmentCheckTracer : public JSTracer
  2634   public:
  2635     CompartmentCheckTracer(JSRuntime *rt, JSTraceCallback callback)
  2636       : JSTracer(rt, callback)
  2637     {}
  2639     Cell *src;
  2640     JSGCTraceKind srcKind;
  2641     Zone *zone;
  2642     JSCompartment *compartment;
  2643 };
  2645 static bool
  2646 InCrossCompartmentMap(JSObject *src, Cell *dst, JSGCTraceKind dstKind)
  2648     JSCompartment *srccomp = src->compartment();
  2650     if (dstKind == JSTRACE_OBJECT) {
  2651         Value key = ObjectValue(*static_cast<JSObject *>(dst));
  2652         if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
  2653             if (*p->value().unsafeGet() == ObjectValue(*src))
  2654                 return true;
  2658     /*
  2659      * If the cross-compartment edge is caused by the debugger, then we don't
  2660      * know the right hashtable key, so we have to iterate.
  2661      */
  2662     for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
  2663         if (e.front().key().wrapped == dst && ToMarkable(e.front().value()) == src)
  2664             return true;
  2667     return false;
  2670 static void
  2671 CheckCompartment(CompartmentCheckTracer *trc, JSCompartment *thingCompartment,
  2672                  Cell *thing, JSGCTraceKind kind)
  2674     JS_ASSERT(thingCompartment == trc->compartment ||
  2675               trc->runtime()->isAtomsCompartment(thingCompartment) ||
  2676               (trc->srcKind == JSTRACE_OBJECT &&
  2677                InCrossCompartmentMap((JSObject *)trc->src, thing, kind)));
  2680 static JSCompartment *
  2681 CompartmentOfCell(Cell *thing, JSGCTraceKind kind)
  2683     if (kind == JSTRACE_OBJECT)
  2684         return static_cast<JSObject *>(thing)->compartment();
  2685     else if (kind == JSTRACE_SHAPE)
  2686         return static_cast<Shape *>(thing)->compartment();
  2687     else if (kind == JSTRACE_BASE_SHAPE)
  2688         return static_cast<BaseShape *>(thing)->compartment();
  2689     else if (kind == JSTRACE_SCRIPT)
  2690         return static_cast<JSScript *>(thing)->compartment();
  2691     else
  2692         return nullptr;
  2695 static void
  2696 CheckCompartmentCallback(JSTracer *trcArg, void **thingp, JSGCTraceKind kind)
  2698     CompartmentCheckTracer *trc = static_cast<CompartmentCheckTracer *>(trcArg);
  2699     Cell *thing = (Cell *)*thingp;
  2701     JSCompartment *comp = CompartmentOfCell(thing, kind);
  2702     if (comp && trc->compartment) {
  2703         CheckCompartment(trc, comp, thing, kind);
  2704     } else {
  2705         JS_ASSERT(thing->tenuredZone() == trc->zone ||
  2706                   trc->runtime()->isAtomsZone(thing->tenuredZone()));
  2710 static void
  2711 CheckForCompartmentMismatches(JSRuntime *rt)
  2713     if (rt->gcDisableStrictProxyCheckingCount)
  2714         return;
  2716     CompartmentCheckTracer trc(rt, CheckCompartmentCallback);
  2717     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
  2718         trc.zone = zone;
  2719         for (size_t thingKind = 0; thingKind < FINALIZE_LAST; thingKind++) {
  2720             for (CellIterUnderGC i(zone, AllocKind(thingKind)); !i.done(); i.next()) {
  2721                 trc.src = i.getCell();
  2722                 trc.srcKind = MapAllocToTraceKind(AllocKind(thingKind));
  2723                 trc.compartment = CompartmentOfCell(trc.src, trc.srcKind);
  2724                 JS_TraceChildren(&trc, trc.src, trc.srcKind);
  2729 #endif
  2731 static bool
  2732 BeginMarkPhase(JSRuntime *rt)
  2734     int64_t currentTime = PRMJ_Now();
  2736 #ifdef DEBUG
  2737     if (rt->gcFullCompartmentChecks)
  2738         CheckForCompartmentMismatches(rt);
  2739 #endif
  2741     rt->gcIsFull = true;
  2742     bool any = false;
  2744     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  2745         /* Assert that zone state is as we expect */
  2746         JS_ASSERT(!zone->isCollecting());
  2747         JS_ASSERT(!zone->compartments.empty());
  2748         for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
  2749             JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
  2751         /* Set up which zones will be collected. */
  2752         if (zone->isGCScheduled()) {
  2753             if (!rt->isAtomsZone(zone)) {
  2754                 any = true;
  2755                 zone->setGCState(Zone::Mark);
  2757         } else {
  2758             rt->gcIsFull = false;
  2761         zone->scheduledForDestruction = false;
  2762         zone->maybeAlive = false;
  2763         zone->setPreservingCode(false);
  2766     for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) {
  2767         JS_ASSERT(c->gcLiveArrayBuffers.empty());
  2768         c->marked = false;
  2769         if (ShouldPreserveJITCode(c, currentTime))
  2770             c->zone()->setPreservingCode(true);
  2773     /*
  2774      * Atoms are not in the cross-compartment map. So if there are any
  2775      * zones that are not being collected, we are not allowed to collect
  2776      * atoms. Otherwise, the non-collected zones could contain pointers
  2777      * to atoms that we would miss.
  2779      * keepAtoms() will only change on the main thread, which we are currently
  2780      * on. If the value of keepAtoms() changes between GC slices, then we'll
  2781      * cancel the incremental GC. See IsIncrementalGCSafe.
  2782      */
  2783     if (rt->gcIsFull && !rt->keepAtoms()) {
  2784         Zone *atomsZone = rt->atomsCompartment()->zone();
  2785         if (atomsZone->isGCScheduled()) {
  2786             JS_ASSERT(!atomsZone->isCollecting());
  2787             atomsZone->setGCState(Zone::Mark);
  2788             any = true;
  2792     /* Check that at least one zone is scheduled for collection. */
  2793     if (!any)
  2794         return false;
  2796     /*
  2797      * At the end of each incremental slice, we call prepareForIncrementalGC,
  2798      * which marks objects in all arenas that we're currently allocating
  2799      * into. This can cause leaks if unreachable objects are in these
  2800      * arenas. This purge call ensures that we only mark arenas that have had
  2801      * allocations after the incremental GC started.
  2802      */
  2803     if (rt->gcIsIncremental) {
  2804         for (GCZonesIter zone(rt); !zone.done(); zone.next())
  2805             zone->allocator.arenas.purge();
  2808     rt->gcMarker.start();
  2809     JS_ASSERT(!rt->gcMarker.callback);
  2810     JS_ASSERT(IS_GC_MARKING_TRACER(&rt->gcMarker));
  2812     /* For non-incremental GC the following sweep discards the jit code. */
  2813     if (rt->gcIsIncremental) {
  2814         for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  2815             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_MARK_DISCARD_CODE);
  2816             zone->discardJitCode(rt->defaultFreeOp());
  2820     GCMarker *gcmarker = &rt->gcMarker;
  2822     rt->gcStartNumber = rt->gcNumber;
  2824     /*
  2825      * We must purge the runtime at the beginning of an incremental GC. The
  2826      * danger if we purge later is that the snapshot invariant of incremental
  2827      * GC will be broken, as follows. If some object is reachable only through
  2828      * some cache (say the dtoaCache) then it will not be part of the snapshot.
  2829      * If we purge after root marking, then the mutator could obtain a pointer
  2830      * to the object and start using it. This object might never be marked, so
  2831      * a GC hazard would exist.
  2832      */
  2834         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_PURGE);
  2835         PurgeRuntime(rt);
  2838     /*
  2839      * Mark phase.
  2840      */
  2841     gcstats::AutoPhase ap1(rt->gcStats, gcstats::PHASE_MARK);
  2842     gcstats::AutoPhase ap2(rt->gcStats, gcstats::PHASE_MARK_ROOTS);
  2844     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  2845         /* Unmark everything in the zones being collected. */
  2846         zone->allocator.arenas.unmarkAll();
  2849     for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
  2850         /* Reset weak map list for the compartments being collected. */
  2851         WeakMapBase::resetCompartmentWeakMapList(c);
  2854     if (rt->gcIsFull)
  2855         UnmarkScriptData(rt);
  2857     MarkRuntime(gcmarker);
  2858     if (rt->gcIsIncremental)
  2859         BufferGrayRoots(gcmarker);
  2861     /*
  2862      * This code ensures that if a zone is "dead", then it will be
  2863      * collected in this GC. A zone is considered dead if its maybeAlive
  2864      * flag is false. The maybeAlive flag is set if:
  2865      *   (1) the zone has incoming cross-compartment edges, or
  2866      *   (2) an object in the zone was marked during root marking, either
  2867      *       as a black root or a gray root.
  2868      * If maybeAlive is false, then we set the scheduledForDestruction flag.
  2869      * At any time later in the GC, if we try to mark an object whose
  2870      * zone is scheduled for destruction, we will assert.
  2871      * NOTE: Due to bug 811587, we only assert if gcManipulatingDeadCompartments
  2872      * is true (e.g., if we're doing a brain transplant).
  2874      * The purpose of this check is to ensure that a zone that we would
  2875      * normally destroy is not resurrected by a read barrier or an
  2876      * allocation. This might happen during a function like JS_TransplantObject,
  2877      * which iterates over all compartments, live or dead, and operates on their
  2878      * objects. See bug 803376 for details on this problem. To avoid the
  2879      * problem, we are very careful to avoid allocation and read barriers during
  2880      * JS_TransplantObject and the like. The code here ensures that we don't
  2881      * regress.
  2883      * Note that there are certain cases where allocations or read barriers in
  2884      * dead zone are difficult to avoid. We detect such cases (via the
  2885      * gcObjectsMarkedInDeadCompartment counter) and redo any ongoing GCs after
  2886      * the JS_TransplantObject function has finished. This ensures that the dead
  2887      * zones will be cleaned up. See AutoMarkInDeadZone and
  2888      * AutoMaybeTouchDeadZones for details.
  2889      */
  2891     /* Set the maybeAlive flag based on cross-compartment edges. */
  2892     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
  2893         for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
  2894             Cell *dst = e.front().key().wrapped;
  2895             dst->tenuredZone()->maybeAlive = true;
  2899     /*
  2900      * For black roots, code in gc/Marking.cpp will already have set maybeAlive
  2901      * during MarkRuntime.
  2902      */
  2904     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  2905         if (!zone->maybeAlive && !rt->isAtomsZone(zone))
  2906             zone->scheduledForDestruction = true;
  2908     rt->gcFoundBlackGrayEdges = false;
  2910     return true;
  2913 template <class CompartmentIterT>
  2914 static void
  2915 MarkWeakReferences(JSRuntime *rt, gcstats::Phase phase)
  2917     GCMarker *gcmarker = &rt->gcMarker;
  2918     JS_ASSERT(gcmarker->isDrained());
  2920     gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK);
  2921     gcstats::AutoPhase ap1(rt->gcStats, phase);
  2923     for (;;) {
  2924         bool markedAny = false;
  2925         for (CompartmentIterT c(rt); !c.done(); c.next()) {
  2926             markedAny |= WatchpointMap::markCompartmentIteratively(c, gcmarker);
  2927             markedAny |= WeakMapBase::markCompartmentIteratively(c, gcmarker);
  2929         markedAny |= Debugger::markAllIteratively(gcmarker);
  2931         if (!markedAny)
  2932             break;
  2934         SliceBudget budget;
  2935         gcmarker->drainMarkStack(budget);
  2937     JS_ASSERT(gcmarker->isDrained());
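    /*
     * Note on the loop above: marking a weak map entry or a watchpoint can
     * make previously unmarked things reachable, which in turn can make more
     * weak entries eligible for marking, so we iterate to a fixed point,
     * draining the mark stack after every round in which anything new was
     * marked.
     */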
  2940 static void
  2941 MarkWeakReferencesInCurrentGroup(JSRuntime *rt, gcstats::Phase phase)
  2943     MarkWeakReferences<GCCompartmentGroupIter>(rt, phase);
  2946 template <class ZoneIterT, class CompartmentIterT>
  2947 static void
  2948 MarkGrayReferences(JSRuntime *rt)
  2950     GCMarker *gcmarker = &rt->gcMarker;
  2953         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK);
  2954         gcstats::AutoPhase ap1(rt->gcStats, gcstats::PHASE_SWEEP_MARK_GRAY);
  2955         gcmarker->setMarkColorGray();
  2956         if (gcmarker->hasBufferedGrayRoots()) {
  2957             for (ZoneIterT zone(rt); !zone.done(); zone.next())
  2958                 gcmarker->markBufferedGrayRoots(zone);
  2959         } else {
  2960             JS_ASSERT(!rt->gcIsIncremental);
  2961             if (JSTraceDataOp op = rt->gcGrayRootTracer.op)
  2962                 (*op)(gcmarker, rt->gcGrayRootTracer.data);
  2964         SliceBudget budget;
  2965         gcmarker->drainMarkStack(budget);
  2968     MarkWeakReferences<CompartmentIterT>(rt, gcstats::PHASE_SWEEP_MARK_GRAY_WEAK);
  2970     JS_ASSERT(gcmarker->isDrained());
  2972     gcmarker->setMarkColorBlack();
  2975 static void
  2976 MarkGrayReferencesInCurrentGroup(JSRuntime *rt)
  2978     MarkGrayReferences<GCZoneGroupIter, GCCompartmentGroupIter>(rt);
  2981 #ifdef DEBUG
  2983 static void
  2984 MarkAllWeakReferences(JSRuntime *rt, gcstats::Phase phase)
  2986     MarkWeakReferences<GCCompartmentsIter>(rt, phase);
  2989 static void
  2990 MarkAllGrayReferences(JSRuntime *rt)
  2992     MarkGrayReferences<GCZonesIter, GCCompartmentsIter>(rt);
  2995 class js::gc::MarkingValidator
  2997   public:
  2998     MarkingValidator(JSRuntime *rt);
  2999     ~MarkingValidator();
  3000     void nonIncrementalMark();
  3001     void validate();
  3003   private:
  3004     JSRuntime *runtime;
  3005     bool initialized;
  3007     typedef HashMap<Chunk *, ChunkBitmap *, GCChunkHasher, SystemAllocPolicy> BitmapMap;
  3008     BitmapMap map;
  3009 };
  3011 js::gc::MarkingValidator::MarkingValidator(JSRuntime *rt)
  3012   : runtime(rt),
  3013     initialized(false)
  3014 {}
  3016 js::gc::MarkingValidator::~MarkingValidator()
  3018     if (!map.initialized())
  3019         return;
  3021     for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront())
  3022         js_delete(r.front().value());
  3025 void
  3026 js::gc::MarkingValidator::nonIncrementalMark()
  3028     /*
  3029      * Perform a non-incremental mark for all collecting zones and record
  3030      * the results for later comparison.
  3032      * Currently this does not validate gray marking.
  3033      */
  3035     if (!map.init())
  3036         return;
  3038     GCMarker *gcmarker = &runtime->gcMarker;
  3040     /* Save existing mark bits. */
  3041     for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) {
  3042         ChunkBitmap *bitmap = &r.front()->bitmap;
  3043         ChunkBitmap *entry = js_new<ChunkBitmap>();
  3044         if (!entry)
  3045             return;
  3047         memcpy((void *)entry->bitmap, (void *)bitmap->bitmap, sizeof(bitmap->bitmap));
  3048         if (!map.putNew(r.front(), entry))
  3049             return;
  3052     /*
  3053      * Temporarily clear the lists of live weakmaps and array buffers for the
  3054      * compartments we are collecting.
  3055      */
  3057     WeakMapVector weakmaps;
  3058     ArrayBufferVector arrayBuffers;
  3059     for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
  3060         if (!WeakMapBase::saveCompartmentWeakMapList(c, weakmaps) ||
  3061             !ArrayBufferObject::saveArrayBufferList(c, arrayBuffers))
  3063             return;
  3067     /*
  3068      * After this point, the function should run to completion, so we shouldn't
  3069      * do anything fallible.
  3070      */
  3071     initialized = true;
  3073     for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
  3074         WeakMapBase::resetCompartmentWeakMapList(c);
  3075         ArrayBufferObject::resetArrayBufferList(c);
  3078     /* Re-do all the marking, but non-incrementally. */
  3079     js::gc::State state = runtime->gcIncrementalState;
  3080     runtime->gcIncrementalState = MARK_ROOTS;
  3082     JS_ASSERT(gcmarker->isDrained());
  3083     gcmarker->reset();
  3085     for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront())
  3086         r.front()->bitmap.clear();
  3089         gcstats::AutoPhase ap1(runtime->gcStats, gcstats::PHASE_MARK);
  3090         gcstats::AutoPhase ap2(runtime->gcStats, gcstats::PHASE_MARK_ROOTS);
  3091         MarkRuntime(gcmarker, true);
  3095         gcstats::AutoPhase ap1(runtime->gcStats, gcstats::PHASE_MARK);
  3096         SliceBudget budget;
  3097         runtime->gcIncrementalState = MARK;
  3098         runtime->gcMarker.drainMarkStack(budget);
  3101     runtime->gcIncrementalState = SWEEP;
  3103         gcstats::AutoPhase ap(runtime->gcStats, gcstats::PHASE_SWEEP);
  3104         MarkAllWeakReferences(runtime, gcstats::PHASE_SWEEP_MARK_WEAK);
  3106         /* Update zone state for gray marking. */
  3107         for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
  3108             JS_ASSERT(zone->isGCMarkingBlack());
  3109             zone->setGCState(Zone::MarkGray);
  3112         MarkAllGrayReferences(runtime);
  3114         /* Restore zone state. */
  3115         for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
  3116             JS_ASSERT(zone->isGCMarkingGray());
  3117             zone->setGCState(Zone::Mark);
  3121     /* Take a copy of the non-incremental mark state and restore the original. */
  3122     for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) {
  3123         Chunk *chunk = r.front();
  3124         ChunkBitmap *bitmap = &chunk->bitmap;
  3125         ChunkBitmap *entry = map.lookup(chunk)->value();
  3126         Swap(*entry, *bitmap);
  3129     for (GCCompartmentsIter c(runtime); !c.done(); c.next()) {
  3130         WeakMapBase::resetCompartmentWeakMapList(c);
  3131         ArrayBufferObject::resetArrayBufferList(c);
  3133     WeakMapBase::restoreCompartmentWeakMapLists(weakmaps);
  3134     ArrayBufferObject::restoreArrayBufferLists(arrayBuffers);
  3136     runtime->gcIncrementalState = state;
  3139 void
  3140 js::gc::MarkingValidator::validate()
  3142     /*
  3143      * Validates incremental marking by comparing the mark bits to those
  3144      * previously recorded for a non-incremental mark.
  3145      */
  3147     if (!initialized)
  3148         return;
  3150     for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) {
  3151         Chunk *chunk = r.front();
  3152         BitmapMap::Ptr ptr = map.lookup(chunk);
  3153         if (!ptr)
  3154             continue;  /* Allocated after we did the non-incremental mark. */
  3156         ChunkBitmap *bitmap = ptr->value();
  3157         ChunkBitmap *incBitmap = &chunk->bitmap;
  3159         for (size_t i = 0; i < ArenasPerChunk; i++) {
  3160             if (chunk->decommittedArenas.get(i))
  3161                 continue;
  3162             Arena *arena = &chunk->arenas[i];
  3163             if (!arena->aheader.allocated())
  3164                 continue;
  3165             if (!arena->aheader.zone->isGCSweeping())
  3166                 continue;
  3167             if (arena->aheader.allocatedDuringIncremental)
  3168                 continue;
  3170             AllocKind kind = arena->aheader.getAllocKind();
  3171             uintptr_t thing = arena->thingsStart(kind);
  3172             uintptr_t end = arena->thingsEnd();
  3173             while (thing < end) {
  3174                 Cell *cell = (Cell *)thing;
  3176                 /*
  3177                  * If a non-incremental GC wouldn't have collected a cell, then
  3178                  * an incremental GC won't collect it.
  3179                  */
  3180                 JS_ASSERT_IF(bitmap->isMarked(cell, BLACK), incBitmap->isMarked(cell, BLACK));
  3182                 /*
  3183                  * If the cycle collector isn't allowed to collect an object
  3184                  * after a non-incremental GC has run, then it isn't allowed to
  3185                  * collect it after an incremental GC.
  3186                  */
  3187                 JS_ASSERT_IF(!bitmap->isMarked(cell, GRAY), !incBitmap->isMarked(cell, GRAY));
  3189                 thing += Arena::thingSize(kind);
  3195 #endif
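/*
 * Illustrative sketch, not part of the engine: the two assertions above encode
 * a containment relation between the two bitmaps. Everything the
 * non-incremental mark found black must also be black in the incremental
 * bitmap, and nothing may be gray in the incremental bitmap unless the
 * non-incremental mark also left it gray. A self-contained version of the same
 * check over hypothetical stand-in bitsets:
 */
#if 0
#include <bitset>
#include <cassert>

static void
CheckMarkingContainment(const std::bitset<64> &nonIncBlack,
                        const std::bitset<64> &incBlack,
                        const std::bitset<64> &nonIncGray,
                        const std::bitset<64> &incGray)
{
    for (size_t i = 0; i < 64; i++) {
        /* Black in the non-incremental run implies black in the incremental run. */
        assert(!nonIncBlack[i] || incBlack[i]);
        /* Gray in the incremental run implies gray in the non-incremental run. */
        assert(!incGray[i] || nonIncGray[i]);
    }
}
#endif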
  3197 static void
  3198 ComputeNonIncrementalMarkingForValidation(JSRuntime *rt)
  3200 #ifdef DEBUG
  3201     JS_ASSERT(!rt->gcMarkingValidator);
  3202     if (rt->gcIsIncremental && rt->gcValidate)
  3203         rt->gcMarkingValidator = js_new<MarkingValidator>(rt);
  3204     if (rt->gcMarkingValidator)
  3205         rt->gcMarkingValidator->nonIncrementalMark();
  3206 #endif
  3209 static void
  3210 ValidateIncrementalMarking(JSRuntime *rt)
  3212 #ifdef DEBUG
  3213     if (rt->gcMarkingValidator)
  3214         rt->gcMarkingValidator->validate();
  3215 #endif
  3218 static void
  3219 FinishMarkingValidation(JSRuntime *rt)
  3221 #ifdef DEBUG
  3222     js_delete(rt->gcMarkingValidator);
  3223     rt->gcMarkingValidator = nullptr;
  3224 #endif
  3227 static void
  3228 AssertNeedsBarrierFlagsConsistent(JSRuntime *rt)
  3230 #ifdef DEBUG
  3231     bool anyNeedsBarrier = false;
  3232     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
  3233         anyNeedsBarrier |= zone->needsBarrier();
  3234     JS_ASSERT(rt->needsBarrier() == anyNeedsBarrier);
  3235 #endif
  3238 static void
  3239 DropStringWrappers(JSRuntime *rt)
  3241     /*
  3242      * String "wrappers" are dropped on GC because their presence would require
  3243      * us to sweep the wrappers in all compartments every time we sweep a
  3244      * compartment group.
  3245      */
  3246     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
  3247         for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
  3248             if (e.front().key().kind == CrossCompartmentKey::StringWrapper)
  3249                 e.removeFront();
  3254 /*
  3255  * Group zones that must be swept at the same time.
  3257  * If compartment A has an edge to an unmarked object in compartment B, then we
  3258  * must not sweep A in a later slice than we sweep B. That's because a write
  3259  * barrier in A could lead to the unmarked object in B becoming marked.
  3260  * However, if we had already swept that object, we would be in trouble.
  3262  * If we consider these dependencies as a graph, then all the compartments in
  3263  * any strongly-connected component of this graph must be swept in the same
  3264  * slice.
  3266  * Tarjan's algorithm is used to calculate the components.
  3267  */
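/*
 * Illustrative sketch, not part of the engine: roughly how the pieces below
 * fit together. FindZoneGroups adds one node per collecting zone, the
 * findOutgoingEdges methods report dependencies as graph edges, and the finder
 * hands back the zone groups in an order that is safe to sweep.
 */
#if 0
ComponentFinder<Zone> finder(rt->mainThread.nativeStackLimit[StackForSystemCode]);
for (GCZonesIter zone(rt); !zone.done(); zone.next())
    finder.addNode(zone);                    /* one node per collecting zone */
/* Each zone's findOutgoingEdges() calls finder.addEdgeTo(dependency). */
Zone *groups = finder.getResultsList();      /* SCCs, linked in sweep order */
#endif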
  3269 void
  3270 JSCompartment::findOutgoingEdges(ComponentFinder<JS::Zone> &finder)
  3272     for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
  3273         CrossCompartmentKey::Kind kind = e.front().key().kind;
  3274         JS_ASSERT(kind != CrossCompartmentKey::StringWrapper);
  3275         Cell *other = e.front().key().wrapped;
  3276         if (kind == CrossCompartmentKey::ObjectWrapper) {
  3277             /*
  3278              * Add edge to wrapped object compartment if wrapped object is not
  3279              * marked black, to indicate that the wrapper compartment must not be
  3280              * swept after the wrapped compartment.
  3281              */
  3282             if (!other->isMarked(BLACK) || other->isMarked(GRAY)) {
  3283                 JS::Zone *w = other->tenuredZone();
  3284                 if (w->isGCMarking())
  3285                     finder.addEdgeTo(w);
  3287         } else {
  3288             JS_ASSERT(kind == CrossCompartmentKey::DebuggerScript ||
  3289                       kind == CrossCompartmentKey::DebuggerSource ||
  3290                       kind == CrossCompartmentKey::DebuggerObject ||
  3291                       kind == CrossCompartmentKey::DebuggerEnvironment);
  3292             /*
  3293              * Add edge for debugger object wrappers, to ensure (in conjunction
  3294              * with call to Debugger::findCompartmentEdges below) that debugger
  3295              * and debuggee objects are always swept in the same group.
  3296              */
  3297             JS::Zone *w = other->tenuredZone();
  3298             if (w->isGCMarking())
  3299                 finder.addEdgeTo(w);
  3303     Debugger::findCompartmentEdges(zone(), finder);
  3306 void
  3307 Zone::findOutgoingEdges(ComponentFinder<JS::Zone> &finder)
  3309     /*
  3310      * Any compartment may have a pointer to an atom in the atoms
  3311      * compartment, and these aren't in the cross compartment map.
  3312      */
  3313     JSRuntime *rt = runtimeFromMainThread();
  3314     if (rt->atomsCompartment()->zone()->isGCMarking())
  3315         finder.addEdgeTo(rt->atomsCompartment()->zone());
  3317     for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next())
  3318         comp->findOutgoingEdges(finder);
  3321 static void
  3322 FindZoneGroups(JSRuntime *rt)
  3324     ComponentFinder<Zone> finder(rt->mainThread.nativeStackLimit[StackForSystemCode]);
  3325     if (!rt->gcIsIncremental)
  3326         finder.useOneComponent();
  3328     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  3329         JS_ASSERT(zone->isGCMarking());
  3330         finder.addNode(zone);
  3332     rt->gcZoneGroups = finder.getResultsList();
  3333     rt->gcCurrentZoneGroup = rt->gcZoneGroups;
  3334     rt->gcZoneGroupIndex = 0;
  3335     JS_ASSERT_IF(!rt->gcIsIncremental, !rt->gcCurrentZoneGroup->nextGroup());
  3338 static void
  3339 ResetGrayList(JSCompartment* comp);
  3341 static void
  3342 GetNextZoneGroup(JSRuntime *rt)
  3344     rt->gcCurrentZoneGroup = rt->gcCurrentZoneGroup->nextGroup();
  3345     ++rt->gcZoneGroupIndex;
  3346     if (!rt->gcCurrentZoneGroup) {
  3347         rt->gcAbortSweepAfterCurrentGroup = false;
  3348         return;
  3351     if (!rt->gcIsIncremental)
  3352         ComponentFinder<Zone>::mergeGroups(rt->gcCurrentZoneGroup);
  3354     if (rt->gcAbortSweepAfterCurrentGroup) {
  3355         JS_ASSERT(!rt->gcIsIncremental);
  3356         for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3357             JS_ASSERT(!zone->gcNextGraphComponent);
  3358             JS_ASSERT(zone->isGCMarking());
  3359             zone->setNeedsBarrier(false, Zone::UpdateIon);
  3360             zone->setGCState(Zone::NoGC);
  3361             zone->gcGrayRoots.clearAndFree();
  3363         rt->setNeedsBarrier(false);
  3364         AssertNeedsBarrierFlagsConsistent(rt);
  3366         for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next()) {
  3367             ArrayBufferObject::resetArrayBufferList(comp);
  3368             ResetGrayList(comp);
  3371         rt->gcAbortSweepAfterCurrentGroup = false;
  3372         rt->gcCurrentZoneGroup = nullptr;
  3376 /*
  3377  * Gray marking:
  3379  * At the end of collection, anything reachable from a gray root that has not
  3380  * otherwise been marked black must be marked gray.
  3382  * This means that when marking things gray we must not allow marking to leave
  3383  * the current compartment group, as that could result in things being marked
  3384  * gray when they might subsequently be marked black.  To achieve this, when we
  3385  * find a cross compartment pointer we don't mark the referent but add it to a
  3386  * singly-linked list of incoming gray pointers that is stored with each
  3387  * compartment.
  3389  * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains
  3390  * cross compartment wrapper objects. The next pointer is stored in the second
  3391  * extra slot of the cross compartment wrapper.
  3393  * The list is created during gray marking when one of the
  3394  * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
  3395  * current compartment group.  This calls DelayCrossCompartmentGrayMarking to
  3396  * push the referring object onto the list.
  3398  * The list is traversed and then unlinked in
  3399  * MarkIncomingCrossCompartmentPointers.
  3400  */
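/*
 * Illustrative sketch, not part of the engine: the list threading described
 * above, using a hypothetical 'Wrapper' struct with an explicit next pointer
 * in place of a cross compartment wrapper and its reserved gray-link slot.
 * The real helpers follow below (grayLinkSlot,
 * NextIncomingCrossCompartmentPointer, DelayCrossCompartmentGrayMarking,
 * MarkIncomingCrossCompartmentPointers).
 */
#if 0
struct Wrapper { Wrapper *next = nullptr; /* ... */ };
struct Compartment { Wrapper *gcIncomingGrayPointers = nullptr; };

static void
PushIncomingGrayPointer(Compartment *referentComp, Wrapper *src)
{
    /* The real code only pushes src if it is not already on a list. */
    src->next = referentComp->gcIncomingGrayPointers;
    referentComp->gcIncomingGrayPointers = src;
}

static void
TraverseAndUnlink(Compartment *comp)
{
    for (Wrapper *w = comp->gcIncomingGrayPointers; w; ) {
        Wrapper *next = w->next;
        w->next = nullptr;                   /* unlink as we go */
        /* ... mark the wrapped referent here ... */
        w = next;
    }
    comp->gcIncomingGrayPointers = nullptr;
}
#endif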
  3402 static bool
  3403 IsGrayListObject(JSObject *obj)
  3405     JS_ASSERT(obj);
  3406     return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
  3409 /* static */ unsigned
  3410 ProxyObject::grayLinkSlot(JSObject *obj)
  3412     JS_ASSERT(IsGrayListObject(obj));
  3413     return ProxyObject::EXTRA_SLOT + 1;
  3416 #ifdef DEBUG
  3417 static void
  3418 AssertNotOnGrayList(JSObject *obj)
  3420     JS_ASSERT_IF(IsGrayListObject(obj),
  3421                  obj->getReservedSlot(ProxyObject::grayLinkSlot(obj)).isUndefined());
  3423 #endif
  3425 static JSObject *
  3426 CrossCompartmentPointerReferent(JSObject *obj)
  3428     JS_ASSERT(IsGrayListObject(obj));
  3429     return &obj->as<ProxyObject>().private_().toObject();
  3432 static JSObject *
  3433 NextIncomingCrossCompartmentPointer(JSObject *prev, bool unlink)
  3435     unsigned slot = ProxyObject::grayLinkSlot(prev);
  3436     JSObject *next = prev->getReservedSlot(slot).toObjectOrNull();
  3437     JS_ASSERT_IF(next, IsGrayListObject(next));
  3439     if (unlink)
  3440         prev->setSlot(slot, UndefinedValue());
  3442     return next;
  3445 void
  3446 js::DelayCrossCompartmentGrayMarking(JSObject *src)
  3448     JS_ASSERT(IsGrayListObject(src));
  3450     /* Called from MarkCrossCompartmentXXX functions. */
  3451     unsigned slot = ProxyObject::grayLinkSlot(src);
  3452     JSObject *dest = CrossCompartmentPointerReferent(src);
  3453     JSCompartment *comp = dest->compartment();
  3455     if (src->getReservedSlot(slot).isUndefined()) {
  3456         src->setCrossCompartmentSlot(slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
  3457         comp->gcIncomingGrayPointers = src;
  3458     } else {
  3459         JS_ASSERT(src->getReservedSlot(slot).isObjectOrNull());
  3462 #ifdef DEBUG
  3463     /*
  3464      * Assert that the object is in our list, also walking the list to check its
  3465      * integrity.
  3466      */
  3467     JSObject *obj = comp->gcIncomingGrayPointers;
  3468     bool found = false;
  3469     while (obj) {
  3470         if (obj == src)
  3471             found = true;
  3472         obj = NextIncomingCrossCompartmentPointer(obj, false);
  3474     JS_ASSERT(found);
  3475 #endif
  3478 static void
  3479 MarkIncomingCrossCompartmentPointers(JSRuntime *rt, const uint32_t color)
  3481     JS_ASSERT(color == BLACK || color == GRAY);
  3483     gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK);
  3484     static const gcstats::Phase statsPhases[] = {
  3485         gcstats::PHASE_SWEEP_MARK_INCOMING_BLACK,
  3486         gcstats::PHASE_SWEEP_MARK_INCOMING_GRAY
  3487     };
  3488     gcstats::AutoPhase ap1(rt->gcStats, statsPhases[color]);
  3490     bool unlinkList = color == GRAY;
  3492     for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
  3493         JS_ASSERT_IF(color == GRAY, c->zone()->isGCMarkingGray());
  3494         JS_ASSERT_IF(color == BLACK, c->zone()->isGCMarkingBlack());
  3495         JS_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));
  3497         for (JSObject *src = c->gcIncomingGrayPointers;
  3498              src;
  3499              src = NextIncomingCrossCompartmentPointer(src, unlinkList))
  3501             JSObject *dst = CrossCompartmentPointerReferent(src);
  3502             JS_ASSERT(dst->compartment() == c);
  3504             if (color == GRAY) {
  3505                 if (IsObjectMarked(&src) && src->isMarked(GRAY))
  3506                     MarkGCThingUnbarriered(&rt->gcMarker, (void**)&dst,
  3507                                            "cross-compartment gray pointer");
  3508             } else {
  3509                 if (IsObjectMarked(&src) && !src->isMarked(GRAY))
  3510                     MarkGCThingUnbarriered(&rt->gcMarker, (void**)&dst,
  3511                                            "cross-compartment black pointer");
  3515         if (unlinkList)
  3516             c->gcIncomingGrayPointers = nullptr;
  3519     SliceBudget budget;
  3520     rt->gcMarker.drainMarkStack(budget);
  3523 static bool
  3524 RemoveFromGrayList(JSObject *wrapper)
  3526     if (!IsGrayListObject(wrapper))
  3527         return false;
  3529     unsigned slot = ProxyObject::grayLinkSlot(wrapper);
  3530     if (wrapper->getReservedSlot(slot).isUndefined())
  3531         return false;  /* Not on our list. */
  3533     JSObject *tail = wrapper->getReservedSlot(slot).toObjectOrNull();
  3534     wrapper->setReservedSlot(slot, UndefinedValue());
  3536     JSCompartment *comp = CrossCompartmentPointerReferent(wrapper)->compartment();
  3537     JSObject *obj = comp->gcIncomingGrayPointers;
  3538     if (obj == wrapper) {
  3539         comp->gcIncomingGrayPointers = tail;
  3540         return true;
  3543     while (obj) {
  3544         unsigned slot = ProxyObject::grayLinkSlot(obj);
  3545         JSObject *next = obj->getReservedSlot(slot).toObjectOrNull();
  3546         if (next == wrapper) {
  3547             obj->setCrossCompartmentSlot(slot, ObjectOrNullValue(tail));
  3548             return true;
  3550         obj = next;
  3553     MOZ_ASSUME_UNREACHABLE("object not found in gray link list");
  3556 static void
  3557 ResetGrayList(JSCompartment *comp)
  3559     JSObject *src = comp->gcIncomingGrayPointers;
  3560     while (src)
  3561         src = NextIncomingCrossCompartmentPointer(src, true);
  3562     comp->gcIncomingGrayPointers = nullptr;
  3565 void
  3566 js::NotifyGCNukeWrapper(JSObject *obj)
  3568     /*
  3569      * References to the target of the wrapper are being removed; we no longer
  3570      * have to remember to mark it.
  3571      */
  3572     RemoveFromGrayList(obj);
  3575 enum {
  3576     JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
  3577     JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
  3578 };
  3580 unsigned
  3581 js::NotifyGCPreSwap(JSObject *a, JSObject *b)
  3583     /*
  3584      * Two objects in the same compartment are about to have their contents
  3585      * swapped.  If either of them is in our gray pointer list, we remove it
  3586      * from the list, returning a bitset indicating what happened.
  3587      */
  3588     return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
  3589            (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
  3592 void
  3593 js::NotifyGCPostSwap(JSObject *a, JSObject *b, unsigned removedFlags)
  3595     /*
  3596      * Two objects in the same compartment have had their contents swapped.  If
  3597      * either of them was in our gray pointer list, we re-add it here.
  3598      */
  3599     if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED)
  3600         DelayCrossCompartmentGrayMarking(b);
  3601     if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED)
  3602         DelayCrossCompartmentGrayMarking(a);
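/*
 * Illustrative sketch, not part of the engine: the call pattern a caller
 * outside this file is expected to follow when swapping object contents, so
 * that the gray list ends up referring to whichever object now holds the
 * wrapped referent.
 */
#if 0
unsigned flags = js::NotifyGCPreSwap(a, b);  /* unlinks a and/or b from the gray lists */
/* ... swap the contents of a and b ... */
js::NotifyGCPostSwap(a, b, flags);           /* re-adds the swapped objects as needed */
#endif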
  3605 static void
  3606 EndMarkingZoneGroup(JSRuntime *rt)
  3608     /*
  3609      * Mark any incoming black pointers from previously swept compartments
  3610      * whose referents are not marked. This can occur when gray cells become
  3611      * black by the action of UnmarkGray.
  3612      */
  3613     MarkIncomingCrossCompartmentPointers(rt, BLACK);
  3615     MarkWeakReferencesInCurrentGroup(rt, gcstats::PHASE_SWEEP_MARK_WEAK);
  3617     /*
  3618      * Change state of current group to MarkGray to restrict marking to this
  3619      * group.  Note that there may be pointers to the atoms compartment, and
  3620      * these will be marked through, as they are not marked with
  3621      * MarkCrossCompartmentXXX.
  3622      */
  3623     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3624         JS_ASSERT(zone->isGCMarkingBlack());
  3625         zone->setGCState(Zone::MarkGray);
  3628     /* Mark incoming gray pointers from previously swept compartments. */
  3629     rt->gcMarker.setMarkColorGray();
  3630     MarkIncomingCrossCompartmentPointers(rt, GRAY);
  3631     rt->gcMarker.setMarkColorBlack();
  3633     /* Mark gray roots and mark transitively inside the current compartment group. */
  3634     MarkGrayReferencesInCurrentGroup(rt);
  3636     /* Restore marking state. */
  3637     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3638         JS_ASSERT(zone->isGCMarkingGray());
  3639         zone->setGCState(Zone::Mark);
  3642     JS_ASSERT(rt->gcMarker.isDrained());
  3645 static void
  3646 BeginSweepingZoneGroup(JSRuntime *rt)
  3648     /*
  3649      * Begin sweeping the group of zones in gcCurrentZoneGroup,
  3650      * performing actions that must be done before yielding to caller.
  3651      */
  3653     bool sweepingAtoms = false;
  3654     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3655         /* Set the GC state to sweeping. */
  3656         JS_ASSERT(zone->isGCMarking());
  3657         zone->setGCState(Zone::Sweep);
  3659         /* Purge the ArenaLists before sweeping. */
  3660         zone->allocator.arenas.purge();
  3662         if (rt->isAtomsZone(zone))
  3663             sweepingAtoms = true;
  3665         if (rt->sweepZoneCallback)
  3666             rt->sweepZoneCallback(zone);
  3669     ValidateIncrementalMarking(rt);
  3671     FreeOp fop(rt, rt->gcSweepOnBackgroundThread);
  3674         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_START);
  3675         if (rt->gcFinalizeCallback)
  3676             rt->gcFinalizeCallback(&fop, JSFINALIZE_GROUP_START, !rt->gcIsFull /* unused */);
  3679     if (sweepingAtoms) {
  3680         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_ATOMS);
  3681         rt->sweepAtoms();
  3684     /* Prune out dead views from ArrayBuffer's view lists. */
  3685     for (GCCompartmentGroupIter c(rt); !c.done(); c.next())
  3686         ArrayBufferObject::sweep(c);
  3688     /* Collect watch points associated with unreachable objects. */
  3689     WatchpointMap::sweepAll(rt);
  3691     /* Detach unreachable debuggers and global objects from each other. */
  3692     Debugger::sweepAll(&fop);
  3695         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_COMPARTMENTS);
  3697         for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3698             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_DISCARD_CODE);
  3699             zone->discardJitCode(&fop);
  3702         bool releaseTypes = ReleaseObservedTypes(rt);
  3703         for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) {
  3704             gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3705             c->sweep(&fop, releaseTypes && !c->zone()->isPreservingCode());
  3708         for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3709             gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3711             // If there is an OOM while sweeping types, the type information
  3712             // will be deoptimized so that it is still correct (i.e.
  3713             // overapproximates the possible types in the zone), but the
  3714             // constraints might not have been triggered on the deoptimization
  3715             // or even copied over completely. In this case, destroy all JIT
  3716             // code and new script addendums in the zone, the only things whose
  3717             // correctness depends on the type constraints.
  3718             bool oom = false;
  3719             zone->sweep(&fop, releaseTypes && !zone->isPreservingCode(), &oom);
  3721             if (oom) {
  3722                 zone->setPreservingCode(false);
  3723                 zone->discardJitCode(&fop);
  3724                 zone->types.clearAllNewScriptAddendumsOnOOM();
  3729     /*
  3730      * Queue all GC things in all zones for sweeping, either in the
  3731      * foreground or on the background thread.
  3733      * Note that order is important here for the background case.
  3735      * Objects are finalized immediately but this may change in the future.
  3736      */
  3737     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3738         gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3739         zone->allocator.arenas.queueObjectsForSweep(&fop);
  3741     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3742         gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3743         zone->allocator.arenas.queueStringsForSweep(&fop);
  3745     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3746         gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3747         zone->allocator.arenas.queueScriptsForSweep(&fop);
  3749 #ifdef JS_ION
  3750     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3751         gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3752         zone->allocator.arenas.queueJitCodeForSweep(&fop);
  3754 #endif
  3755     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3756         gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex);
  3757         zone->allocator.arenas.queueShapesForSweep(&fop);
  3758         zone->allocator.arenas.gcShapeArenasToSweep =
  3759             zone->allocator.arenas.arenaListsToSweep[FINALIZE_SHAPE];
  3762     rt->gcSweepPhase = 0;
  3763     rt->gcSweepZone = rt->gcCurrentZoneGroup;
  3764     rt->gcSweepKindIndex = 0;
  3767         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_END);
  3768         if (rt->gcFinalizeCallback)
  3769             rt->gcFinalizeCallback(&fop, JSFINALIZE_GROUP_END, !rt->gcIsFull /* unused */);
  3773 static void
  3774 EndSweepingZoneGroup(JSRuntime *rt)
  3776     /* Update the GC state for zones we have swept and unlink the list. */
  3777     for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) {
  3778         JS_ASSERT(zone->isGCSweeping());
  3779         zone->setGCState(Zone::Finished);
  3782     /* Reset the list of arenas marked as being allocated during sweep phase. */
  3783     while (ArenaHeader *arena = rt->gcArenasAllocatedDuringSweep) {
  3784         rt->gcArenasAllocatedDuringSweep = arena->getNextAllocDuringSweep();
  3785         arena->unsetAllocDuringSweep();
  3789 static void
  3790 BeginSweepPhase(JSRuntime *rt, bool lastGC)
  3792     /*
  3793      * Sweep phase.
  3795      * Finalize as we sweep, outside of rt->gcLock but with rt->isHeapBusy()
  3796      * true so that any attempt to allocate a GC-thing from a finalizer will
  3797      * fail, rather than nest badly and leave the unmarked newborn to be swept.
  3798      */
  3800     JS_ASSERT(!rt->gcAbortSweepAfterCurrentGroup);
  3802     ComputeNonIncrementalMarkingForValidation(rt);
  3804     gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP);
  3806 #ifdef JS_THREADSAFE
  3807     rt->gcSweepOnBackgroundThread = !lastGC && rt->useHelperThreads();
  3808 #endif
  3810 #ifdef DEBUG
  3811     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
  3812         JS_ASSERT(!c->gcIncomingGrayPointers);
  3813         for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
  3814             if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
  3815                 AssertNotOnGrayList(&e.front().value().get().toObject());
  3818 #endif
  3820     DropStringWrappers(rt);
  3821     FindZoneGroups(rt);
  3822     EndMarkingZoneGroup(rt);
  3823     BeginSweepingZoneGroup(rt);
  3826 bool
  3827 ArenaLists::foregroundFinalize(FreeOp *fop, AllocKind thingKind, SliceBudget &sliceBudget)
  3829     if (!arenaListsToSweep[thingKind])
  3830         return true;
  3832     ArenaList &dest = arenaLists[thingKind];
  3833     return FinalizeArenas(fop, &arenaListsToSweep[thingKind], dest, thingKind, sliceBudget);
  3836 static bool
  3837 DrainMarkStack(JSRuntime *rt, SliceBudget &sliceBudget, gcstats::Phase phase)
  3839     /* Run a marking slice and return whether the stack is now empty. */
  3840     gcstats::AutoPhase ap(rt->gcStats, phase);
  3841     return rt->gcMarker.drainMarkStack(sliceBudget);
  3844 static bool
  3845 SweepPhase(JSRuntime *rt, SliceBudget &sliceBudget)
  3847     gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP);
  3848     FreeOp fop(rt, rt->gcSweepOnBackgroundThread);
  3850     bool finished = DrainMarkStack(rt, sliceBudget, gcstats::PHASE_SWEEP_MARK);
  3851     if (!finished)
  3852         return false;
  3854     for (;;) {
  3855         /* Finalize foreground finalized things. */
  3856         for (; rt->gcSweepPhase < FinalizePhaseCount ; ++rt->gcSweepPhase) {
  3857             gcstats::AutoPhase ap(rt->gcStats, FinalizePhaseStatsPhase[rt->gcSweepPhase]);
  3859             for (; rt->gcSweepZone; rt->gcSweepZone = rt->gcSweepZone->nextNodeInGroup()) {
  3860                 Zone *zone = rt->gcSweepZone;
  3862                 while (rt->gcSweepKindIndex < FinalizePhaseLength[rt->gcSweepPhase]) {
  3863                     AllocKind kind = FinalizePhases[rt->gcSweepPhase][rt->gcSweepKindIndex];
  3865                     if (!zone->allocator.arenas.foregroundFinalize(&fop, kind, sliceBudget))
  3866                         return false;  /* Yield to the mutator. */
  3868                     ++rt->gcSweepKindIndex;
  3870                 rt->gcSweepKindIndex = 0;
  3872             rt->gcSweepZone = rt->gcCurrentZoneGroup;
  3875         /* Remove dead shapes from the shape tree, but don't finalize them yet. */
  3877             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_SHAPE);
  3879             for (; rt->gcSweepZone; rt->gcSweepZone = rt->gcSweepZone->nextNodeInGroup()) {
  3880                 Zone *zone = rt->gcSweepZone;
  3881                 while (ArenaHeader *arena = zone->allocator.arenas.gcShapeArenasToSweep) {
  3882                     for (CellIterUnderGC i(arena); !i.done(); i.next()) {
  3883                         Shape *shape = i.get<Shape>();
  3884                         if (!shape->isMarked())
  3885                             shape->sweep();
  3888                     zone->allocator.arenas.gcShapeArenasToSweep = arena->next;
  3889                     sliceBudget.step(Arena::thingsPerArena(Arena::thingSize(FINALIZE_SHAPE)));
  3890                     if (sliceBudget.isOverBudget())
  3891                         return false;  /* Yield to the mutator. */
  3896         EndSweepingZoneGroup(rt);
  3897         GetNextZoneGroup(rt);
  3898         if (!rt->gcCurrentZoneGroup)
  3899             return true;  /* We're finished. */
  3900         EndMarkingZoneGroup(rt);
  3901         BeginSweepingZoneGroup(rt);
  3905 static void
  3906 EndSweepPhase(JSRuntime *rt, JSGCInvocationKind gckind, bool lastGC)
  3908     gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP);
  3909     FreeOp fop(rt, rt->gcSweepOnBackgroundThread);
  3911     JS_ASSERT_IF(lastGC, !rt->gcSweepOnBackgroundThread);
  3913     JS_ASSERT(rt->gcMarker.isDrained());
  3914     rt->gcMarker.stop();
  3916     /*
  3917      * Recalculate whether the GC was full or not, as this may have changed due to
  3918      * newly created zones.  Can only change from full to not full.
  3919      */
  3920     if (rt->gcIsFull) {
  3921         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  3922             if (!zone->isCollecting()) {
  3923                 rt->gcIsFull = false;
  3924                 break;
  3929     /*
  3930      * If we found any black->gray edges during marking, we completely clear the
  3931      * mark bits of all uncollected zones, or, if a reset has occurred, of zones that
  3932      * will no longer be collected. This is safe, although it may
  3933      * prevent the cycle collector from collecting some dead objects.
  3934      */
  3935     if (rt->gcFoundBlackGrayEdges) {
  3936         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  3937             if (!zone->isCollecting())
  3938                 zone->allocator.arenas.unmarkAll();
  3943         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_DESTROY);
  3945         /*
  3946          * Sweep script filenames after sweeping functions in the generic loop
  3947          * above. In this way when a scripted function's finalizer destroys the
  3948          * script and calls rt->destroyScriptHook, the hook can still access the
  3949          * script's filename. See bug 323267.
  3950          */
  3951         if (rt->gcIsFull)
  3952             SweepScriptData(rt);
  3954         /* Clear out any small pools that we're hanging on to. */
  3955         if (JSC::ExecutableAllocator *execAlloc = rt->maybeExecAlloc())
  3956             execAlloc->purge();
  3958         /*
  3959          * This removes compartments from rt->compartment, so we do it last to make
  3960          * sure we don't miss sweeping any compartments.
  3961          */
  3962         if (!lastGC)
  3963             SweepZones(&fop, lastGC);
  3965         if (!rt->gcSweepOnBackgroundThread) {
  3966             /*
  3967              * Destroy arenas after we finished the sweeping so finalizers can
  3968              * safely use IsAboutToBeFinalized(). This is done on the
  3969              * GCHelperThread if possible. We acquire the lock only because
  3970              * Expire needs to unlock it for other callers.
  3971              */
  3972             AutoLockGC lock(rt);
  3973             ExpireChunksAndArenas(rt, gckind == GC_SHRINK);
  3978         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_END);
  3980         if (rt->gcFinalizeCallback)
  3981             rt->gcFinalizeCallback(&fop, JSFINALIZE_COLLECTION_END, !rt->gcIsFull);
  3983         /* If we finished a full GC, then the gray bits are correct. */
  3984         if (rt->gcIsFull)
  3985             rt->gcGrayBitsValid = true;
  3988     /* Set up list of zones for sweeping of background things. */
  3989     JS_ASSERT(!rt->gcSweepingZones);
  3990     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  3991         zone->gcNextGraphNode = rt->gcSweepingZones;
  3992         rt->gcSweepingZones = zone;
  3995     /* If not sweeping on background thread then we must do it here. */
  3996     if (!rt->gcSweepOnBackgroundThread) {
  3997         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_DESTROY);
  3999         SweepBackgroundThings(rt, false);
  4001         rt->freeLifoAlloc.freeAll();
  4003         /* Ensure the compartments get swept if it's the last GC. */
  4004         if (lastGC)
  4005             SweepZones(&fop, lastGC);
  4008     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  4009         zone->setGCLastBytes(zone->gcBytes, gckind);
  4010         if (zone->isCollecting()) {
  4011             JS_ASSERT(zone->isGCFinished());
  4012             zone->setGCState(Zone::NoGC);
  4015 #ifdef DEBUG
  4016         JS_ASSERT(!zone->isCollecting());
  4017         JS_ASSERT(!zone->wasGCStarted());
  4019         for (unsigned i = 0 ; i < FINALIZE_LIMIT ; ++i) {
  4020             JS_ASSERT_IF(!IsBackgroundFinalized(AllocKind(i)) ||
  4021                          !rt->gcSweepOnBackgroundThread,
  4022                          !zone->allocator.arenas.arenaListsToSweep[i]);
  4024 #endif
  4027 #ifdef DEBUG
  4028     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) {
  4029         JS_ASSERT(!c->gcIncomingGrayPointers);
  4030         JS_ASSERT(c->gcLiveArrayBuffers.empty());
  4032         for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) {
  4033             if (e.front().key().kind != CrossCompartmentKey::StringWrapper)
  4034                 AssertNotOnGrayList(&e.front().value().get().toObject());
  4037 #endif
  4039     FinishMarkingValidation(rt);
  4041     rt->gcLastGCTime = PRMJ_Now();
  4044 namespace {
  4046 /* ...while this class is to be used only for garbage collection. */
  4047 class AutoGCSession
  4049     JSRuntime *runtime;
  4050     AutoTraceSession session;
  4051     bool canceled;
  4053   public:
  4054     explicit AutoGCSession(JSRuntime *rt);
  4055     ~AutoGCSession();
  4057     void cancel() { canceled = true; }
  4058 };
  4060 } /* anonymous namespace */
  4062 /* Start a new heap session. */
  4063 AutoTraceSession::AutoTraceSession(JSRuntime *rt, js::HeapState heapState)
  4064   : lock(rt),
  4065     runtime(rt),
  4066     prevState(rt->heapState)
  4068     JS_ASSERT(!rt->noGCOrAllocationCheck);
  4069     JS_ASSERT(!rt->isHeapBusy());
  4070     JS_ASSERT(heapState != Idle);
  4071 #ifdef JSGC_GENERATIONAL
  4072     JS_ASSERT_IF(heapState == MajorCollecting, rt->gcNursery.isEmpty());
  4073 #endif
  4075     // Threads with an exclusive context can hit refillFreeList while holding
  4076     // the exclusive access lock. To avoid deadlocking when we try to acquire
  4077     // this lock during GC and the other thread is waiting, make sure we hold
  4078     // the exclusive access lock during GC sessions.
  4079     JS_ASSERT(rt->currentThreadHasExclusiveAccess());
  4081     if (rt->exclusiveThreadsPresent()) {
  4082         // Lock the worker thread state when changing the heap state in the
  4083         // presence of exclusive threads, to avoid racing with refillFreeList.
  4084 #ifdef JS_THREADSAFE
  4085         AutoLockWorkerThreadState lock;
  4086         rt->heapState = heapState;
  4087 #else
  4088         MOZ_CRASH();
  4089 #endif
  4090     } else {
  4091         rt->heapState = heapState;
  4095 AutoTraceSession::~AutoTraceSession()
  4097     JS_ASSERT(runtime->isHeapBusy());
  4099     if (runtime->exclusiveThreadsPresent()) {
  4100 #ifdef JS_THREADSAFE
  4101         AutoLockWorkerThreadState lock;
  4102         runtime->heapState = prevState;
  4104         // Notify any worker threads waiting for the trace session to end.
  4105         WorkerThreadState().notifyAll(GlobalWorkerThreadState::PRODUCER);
  4106 #else
  4107         MOZ_CRASH();
  4108 #endif
  4109     } else {
  4110         runtime->heapState = prevState;
  4114 AutoGCSession::AutoGCSession(JSRuntime *rt)
  4115   : runtime(rt),
  4116     session(rt, MajorCollecting),
  4117     canceled(false)
  4119     runtime->gcIsNeeded = false;
  4120     runtime->gcInterFrameGC = true;
  4122     runtime->gcNumber++;
  4124     // It's ok if threads other than the main thread have suppressGC set, as
  4125     // they are operating on zones which will not be collected from here.
  4126     JS_ASSERT(!runtime->mainThread.suppressGC);
  4129 AutoGCSession::~AutoGCSession()
  4131     if (canceled)
  4132         return;
  4134 #ifndef JS_MORE_DETERMINISTIC
  4135     runtime->gcNextFullGCTime = PRMJ_Now() + GC_IDLE_FULL_SPAN;
  4136 #endif
  4138     runtime->gcChunkAllocationSinceLastGC = false;
  4140 #ifdef JS_GC_ZEAL
  4141     /* Keeping these around after a GC is dangerous. */
  4142     runtime->gcSelectedForMarking.clearAndFree();
  4143 #endif
  4145     /* Clear gcMallocBytes for all compartments */
  4146     for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
  4147         zone->resetGCMallocBytes();
  4148         zone->unscheduleGC();
  4151     runtime->resetGCMallocBytes();
  4154 AutoCopyFreeListToArenas::AutoCopyFreeListToArenas(JSRuntime *rt, ZoneSelector selector)
  4155   : runtime(rt),
  4156     selector(selector)
  4158     for (ZonesIter zone(rt, selector); !zone.done(); zone.next())
  4159         zone->allocator.arenas.copyFreeListsToArenas();
  4162 AutoCopyFreeListToArenas::~AutoCopyFreeListToArenas()
  4164     for (ZonesIter zone(runtime, selector); !zone.done(); zone.next())
  4165         zone->allocator.arenas.clearFreeListsInArenas();
  4168 class AutoCopyFreeListToArenasForGC
  4170     JSRuntime *runtime;
  4172   public:
  4173     AutoCopyFreeListToArenasForGC(JSRuntime *rt) : runtime(rt) {
  4174         JS_ASSERT(rt->currentThreadHasExclusiveAccess());
  4175         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
  4176             zone->allocator.arenas.copyFreeListsToArenas();
  4178     ~AutoCopyFreeListToArenasForGC() {
  4179         for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next())
  4180             zone->allocator.arenas.clearFreeListsInArenas();
  4182 };
  4184 static void
  4185 IncrementalCollectSlice(JSRuntime *rt,
  4186                         int64_t budget,
  4187                         JS::gcreason::Reason gcReason,
  4188                         JSGCInvocationKind gcKind);
  4190 static void
  4191 ResetIncrementalGC(JSRuntime *rt, const char *reason)
  4193     switch (rt->gcIncrementalState) {
  4194       case NO_INCREMENTAL:
  4195         return;
  4197       case MARK: {
  4198         /* Cancel any ongoing marking. */
  4199         AutoCopyFreeListToArenasForGC copy(rt);
  4201         rt->gcMarker.reset();
  4202         rt->gcMarker.stop();
  4204         for (GCCompartmentsIter c(rt); !c.done(); c.next()) {
  4205             ArrayBufferObject::resetArrayBufferList(c);
  4206             ResetGrayList(c);
  4209         for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  4210             JS_ASSERT(zone->isGCMarking());
  4211             zone->setNeedsBarrier(false, Zone::UpdateIon);
  4212             zone->setGCState(Zone::NoGC);
  4214         rt->setNeedsBarrier(false);
  4215         AssertNeedsBarrierFlagsConsistent(rt);
  4217         rt->gcIncrementalState = NO_INCREMENTAL;
  4219         JS_ASSERT(!rt->gcStrictCompartmentChecking);
  4221         break;
  4224       case SWEEP:
  4225         rt->gcMarker.reset();
  4227         for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next())
  4228             zone->scheduledForDestruction = false;
  4230         /* Finish sweeping the current zone group, then abort. */
  4231         rt->gcAbortSweepAfterCurrentGroup = true;
  4232         IncrementalCollectSlice(rt, SliceBudget::Unlimited, JS::gcreason::RESET, GC_NORMAL);
  4235             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
  4236             rt->gcHelperThread.waitBackgroundSweepOrAllocEnd();
  4238         break;
  4240       default:
  4241         MOZ_ASSUME_UNREACHABLE("Invalid incremental GC state");
  4244     rt->gcStats.reset(reason);
  4246 #ifdef DEBUG
  4247     for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next())
  4248         JS_ASSERT(c->gcLiveArrayBuffers.empty());
  4250     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  4251         JS_ASSERT(!zone->needsBarrier());
  4252         for (unsigned i = 0; i < FINALIZE_LIMIT; ++i)
  4253             JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]);
  4255 #endif
  4258 namespace {
  4260 class AutoGCSlice {
  4261   public:
  4262     AutoGCSlice(JSRuntime *rt);
  4263     ~AutoGCSlice();
  4265   private:
  4266     JSRuntime *runtime;
  4267 };
  4269 } /* anonymous namespace */
  4271 AutoGCSlice::AutoGCSlice(JSRuntime *rt)
  4272   : runtime(rt)
  4274     /*
  4275      * During incremental GC, the compartment's active flag determines whether
  4276      * there are stack frames active for any of its scripts. Normally this flag
  4277      * is set at the beginning of the mark phase. During incremental GC, we also
  4278      * set it at the start of every phase.
  4279      */
  4280     for (ActivationIterator iter(rt); !iter.done(); ++iter)
  4281         iter->compartment()->zone()->active = true;
  4283     for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
  4284         /*
  4285          * Clear needsBarrier early so we don't do any write barriers during
  4286          * GC. We don't need to update the Ion barriers (which is expensive)
  4287          * because Ion code doesn't run during GC. If need be, we'll update the
  4288          * Ion barriers in ~AutoGCSlice.
  4289          */
  4290         if (zone->isGCMarking()) {
  4291             JS_ASSERT(zone->needsBarrier());
  4292             zone->setNeedsBarrier(false, Zone::DontUpdateIon);
  4293         } else {
  4294             JS_ASSERT(!zone->needsBarrier());
  4297     rt->setNeedsBarrier(false);
  4298     AssertNeedsBarrierFlagsConsistent(rt);
  4301 AutoGCSlice::~AutoGCSlice()
  4303     /* We can't use GCZonesIter if this is the end of the last slice. */
  4304     bool haveBarriers = false;
  4305     for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) {
  4306         if (zone->isGCMarking()) {
  4307             zone->setNeedsBarrier(true, Zone::UpdateIon);
  4308             zone->allocator.arenas.prepareForIncrementalGC(runtime);
  4309             haveBarriers = true;
  4310         } else {
  4311             zone->setNeedsBarrier(false, Zone::UpdateIon);
  4314     runtime->setNeedsBarrier(haveBarriers);
  4315     AssertNeedsBarrierFlagsConsistent(runtime);
  4318 static void
  4319 PushZealSelectedObjects(JSRuntime *rt)
  4321 #ifdef JS_GC_ZEAL
  4322     /* Push selected objects onto the mark stack and clear the list. */
  4323     for (JSObject **obj = rt->gcSelectedForMarking.begin();
  4324          obj != rt->gcSelectedForMarking.end(); obj++)
  4326         MarkObjectUnbarriered(&rt->gcMarker, obj, "selected obj");
  4328 #endif
  4331 static void
  4332 IncrementalCollectSlice(JSRuntime *rt,
  4333                         int64_t budget,
  4334                         JS::gcreason::Reason reason,
  4335                         JSGCInvocationKind gckind)
  4337     JS_ASSERT(rt->currentThreadHasExclusiveAccess());
  4339     AutoCopyFreeListToArenasForGC copy(rt);
  4340     AutoGCSlice slice(rt);
  4342     bool lastGC = (reason == JS::gcreason::DESTROY_RUNTIME);
  4344     gc::State initialState = rt->gcIncrementalState;
  4346     int zeal = 0;
  4347 #ifdef JS_GC_ZEAL
  4348     if (reason == JS::gcreason::DEBUG_GC && budget != SliceBudget::Unlimited) {
  4349         /*
  4350          * Do the incremental collection type specified by zeal mode if the
  4351          * collection was triggered by RunDebugGC() and incremental GC has not
  4352          * been cancelled by ResetIncrementalGC.
  4353          */
  4354         zeal = rt->gcZeal();
  4356 #endif
  4358     JS_ASSERT_IF(rt->gcIncrementalState != NO_INCREMENTAL, rt->gcIsIncremental);
  4359     rt->gcIsIncremental = budget != SliceBudget::Unlimited;
  4361     if (zeal == ZealIncrementalRootsThenFinish || zeal == ZealIncrementalMarkAllThenFinish) {
  4362         /*
  4363          * Yields between slices occur at predetermined points in these modes;
  4364          * the budget is not used.
  4365          */
  4366         budget = SliceBudget::Unlimited;
  4369     SliceBudget sliceBudget(budget);
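    /*
     * Note: 'budget' arrives as SliceBudget::Unlimited,
     * SliceBudget::TimeBudget(ms) or SliceBudget::WorkBudget(n); the
     * SliceBudget constructed above is what the mark and sweep slices below
     * consult (e.g. via isOverBudget()) to decide when to yield back to the
     * mutator.
     */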
  4371     if (rt->gcIncrementalState == NO_INCREMENTAL) {
  4372         rt->gcIncrementalState = MARK_ROOTS;
  4373         rt->gcLastMarkSlice = false;
  4376     if (rt->gcIncrementalState == MARK)
  4377         AutoGCRooter::traceAllWrappers(&rt->gcMarker);
  4379     switch (rt->gcIncrementalState) {
  4381       case MARK_ROOTS:
  4382         if (!BeginMarkPhase(rt)) {
  4383             rt->gcIncrementalState = NO_INCREMENTAL;
  4384             return;
  4387         if (!lastGC)
  4388             PushZealSelectedObjects(rt);
  4390         rt->gcIncrementalState = MARK;
  4392         if (rt->gcIsIncremental && zeal == ZealIncrementalRootsThenFinish)
  4393             break;
  4395         /* fall through */
  4397       case MARK: {
  4398         /* If we needed delayed marking for gray roots, then collect until done. */
  4399         if (!rt->gcMarker.hasBufferedGrayRoots()) {
  4400             sliceBudget.reset();
  4401             rt->gcIsIncremental = false;
  4404         bool finished = DrainMarkStack(rt, sliceBudget, gcstats::PHASE_MARK);
  4405         if (!finished)
  4406             break;
  4408         JS_ASSERT(rt->gcMarker.isDrained());
  4410         if (!rt->gcLastMarkSlice && rt->gcIsIncremental &&
  4411             ((initialState == MARK && zeal != ZealIncrementalRootsThenFinish) ||
  4412              zeal == ZealIncrementalMarkAllThenFinish))
  4414             /*
  4415              * Yield with the aim of starting the sweep in the next
  4416              * slice.  We will need to mark anything new on the stack
  4417              * when we resume, so we stay in MARK state.
  4418              */
  4419             rt->gcLastMarkSlice = true;
  4420             break;
  4423         rt->gcIncrementalState = SWEEP;
  4425         /*
  4426          * This runs to completion, but we don't continue if the budget is
  4427          * now exhausted.
  4428          */
  4429         BeginSweepPhase(rt, lastGC);
  4430         if (sliceBudget.isOverBudget())
  4431             break;
  4433         /*
  4434          * Always yield here when running in incremental multi-slice zeal
  4435          * mode, so RunDebugGC can reset the slice budget.
  4436          */
  4437         if (rt->gcIsIncremental && zeal == ZealIncrementalMultipleSlices)
  4438             break;
  4440         /* fall through */
  4443       case SWEEP: {
  4444         bool finished = SweepPhase(rt, sliceBudget);
  4445         if (!finished)
  4446             break;
  4448         EndSweepPhase(rt, gckind, lastGC);
  4450         if (rt->gcSweepOnBackgroundThread)
  4451             rt->gcHelperThread.startBackgroundSweep(gckind == GC_SHRINK);
  4453         rt->gcIncrementalState = NO_INCREMENTAL;
  4454         break;
  4457       default:
  4458         JS_ASSERT(false);
  4462 IncrementalSafety
  4463 gc::IsIncrementalGCSafe(JSRuntime *rt)
  4465     JS_ASSERT(!rt->mainThread.suppressGC);
  4467     if (rt->keepAtoms())
  4468         return IncrementalSafety::Unsafe("keepAtoms set");
  4470     if (!rt->gcIncrementalEnabled)
  4471         return IncrementalSafety::Unsafe("incremental permanently disabled");
  4473     return IncrementalSafety::Safe();
  4476 static void
  4477 BudgetIncrementalGC(JSRuntime *rt, int64_t *budget)
  4479     IncrementalSafety safe = IsIncrementalGCSafe(rt);
  4480     if (!safe) {
  4481         ResetIncrementalGC(rt, safe.reason());
  4482         *budget = SliceBudget::Unlimited;
  4483         rt->gcStats.nonincremental(safe.reason());
  4484         return;
  4487     if (rt->gcMode() != JSGC_MODE_INCREMENTAL) {
  4488         ResetIncrementalGC(rt, "GC mode change");
  4489         *budget = SliceBudget::Unlimited;
  4490         rt->gcStats.nonincremental("GC mode");
  4491         return;
  4494     if (rt->isTooMuchMalloc()) {
  4495         *budget = SliceBudget::Unlimited;
  4496         rt->gcStats.nonincremental("malloc bytes trigger");
  4499     bool reset = false;
  4500     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  4501         if (zone->gcBytes >= zone->gcTriggerBytes) {
  4502             *budget = SliceBudget::Unlimited;
  4503             rt->gcStats.nonincremental("allocation trigger");
  4506         if (rt->gcIncrementalState != NO_INCREMENTAL &&
  4507             zone->isGCScheduled() != zone->wasGCStarted())
  4509             reset = true;
  4512         if (zone->isTooMuchMalloc()) {
  4513             *budget = SliceBudget::Unlimited;
  4514             rt->gcStats.nonincremental("malloc bytes trigger");
  4518     if (reset)
  4519         ResetIncrementalGC(rt, "zone change");
  4522 /*
  4523  * Run one GC "cycle" (either a slice of incremental GC or an entire
  4524  * non-incremental GC). We disable inlining to ensure that the bottom of the
  4525  * stack with possible GC roots recorded in MarkRuntime excludes any pointers we
  4526  * use during the marking implementation.
  4528  * Returns true if we "reset" an existing incremental GC, which would force us
  4529  * to run another cycle.
  4530  */
  4531 static MOZ_NEVER_INLINE bool
  4532 GCCycle(JSRuntime *rt, bool incremental, int64_t budget,
  4533         JSGCInvocationKind gckind, JS::gcreason::Reason reason)
  4535     AutoGCSession gcsession(rt);
  4537     /*
  4538      * As we are about to purge caches and clear the mark bits, we must wait for
  4539      * any background finalization to finish. We must also wait for the
  4540      * background allocation to finish so we can avoid taking the GC lock
  4541      * when manipulating the chunks during the GC.
  4542      */
  4544         gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_WAIT_BACKGROUND_THREAD);
  4545         rt->gcHelperThread.waitBackgroundSweepOrAllocEnd();
  4548     State prevState = rt->gcIncrementalState;
  4550     if (!incremental) {
  4551         /* If non-incremental GC was requested, reset incremental GC. */
  4552         ResetIncrementalGC(rt, "requested");
  4553         rt->gcStats.nonincremental("requested");
  4554         budget = SliceBudget::Unlimited;
  4555     } else {
  4556         BudgetIncrementalGC(rt, &budget);
  4559     /* The GC was reset, so we need a do-over. */
  4560     if (prevState != NO_INCREMENTAL && rt->gcIncrementalState == NO_INCREMENTAL) {
  4561         gcsession.cancel();
  4562         return true;
  4565     IncrementalCollectSlice(rt, budget, reason, gckind);
  4566     return false;
  4569 #ifdef JS_GC_ZEAL
  4570 static bool
  4571 IsDeterministicGCReason(JS::gcreason::Reason reason)
  4573     if (reason > JS::gcreason::DEBUG_GC &&
  4574         reason != JS::gcreason::CC_FORCED && reason != JS::gcreason::SHUTDOWN_CC)
  4576         return false;
  4579     if (reason == JS::gcreason::MAYBEGC)
  4580         return false;
  4582     return true;
  4584 #endif
  4586 static bool
  4587 ShouldCleanUpEverything(JSRuntime *rt, JS::gcreason::Reason reason, JSGCInvocationKind gckind)
  4589     // During shutdown, we must clean everything up, for the sake of leak
  4590     // detection. When a runtime has no contexts, or we're doing a GC before a
  4591     // shutdown CC, those are strong indications that we're shutting down.
  4592     return reason == JS::gcreason::DESTROY_RUNTIME ||
  4593            reason == JS::gcreason::SHUTDOWN_CC ||
  4594            gckind == GC_SHRINK;
  4597 namespace {
  4599 #ifdef JSGC_GENERATIONAL
  4600 class AutoDisableStoreBuffer
  4602     JSRuntime *runtime;
  4603     bool prior;
  4605   public:
  4606     AutoDisableStoreBuffer(JSRuntime *rt) : runtime(rt) {
  4607         prior = rt->gcStoreBuffer.isEnabled();
  4608         rt->gcStoreBuffer.disable();
  4610     ~AutoDisableStoreBuffer() {
  4611         if (prior)
  4612             runtime->gcStoreBuffer.enable();
  4614 };
  4615 #else
  4616 struct AutoDisableStoreBuffer
  4618     AutoDisableStoreBuffer(JSRuntime *) {}
  4619 };
  4620 #endif
  4622 } /* anonymous namespace */
  4624 static void
  4625 Collect(JSRuntime *rt, bool incremental, int64_t budget,
  4626         JSGCInvocationKind gckind, JS::gcreason::Reason reason)
  4628     /* GC shouldn't be running in parallel execution mode */
  4629     JS_ASSERT(!InParallelSection());
  4631     JS_AbortIfWrongThread(rt);
  4633     /* If we attempt to invoke the GC while we are running in the GC, assert. */
  4634     JS_ASSERT(!rt->isHeapBusy());
  4636     if (rt->mainThread.suppressGC)
  4637         return;
  4639     TraceLogger *logger = TraceLoggerForMainThread(rt);
  4640     AutoTraceLog logGC(logger, TraceLogger::GC);
  4642 #ifdef JS_GC_ZEAL
  4643     if (rt->gcDeterministicOnly && !IsDeterministicGCReason(reason))
  4644         return;
  4645 #endif
  4647     JS_ASSERT_IF(!incremental || budget != SliceBudget::Unlimited, JSGC_INCREMENTAL);
  4649     AutoStopVerifyingBarriers av(rt, reason == JS::gcreason::SHUTDOWN_CC ||
  4650                                      reason == JS::gcreason::DESTROY_RUNTIME);
  4652     RecordNativeStackTopForGC(rt);
  4654     int zoneCount = 0;
  4655     int compartmentCount = 0;
  4656     int collectedCount = 0;
  4657     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  4658         if (rt->gcMode() == JSGC_MODE_GLOBAL)
  4659             zone->scheduleGC();
  4661         /* This is a heuristic to avoid resets. */
  4662         if (rt->gcIncrementalState != NO_INCREMENTAL && zone->needsBarrier())
  4663             zone->scheduleGC();
  4665         zoneCount++;
  4666         if (zone->isGCScheduled())
  4667             collectedCount++;
  4670     for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next())
  4671         compartmentCount++;
  4673     rt->gcShouldCleanUpEverything = ShouldCleanUpEverything(rt, reason, gckind);
  4675     bool repeat = false;
  4676     do {
  4677         MinorGC(rt, reason);
  4679         /*
  4680          * Marking can trigger many incidental post barriers, some of them for
  4681          * objects which are not going to be live after the GC.
  4682          */
  4683         AutoDisableStoreBuffer adsb(rt);
  4685         gcstats::AutoGCSlice agc(rt->gcStats, collectedCount, zoneCount, compartmentCount, reason);
  4687         /*
  4688          * Let the API user decide to defer a GC if it wants to (unless this
  4689          * is the last context). Invoke the callback regardless.
  4690          */
  4691         if (rt->gcIncrementalState == NO_INCREMENTAL) {
  4692             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_GC_BEGIN);
  4693             if (JSGCCallback callback = rt->gcCallback)
  4694                 callback(rt, JSGC_BEGIN, rt->gcCallbackData);
  4697         rt->gcPoke = false;
  4698         bool wasReset = GCCycle(rt, incremental, budget, gckind, reason);
  4700         if (rt->gcIncrementalState == NO_INCREMENTAL) {
  4701             gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_GC_END);
  4702             if (JSGCCallback callback = rt->gcCallback)
  4703                 callback(rt, JSGC_END, rt->gcCallbackData);
  4706         /* Need to re-schedule all zones for GC. */
  4707         if (rt->gcPoke && rt->gcShouldCleanUpEverything)
  4708             JS::PrepareForFullGC(rt);
  4710         /*
  4711          * If we reset an existing GC, we need to start a new one. Also, we
  4712          * repeat GCs that happen during shutdown (the gcShouldCleanUpEverything
  4713          * case) until we can be sure that no additional garbage is created
  4714          * (which typically happens if roots are dropped during finalizers).
  4715          */
  4716         repeat = (rt->gcPoke && rt->gcShouldCleanUpEverything) || wasReset;
  4717     } while (repeat);
  4719     if (rt->gcIncrementalState == NO_INCREMENTAL) {
  4720 #ifdef JS_THREADSAFE
  4721         EnqueuePendingParseTasksAfterGC(rt);
  4722 #endif
  4726 void
  4727 js::GC(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason)
  4729     Collect(rt, false, SliceBudget::Unlimited, gckind, reason);
  4732 void
  4733 js::GCSlice(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis)
  4735     int64_t sliceBudget;
  4736     if (millis)
  4737         sliceBudget = SliceBudget::TimeBudget(millis);
  4738     else if (rt->gcHighFrequencyGC && rt->gcDynamicMarkSlice)
  4739         sliceBudget = rt->gcSliceBudget * IGC_MARK_SLICE_MULTIPLIER;
  4740     else
  4741         sliceBudget = rt->gcSliceBudget;
  4743     Collect(rt, true, sliceBudget, gckind, reason);
  4746 void
  4747 js::GCFinalSlice(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason)
  4749     Collect(rt, true, SliceBudget::Unlimited, gckind, reason);
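/*
 * Illustrative sketch, not part of the original file: one way an embedder
 * could drive an incremental collection to completion using the entry points
 * above, assuming the runtime has already been configured for incremental
 * collection. The helper name and the 10 ms slice budget are arbitrary
 * examples.
 */
static void
ExampleRunIncrementalGC(JSRuntime *rt)
{
    JS::PrepareForFullGC(rt);
    js::GCSlice(rt, GC_NORMAL, JS::gcreason::API, 10);
    while (JS::IsIncrementalGCInProgress(rt)) {
        JS::PrepareForIncrementalGC(rt);
        js::GCSlice(rt, GC_NORMAL, JS::gcreason::API, 10);
    }
}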
  4752 static bool
  4753 ZonesSelected(JSRuntime *rt)
  4755     for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
  4756         if (zone->isGCScheduled())
  4757             return true;
  4759     return false;
  4762 void
  4763 js::GCDebugSlice(JSRuntime *rt, bool limit, int64_t objCount)
  4765     int64_t budget = limit ? SliceBudget::WorkBudget(objCount) : SliceBudget::Unlimited;
  4766     if (!ZonesSelected(rt)) {
  4767         if (JS::IsIncrementalGCInProgress(rt))
  4768             JS::PrepareForIncrementalGC(rt);
  4769         else
  4770             JS::PrepareForFullGC(rt);
  4772     Collect(rt, true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);
  4775 /* Schedule a full GC unless a zone will already be collected. */
  4776 void
  4777 js::PrepareForDebugGC(JSRuntime *rt)
  4779     if (!ZonesSelected(rt))
  4780         JS::PrepareForFullGC(rt);
  4783 JS_FRIEND_API(void)
  4784 JS::ShrinkGCBuffers(JSRuntime *rt)
  4786     AutoLockGC lock(rt);
  4787     JS_ASSERT(!rt->isHeapBusy());
  4789     if (!rt->useHelperThreads())
  4790         ExpireChunksAndArenas(rt, true);
  4791     else
  4792         rt->gcHelperThread.startBackgroundShrink();
  4795 void
  4796 js::MinorGC(JSRuntime *rt, JS::gcreason::Reason reason)
  4798 #ifdef JSGC_GENERATIONAL
  4799     TraceLogger *logger = TraceLoggerForMainThread(rt);
  4800     AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
  4801     rt->gcNursery.collect(rt, reason, nullptr);
  4802     JS_ASSERT_IF(!rt->mainThread.suppressGC, rt->gcNursery.isEmpty());
  4803 #endif
  4806 void
  4807 js::MinorGC(JSContext *cx, JS::gcreason::Reason reason)
  4809     // Alternative to the runtime-taking form above; this one also allows
  4810     // marking type objects as needing pretenuring.
  4811 #ifdef JSGC_GENERATIONAL
  4812     TraceLogger *logger = TraceLoggerForMainThread(cx->runtime());
  4813     AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC);
  4815     Nursery::TypeObjectList pretenureTypes;
  4816     JSRuntime *rt = cx->runtime();
  4817     rt->gcNursery.collect(cx->runtime(), reason, &pretenureTypes);
  4818     for (size_t i = 0; i < pretenureTypes.length(); i++) {
  4819         if (pretenureTypes[i]->canPreTenure())
  4820             pretenureTypes[i]->setShouldPreTenure(cx);
  4822     JS_ASSERT_IF(!rt->mainThread.suppressGC, rt->gcNursery.isEmpty());
  4823 #endif
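/*
 * Illustrative sketch, not part of the original file: callers that only need
 * the nursery evicted and have no use for pretenuring feedback can use the
 * runtime-taking overload above, as ReleaseAllJITCode() does further down.
 * The helper name is hypothetical.
 */
static void
ExampleEvictNursery(JSRuntime *rt)
{
#ifdef JSGC_GENERATIONAL
    js::MinorGC(rt, JS::gcreason::EVICT_NURSERY);
#endif
}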
  4826 void
  4827 js::gc::GCIfNeeded(JSContext *cx)
  4829     JSRuntime *rt = cx->runtime();
  4831 #ifdef JSGC_GENERATIONAL
  4832     /*
  4833      * If the store buffer is about to overflow, perform a minor GC first so
  4834      * that the correct reason is seen in the logs.
  4835      */
  4836     if (rt->gcStoreBuffer.isAboutToOverflow())
  4837         MinorGC(cx, JS::gcreason::FULL_STORE_BUFFER);
  4838 #endif
  4840     if (rt->gcIsNeeded)
  4841         GCSlice(rt, GC_NORMAL, rt->gcTriggerReason);
  4844 void
  4845 js::gc::FinishBackgroundFinalize(JSRuntime *rt)
  4847     rt->gcHelperThread.waitBackgroundSweepEnd();
  4850 AutoFinishGC::AutoFinishGC(JSRuntime *rt)
  4852     if (JS::IsIncrementalGCInProgress(rt)) {
  4853         JS::PrepareForIncrementalGC(rt);
  4854         JS::FinishIncrementalGC(rt, JS::gcreason::API);
  4857     gc::FinishBackgroundFinalize(rt);
  4860 AutoPrepareForTracing::AutoPrepareForTracing(JSRuntime *rt, ZoneSelector selector)
  4861   : finish(rt),
  4862     session(rt),
  4863     copy(rt, selector)
  4865     RecordNativeStackTopForGC(rt);
  4868 JSCompartment *
  4869 js::NewCompartment(JSContext *cx, Zone *zone, JSPrincipals *principals,
  4870                    const JS::CompartmentOptions &options)
  4872     JSRuntime *rt = cx->runtime();
  4873     JS_AbortIfWrongThread(rt);
  4875     ScopedJSDeletePtr<Zone> zoneHolder;
  4876     if (!zone) {
  4877         zone = cx->new_<Zone>(rt);
  4878         if (!zone)
  4879             return nullptr;
  4881         zoneHolder.reset(zone);
  4883         zone->setGCLastBytes(8192, GC_NORMAL);
  4885         const JSPrincipals *trusted = rt->trustedPrincipals();
  4886         zone->isSystem = principals && principals == trusted;
  4889     ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options));
  4890     if (!compartment || !compartment->init(cx))
  4891         return nullptr;
  4893     // Set up the principals.
  4894     JS_SetCompartmentPrincipals(compartment, principals);
  4896     AutoLockGC lock(rt);
  4898     if (!zone->compartments.append(compartment.get())) {
  4899         js_ReportOutOfMemory(cx);
  4900         return nullptr;
  4903     if (zoneHolder && !rt->zones.append(zone)) {
  4904         js_ReportOutOfMemory(cx);
  4905         return nullptr;
  4908     zoneHolder.forget();
  4909     return compartment.forget();
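/*
 * Illustrative sketch, not part of the original file: passing nullptr for
 * |zone| asks NewCompartment() to create the compartment in a fresh zone, as
 * handled above. The helper name is hypothetical and default-constructed
 * CompartmentOptions are used.
 */
static JSCompartment *
ExampleNewCompartmentInFreshZone(JSContext *cx, JSPrincipals *principals)
{
    JS::CompartmentOptions options;
    return js::NewCompartment(cx, nullptr, principals, options);
}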
  4912 void
  4913 gc::MergeCompartments(JSCompartment *source, JSCompartment *target)
  4915     // The source compartment must be specifically flagged as mergeable. This
  4916     // also implies that the compartment is not visible to the debugger.
  4917     JS_ASSERT(source->options_.mergeable());
  4919     JSRuntime *rt = source->runtimeFromMainThread();
  4921     AutoPrepareForTracing prepare(rt, SkipAtoms);
  4923     // Cleanup tables and other state in the source compartment that will be
  4924     // meaningless after merging into the target compartment.
  4926     source->clearTables();
  4928     // Fixup compartment pointers in source to refer to target.
  4930     for (CellIter iter(source->zone(), FINALIZE_SCRIPT); !iter.done(); iter.next()) {
  4931         JSScript *script = iter.get<JSScript>();
  4932         JS_ASSERT(script->compartment() == source);
  4933         script->compartment_ = target;
  4936     for (CellIter iter(source->zone(), FINALIZE_BASE_SHAPE); !iter.done(); iter.next()) {
  4937         BaseShape *base = iter.get<BaseShape>();
  4938         JS_ASSERT(base->compartment() == source);
  4939         base->compartment_ = target;
  4942     // Fixup zone pointers in source's zone to refer to target's zone.
  4944     for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
  4945         for (ArenaIter aiter(source->zone(), AllocKind(thingKind)); !aiter.done(); aiter.next()) {
  4946             ArenaHeader *aheader = aiter.get();
  4947             aheader->zone = target->zone();
  4951     // The source should be the only compartment in its zone.
  4952     for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next())
  4953         JS_ASSERT(c.get() == source);
  4955     // Merge the allocator in source's zone into target's zone.
  4956     target->zone()->allocator.arenas.adoptArenas(rt, &source->zone()->allocator.arenas);
  4957     target->zone()->gcBytes += source->zone()->gcBytes;
  4958     source->zone()->gcBytes = 0;
  4960     // Merge other info in source's zone into target's zone.
  4961     target->zone()->types.typeLifoAlloc.transferFrom(&source->zone()->types.typeLifoAlloc);
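/*
 * Illustrative sketch, not part of the original file: MergeCompartments() is
 * only legal for a compartment whose options were flagged as mergeable when
 * it was created (for example, the throwaway compartment used by an
 * off-thread parse). The helper name is hypothetical.
 */
static void
ExampleMergeParseCompartment(JSCompartment *parseCompartment, JSCompartment *target)
{
    // MergeCompartments() itself asserts options_.mergeable(), so only pass a
    // compartment that was created with a mergeable CompartmentOptions.
    gc::MergeCompartments(parseCompartment, target);
}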
  4964 void
  4965 gc::RunDebugGC(JSContext *cx)
  4967 #ifdef JS_GC_ZEAL
  4968     JSRuntime *rt = cx->runtime();
  4969     int type = rt->gcZeal();
  4971     if (rt->mainThread.suppressGC)
  4972         return;
  4974     if (type == js::gc::ZealGenerationalGCValue)
  4975         return MinorGC(rt, JS::gcreason::DEBUG_GC);
  4977     PrepareForDebugGC(cx->runtime());
  4979     if (type == ZealIncrementalRootsThenFinish ||
  4980         type == ZealIncrementalMarkAllThenFinish ||
  4981         type == ZealIncrementalMultipleSlices)
  4983         js::gc::State initialState = rt->gcIncrementalState;
  4984         int64_t budget;
  4985         if (type == ZealIncrementalMultipleSlices) {
  4986             /*
  4987              * Start with a small slice limit and double it every slice. This
  4988              * ensures that we get multiple slices and that the collection runs to
  4989              * completion.
  4990              */
  4991             if (initialState == NO_INCREMENTAL)
  4992                 rt->gcIncrementalLimit = rt->gcZealFrequency / 2;
  4993             else
  4994                 rt->gcIncrementalLimit *= 2;
  4995             budget = SliceBudget::WorkBudget(rt->gcIncrementalLimit);
  4996         } else {
  4997             // This triggers incremental GC but is actually ignored by IncrementalMarkSlice.
  4998             budget = SliceBudget::WorkBudget(1);
  5001         Collect(rt, true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC);
  5003         /*
  5004          * For multi-slice zeal, reset the slice size when we get to the sweep
  5005          * phase.
  5006          */
  5007         if (type == ZealIncrementalMultipleSlices &&
  5008             initialState == MARK && rt->gcIncrementalState == SWEEP)
  5010             rt->gcIncrementalLimit = rt->gcZealFrequency / 2;
  5012     } else {
  5013         Collect(rt, false, SliceBudget::Unlimited, GC_NORMAL, JS::gcreason::DEBUG_GC);
  5016 #endif
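/*
 * Illustrative sketch, not part of the original file: with the multi-slice
 * zeal mode above and, say, gcZealFrequency == 100, the work budgets form the
 * sequence 50, 100, 200, 400, ... units per slice, and the limit drops back
 * to 50 once the collector moves from MARK to SWEEP. The helper below merely
 * restates that doubling rule; its name is hypothetical.
 */
static int64_t
ExampleNextZealWorkBudget(int64_t &incrementalLimit, int zealFrequency, bool firstSlice)
{
    if (firstSlice)
        incrementalLimit = zealFrequency / 2;   // start with a small limit ...
    else
        incrementalLimit *= 2;                  // ... and double it every slice
    return SliceBudget::WorkBudget(incrementalLimit);
}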
  5019 void
  5020 gc::SetDeterministicGC(JSContext *cx, bool enabled)
  5022 #ifdef JS_GC_ZEAL
  5023     JSRuntime *rt = cx->runtime();
  5024     rt->gcDeterministicOnly = enabled;
  5025 #endif
  5028 void
  5029 gc::SetValidateGC(JSContext *cx, bool enabled)
  5031     JSRuntime *rt = cx->runtime();
  5032     rt->gcValidate = enabled;
  5035 void
  5036 gc::SetFullCompartmentChecks(JSContext *cx, bool enabled)
  5038     JSRuntime *rt = cx->runtime();
  5039     rt->gcFullCompartmentChecks = enabled;
  5042 #ifdef DEBUG
  5044 /* Should only be called manually under gdb */
  5045 void PreventGCDuringInteractiveDebug()
  5047     TlsPerThreadData.get()->suppressGC++;
  5050 #endif
  5052 void
  5053 js::ReleaseAllJITCode(FreeOp *fop)
  5055 #ifdef JS_ION
  5057 # ifdef JSGC_GENERATIONAL
  5058     /*
  5059      * Scripts can entrain nursery things, inserting references to the script
  5060      * into the store buffer. Clear the store buffer before discarding scripts.
  5061      */
  5062     MinorGC(fop->runtime(), JS::gcreason::EVICT_NURSERY);
  5063 # endif
  5065     for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) {
  5066         if (!zone->jitZone())
  5067             continue;
  5069 # ifdef DEBUG
  5070         /* Assert no baseline scripts are marked as active. */
  5071         for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
  5072             JSScript *script = i.get<JSScript>();
  5073             JS_ASSERT_IF(script->hasBaselineScript(), !script->baselineScript()->active());
  5075 # endif
  5077         /* Mark baseline scripts on the stack as active. */
  5078         jit::MarkActiveBaselineScripts(zone);
  5080         jit::InvalidateAll(fop, zone);
  5082         for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
  5083             JSScript *script = i.get<JSScript>();
  5084             jit::FinishInvalidation<SequentialExecution>(fop, script);
  5085             jit::FinishInvalidation<ParallelExecution>(fop, script);
  5087             /*
  5088              * Discard baseline script if it's not marked as active. Note that
  5089              * this also resets the active flag.
  5090              */
  5091             jit::FinishDiscardBaselineScript(fop, script);
  5094         zone->jitZone()->optimizedStubSpace()->free();
  5096 #endif
  5099 /*
  5100  * There are three possible PCCount profiling states:
  5102  * 1. None: Neither scripts nor the runtime have count information.
  5103  * 2. Profile: Active scripts have count information, the runtime does not.
  5104  * 3. Query: Scripts do not have count information, the runtime does.
  5106  * When starting to profile scripts, counting begins immediately, with all JIT
  5107  * code discarded and recompiled with counts as necessary. Active interpreter
  5108  * frames will not begin profiling until they begin executing another script
  5109  * (via a call or return).
  5111  * The API functions below manage transitions between these states according
  5112  * to the following table.
  5114  *                                  Old State
  5115  *                          -------------------------
  5116  * Function                 None      Profile   Query
  5117  * --------
  5118  * StartPCCountProfiling    Profile   Profile   Profile
  5119  * StopPCCountProfiling     None      Query     Query
  5120  * PurgePCCounts            None      None      None
  5121  */
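/*
 * Illustrative sketch, not part of the original file: the table above as a
 * typical profiling session, using the friend API functions defined below.
 * The helper name is hypothetical.
 */
static void
ExamplePCCountSession(JSContext *cx)
{
    js::StartPCCountProfiling(cx);  // any state -> Profile
    /* ... run the scripts whose opcodes should be counted ... */
    js::StopPCCountProfiling(cx);   // Profile -> Query
    /* ... inspect the counts now held in rt->scriptAndCountsVector ... */
    js::PurgePCCounts(cx);          // Query -> None
}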
  5123 static void
  5124 ReleaseScriptCounts(FreeOp *fop)
  5126     JSRuntime *rt = fop->runtime();
  5127     JS_ASSERT(rt->scriptAndCountsVector);
  5129     ScriptAndCountsVector &vec = *rt->scriptAndCountsVector;
  5131     for (size_t i = 0; i < vec.length(); i++)
  5132         vec[i].scriptCounts.destroy(fop);
  5134     fop->delete_(rt->scriptAndCountsVector);
  5135     rt->scriptAndCountsVector = nullptr;
  5138 JS_FRIEND_API(void)
  5139 js::StartPCCountProfiling(JSContext *cx)
  5141     JSRuntime *rt = cx->runtime();
  5143     if (rt->profilingScripts)
  5144         return;
  5146     if (rt->scriptAndCountsVector)
  5147         ReleaseScriptCounts(rt->defaultFreeOp());
  5149     ReleaseAllJITCode(rt->defaultFreeOp());
  5151     rt->profilingScripts = true;
  5154 JS_FRIEND_API(void)
  5155 js::StopPCCountProfiling(JSContext *cx)
  5157     JSRuntime *rt = cx->runtime();
  5159     if (!rt->profilingScripts)
  5160         return;
  5161     JS_ASSERT(!rt->scriptAndCountsVector);
  5163     ReleaseAllJITCode(rt->defaultFreeOp());
  5165     ScriptAndCountsVector *vec = cx->new_<ScriptAndCountsVector>(SystemAllocPolicy());
  5166     if (!vec)
  5167         return;
  5169     for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
  5170         for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
  5171             JSScript *script = i.get<JSScript>();
  5172             if (script->hasScriptCounts() && script->types) {
  5173                 ScriptAndCounts sac;
  5174                 sac.script = script;
  5175                 sac.scriptCounts.set(script->releaseScriptCounts());
  5176                 if (!vec->append(sac))
  5177                     sac.scriptCounts.destroy(rt->defaultFreeOp());
  5182     rt->profilingScripts = false;
  5183     rt->scriptAndCountsVector = vec;
  5186 JS_FRIEND_API(void)
  5187 js::PurgePCCounts(JSContext *cx)
  5189     JSRuntime *rt = cx->runtime();
  5191     if (!rt->scriptAndCountsVector)
  5192         return;
  5193     JS_ASSERT(!rt->profilingScripts);
  5195     ReleaseScriptCounts(rt->defaultFreeOp());
  5198 void
  5199 js::PurgeJITCaches(Zone *zone)
  5201 #ifdef JS_ION
  5202     for (CellIterUnderGC i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) {
  5203         JSScript *script = i.get<JSScript>();
  5205         /* Discard Ion caches. */
  5206         jit::PurgeCaches(script);
  5208 #endif
  5211 void
  5212 ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind)
  5214     volatile uintptr_t *bfs = &backgroundFinalizeState[thingKind];
  5215     switch (*bfs) {
  5216       case BFS_DONE:
  5217         break;
  5218       case BFS_JUST_FINISHED:
  5219         // No allocations have occurred between the end of the last sweep and now.
  5220         // Transferring arenas over is a kind of allocation.
  5221         *bfs = BFS_DONE;
  5222         break;
  5223       default:
  5224         JS_ASSERT(!"Background finalization in progress, but it should not be.");
  5225         break;
  5229 void
  5230 ArenaLists::adoptArenas(JSRuntime *rt, ArenaLists *fromArenaLists)
  5232     // The other parallel threads have all completed now, and GC
  5233     // should be inactive, but still take the lock as a kind of read
  5234     // fence.
  5235     AutoLockGC lock(rt);
  5237     fromArenaLists->purge();
  5239     for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) {
  5240 #ifdef JS_THREADSAFE
  5241         // When we enter a parallel section, we join the background
  5242         // thread, and we do not run GC while in the parallel section,
  5243         // so no finalizer should be active!
  5244         normalizeBackgroundFinalizeState(AllocKind(thingKind));
  5245         fromArenaLists->normalizeBackgroundFinalizeState(AllocKind(thingKind));
  5246 #endif
  5247         ArenaList *fromList = &fromArenaLists->arenaLists[thingKind];
  5248         ArenaList *toList = &arenaLists[thingKind];
  5249         while (fromList->head != nullptr) {
  5250             // Remove entry from |fromList|
  5251             ArenaHeader *fromHeader = fromList->head;
  5252             fromList->head = fromHeader->next;
  5253             fromHeader->next = nullptr;
  5255             // During parallel execution, we sometimes keep empty arenas
  5256             // on the lists rather than sending them back to the chunk.
  5257             // Therefore, if fromHeader is empty, send it back to the
  5258             // chunk now. Otherwise, attach to |toList|.
  5259             if (fromHeader->isEmpty())
  5260                 fromHeader->chunk()->releaseArena(fromHeader);
  5261             else
  5262                 toList->insert(fromHeader);
  5264         fromList->cursor = &fromList->head;
  5268 bool
  5269 ArenaLists::containsArena(JSRuntime *rt, ArenaHeader *needle)
  5271     AutoLockGC lock(rt);
  5272     size_t allocKind = needle->getAllocKind();
  5273     for (ArenaHeader *aheader = arenaLists[allocKind].head;
  5274          aheader != nullptr;
  5275          aheader = aheader->next)
  5277         if (aheader == needle)
  5278             return true;
  5280     return false;
  5284 AutoMaybeTouchDeadZones::AutoMaybeTouchDeadZones(JSContext *cx)
  5285   : runtime(cx->runtime()),
  5286     markCount(runtime->gcObjectsMarkedInDeadZones),
  5287     inIncremental(JS::IsIncrementalGCInProgress(runtime)),
  5288     manipulatingDeadZones(runtime->gcManipulatingDeadZones)
  5290     runtime->gcManipulatingDeadZones = true;
  5293 AutoMaybeTouchDeadZones::AutoMaybeTouchDeadZones(JSObject *obj)
  5294   : runtime(obj->compartment()->runtimeFromMainThread()),
  5295     markCount(runtime->gcObjectsMarkedInDeadZones),
  5296     inIncremental(JS::IsIncrementalGCInProgress(runtime)),
  5297     manipulatingDeadZones(runtime->gcManipulatingDeadZones)
  5299     runtime->gcManipulatingDeadZones = true;
  5302 AutoMaybeTouchDeadZones::~AutoMaybeTouchDeadZones()
  5304     runtime->gcManipulatingDeadZones = manipulatingDeadZones;
  5306     if (inIncremental && runtime->gcObjectsMarkedInDeadZones != markCount) {
  5307         JS::PrepareForFullGC(runtime);
  5308         js::GC(runtime, GC_NORMAL, JS::gcreason::TRANSPLANT);
  5312 AutoSuppressGC::AutoSuppressGC(ExclusiveContext *cx)
  5313   : suppressGC_(cx->perThreadData->suppressGC)
  5315     suppressGC_++;
  5318 AutoSuppressGC::AutoSuppressGC(JSCompartment *comp)
  5319   : suppressGC_(comp->runtimeFromMainThread()->mainThread.suppressGC)
  5321     suppressGC_++;
  5324 AutoSuppressGC::AutoSuppressGC(JSRuntime *rt)
  5325   : suppressGC_(rt->mainThread.suppressGC)
  5327     suppressGC_++;
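/*
 * Illustrative sketch, not part of the original file: AutoSuppressGC is a
 * scoped guard; collection requests made while one is live are ignored (see
 * the suppressGC checks in Collect() and RunDebugGC() above). The helper name
 * is hypothetical.
 */
static void
ExampleWithGCSuppressed(JSRuntime *rt)
{
    AutoSuppressGC suppress(rt);
    /* ... code that must not trigger a collection ... */
}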
  5330 bool
  5331 js::UninlinedIsInsideNursery(JSRuntime *rt, const void *thing)
  5333     return IsInsideNursery(rt, thing);
  5336 #ifdef DEBUG
  5337 AutoDisableProxyCheck::AutoDisableProxyCheck(JSRuntime *rt
  5338                                              MOZ_GUARD_OBJECT_NOTIFIER_PARAM_IN_IMPL)
  5339   : count(rt->gcDisableStrictProxyCheckingCount)
  5341     MOZ_GUARD_OBJECT_NOTIFIER_INIT;
  5342     count++;
  5345 JS_FRIEND_API(void)
  5346 JS::AssertGCThingMustBeTenured(JSObject *obj)
  5348     JS_ASSERT((!IsNurseryAllocable(obj->tenuredGetAllocKind()) || obj->getClass()->finalize) &&
  5349               obj->isTenured());
  5352 JS_FRIEND_API(size_t)
  5353 JS::GetGCNumber()
  5355     JSRuntime *rt = js::TlsPerThreadData.get()->runtimeFromMainThread();
  5356     if (!rt)
  5357         return 0;
  5358     return rt->gcNumber;
  5361 JS::AutoAssertNoGC::AutoAssertNoGC()
  5362   : runtime(nullptr), gcNumber(0)
  5364     js::PerThreadData *data = js::TlsPerThreadData.get();
  5365     if (data) {
  5366         /*
  5367          * GCs from off-thread will always assert, so off-thread is implicitly
  5368          * AutoAssertNoGC. We still need to allow AutoAssertNoGC to be used in
  5369          * code that works from both threads, however. We also use this to
  5370          * annotate the off-thread run loops.
  5371          */
  5372         runtime = data->runtimeIfOnOwnerThread();
  5373         if (runtime)
  5374             gcNumber = runtime->gcNumber;
  5378 JS::AutoAssertNoGC::AutoAssertNoGC(JSRuntime *rt)
  5379   : runtime(rt), gcNumber(rt->gcNumber)
  5383 JS::AutoAssertNoGC::~AutoAssertNoGC()
  5385     if (runtime)
  5386         MOZ_ASSERT(gcNumber == runtime->gcNumber, "GC ran inside an AutoAssertNoGC scope.");
  5388 #endif
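#ifdef DEBUG
/*
 * Illustrative sketch, not part of the original file: AutoAssertNoGC does not
 * prevent collections; it only records rt->gcNumber on entry and asserts in
 * its destructor that no collection ran while the guard was alive. The helper
 * name is hypothetical.
 */
static void
ExampleAssertNoGCInScope(JSRuntime *rt)
{
    JS::AutoAssertNoGC nogc(rt);
    /* ... code that is expected not to trigger a collection ... */
}
#endif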
