/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sts=4 et sw=4 tw=99:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * The atoms zone is only collected in a full GC since objects in any zone may
 * have pointers to atoms, and these are not recorded in the cross compartment
 * pointer map. Also, the atoms zone is not collected if any thread has an
 * AutoKeepAtoms instance on the stack, or there are any exclusive threads using
 * the runtime.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
 *    JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *  - all native objects that have their own trace hook must indicate that they
 *    implement read and write barriers with the JSCLASS_IMPLEMENTS_BARRIERS
 *    flag
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - MARK_ROOTS - marks the stack and other roots
 *  - MARK       - incrementally marks reachable things
 *  - SWEEP      - sweeps zones in groups and continues marking unswept zones
 *
 * The MARK_ROOTS activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   MARK_ROOTS: Roots pushed onto the mark stack.
 *            MARK:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   MARK:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: MARK:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   MARK:       Mark stack is completely drained.
 *            SWEEP:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: SWEEP:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: SWEEP:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zone groups.
 *
 *          ... JS code runs ...
 *
 * Slice m:   SWEEP:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could observe
 * the state of the cache and create a new reference to an object that was just
 * about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the SWEEP phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A. Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue until we have swept the current zone group. Following a
 * reset, a new non-incremental collection is started.
 */
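
/*
 * Illustrative only, hence the #if 0: a minimal sketch of how an embedder
 * might drive an incremental collection, and of the shape of a
 * snapshot-at-the-beginning pre-write barrier as described above. The names
 * RunOneGCSlice, IsIncrementalGCActive and MarkOldContents are hypothetical
 * stand-ins; the real barrier machinery lives in gc/Barrier.h and the real
 * entry points are js::GC() and js::GCSlice().
 */
#if 0
/* Enable incremental mode once, then run bounded slices between scripts. */
static void
RunOneGCSlice(JSRuntime *rt)
{
    JS_SetGCParameter(rt, JSGC_MODE, JSGC_MODE_INCREMENTAL);
    /*
     * One slice with a 10 ms budget; the collector stays in the MARK or
     * SWEEP state until a later slice finishes the collection.
     */
    js::GCSlice(rt, GC_NORMAL, JS::gcreason::API, 10);
}

/*
 * Pre-write barrier: before a non-root location is overwritten during an
 * incremental GC, mark the value it previously held, so everything that was
 * reachable at the start of the collection still gets marked.
 */
template <typename T>
struct BarrieredPtr
{
    T *ptr;

    void set(T *newPtr) {
        if (IsIncrementalGCActive() && ptr)
            MarkOldContents(ptr);  /* hypothetical marking hook */
        ptr = newPtr;
    }
};
#endif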

#include "jsgcinlines.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"

#include <string.h>     /* for memset used when DEBUG */
#ifndef XP_WIN
# include <unistd.h>
#endif

#include "jsapi.h"
#include "jsatom.h"
#include "jscntxt.h"
#include "jscompartment.h"
#include "jsobj.h"
#include "jsscript.h"
#include "jstypes.h"
#include "jsutil.h"
#include "jswatchpoint.h"
#include "jsweakmap.h"
#ifdef XP_WIN
# include "jswin.h"
#endif
#include "prmjtime.h"

#include "gc/FindSCCs.h"
#include "gc/GCInternals.h"
#include "gc/Marking.h"
#include "gc/Memory.h"
#ifdef JS_ION
# include "jit/BaselineJIT.h"
#endif
#include "jit/IonCode.h"
#include "js/SliceBudget.h"
#include "vm/Debugger.h"
#include "vm/ForkJoin.h"
#include "vm/ProxyObject.h"
#include "vm/Shape.h"
#include "vm/String.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"

#include "jsobjinlines.h"
#include "jsscriptinlines.h"

#include "vm/Stack-inl.h"
#include "vm/String-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::ArrayEnd;
using mozilla::DebugOnly;
using mozilla::Maybe;
using mozilla::Swap;

/* Perform a Full GC every 20 seconds if MaybeGC is called */
static const uint64_t GC_IDLE_FULL_SPAN = 20 * 1000 * 1000;

/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static const int IGC_MARK_SLICE_MULTIPLIER = 2;

#if defined(ANDROID) || defined(MOZ_B2G)
static const int MAX_EMPTY_CHUNK_COUNT = 2;
#else
static const int MAX_EMPTY_CHUNK_COUNT = 30;
#endif

/* This array should be const, but that doesn't link right under GCC. */
const AllocKind gc::slotsToThingKind[] = {
    /* 0 */  FINALIZE_OBJECT0,  FINALIZE_OBJECT2,  FINALIZE_OBJECT2,  FINALIZE_OBJECT4,
    /* 4 */  FINALIZE_OBJECT4,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,  FINALIZE_OBJECT8,
    /* 8 */  FINALIZE_OBJECT8,  FINALIZE_OBJECT12, FINALIZE_OBJECT12, FINALIZE_OBJECT12,
    /* 12 */ FINALIZE_OBJECT12, FINALIZE_OBJECT16, FINALIZE_OBJECT16, FINALIZE_OBJECT16,
    /* 16 */ FINALIZE_OBJECT16
};

static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

const uint32_t Arena::ThingSizes[] = {
    sizeof(JSObject),           /* FINALIZE_OBJECT0             */
    sizeof(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
    sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
    sizeof(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
    sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
    sizeof(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
    sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
    sizeof(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
    sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
    sizeof(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
    sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
    sizeof(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
    sizeof(JSScript),           /* FINALIZE_SCRIPT              */
    sizeof(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
    sizeof(Shape),              /* FINALIZE_SHAPE               */
    sizeof(BaseShape),          /* FINALIZE_BASE_SHAPE          */
    sizeof(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
    sizeof(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
    sizeof(JSString),           /* FINALIZE_STRING              */
    sizeof(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
    sizeof(jit::JitCode),       /* FINALIZE_JITCODE             */
};

#define OFFSET(type) uint32_t(sizeof(ArenaHeader) + (ArenaSize - sizeof(ArenaHeader)) % sizeof(type))

const uint32_t Arena::FirstThingOffsets[] = {
    OFFSET(JSObject),           /* FINALIZE_OBJECT0             */
    OFFSET(JSObject),           /* FINALIZE_OBJECT0_BACKGROUND  */
    OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2             */
    OFFSET(JSObject_Slots2),    /* FINALIZE_OBJECT2_BACKGROUND  */
    OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4             */
    OFFSET(JSObject_Slots4),    /* FINALIZE_OBJECT4_BACKGROUND  */
    OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8             */
    OFFSET(JSObject_Slots8),    /* FINALIZE_OBJECT8_BACKGROUND  */
    OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12            */
    OFFSET(JSObject_Slots12),   /* FINALIZE_OBJECT12_BACKGROUND */
    OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16            */
    OFFSET(JSObject_Slots16),   /* FINALIZE_OBJECT16_BACKGROUND */
    OFFSET(JSScript),           /* FINALIZE_SCRIPT              */
    OFFSET(LazyScript),         /* FINALIZE_LAZY_SCRIPT         */
    OFFSET(Shape),              /* FINALIZE_SHAPE               */
    OFFSET(BaseShape),          /* FINALIZE_BASE_SHAPE          */
    OFFSET(types::TypeObject),  /* FINALIZE_TYPE_OBJECT         */
    OFFSET(JSFatInlineString),  /* FINALIZE_FAT_INLINE_STRING   */
    OFFSET(JSString),           /* FINALIZE_STRING              */
    OFFSET(JSExternalString),   /* FINALIZE_EXTERNAL_STRING     */
    OFFSET(jit::JitCode),       /* FINALIZE_JITCODE             */
};

#undef OFFSET
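
/*
 * Worked example of the OFFSET computation above, with illustrative numbers
 * (assume ArenaSize == 4096, sizeof(ArenaHeader) == 16, and a thing type of
 * size 56): ArenaSize - sizeof(ArenaHeader) == 4080 and 4080 % 56 == 48, so
 * the first thing starts at offset 16 + 48 == 64. That leaves exactly
 * (4096 - 64) / 56 == 72 whole things per arena, with all of the unusable
 * padding packed against the header instead of spread through the arena.
 */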

/*
 * Finalization order for incrementally swept things.
 */

static const AllocKind FinalizePhaseStrings[] = {
    FINALIZE_EXTERNAL_STRING
};

static const AllocKind FinalizePhaseScripts[] = {
    FINALIZE_SCRIPT,
    FINALIZE_LAZY_SCRIPT
};

static const AllocKind FinalizePhaseJitCode[] = {
    FINALIZE_JITCODE
};

static const AllocKind * const FinalizePhases[] = {
    FinalizePhaseStrings,
    FinalizePhaseScripts,
    FinalizePhaseJitCode
};
static const int FinalizePhaseCount = sizeof(FinalizePhases) / sizeof(AllocKind*);

static const int FinalizePhaseLength[] = {
    sizeof(FinalizePhaseStrings) / sizeof(AllocKind),
    sizeof(FinalizePhaseScripts) / sizeof(AllocKind),
    sizeof(FinalizePhaseJitCode) / sizeof(AllocKind)
};

static const gcstats::Phase FinalizePhaseStatsPhase[] = {
    gcstats::PHASE_SWEEP_STRING,
    gcstats::PHASE_SWEEP_SCRIPT,
    gcstats::PHASE_SWEEP_JITCODE
};

/*
 * Finalization order for things swept in the background.
 */

static const AllocKind BackgroundPhaseObjects[] = {
    FINALIZE_OBJECT0_BACKGROUND,
    FINALIZE_OBJECT2_BACKGROUND,
    FINALIZE_OBJECT4_BACKGROUND,
    FINALIZE_OBJECT8_BACKGROUND,
    FINALIZE_OBJECT12_BACKGROUND,
    FINALIZE_OBJECT16_BACKGROUND
};

static const AllocKind BackgroundPhaseStrings[] = {
    FINALIZE_FAT_INLINE_STRING,
    FINALIZE_STRING
};

static const AllocKind BackgroundPhaseShapes[] = {
    FINALIZE_SHAPE,
    FINALIZE_BASE_SHAPE,
    FINALIZE_TYPE_OBJECT
};

static const AllocKind * const BackgroundPhases[] = {
    BackgroundPhaseObjects,
    BackgroundPhaseStrings,
    BackgroundPhaseShapes
};
static const int BackgroundPhaseCount = sizeof(BackgroundPhases) / sizeof(AllocKind*);

static const int BackgroundPhaseLength[] = {
    sizeof(BackgroundPhaseObjects) / sizeof(AllocKind),
    sizeof(BackgroundPhaseStrings) / sizeof(AllocKind),
    sizeof(BackgroundPhaseShapes) / sizeof(AllocKind)
};

#ifdef DEBUG
void
ArenaHeader::checkSynchronizedWithFreeList() const
{
    /*
     * Do not allow access to the free list while its real head is still stored
     * in FreeLists and is not synchronized with this one.
     */
    JS_ASSERT(allocated());

    /*
     * We can be called from the background finalization thread, in which case
     * the free list in the zone can mutate at any moment. We cannot do any
     * checks in this case.
     */
    if (IsBackgroundFinalized(getAllocKind()) && zone->runtimeFromAnyThread()->gcHelperThread.onBackgroundThread())
        return;

    FreeSpan firstSpan = FreeSpan::decodeOffsets(arenaAddress(), firstFreeSpanOffsets);
    if (firstSpan.isEmpty())
        return;
    const FreeSpan *list = zone->allocator.arenas.getFreeList(getAllocKind());
    if (list->isEmpty() || firstSpan.arenaAddress() != list->arenaAddress())
        return;

    /*
     * Here this arena has free things, FreeList::lists[thingKind] is not
     * empty and also points to this arena. Thus they must be the same.
     */
    JS_ASSERT(firstSpan.isSameNonEmptySpan(list));
}
#endif

/* static */ void
Arena::staticAsserts()
{
    static_assert(JS_ARRAY_LENGTH(ThingSizes) == FINALIZE_LIMIT, "We have defined all thing sizes.");
    static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == FINALIZE_LIMIT, "We have defined all offsets.");
}

void
Arena::setAsFullyUnused(AllocKind thingKind)
{
    FreeSpan entireList;
    entireList.first = thingsStart(thingKind);
    uintptr_t arenaAddr = aheader.arenaAddress();
    entireList.last = arenaAddr | ArenaMask;
    aheader.setFirstFreeSpan(&entireList);
}

template<typename T>
inline bool
Arena::finalize(FreeOp *fop, AllocKind thingKind, size_t thingSize)
{
    /* Enforce requirements on size of T. */
    JS_ASSERT(thingSize % CellSize == 0);
    JS_ASSERT(thingSize <= 255);

    JS_ASSERT(aheader.allocated());
    JS_ASSERT(thingKind == aheader.getAllocKind());
    JS_ASSERT(thingSize == aheader.getThingSize());
    JS_ASSERT(!aheader.hasDelayedMarking);
    JS_ASSERT(!aheader.markOverflow);
    JS_ASSERT(!aheader.allocatedDuringIncremental);

    uintptr_t thing = thingsStart(thingKind);
    uintptr_t lastByte = thingsEnd() - 1;

    FreeSpan nextFree(aheader.getFirstFreeSpan());
    nextFree.checkSpan();

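    /*
     * Walk the arena one thing at a time in address order, coalescing runs
     * of free and newly finalized things into spans on the new free list.
     * A marked thing terminates the current span; existing free spans are
     * skipped wholesale via nextFree rather than visited thing by thing.
     */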
    FreeSpan newListHead;
    FreeSpan *newListTail = &newListHead;
    uintptr_t newFreeSpanStart = 0;
    bool allClear = true;
    DebugOnly<size_t> nmarked = 0;
    for (;; thing += thingSize) {
        JS_ASSERT(thing <= lastByte + 1);
        if (thing == nextFree.first) {
            JS_ASSERT(nextFree.last <= lastByte);
            if (nextFree.last == lastByte)
                break;
            JS_ASSERT(Arena::isAligned(nextFree.last, thingSize));
            if (!newFreeSpanStart)
                newFreeSpanStart = thing;
            thing = nextFree.last;
            nextFree = *nextFree.nextSpan();
            nextFree.checkSpan();
        } else {
            T *t = reinterpret_cast<T *>(thing);
            if (t->isMarked()) {
                allClear = false;
                nmarked++;
                if (newFreeSpanStart) {
                    JS_ASSERT(thing >= thingsStart(thingKind) + thingSize);
                    newListTail->first = newFreeSpanStart;
                    newListTail->last = thing - thingSize;
                    newListTail = newListTail->nextSpanUnchecked(thingSize);
                    newFreeSpanStart = 0;
                }
            } else {
                if (!newFreeSpanStart)
                    newFreeSpanStart = thing;
                t->finalize(fop);
                JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize);
            }
        }
    }

    if (allClear) {
        JS_ASSERT(newListTail == &newListHead);
        JS_ASSERT(!newFreeSpanStart ||
                  newFreeSpanStart == thingsStart(thingKind));
        JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data));
        return true;
    }

    newListTail->first = newFreeSpanStart ? newFreeSpanStart : nextFree.first;
    JS_ASSERT(Arena::isAligned(newListTail->first, thingSize));
    newListTail->last = lastByte;

#ifdef DEBUG
    size_t nfree = 0;
    for (const FreeSpan *span = &newListHead; span != newListTail; span = span->nextSpan()) {
        span->checkSpan();
        JS_ASSERT(Arena::isAligned(span->first, thingSize));
        JS_ASSERT(Arena::isAligned(span->last, thingSize));
        nfree += (span->last - span->first) / thingSize + 1;
        JS_ASSERT(nfree + nmarked <= thingsPerArena(thingSize));
    }
    nfree += (newListTail->last + 1 - newListTail->first) / thingSize;
    JS_ASSERT(nfree + nmarked == thingsPerArena(thingSize));
#endif
    aheader.setFirstFreeSpan(&newListHead);

    return false;
}

/*
 * Insert an arena into the list in the appropriate position and update the
 * cursor to ensure that any arena before the cursor is full.
 */
void ArenaList::insert(ArenaHeader *a)
{
    JS_ASSERT(a);
    JS_ASSERT_IF(!head, cursor == &head);
    a->next = *cursor;
    *cursor = a;
    if (!a->hasFreeThings())
        cursor = &a->next;
}

template<typename T>
static inline bool
FinalizeTypedArenas(FreeOp *fop,
                    ArenaHeader **src,
                    ArenaList &dest,
                    AllocKind thingKind,
                    SliceBudget &budget)
{
    /*
     * Finalize arenas from src list, releasing empty arenas and inserting the
     * others into dest in an appropriate position.
     */

    /*
     * During parallel sections, we sometimes finalize the parallel arenas,
     * but in that case, we want to hold on to the memory in our arena
     * lists, not offer it up for reuse.
     */
    bool releaseArenas = !InParallelSection();

    size_t thingSize = Arena::thingSize(thingKind);

    while (ArenaHeader *aheader = *src) {
        *src = aheader->next;
        bool allClear = aheader->getArena()->finalize<T>(fop, thingKind, thingSize);
        if (!allClear)
            dest.insert(aheader);
        else if (releaseArenas)
            aheader->chunk()->releaseArena(aheader);
        else
            aheader->chunk()->recycleArena(aheader, dest, thingKind);

        budget.step(Arena::thingsPerArena(thingSize));
        if (budget.isOverBudget())
            return false;
    }

    return true;
}

/*
 * Finalize the arenas on the src list, moving the non-empty ones into dest.
 * dest's cursor is maintained so that any arena before it is full.
 */
static bool
FinalizeArenas(FreeOp *fop,
               ArenaHeader **src,
               ArenaList &dest,
               AllocKind thingKind,
               SliceBudget &budget)
{
    switch (thingKind) {
      case FINALIZE_OBJECT0:
      case FINALIZE_OBJECT0_BACKGROUND:
      case FINALIZE_OBJECT2:
      case FINALIZE_OBJECT2_BACKGROUND:
      case FINALIZE_OBJECT4:
      case FINALIZE_OBJECT4_BACKGROUND:
      case FINALIZE_OBJECT8:
      case FINALIZE_OBJECT8_BACKGROUND:
      case FINALIZE_OBJECT12:
      case FINALIZE_OBJECT12_BACKGROUND:
      case FINALIZE_OBJECT16:
      case FINALIZE_OBJECT16_BACKGROUND:
        return FinalizeTypedArenas<JSObject>(fop, src, dest, thingKind, budget);
      case FINALIZE_SCRIPT:
        return FinalizeTypedArenas<JSScript>(fop, src, dest, thingKind, budget);
      case FINALIZE_LAZY_SCRIPT:
        return FinalizeTypedArenas<LazyScript>(fop, src, dest, thingKind, budget);
      case FINALIZE_SHAPE:
        return FinalizeTypedArenas<Shape>(fop, src, dest, thingKind, budget);
      case FINALIZE_BASE_SHAPE:
        return FinalizeTypedArenas<BaseShape>(fop, src, dest, thingKind, budget);
      case FINALIZE_TYPE_OBJECT:
        return FinalizeTypedArenas<types::TypeObject>(fop, src, dest, thingKind, budget);
      case FINALIZE_STRING:
        return FinalizeTypedArenas<JSString>(fop, src, dest, thingKind, budget);
      case FINALIZE_FAT_INLINE_STRING:
        return FinalizeTypedArenas<JSFatInlineString>(fop, src, dest, thingKind, budget);
      case FINALIZE_EXTERNAL_STRING:
        return FinalizeTypedArenas<JSExternalString>(fop, src, dest, thingKind, budget);
      case FINALIZE_JITCODE:
#ifdef JS_ION
      {
        // JitCode finalization may release references on an executable
        // allocator that is accessed when requesting interrupts.
        JSRuntime::AutoLockForInterrupt lock(fop->runtime());
        return FinalizeTypedArenas<jit::JitCode>(fop, src, dest, thingKind, budget);
      }
#endif
      default:
        MOZ_ASSUME_UNREACHABLE("Invalid alloc kind");
    }
}

static inline Chunk *
AllocChunk(JSRuntime *rt)
{
    return static_cast<Chunk *>(MapAlignedPages(rt, ChunkSize, ChunkSize));
}

static inline void
FreeChunk(JSRuntime *rt, Chunk *p)
{
    UnmapPages(rt, static_cast<void *>(p), ChunkSize);
}

inline bool
ChunkPool::wantBackgroundAllocation(JSRuntime *rt) const
{
    /*
     * To minimize memory waste we do not want to run the background chunk
     * allocation if we have empty chunks or when the runtime needs just a few
     * of them.
     */
    return rt->gcHelperThread.canBackgroundAllocate() &&
           emptyCount == 0 &&
           rt->gcChunkSet.count() >= 4;
}

/* Must be called with the GC lock taken. */
inline Chunk *
ChunkPool::get(JSRuntime *rt)
{
    JS_ASSERT(this == &rt->gcChunkPool);

    Chunk *chunk = emptyChunkListHead;
    if (chunk) {
        JS_ASSERT(emptyCount);
        emptyChunkListHead = chunk->info.next;
        --emptyCount;
    } else {
        JS_ASSERT(!emptyCount);
        chunk = Chunk::allocate(rt);
        if (!chunk)
            return nullptr;
        JS_ASSERT(chunk->info.numArenasFreeCommitted == 0);
    }
    JS_ASSERT(chunk->unused());
    JS_ASSERT(!rt->gcChunkSet.has(chunk));

    if (wantBackgroundAllocation(rt))
        rt->gcHelperThread.startBackgroundAllocationIfIdle();

    return chunk;
}

/* Must be called either during the GC or with the GC lock taken. */
inline void
ChunkPool::put(Chunk *chunk)
{
    chunk->info.age = 0;
    chunk->info.next = emptyChunkListHead;
    emptyChunkListHead = chunk;
    emptyCount++;
}

/* Must be called either during the GC or with the GC lock taken. */
Chunk *
ChunkPool::expire(JSRuntime *rt, bool releaseAll)
{
    JS_ASSERT(this == &rt->gcChunkPool);

    /*
     * Return old empty chunks to the system while preserving the order of
     * other chunks in the list. This way, if the GC runs several times
     * without emptying the list, the older chunks will stay at the tail
     * and are more likely to reach the max age.
     */
    Chunk *freeList = nullptr;
    int freeChunkCount = 0;
    for (Chunk **chunkp = &emptyChunkListHead; *chunkp; ) {
        JS_ASSERT(emptyCount);
        Chunk *chunk = *chunkp;
        JS_ASSERT(chunk->unused());
        JS_ASSERT(!rt->gcChunkSet.has(chunk));
        JS_ASSERT(chunk->info.age <= MAX_EMPTY_CHUNK_AGE);
        if (releaseAll || chunk->info.age == MAX_EMPTY_CHUNK_AGE ||
            freeChunkCount++ > MAX_EMPTY_CHUNK_COUNT)
        {
            *chunkp = chunk->info.next;
            --emptyCount;
            chunk->prepareToBeFreed(rt);
            chunk->info.next = freeList;
            freeList = chunk;
        } else {
            /* Keep the chunk but increase its age. */
            ++chunk->info.age;
            chunkp = &chunk->info.next;
        }
    }
    JS_ASSERT_IF(releaseAll, !emptyCount);
    return freeList;
}

static void
FreeChunkList(JSRuntime *rt, Chunk *chunkListHead)
{
    while (Chunk *chunk = chunkListHead) {
        JS_ASSERT(!chunk->info.numArenasFreeCommitted);
        chunkListHead = chunk->info.next;
        FreeChunk(rt, chunk);
    }
}

void
ChunkPool::expireAndFree(JSRuntime *rt, bool releaseAll)
{
    FreeChunkList(rt, expire(rt, releaseAll));
}

/* static */ Chunk *
Chunk::allocate(JSRuntime *rt)
{
    Chunk *chunk = AllocChunk(rt);
    if (!chunk)
        return nullptr;
    chunk->init(rt);
    rt->gcStats.count(gcstats::STAT_NEW_CHUNK);
    return chunk;
}

/* Must be called with the GC lock taken. */
/* static */ inline void
Chunk::release(JSRuntime *rt, Chunk *chunk)
{
    JS_ASSERT(chunk);
    chunk->prepareToBeFreed(rt);
    FreeChunk(rt, chunk);
}

inline void
Chunk::prepareToBeFreed(JSRuntime *rt)
{
    JS_ASSERT(rt->gcNumArenasFreeCommitted >= info.numArenasFreeCommitted);
    rt->gcNumArenasFreeCommitted -= info.numArenasFreeCommitted;
    rt->gcStats.count(gcstats::STAT_DESTROY_CHUNK);

#ifdef DEBUG
    /*
     * Let FreeChunkList detect a missing prepareToBeFreed call before it
     * frees the chunk.
     */
    info.numArenasFreeCommitted = 0;
#endif
}

void
Chunk::init(JSRuntime *rt)
{
    JS_POISON(this, JS_FRESH_TENURED_PATTERN, ChunkSize);

    /*
     * We clear the bitmap to guard against xpc_IsGrayGCThing being called on
     * uninitialized data, which would happen before the first GC cycle.
     */
    bitmap.clear();

    /*
     * Decommit the arenas. We do this after poisoning so that if the OS does
     * not have to recycle the pages, we still get the benefit of poisoning.
     */
    decommitAllArenas(rt);

    /* Initialize the chunk info. */
    info.age = 0;
    info.trailer.location = ChunkLocationTenuredHeap;
    info.trailer.runtime = rt;

    /* The rest of the info fields are initialized in PickChunk. */
}

static inline Chunk **
GetAvailableChunkList(Zone *zone)
{
    JSRuntime *rt = zone->runtimeFromAnyThread();
    return zone->isSystem
           ? &rt->gcSystemAvailableChunkListHead
           : &rt->gcUserAvailableChunkListHead;
}

inline void
Chunk::addToAvailableList(Zone *zone)
{
    insertToAvailableList(GetAvailableChunkList(zone));
}

inline void
Chunk::insertToAvailableList(Chunk **insertPoint)
{
    JS_ASSERT(hasAvailableArenas());
    JS_ASSERT(!info.prevp);
    JS_ASSERT(!info.next);
    info.prevp = insertPoint;
    Chunk *insertBefore = *insertPoint;
    if (insertBefore) {
        JS_ASSERT(insertBefore->info.prevp == insertPoint);
        insertBefore->info.prevp = &info.next;
    }
    info.next = insertBefore;
    *insertPoint = this;
}

inline void
Chunk::removeFromAvailableList()
{
    JS_ASSERT(info.prevp);
    *info.prevp = info.next;
    if (info.next) {
        JS_ASSERT(info.next->info.prevp == &info.next);
        info.next->info.prevp = info.prevp;
    }
    info.prevp = nullptr;
    info.next = nullptr;
}

/*
 * Search for and return the next decommitted Arena. Our goal is to keep
 * lastDecommittedArenaOffset "close" to a free arena. We do this by setting
 * it to the most recently freed arena when we free, and forcing it to
 * the last alloc + 1 when we allocate.
 */
uint32_t
Chunk::findDecommittedArenaOffset()
{
    /* Note: lastDecommittedArenaOffset can be past the end of the list. */
    for (unsigned i = info.lastDecommittedArenaOffset; i < ArenasPerChunk; i++)
        if (decommittedArenas.get(i))
            return i;
    for (unsigned i = 0; i < info.lastDecommittedArenaOffset; i++)
        if (decommittedArenas.get(i))
            return i;
    MOZ_ASSUME_UNREACHABLE("No decommitted arenas found.");
}

ArenaHeader *
Chunk::fetchNextDecommittedArena()
{
    JS_ASSERT(info.numArenasFreeCommitted == 0);
    JS_ASSERT(info.numArenasFree > 0);

    unsigned offset = findDecommittedArenaOffset();
    info.lastDecommittedArenaOffset = offset + 1;
    --info.numArenasFree;
    decommittedArenas.unset(offset);

    Arena *arena = &arenas[offset];
    MarkPagesInUse(info.trailer.runtime, arena, ArenaSize);
    arena->aheader.setAsNotAllocated();

    return &arena->aheader;
}

inline ArenaHeader *
Chunk::fetchNextFreeArena(JSRuntime *rt)
{
    JS_ASSERT(info.numArenasFreeCommitted > 0);
    JS_ASSERT(info.numArenasFreeCommitted <= info.numArenasFree);
    JS_ASSERT(info.numArenasFreeCommitted <= rt->gcNumArenasFreeCommitted);

    ArenaHeader *aheader = info.freeArenasHead;
    info.freeArenasHead = aheader->next;
    --info.numArenasFreeCommitted;
    --info.numArenasFree;
    --rt->gcNumArenasFreeCommitted;

    return aheader;
}

ArenaHeader *
Chunk::allocateArena(Zone *zone, AllocKind thingKind)
{
    JS_ASSERT(hasAvailableArenas());

    JSRuntime *rt = zone->runtimeFromAnyThread();
    if (!rt->isHeapMinorCollecting() && rt->gcBytes >= rt->gcMaxBytes)
        return nullptr;

    ArenaHeader *aheader = MOZ_LIKELY(info.numArenasFreeCommitted > 0)
                           ? fetchNextFreeArena(rt)
                           : fetchNextDecommittedArena();
    aheader->init(zone, thingKind);
    if (MOZ_UNLIKELY(!hasAvailableArenas()))
        removeFromAvailableList();

    rt->gcBytes += ArenaSize;
    zone->gcBytes += ArenaSize;

    if (zone->gcBytes >= zone->gcTriggerBytes) {
        AutoUnlockGC unlock(rt);
        TriggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER);
    }

    return aheader;
}

inline void
Chunk::addArenaToFreeList(JSRuntime *rt, ArenaHeader *aheader)
{
    JS_ASSERT(!aheader->allocated());
    aheader->next = info.freeArenasHead;
    info.freeArenasHead = aheader;
    ++info.numArenasFreeCommitted;
    ++info.numArenasFree;
    ++rt->gcNumArenasFreeCommitted;
}

void
Chunk::recycleArena(ArenaHeader *aheader, ArenaList &dest, AllocKind thingKind)
{
    aheader->getArena()->setAsFullyUnused(thingKind);
    dest.insert(aheader);
}

void
Chunk::releaseArena(ArenaHeader *aheader)
{
    JS_ASSERT(aheader->allocated());
    JS_ASSERT(!aheader->hasDelayedMarking);
    Zone *zone = aheader->zone;
    JSRuntime *rt = zone->runtimeFromAnyThread();
    AutoLockGC maybeLock;
    if (rt->gcHelperThread.sweeping())
        maybeLock.lock(rt);

    JS_ASSERT(rt->gcBytes >= ArenaSize);
    JS_ASSERT(zone->gcBytes >= ArenaSize);
    if (rt->gcHelperThread.sweeping())
        zone->reduceGCTriggerBytes(zone->gcHeapGrowthFactor * ArenaSize);
    rt->gcBytes -= ArenaSize;
    zone->gcBytes -= ArenaSize;

    aheader->setAsNotAllocated();
    addArenaToFreeList(rt, aheader);

    if (info.numArenasFree == 1) {
        JS_ASSERT(!info.prevp);
        JS_ASSERT(!info.next);
        addToAvailableList(zone);
    } else if (!unused()) {
        JS_ASSERT(info.prevp);
    } else {
        rt->gcChunkSet.remove(this);
        removeFromAvailableList();
        JS_ASSERT(info.numArenasFree == ArenasPerChunk);
        decommitAllArenas(rt);
        rt->gcChunkPool.put(this);
    }
}

/* The caller must hold the GC lock. */
static Chunk *
PickChunk(Zone *zone)
{
    JSRuntime *rt = zone->runtimeFromAnyThread();
    Chunk **listHeadp = GetAvailableChunkList(zone);
    Chunk *chunk = *listHeadp;
    if (chunk)
        return chunk;

    chunk = rt->gcChunkPool.get(rt);
    if (!chunk)
        return nullptr;

    rt->gcChunkAllocationSinceLastGC = true;

    /*
     * FIXME bug 583732 - chunk is newly allocated and cannot be present in
     * the table so using ordinary lookupForAdd is suboptimal here.
     */
    GCChunkSet::AddPtr p = rt->gcChunkSet.lookupForAdd(chunk);
    JS_ASSERT(!p);
    if (!rt->gcChunkSet.add(p, chunk)) {
        Chunk::release(rt, chunk);
        return nullptr;
    }

    chunk->info.prevp = nullptr;
    chunk->info.next = nullptr;
    chunk->addToAvailableList(zone);

    return chunk;
}

#ifdef JS_GC_ZEAL

extern void
js::SetGCZeal(JSRuntime *rt, uint8_t zeal, uint32_t frequency)
{
    if (rt->gcVerifyPreData)
        VerifyBarriers(rt, PreBarrierVerifier);
    if (rt->gcVerifyPostData)
        VerifyBarriers(rt, PostBarrierVerifier);

#ifdef JSGC_GENERATIONAL
    if (rt->gcZeal_ == ZealGenerationalGCValue) {
        MinorGC(rt, JS::gcreason::DEBUG_GC);
        rt->gcNursery.leaveZealMode();
    }

    if (zeal == ZealGenerationalGCValue)
        rt->gcNursery.enterZealMode();
#endif

    bool schedule = zeal >= js::gc::ZealAllocValue;
    rt->gcZeal_ = zeal;
    rt->gcZealFrequency = frequency;
    rt->gcNextScheduled = schedule ? frequency : 0;
}

static bool
InitGCZeal(JSRuntime *rt)
{
    const char *env = getenv("JS_GC_ZEAL");
    if (!env)
        return true;

    int zeal = -1;
    int frequency = JS_DEFAULT_ZEAL_FREQ;
    if (strcmp(env, "help") != 0) {
        zeal = atoi(env);
        const char *p = strchr(env, ',');
        if (p)
            frequency = atoi(p + 1);
    }

    if (zeal < 0 || zeal > ZealLimit || frequency < 0) {
        fprintf(stderr,
                "Format: JS_GC_ZEAL=N[,F]\n"
                "N indicates \"zealousness\":\n"
                "  0: no additional GCs\n"
                "  1: additional GCs at common danger points\n"
                "  2: GC every F allocations (default: 100)\n"
                "  3: GC when the window paints (browser only)\n"
                "  4: Verify pre write barriers between instructions\n"
                "  5: Verify pre write barriers between paints\n"
                "  6: Verify stack rooting\n"
                "  7: Collect the nursery every N nursery allocations\n"
                "  8: Incremental GC in two slices: 1) mark roots 2) finish collection\n"
                "  9: Incremental GC in two slices: 1) mark all 2) new marking and finish\n"
                " 10: Incremental GC in multiple slices\n"
                " 11: Verify post write barriers between instructions\n"
                " 12: Verify post write barriers between paints\n"
                " 13: Purge analysis state every F allocations (default: 100)\n");
        return false;
    }

    SetGCZeal(rt, zeal, frequency);
    return true;
}
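
/*
 * Example shell usage matching the N[,F] format parsed above (the invocation
 * is illustrative; any embedding that reads the environment works the same
 * way):
 *
 *   JS_GC_ZEAL=2,50 ./js script.js    # zeal mode 2: GC every 50 allocations
 *
 * When ",F" is omitted the frequency defaults to JS_DEFAULT_ZEAL_FREQ (100).
 */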
michael@0 | 1083 | |
michael@0 | 1084 | #endif |
michael@0 | 1085 | |
michael@0 | 1086 | /* Lifetime for type sets attached to scripts containing observed types. */ |
michael@0 | 1087 | static const int64_t JIT_SCRIPT_RELEASE_TYPES_INTERVAL = 60 * 1000 * 1000; |
michael@0 | 1088 | |
michael@0 | 1089 | bool |
michael@0 | 1090 | js_InitGC(JSRuntime *rt, uint32_t maxbytes) |
michael@0 | 1091 | { |
michael@0 | 1092 | InitMemorySubsystem(rt); |
michael@0 | 1093 | |
michael@0 | 1094 | if (!rt->gcChunkSet.init(INITIAL_CHUNK_CAPACITY)) |
michael@0 | 1095 | return false; |
michael@0 | 1096 | |
michael@0 | 1097 | if (!rt->gcRootsHash.init(256)) |
michael@0 | 1098 | return false; |
michael@0 | 1099 | |
michael@0 | 1100 | if (!rt->gcHelperThread.init()) |
michael@0 | 1101 | return false; |
michael@0 | 1102 | |
michael@0 | 1103 | /* |
michael@0 | 1104 | * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes |
michael@0 | 1105 | * for default backward API compatibility. |
michael@0 | 1106 | */ |
michael@0 | 1107 | rt->gcMaxBytes = maxbytes; |
michael@0 | 1108 | rt->setGCMaxMallocBytes(maxbytes); |
michael@0 | 1109 | |
michael@0 | 1110 | #ifndef JS_MORE_DETERMINISTIC |
michael@0 | 1111 | rt->gcJitReleaseTime = PRMJ_Now() + JIT_SCRIPT_RELEASE_TYPES_INTERVAL; |
michael@0 | 1112 | #endif |
michael@0 | 1113 | |
michael@0 | 1114 | #ifdef JSGC_GENERATIONAL |
michael@0 | 1115 | if (!rt->gcNursery.init()) |
michael@0 | 1116 | return false; |
michael@0 | 1117 | |
michael@0 | 1118 | if (!rt->gcStoreBuffer.enable()) |
michael@0 | 1119 | return false; |
michael@0 | 1120 | #endif |
michael@0 | 1121 | |
michael@0 | 1122 | #ifdef JS_GC_ZEAL |
michael@0 | 1123 | if (!InitGCZeal(rt)) |
michael@0 | 1124 | return false; |
michael@0 | 1125 | #endif |
michael@0 | 1126 | |
michael@0 | 1127 | return true; |
michael@0 | 1128 | } |
michael@0 | 1129 | |
michael@0 | 1130 | static void |
michael@0 | 1131 | RecordNativeStackTopForGC(JSRuntime *rt) |
michael@0 | 1132 | { |
michael@0 | 1133 | ConservativeGCData *cgcd = &rt->conservativeGC; |
michael@0 | 1134 | |
michael@0 | 1135 | #ifdef JS_THREADSAFE |
michael@0 | 1136 | /* Record the stack top here only if we are called from a request. */ |
michael@0 | 1137 | if (!rt->requestDepth) |
michael@0 | 1138 | return; |
michael@0 | 1139 | #endif |
michael@0 | 1140 | cgcd->recordStackTop(); |
michael@0 | 1141 | } |
michael@0 | 1142 | |
michael@0 | 1143 | void |
michael@0 | 1144 | js_FinishGC(JSRuntime *rt) |
michael@0 | 1145 | { |
michael@0 | 1146 | /* |
michael@0 | 1147 | * Wait until the background finalization stops and the helper thread |
michael@0 | 1148 | * shuts down before we forcefully release any remaining GC memory. |
michael@0 | 1149 | */ |
michael@0 | 1150 | rt->gcHelperThread.finish(); |
michael@0 | 1151 | |
michael@0 | 1152 | #ifdef JS_GC_ZEAL |
michael@0 | 1153 | /* Free memory associated with GC verification. */ |
michael@0 | 1154 | FinishVerifier(rt); |
michael@0 | 1155 | #endif |
michael@0 | 1156 | |
michael@0 | 1157 | /* Delete all remaining zones. */ |
michael@0 | 1158 | if (rt->gcInitialized) { |
michael@0 | 1159 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 1160 | for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) |
michael@0 | 1161 | js_delete(comp.get()); |
michael@0 | 1162 | js_delete(zone.get()); |
michael@0 | 1163 | } |
michael@0 | 1164 | } |
michael@0 | 1165 | |
michael@0 | 1166 | rt->zones.clear(); |
michael@0 | 1167 | |
michael@0 | 1168 | rt->gcSystemAvailableChunkListHead = nullptr; |
michael@0 | 1169 | rt->gcUserAvailableChunkListHead = nullptr; |
michael@0 | 1170 | if (rt->gcChunkSet.initialized()) { |
michael@0 | 1171 | for (GCChunkSet::Range r(rt->gcChunkSet.all()); !r.empty(); r.popFront()) |
michael@0 | 1172 | Chunk::release(rt, r.front()); |
michael@0 | 1173 | rt->gcChunkSet.clear(); |
michael@0 | 1174 | } |
michael@0 | 1175 | |
michael@0 | 1176 | rt->gcChunkPool.expireAndFree(rt, true); |
michael@0 | 1177 | |
michael@0 | 1178 | if (rt->gcRootsHash.initialized()) |
michael@0 | 1179 | rt->gcRootsHash.clear(); |
michael@0 | 1180 | |
michael@0 | 1181 | rt->functionPersistentRooteds.clear(); |
michael@0 | 1182 | rt->idPersistentRooteds.clear(); |
michael@0 | 1183 | rt->objectPersistentRooteds.clear(); |
michael@0 | 1184 | rt->scriptPersistentRooteds.clear(); |
michael@0 | 1185 | rt->stringPersistentRooteds.clear(); |
michael@0 | 1186 | rt->valuePersistentRooteds.clear(); |
michael@0 | 1187 | } |
michael@0 | 1188 | |
michael@0 | 1189 | template <typename T> struct BarrierOwner {}; |
michael@0 | 1190 | template <typename T> struct BarrierOwner<T *> { typedef T result; }; |
michael@0 | 1191 | template <> struct BarrierOwner<Value> { typedef HeapValue result; }; |
michael@0 | 1192 | |
michael@0 | 1193 | template <typename T> |
michael@0 | 1194 | static bool |
michael@0 | 1195 | AddRoot(JSRuntime *rt, T *rp, const char *name, JSGCRootType rootType) |
michael@0 | 1196 | { |
michael@0 | 1197 | /* |
michael@0 | 1198 | * Sometimes Firefox will hold weak references to objects and then convert |
michael@0 | 1199 | * them to strong references by calling AddRoot (e.g., via PreserveWrapper, |
michael@0 | 1200 | * or ModifyBusyCount in workers). We need a read barrier to cover these |
michael@0 | 1201 | * cases. |
michael@0 | 1202 | */ |
michael@0 | 1203 | if (rt->gcIncrementalState != NO_INCREMENTAL) |
michael@0 | 1204 | BarrierOwner<T>::result::writeBarrierPre(*rp); |
michael@0 | 1205 | |
michael@0 | 1206 | return rt->gcRootsHash.put((void *)rp, RootInfo(name, rootType)); |
michael@0 | 1207 | } |
michael@0 | 1208 | |
michael@0 | 1209 | template <typename T> |
michael@0 | 1210 | static bool |
michael@0 | 1211 | AddRoot(JSContext *cx, T *rp, const char *name, JSGCRootType rootType) |
michael@0 | 1212 | { |
michael@0 | 1213 | bool ok = AddRoot(cx->runtime(), rp, name, rootType); |
michael@0 | 1214 | if (!ok) |
michael@0 | 1215 | JS_ReportOutOfMemory(cx); |
michael@0 | 1216 | return ok; |
michael@0 | 1217 | } |
michael@0 | 1218 | |
michael@0 | 1219 | bool |
michael@0 | 1220 | js::AddValueRoot(JSContext *cx, Value *vp, const char *name) |
michael@0 | 1221 | { |
michael@0 | 1222 | return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR); |
michael@0 | 1223 | } |
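michael@0 |      | /*
michael@0 |      |  * A minimal usage sketch (hypothetical embedder code; 'someObj' is an
michael@0 |      |  * assumption): roots added this way must be removed with RemoveRoot
michael@0 |      |  * (below) before the rooted location dies, or the GC will scan freed
michael@0 |      |  * memory.
michael@0 |      |  *
michael@0 |      |  *   Value v = ObjectValue(*someObj);
michael@0 |      |  *   if (!js::AddValueRoot(cx, &v, "example-root"))
michael@0 |      |  *       return false;   // OOM was already reported
michael@0 |      |  *   ...                 // v is traced by the GC while rooted
michael@0 |      |  *   js::RemoveRoot(cx->runtime(), &v);
michael@0 |      |  */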
michael@0 | 1224 | |
michael@0 | 1225 | extern bool |
michael@0 | 1226 | js::AddValueRootRT(JSRuntime *rt, js::Value *vp, const char *name) |
michael@0 | 1227 | { |
michael@0 | 1228 | return AddRoot(rt, vp, name, JS_GC_ROOT_VALUE_PTR); |
michael@0 | 1229 | } |
michael@0 | 1230 | |
michael@0 | 1231 | extern bool |
michael@0 | 1232 | js::AddStringRoot(JSContext *cx, JSString **rp, const char *name) |
michael@0 | 1233 | { |
michael@0 | 1234 | return AddRoot(cx, rp, name, JS_GC_ROOT_STRING_PTR); |
michael@0 | 1235 | } |
michael@0 | 1236 | |
michael@0 | 1237 | extern bool |
michael@0 | 1238 | js::AddObjectRoot(JSContext *cx, JSObject **rp, const char *name) |
michael@0 | 1239 | { |
michael@0 | 1240 | return AddRoot(cx, rp, name, JS_GC_ROOT_OBJECT_PTR); |
michael@0 | 1241 | } |
michael@0 | 1242 | |
michael@0 | 1243 | extern bool |
michael@0 | 1244 | js::AddObjectRoot(JSRuntime *rt, JSObject **rp, const char *name) |
michael@0 | 1245 | { |
michael@0 | 1246 | return AddRoot(rt, rp, name, JS_GC_ROOT_OBJECT_PTR); |
michael@0 | 1247 | } |
michael@0 | 1248 | |
michael@0 | 1249 | extern bool |
michael@0 | 1250 | js::AddScriptRoot(JSContext *cx, JSScript **rp, const char *name) |
michael@0 | 1251 | { |
michael@0 | 1252 | return AddRoot(cx, rp, name, JS_GC_ROOT_SCRIPT_PTR); |
michael@0 | 1253 | } |
michael@0 | 1254 | |
michael@0 | 1255 | extern JS_FRIEND_API(bool) |
michael@0 | 1256 | js::AddRawValueRoot(JSContext *cx, Value *vp, const char *name) |
michael@0 | 1257 | { |
michael@0 | 1258 | return AddRoot(cx, vp, name, JS_GC_ROOT_VALUE_PTR); |
michael@0 | 1259 | } |
michael@0 | 1260 | |
michael@0 | 1261 | extern JS_FRIEND_API(void) |
michael@0 | 1262 | js::RemoveRawValueRoot(JSContext *cx, Value *vp) |
michael@0 | 1263 | { |
michael@0 | 1264 | RemoveRoot(cx->runtime(), vp); |
michael@0 | 1265 | } |
michael@0 | 1266 | |
michael@0 | 1267 | void |
michael@0 | 1268 | js::RemoveRoot(JSRuntime *rt, void *rp) |
michael@0 | 1269 | { |
michael@0 | 1270 | rt->gcRootsHash.remove(rp); |
michael@0 | 1271 | rt->gcPoke = true; |
michael@0 | 1272 | } |
michael@0 | 1273 | |
michael@0 | 1274 | typedef RootedValueMap::Range RootRange; |
michael@0 | 1275 | typedef RootedValueMap::Entry RootEntry; |
michael@0 | 1276 | typedef RootedValueMap::Enum RootEnum; |
michael@0 | 1277 | |
michael@0 | 1278 | static size_t |
michael@0 | 1279 | ComputeTriggerBytes(Zone *zone, size_t lastBytes, size_t maxBytes, JSGCInvocationKind gckind) |
michael@0 | 1280 | { |
michael@0 | 1281 | size_t base = gckind == GC_SHRINK ? lastBytes : Max(lastBytes, zone->runtimeFromMainThread()->gcAllocationThreshold); |
michael@0 | 1282 | double trigger = double(base) * zone->gcHeapGrowthFactor; |
michael@0 | 1283 | return size_t(Min(double(maxBytes), trigger)); |
michael@0 | 1284 | } |
michael@0 | 1285 | |
michael@0 | 1286 | void |
michael@0 | 1287 | Zone::setGCLastBytes(size_t lastBytes, JSGCInvocationKind gckind) |
michael@0 | 1288 | { |
michael@0 | 1289 | /*
michael@0 | 1290 | * The heap growth factor depends on the heap size after a GC and the GC frequency.
michael@0 | 1291 | * For low-frequency GCs (more than 1 second between GCs) we let the heap grow to 150%.
michael@0 | 1292 | * For high-frequency GCs we let the heap grow depending on the heap size:
michael@0 | 1293 | *   lastBytes < highFrequencyLowLimit: 300%
michael@0 | 1294 | *   lastBytes > highFrequencyHighLimit: 150%
michael@0 | 1295 | *   otherwise: linear interpolation between 150% and 300% based on lastBytes
michael@0 | 1296 | */
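michael@0 |      | /*
michael@0 |      |  * A worked example with illustrative (not necessarily default) tuning
michael@0 |      |  * values: highFrequencyLowLimit = 100MB with growth max = 300%, and
michael@0 |      |  * highFrequencyHighLimit = 500MB with growth min = 150%. For a zone
michael@0 |      |  * with lastBytes = 300MB collected at high frequency:
michael@0 |      |  *
michael@0 |      |  *   k      = (1.5 - 3.0) / (500MB - 100MB)
michael@0 |      |  *   factor = k * (300MB - 100MB) + 3.0 = 2.25, i.e. 225% growth
michael@0 |      |  *
michael@0 |      |  * ComputeTriggerBytes above then sets the next trigger for this zone
michael@0 |      |  * to min(gcMaxBytes, 2.25 * 300MB = 675MB).
michael@0 |      |  */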
michael@0 | 1297 | JSRuntime *rt = runtimeFromMainThread(); |
michael@0 | 1298 | |
michael@0 | 1299 | if (!rt->gcDynamicHeapGrowth) { |
michael@0 | 1300 | gcHeapGrowthFactor = 3.0; |
michael@0 | 1301 | } else if (lastBytes < 1 * 1024 * 1024) { |
michael@0 | 1302 | gcHeapGrowthFactor = rt->gcLowFrequencyHeapGrowth; |
michael@0 | 1303 | } else { |
michael@0 | 1304 | JS_ASSERT(rt->gcHighFrequencyHighLimitBytes > rt->gcHighFrequencyLowLimitBytes); |
michael@0 | 1305 | uint64_t now = PRMJ_Now(); |
michael@0 | 1306 | if (rt->gcLastGCTime && rt->gcLastGCTime + rt->gcHighFrequencyTimeThreshold * PRMJ_USEC_PER_MSEC > now) { |
michael@0 | 1307 | if (lastBytes <= rt->gcHighFrequencyLowLimitBytes) { |
michael@0 | 1308 | gcHeapGrowthFactor = rt->gcHighFrequencyHeapGrowthMax; |
michael@0 | 1309 | } else if (lastBytes >= rt->gcHighFrequencyHighLimitBytes) { |
michael@0 | 1310 | gcHeapGrowthFactor = rt->gcHighFrequencyHeapGrowthMin; |
michael@0 | 1311 | } else { |
michael@0 | 1312 | double k = (rt->gcHighFrequencyHeapGrowthMin - rt->gcHighFrequencyHeapGrowthMax) |
michael@0 | 1313 | / (double)(rt->gcHighFrequencyHighLimitBytes - rt->gcHighFrequencyLowLimitBytes); |
michael@0 | 1314 | gcHeapGrowthFactor = (k * (lastBytes - rt->gcHighFrequencyLowLimitBytes) |
michael@0 | 1315 | + rt->gcHighFrequencyHeapGrowthMax); |
michael@0 | 1316 | JS_ASSERT(gcHeapGrowthFactor <= rt->gcHighFrequencyHeapGrowthMax |
michael@0 | 1317 | && gcHeapGrowthFactor >= rt->gcHighFrequencyHeapGrowthMin); |
michael@0 | 1318 | } |
michael@0 | 1319 | rt->gcHighFrequencyGC = true; |
michael@0 | 1320 | } else { |
michael@0 | 1321 | gcHeapGrowthFactor = rt->gcLowFrequencyHeapGrowth; |
michael@0 | 1322 | rt->gcHighFrequencyGC = false; |
michael@0 | 1323 | } |
michael@0 | 1324 | } |
michael@0 | 1325 | gcTriggerBytes = ComputeTriggerBytes(this, lastBytes, rt->gcMaxBytes, gckind); |
michael@0 | 1326 | } |
michael@0 | 1327 | |
michael@0 | 1328 | void |
michael@0 | 1329 | Zone::reduceGCTriggerBytes(size_t amount) |
michael@0 | 1330 | { |
michael@0 | 1331 | JS_ASSERT(amount > 0); |
michael@0 | 1332 | JS_ASSERT(gcTriggerBytes >= amount); |
michael@0 | 1333 | if (gcTriggerBytes - amount < runtimeFromAnyThread()->gcAllocationThreshold * gcHeapGrowthFactor) |
michael@0 | 1334 | return; |
michael@0 | 1335 | gcTriggerBytes -= amount; |
michael@0 | 1336 | } |
michael@0 | 1337 | |
michael@0 | 1338 | Allocator::Allocator(Zone *zone) |
michael@0 | 1339 | : zone_(zone) |
michael@0 | 1340 | {} |
michael@0 | 1341 | |
michael@0 | 1342 | inline void |
michael@0 | 1343 | GCMarker::delayMarkingArena(ArenaHeader *aheader) |
michael@0 | 1344 | { |
michael@0 | 1345 | if (aheader->hasDelayedMarking) { |
michael@0 | 1346 | /* Arena already scheduled to be marked later */ |
michael@0 | 1347 | return; |
michael@0 | 1348 | } |
michael@0 | 1349 | aheader->setNextDelayedMarking(unmarkedArenaStackTop); |
michael@0 | 1350 | unmarkedArenaStackTop = aheader; |
michael@0 | 1351 | markLaterArenas++; |
michael@0 | 1352 | } |
michael@0 | 1353 | |
michael@0 | 1354 | void |
michael@0 | 1355 | GCMarker::delayMarkingChildren(const void *thing) |
michael@0 | 1356 | { |
michael@0 | 1357 | const Cell *cell = reinterpret_cast<const Cell *>(thing); |
michael@0 | 1358 | cell->arenaHeader()->markOverflow = 1; |
michael@0 | 1359 | delayMarkingArena(cell->arenaHeader()); |
michael@0 | 1360 | } |
michael@0 | 1361 | |
michael@0 | 1362 | inline void |
michael@0 | 1363 | ArenaLists::prepareForIncrementalGC(JSRuntime *rt) |
michael@0 | 1364 | { |
michael@0 | 1365 | for (size_t i = 0; i != FINALIZE_LIMIT; ++i) { |
michael@0 | 1366 | FreeSpan *headSpan = &freeLists[i]; |
michael@0 | 1367 | if (!headSpan->isEmpty()) { |
michael@0 | 1368 | ArenaHeader *aheader = headSpan->arenaHeader(); |
michael@0 | 1369 | aheader->allocatedDuringIncremental = true; |
michael@0 | 1370 | rt->gcMarker.delayMarkingArena(aheader); |
michael@0 | 1371 | } |
michael@0 | 1372 | } |
michael@0 | 1373 | } |
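michael@0 |      | /*
michael@0 |      |  * That is, every arena whose free span is currently handed out through a
michael@0 |      |  * free list is flagged allocatedDuringIncremental and queued for delayed
michael@0 |      |  * marking, so cells allocated from those spans during the incremental GC
michael@0 |      |  * are accounted for by the marker rather than missed.
michael@0 |      |  */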
michael@0 | 1374 | |
michael@0 | 1375 | static inline void |
michael@0 | 1376 | PushArenaAllocatedDuringSweep(JSRuntime *runtime, ArenaHeader *arena) |
michael@0 | 1377 | { |
michael@0 | 1378 | arena->setNextAllocDuringSweep(runtime->gcArenasAllocatedDuringSweep); |
michael@0 | 1379 | runtime->gcArenasAllocatedDuringSweep = arena; |
michael@0 | 1380 | } |
michael@0 | 1381 | |
michael@0 | 1382 | inline void * |
michael@0 | 1383 | ArenaLists::allocateFromArenaInline(Zone *zone, AllocKind thingKind) |
michael@0 | 1384 | { |
michael@0 | 1385 | /*
michael@0 | 1386 | * Parallel JS Note:
michael@0 | 1387 | *
michael@0 | 1388 | * This function can be called from parallel threads, all of which
michael@0 | 1389 | * are associated with the same compartment. In that case, each
michael@0 | 1390 | * thread will have its own distinct ArenaLists. Therefore, whenever we
michael@0 | 1391 | * fall through to PickChunk() we must be sure that we are holding
michael@0 | 1392 | * a lock.
michael@0 | 1393 | */
michael@0 | 1394 | |
michael@0 | 1395 | Chunk *chunk = nullptr; |
michael@0 | 1396 | |
michael@0 | 1397 | ArenaList *al = &arenaLists[thingKind]; |
michael@0 | 1398 | AutoLockGC maybeLock; |
michael@0 | 1399 | |
michael@0 | 1400 | #ifdef JS_THREADSAFE |
michael@0 | 1401 | volatile uintptr_t *bfs = &backgroundFinalizeState[thingKind]; |
michael@0 | 1402 | if (*bfs != BFS_DONE) { |
michael@0 | 1403 | /*
michael@0 | 1404 | * We cannot search the arena list for free things while background
michael@0 | 1405 | * finalization is running: it can modify the list head or cursor at
michael@0 | 1406 | * any moment. So we always allocate a new arena in that case.
michael@0 | 1407 | */
michael@0 | 1408 | maybeLock.lock(zone->runtimeFromAnyThread()); |
michael@0 | 1409 | if (*bfs == BFS_RUN) { |
michael@0 | 1410 | JS_ASSERT(!*al->cursor); |
michael@0 | 1411 | chunk = PickChunk(zone); |
michael@0 | 1412 | if (!chunk) { |
michael@0 | 1413 | /*
michael@0 | 1414 | * Let the caller wait for the background finalization to
michael@0 | 1415 | * finish, then restart the allocation attempt.
michael@0 | 1416 | */
michael@0 | 1417 | return nullptr; |
michael@0 | 1418 | } |
michael@0 | 1419 | } else if (*bfs == BFS_JUST_FINISHED) { |
michael@0 | 1420 | /* See comments before BackgroundFinalizeState definition. */ |
michael@0 | 1421 | *bfs = BFS_DONE; |
michael@0 | 1422 | } else { |
michael@0 | 1423 | JS_ASSERT(*bfs == BFS_DONE); |
michael@0 | 1424 | } |
michael@0 | 1425 | } |
michael@0 | 1426 | #endif /* JS_THREADSAFE */ |
michael@0 | 1427 | |
michael@0 | 1428 | if (!chunk) { |
michael@0 | 1429 | if (ArenaHeader *aheader = *al->cursor) { |
michael@0 | 1430 | JS_ASSERT(aheader->hasFreeThings()); |
michael@0 | 1431 | |
michael@0 | 1432 | /*
michael@0 | 1433 | * Normally, the empty arenas are returned to the chunk
michael@0 | 1434 | * and should not be present on the list. In parallel
michael@0 | 1435 | * execution, however, we keep empty arenas in the arena
michael@0 | 1436 | * list to avoid synchronizing on the chunk.
michael@0 | 1437 | */
michael@0 | 1438 | JS_ASSERT(!aheader->isEmpty() || InParallelSection()); |
michael@0 | 1439 | al->cursor = &aheader->next; |
michael@0 | 1440 | |
michael@0 | 1441 | /* |
michael@0 | 1442 | * Move the free span stored in the arena to the free list and |
michael@0 | 1443 | * allocate from it. |
michael@0 | 1444 | */ |
michael@0 | 1445 | freeLists[thingKind] = aheader->getFirstFreeSpan(); |
michael@0 | 1446 | aheader->setAsFullyUsed(); |
michael@0 | 1447 | if (MOZ_UNLIKELY(zone->wasGCStarted())) { |
michael@0 | 1448 | if (zone->needsBarrier()) { |
michael@0 | 1449 | aheader->allocatedDuringIncremental = true; |
michael@0 | 1450 | zone->runtimeFromMainThread()->gcMarker.delayMarkingArena(aheader); |
michael@0 | 1451 | } else if (zone->isGCSweeping()) { |
michael@0 | 1452 | PushArenaAllocatedDuringSweep(zone->runtimeFromMainThread(), aheader); |
michael@0 | 1453 | } |
michael@0 | 1454 | } |
michael@0 | 1455 | return freeLists[thingKind].infallibleAllocate(Arena::thingSize(thingKind)); |
michael@0 | 1456 | } |
michael@0 | 1457 | |
michael@0 | 1458 | /* Make sure we hold the GC lock before we call PickChunk. */ |
michael@0 | 1459 | if (!maybeLock.locked()) |
michael@0 | 1460 | maybeLock.lock(zone->runtimeFromAnyThread()); |
michael@0 | 1461 | chunk = PickChunk(zone); |
michael@0 | 1462 | if (!chunk) |
michael@0 | 1463 | return nullptr; |
michael@0 | 1464 | } |
michael@0 | 1465 | |
michael@0 | 1466 | /*
michael@0 | 1467 | * While we still hold the GC lock, get an arena from some chunk, mark it
michael@0 | 1468 | * as full (its single free span is moved to the free lists), and insert
michael@0 | 1469 | * it into the list as a fully allocated arena.
michael@0 | 1470 | *
michael@0 | 1471 | * We add the arena before the head, not after the tail pointed to by the
michael@0 | 1472 | * cursor, so after the GC the most recently added arena will be used first
michael@0 | 1473 | * for allocations, improving cache locality.
michael@0 | 1474 | */
michael@0 | 1475 | JS_ASSERT(!*al->cursor); |
michael@0 | 1476 | ArenaHeader *aheader = chunk->allocateArena(zone, thingKind); |
michael@0 | 1477 | if (!aheader) |
michael@0 | 1478 | return nullptr; |
michael@0 | 1479 | |
michael@0 | 1480 | if (MOZ_UNLIKELY(zone->wasGCStarted())) { |
michael@0 | 1481 | if (zone->needsBarrier()) { |
michael@0 | 1482 | aheader->allocatedDuringIncremental = true; |
michael@0 | 1483 | zone->runtimeFromMainThread()->gcMarker.delayMarkingArena(aheader); |
michael@0 | 1484 | } else if (zone->isGCSweeping()) { |
michael@0 | 1485 | PushArenaAllocatedDuringSweep(zone->runtimeFromMainThread(), aheader); |
michael@0 | 1486 | } |
michael@0 | 1487 | } |
michael@0 | 1488 | aheader->next = al->head; |
michael@0 | 1489 | if (!al->head) { |
michael@0 | 1490 | JS_ASSERT(al->cursor == &al->head); |
michael@0 | 1491 | al->cursor = &aheader->next; |
michael@0 | 1492 | } |
michael@0 | 1493 | al->head = aheader; |
michael@0 | 1494 | |
michael@0 | 1495 | /* See comments before allocateFromNewArena about this assert. */ |
michael@0 | 1496 | JS_ASSERT(!aheader->hasFreeThings()); |
michael@0 | 1497 | uintptr_t arenaAddr = aheader->arenaAddress(); |
michael@0 | 1498 | return freeLists[thingKind].allocateFromNewArena(arenaAddr, |
michael@0 | 1499 | Arena::firstThingOffset(thingKind), |
michael@0 | 1500 | Arena::thingSize(thingKind)); |
michael@0 | 1501 | } |
michael@0 | 1502 | |
michael@0 | 1503 | void * |
michael@0 | 1504 | ArenaLists::allocateFromArena(JS::Zone *zone, AllocKind thingKind) |
michael@0 | 1505 | { |
michael@0 | 1506 | return allocateFromArenaInline(zone, thingKind); |
michael@0 | 1507 | } |
michael@0 | 1508 | |
michael@0 | 1509 | void |
michael@0 | 1510 | ArenaLists::wipeDuringParallelExecution(JSRuntime *rt) |
michael@0 | 1511 | { |
michael@0 | 1512 | JS_ASSERT(InParallelSection()); |
michael@0 | 1513 | |
michael@0 | 1514 | // First, check that all objects we have allocated are eligible
michael@0 | 1515 | // for background finalization. The idea is that we will free
michael@0 | 1516 | // (below) ALL background-finalizable objects, because we know (by
michael@0 | 1517 | // the rules of parallel execution) they are not reachable except
michael@0 | 1518 | // by other thread-local objects. However, if any object were
michael@0 | 1519 | // ineligible for background finalization, it might retain a
michael@0 | 1520 | // reference to one of these background-finalizable objects, and
michael@0 | 1521 | // that would be bad.
michael@0 | 1522 | for (unsigned i = 0; i < FINALIZE_LAST; i++) { |
michael@0 | 1523 | AllocKind thingKind = AllocKind(i); |
michael@0 | 1524 | if (!IsBackgroundFinalized(thingKind) && arenaLists[thingKind].head) |
michael@0 | 1525 | return; |
michael@0 | 1526 | } |
michael@0 | 1527 | |
michael@0 | 1528 | // Finalize all background-finalizable objects immediately and
michael@0 | 1529 | // return the (now empty) arenas to the arena list.
michael@0 | 1530 | FreeOp fop(rt, false); |
michael@0 | 1531 | for (unsigned i = 0; i < FINALIZE_OBJECT_LAST; i++) { |
michael@0 | 1532 | AllocKind thingKind = AllocKind(i); |
michael@0 | 1533 | |
michael@0 | 1534 | if (!IsBackgroundFinalized(thingKind)) |
michael@0 | 1535 | continue; |
michael@0 | 1536 | |
michael@0 | 1537 | if (arenaLists[i].head) { |
michael@0 | 1538 | purge(thingKind); |
michael@0 | 1539 | forceFinalizeNow(&fop, thingKind); |
michael@0 | 1540 | } |
michael@0 | 1541 | } |
michael@0 | 1542 | } |
michael@0 | 1543 | |
michael@0 | 1544 | void |
michael@0 | 1545 | ArenaLists::finalizeNow(FreeOp *fop, AllocKind thingKind) |
michael@0 | 1546 | { |
michael@0 | 1547 | JS_ASSERT(!IsBackgroundFinalized(thingKind)); |
michael@0 | 1548 | forceFinalizeNow(fop, thingKind); |
michael@0 | 1549 | } |
michael@0 | 1550 | |
michael@0 | 1551 | void |
michael@0 | 1552 | ArenaLists::forceFinalizeNow(FreeOp *fop, AllocKind thingKind) |
michael@0 | 1553 | { |
michael@0 | 1554 | JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
michael@0 | 1555 | |
michael@0 | 1556 | ArenaHeader *arenas = arenaLists[thingKind].head; |
michael@0 | 1557 | arenaLists[thingKind].clear(); |
michael@0 | 1558 | |
michael@0 | 1559 | SliceBudget budget; |
michael@0 | 1560 | FinalizeArenas(fop, &arenas, arenaLists[thingKind], thingKind, budget); |
michael@0 | 1561 | JS_ASSERT(!arenas); |
michael@0 | 1562 | } |
michael@0 | 1563 | |
michael@0 | 1564 | void |
michael@0 | 1565 | ArenaLists::queueForForegroundSweep(FreeOp *fop, AllocKind thingKind) |
michael@0 | 1566 | { |
michael@0 | 1567 | JS_ASSERT(!IsBackgroundFinalized(thingKind)); |
michael@0 | 1568 | JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
michael@0 | 1569 | JS_ASSERT(!arenaListsToSweep[thingKind]); |
michael@0 | 1570 | |
michael@0 | 1571 | arenaListsToSweep[thingKind] = arenaLists[thingKind].head; |
michael@0 | 1572 | arenaLists[thingKind].clear(); |
michael@0 | 1573 | } |
michael@0 | 1574 | |
michael@0 | 1575 | inline void |
michael@0 | 1576 | ArenaLists::queueForBackgroundSweep(FreeOp *fop, AllocKind thingKind) |
michael@0 | 1577 | { |
michael@0 | 1578 | JS_ASSERT(IsBackgroundFinalized(thingKind)); |
michael@0 | 1579 | |
michael@0 | 1580 | #ifdef JS_THREADSAFE |
michael@0 | 1581 | JS_ASSERT(!fop->runtime()->gcHelperThread.sweeping()); |
michael@0 | 1582 | #endif |
michael@0 | 1583 | |
michael@0 | 1584 | ArenaList *al = &arenaLists[thingKind]; |
michael@0 | 1585 | if (!al->head) { |
michael@0 | 1586 | JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
michael@0 | 1587 | JS_ASSERT(al->cursor == &al->head); |
michael@0 | 1588 | return; |
michael@0 | 1589 | } |
michael@0 | 1590 | |
michael@0 | 1591 | /*
michael@0 | 1592 | * The state can be BFS_DONE, or BFS_JUST_FINISHED if we have not allocated
michael@0 | 1593 | * any GC things from the arena list since the previous background finalization.
michael@0 | 1594 | */
michael@0 | 1595 | JS_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE || |
michael@0 | 1596 | backgroundFinalizeState[thingKind] == BFS_JUST_FINISHED); |
michael@0 | 1597 | |
michael@0 | 1598 | arenaListsToSweep[thingKind] = al->head; |
michael@0 | 1599 | al->clear(); |
michael@0 | 1600 | backgroundFinalizeState[thingKind] = BFS_RUN; |
michael@0 | 1601 | } |
michael@0 | 1602 | |
michael@0 | 1603 | /*static*/ void |
michael@0 | 1604 | ArenaLists::backgroundFinalize(FreeOp *fop, ArenaHeader *listHead, bool onBackgroundThread) |
michael@0 | 1605 | { |
michael@0 | 1606 | JS_ASSERT(listHead); |
michael@0 | 1607 | AllocKind thingKind = listHead->getAllocKind(); |
michael@0 | 1608 | Zone *zone = listHead->zone; |
michael@0 | 1609 | |
michael@0 | 1610 | ArenaList finalized; |
michael@0 | 1611 | SliceBudget budget; |
michael@0 | 1612 | FinalizeArenas(fop, &listHead, finalized, thingKind, budget); |
michael@0 | 1613 | JS_ASSERT(!listHead); |
michael@0 | 1614 | |
michael@0 | 1615 | /*
michael@0 | 1616 | * After we finish the finalization, al->cursor must point to the end of
michael@0 | 1617 | * the list: we emptied the list before the background finalization began,
michael@0 | 1618 | * and allocation adds new arenas before the cursor.
michael@0 | 1619 | */
michael@0 | 1620 | ArenaLists *lists = &zone->allocator.arenas; |
michael@0 | 1621 | ArenaList *al = &lists->arenaLists[thingKind]; |
michael@0 | 1622 | |
michael@0 | 1623 | AutoLockGC lock(fop->runtime()); |
michael@0 | 1624 | JS_ASSERT(lists->backgroundFinalizeState[thingKind] == BFS_RUN); |
michael@0 | 1625 | JS_ASSERT(!*al->cursor); |
michael@0 | 1626 | |
michael@0 | 1627 | if (finalized.head) { |
michael@0 | 1628 | *al->cursor = finalized.head; |
michael@0 | 1629 | if (finalized.cursor != &finalized.head) |
michael@0 | 1630 | al->cursor = finalized.cursor; |
michael@0 | 1631 | } |
michael@0 | 1632 | |
michael@0 | 1633 | /*
michael@0 | 1634 | * We must set the state to BFS_JUST_FINISHED if we are running on the
michael@0 | 1635 | * background thread and we have touched the arena list, even if we added
michael@0 | 1636 | * only fully allocated arenas without any free things to it. This ensures
michael@0 | 1637 | * that the allocation thread takes the GC lock and that all writes to the
michael@0 | 1638 | * free list elements are propagated. As we always take the GC lock when
michael@0 | 1639 | * allocating new arenas from the chunks, we can set the state to BFS_DONE
michael@0 | 1640 | * if we have released all finalized arenas back to their chunks.
michael@0 | 1641 | */
michael@0 | 1642 | if (onBackgroundThread && finalized.head) |
michael@0 | 1643 | lists->backgroundFinalizeState[thingKind] = BFS_JUST_FINISHED; |
michael@0 | 1644 | else |
michael@0 | 1645 | lists->backgroundFinalizeState[thingKind] = BFS_DONE; |
michael@0 | 1646 | |
michael@0 | 1647 | lists->arenaListsToSweep[thingKind] = nullptr; |
michael@0 | 1648 | } |
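michael@0 |      | /*
michael@0 |      |  * A sketch of the background finalization state transitions driven by
michael@0 |      |  * the code above and in allocateFromArenaInline:
michael@0 |      |  *
michael@0 |      |  *   BFS_DONE          --queueForBackgroundSweep-->            BFS_RUN
michael@0 |      |  *   BFS_RUN           --backgroundFinalize on the helper
michael@0 |      |  *                       thread with surviving arenas-->      BFS_JUST_FINISHED
michael@0 |      |  *   BFS_RUN           --backgroundFinalize otherwise-->      BFS_DONE
michael@0 |      |  *   BFS_JUST_FINISHED --next allocation under the GC lock--> BFS_DONE
michael@0 |      |  */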
michael@0 | 1649 | |
michael@0 | 1650 | void |
michael@0 | 1651 | ArenaLists::queueObjectsForSweep(FreeOp *fop) |
michael@0 | 1652 | { |
michael@0 | 1653 | gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_OBJECT); |
michael@0 | 1654 | |
michael@0 | 1655 | finalizeNow(fop, FINALIZE_OBJECT0); |
michael@0 | 1656 | finalizeNow(fop, FINALIZE_OBJECT2); |
michael@0 | 1657 | finalizeNow(fop, FINALIZE_OBJECT4); |
michael@0 | 1658 | finalizeNow(fop, FINALIZE_OBJECT8); |
michael@0 | 1659 | finalizeNow(fop, FINALIZE_OBJECT12); |
michael@0 | 1660 | finalizeNow(fop, FINALIZE_OBJECT16); |
michael@0 | 1661 | |
michael@0 | 1662 | queueForBackgroundSweep(fop, FINALIZE_OBJECT0_BACKGROUND); |
michael@0 | 1663 | queueForBackgroundSweep(fop, FINALIZE_OBJECT2_BACKGROUND); |
michael@0 | 1664 | queueForBackgroundSweep(fop, FINALIZE_OBJECT4_BACKGROUND); |
michael@0 | 1665 | queueForBackgroundSweep(fop, FINALIZE_OBJECT8_BACKGROUND); |
michael@0 | 1666 | queueForBackgroundSweep(fop, FINALIZE_OBJECT12_BACKGROUND); |
michael@0 | 1667 | queueForBackgroundSweep(fop, FINALIZE_OBJECT16_BACKGROUND); |
michael@0 | 1668 | } |
michael@0 | 1669 | |
michael@0 | 1670 | void |
michael@0 | 1671 | ArenaLists::queueStringsForSweep(FreeOp *fop) |
michael@0 | 1672 | { |
michael@0 | 1673 | gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_STRING); |
michael@0 | 1674 | |
michael@0 | 1675 | queueForBackgroundSweep(fop, FINALIZE_FAT_INLINE_STRING); |
michael@0 | 1676 | queueForBackgroundSweep(fop, FINALIZE_STRING); |
michael@0 | 1677 | |
michael@0 | 1678 | queueForForegroundSweep(fop, FINALIZE_EXTERNAL_STRING); |
michael@0 | 1679 | } |
michael@0 | 1680 | |
michael@0 | 1681 | void |
michael@0 | 1682 | ArenaLists::queueScriptsForSweep(FreeOp *fop) |
michael@0 | 1683 | { |
michael@0 | 1684 | gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_SCRIPT); |
michael@0 | 1685 | queueForForegroundSweep(fop, FINALIZE_SCRIPT); |
michael@0 | 1686 | queueForForegroundSweep(fop, FINALIZE_LAZY_SCRIPT); |
michael@0 | 1687 | } |
michael@0 | 1688 | |
michael@0 | 1689 | void |
michael@0 | 1690 | ArenaLists::queueJitCodeForSweep(FreeOp *fop) |
michael@0 | 1691 | { |
michael@0 | 1692 | gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_JITCODE); |
michael@0 | 1693 | queueForForegroundSweep(fop, FINALIZE_JITCODE); |
michael@0 | 1694 | } |
michael@0 | 1695 | |
michael@0 | 1696 | void |
michael@0 | 1697 | ArenaLists::queueShapesForSweep(FreeOp *fop) |
michael@0 | 1698 | { |
michael@0 | 1699 | gcstats::AutoPhase ap(fop->runtime()->gcStats, gcstats::PHASE_SWEEP_SHAPE); |
michael@0 | 1700 | |
michael@0 | 1701 | queueForBackgroundSweep(fop, FINALIZE_SHAPE); |
michael@0 | 1702 | queueForBackgroundSweep(fop, FINALIZE_BASE_SHAPE); |
michael@0 | 1703 | queueForBackgroundSweep(fop, FINALIZE_TYPE_OBJECT); |
michael@0 | 1704 | } |
michael@0 | 1705 | |
michael@0 | 1706 | static void * |
michael@0 | 1707 | RunLastDitchGC(JSContext *cx, JS::Zone *zone, AllocKind thingKind) |
michael@0 | 1708 | { |
michael@0 | 1709 | /*
michael@0 | 1710 | * In parallel sections, we do not attempt to refill the free list
michael@0 | 1711 | * and hence never run a last-ditch GC.
michael@0 | 1712 | */
michael@0 | 1713 | JS_ASSERT(!InParallelSection()); |
michael@0 | 1714 | |
michael@0 | 1715 | PrepareZoneForGC(zone); |
michael@0 | 1716 | |
michael@0 | 1717 | JSRuntime *rt = cx->runtime(); |
michael@0 | 1718 | |
michael@0 | 1719 | /* The last ditch GC preserves all atoms. */ |
michael@0 | 1720 | AutoKeepAtoms keepAtoms(cx->perThreadData); |
michael@0 | 1721 | GC(rt, GC_NORMAL, JS::gcreason::LAST_DITCH); |
michael@0 | 1722 | |
michael@0 | 1723 | /* |
michael@0 | 1724 | * The JSGC_END callback can legitimately allocate new GC |
michael@0 | 1725 | * things and populate the free list. If that happens, just |
michael@0 | 1726 | * return that list head. |
michael@0 | 1727 | */ |
michael@0 | 1728 | size_t thingSize = Arena::thingSize(thingKind); |
michael@0 | 1729 | if (void *thing = zone->allocator.arenas.allocateFromFreeList(thingKind, thingSize)) |
michael@0 | 1730 | return thing; |
michael@0 | 1731 | |
michael@0 | 1732 | return nullptr; |
michael@0 | 1733 | } |
michael@0 | 1734 | |
michael@0 | 1735 | template <AllowGC allowGC> |
michael@0 | 1736 | /* static */ void * |
michael@0 | 1737 | ArenaLists::refillFreeList(ThreadSafeContext *cx, AllocKind thingKind) |
michael@0 | 1738 | { |
michael@0 | 1739 | JS_ASSERT(cx->allocator()->arenas.freeLists[thingKind].isEmpty()); |
michael@0 | 1740 | JS_ASSERT_IF(cx->isJSContext(), !cx->asJSContext()->runtime()->isHeapBusy()); |
michael@0 | 1741 | |
michael@0 | 1742 | Zone *zone = cx->allocator()->zone_; |
michael@0 | 1743 | |
michael@0 | 1744 | bool runGC = cx->allowGC() && allowGC && |
michael@0 | 1745 | cx->asJSContext()->runtime()->gcIncrementalState != NO_INCREMENTAL && |
michael@0 | 1746 | zone->gcBytes > zone->gcTriggerBytes; |
michael@0 | 1747 | |
michael@0 | 1748 | #ifdef JS_THREADSAFE |
michael@0 | 1749 | JS_ASSERT_IF(cx->isJSContext() && allowGC, |
michael@0 | 1750 | !cx->asJSContext()->runtime()->currentThreadHasExclusiveAccess()); |
michael@0 | 1751 | #endif |
michael@0 | 1752 | |
michael@0 | 1753 | for (;;) { |
michael@0 | 1754 | if (MOZ_UNLIKELY(runGC)) { |
michael@0 | 1755 | if (void *thing = RunLastDitchGC(cx->asJSContext(), zone, thingKind)) |
michael@0 | 1756 | return thing; |
michael@0 | 1757 | } |
michael@0 | 1758 | |
michael@0 | 1759 | if (cx->isJSContext()) { |
michael@0 | 1760 | /*
michael@0 | 1761 | * allocateFromArena may fail while background finalization is still
michael@0 | 1762 | * running. If we are on the main thread, we want to wait for it to
michael@0 | 1763 | * finish and restart. However, checking for that is racy: background
michael@0 | 1764 | * finalization could free some things after allocateFromArena decided
michael@0 | 1765 | * to fail, yet by the time we check it may already have stopped. To
michael@0 | 1766 | * avoid this race we always try to allocate twice.
michael@0 | 1767 | */
michael@0 | 1768 | for (bool secondAttempt = false; ; secondAttempt = true) { |
michael@0 | 1769 | void *thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind); |
michael@0 | 1770 | if (MOZ_LIKELY(!!thing)) |
michael@0 | 1771 | return thing; |
michael@0 | 1772 | if (secondAttempt) |
michael@0 | 1773 | break; |
michael@0 | 1774 | |
michael@0 | 1775 | cx->asJSContext()->runtime()->gcHelperThread.waitBackgroundSweepEnd(); |
michael@0 | 1776 | } |
michael@0 | 1777 | } else { |
michael@0 | 1778 | #ifdef JS_THREADSAFE |
michael@0 | 1779 | /*
michael@0 | 1780 | * If we're off the main thread, we try to allocate once and
michael@0 | 1781 | * return whatever value we get. If we aren't in a ForkJoin
michael@0 | 1782 | * session (i.e. we are in a worker thread running asynchronously
michael@0 | 1783 | * with the main thread), we first need to ensure the main thread
michael@0 | 1784 | * is not in a GC session.
michael@0 | 1785 | */
michael@0 | 1786 | mozilla::Maybe<AutoLockWorkerThreadState> lock; |
michael@0 | 1787 | JSRuntime *rt = zone->runtimeFromAnyThread(); |
michael@0 | 1788 | if (rt->exclusiveThreadsPresent()) { |
michael@0 | 1789 | lock.construct(); |
michael@0 | 1790 | while (rt->isHeapBusy()) |
michael@0 | 1791 | WorkerThreadState().wait(GlobalWorkerThreadState::PRODUCER); |
michael@0 | 1792 | } |
michael@0 | 1793 | |
michael@0 | 1794 | void *thing = cx->allocator()->arenas.allocateFromArenaInline(zone, thingKind); |
michael@0 | 1795 | if (thing) |
michael@0 | 1796 | return thing; |
michael@0 | 1797 | #else |
michael@0 | 1798 | MOZ_CRASH(); |
michael@0 | 1799 | #endif |
michael@0 | 1800 | } |
michael@0 | 1801 | |
michael@0 | 1802 | if (!cx->allowGC() || !allowGC) |
michael@0 | 1803 | return nullptr; |
michael@0 | 1804 | |
michael@0 | 1805 | /* |
michael@0 | 1806 | * We failed to allocate. Run the GC if we haven't done it already. |
michael@0 | 1807 | * Otherwise report OOM. |
michael@0 | 1808 | */ |
michael@0 | 1809 | if (runGC) |
michael@0 | 1810 | break; |
michael@0 | 1811 | runGC = true; |
michael@0 | 1812 | } |
michael@0 | 1813 | |
michael@0 | 1814 | JS_ASSERT(allowGC); |
michael@0 | 1815 | js_ReportOutOfMemory(cx); |
michael@0 | 1816 | return nullptr; |
michael@0 | 1817 | } |
michael@0 | 1818 | |
michael@0 | 1819 | template void * |
michael@0 | 1820 | ArenaLists::refillFreeList<NoGC>(ThreadSafeContext *cx, AllocKind thingKind); |
michael@0 | 1821 | |
michael@0 | 1822 | template void * |
michael@0 | 1823 | ArenaLists::refillFreeList<CanGC>(ThreadSafeContext *cx, AllocKind thingKind); |
michael@0 | 1824 | |
michael@0 | 1825 | JSGCTraceKind |
michael@0 | 1826 | js_GetGCThingTraceKind(void *thing) |
michael@0 | 1827 | { |
michael@0 | 1828 | return GetGCThingTraceKind(thing); |
michael@0 | 1829 | } |
michael@0 | 1830 | |
michael@0 | 1831 | /* static */ int64_t |
michael@0 | 1832 | SliceBudget::TimeBudget(int64_t millis) |
michael@0 | 1833 | { |
michael@0 | 1834 | return millis * PRMJ_USEC_PER_MSEC; |
michael@0 | 1835 | } |
michael@0 | 1836 | |
michael@0 | 1837 | /* static */ int64_t |
michael@0 | 1838 | SliceBudget::WorkBudget(int64_t work) |
michael@0 | 1839 | { |
michael@0 | 1840 | /* Subtract 1 so that work = 0 does not map to the Unlimited value. */
michael@0 | 1841 | return -work - 1; |
michael@0 | 1842 | } |
michael@0 | 1843 | |
michael@0 | 1844 | SliceBudget::SliceBudget() |
michael@0 | 1845 | : deadline(INT64_MAX), |
michael@0 | 1846 | counter(INTPTR_MAX) |
michael@0 | 1847 | { |
michael@0 | 1848 | } |
michael@0 | 1849 | |
michael@0 | 1850 | SliceBudget::SliceBudget(int64_t budget) |
michael@0 | 1851 | { |
michael@0 | 1852 | if (budget == Unlimited) { |
michael@0 | 1853 | deadline = INT64_MAX; |
michael@0 | 1854 | counter = INTPTR_MAX; |
michael@0 | 1855 | } else if (budget > 0) { |
michael@0 | 1856 | deadline = PRMJ_Now() + budget; |
michael@0 | 1857 | counter = CounterReset; |
michael@0 | 1858 | } else { |
michael@0 | 1859 | deadline = 0; |
michael@0 | 1860 | counter = -budget - 1; |
michael@0 | 1861 | } |
michael@0 | 1862 | } |
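michael@0 |      | /*
michael@0 |      |  * A short worked example of the encoding above:
michael@0 |      |  *
michael@0 |      |  *   SliceBudget(SliceBudget::TimeBudget(10))
michael@0 |      |  *     // 10 * PRMJ_USEC_PER_MSEC > 0: deadline = now + 10ms,
michael@0 |      |  *     // counter = CounterReset.
michael@0 |      |  *
michael@0 |      |  *   SliceBudget(SliceBudget::WorkBudget(1000))
michael@0 |      |  *     // WorkBudget(1000) returns -1001 < 0: deadline = 0,
michael@0 |      |  *     // counter = -(-1001) - 1 = 1000 units of work.
michael@0 |      |  */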
michael@0 | 1863 | |
michael@0 | 1864 | bool |
michael@0 | 1865 | SliceBudget::checkOverBudget() |
michael@0 | 1866 | { |
michael@0 | 1867 | bool over = PRMJ_Now() > deadline; |
michael@0 | 1868 | if (!over) |
michael@0 | 1869 | counter = CounterReset; |
michael@0 | 1870 | return over; |
michael@0 | 1871 | } |
michael@0 | 1872 | |
michael@0 | 1873 | void |
michael@0 | 1874 | js::MarkCompartmentActive(InterpreterFrame *fp) |
michael@0 | 1875 | { |
michael@0 | 1876 | fp->script()->compartment()->zone()->active = true; |
michael@0 | 1877 | } |
michael@0 | 1878 | |
michael@0 | 1879 | static void |
michael@0 | 1880 | RequestInterrupt(JSRuntime *rt, JS::gcreason::Reason reason) |
michael@0 | 1881 | { |
michael@0 | 1882 | if (rt->gcIsNeeded) |
michael@0 | 1883 | return; |
michael@0 | 1884 | |
michael@0 | 1885 | rt->gcIsNeeded = true; |
michael@0 | 1886 | rt->gcTriggerReason = reason; |
michael@0 | 1887 | rt->requestInterrupt(JSRuntime::RequestInterruptMainThread); |
michael@0 | 1888 | } |
michael@0 | 1889 | |
michael@0 | 1890 | bool |
michael@0 | 1891 | js::TriggerGC(JSRuntime *rt, JS::gcreason::Reason reason) |
michael@0 | 1892 | { |
michael@0 | 1893 | /* Wait till end of parallel section to trigger GC. */ |
michael@0 | 1894 | if (InParallelSection()) { |
michael@0 | 1895 | ForkJoinContext::current()->requestGC(reason); |
michael@0 | 1896 | return true; |
michael@0 | 1897 | } |
michael@0 | 1898 | |
michael@0 | 1899 | /* Don't trigger GCs when allocating under the interrupt callback lock. */ |
michael@0 | 1900 | if (rt->currentThreadOwnsInterruptLock()) |
michael@0 | 1901 | return false; |
michael@0 | 1902 | |
michael@0 | 1903 | JS_ASSERT(CurrentThreadCanAccessRuntime(rt)); |
michael@0 | 1904 | |
michael@0 | 1905 | /* GC is already running. */ |
michael@0 | 1906 | if (rt->isHeapCollecting()) |
michael@0 | 1907 | return false; |
michael@0 | 1908 | |
michael@0 | 1909 | JS::PrepareForFullGC(rt); |
michael@0 | 1910 | RequestInterrupt(rt, reason); |
michael@0 | 1911 | return true; |
michael@0 | 1912 | } |
michael@0 | 1913 | |
michael@0 | 1914 | bool |
michael@0 | 1915 | js::TriggerZoneGC(Zone *zone, JS::gcreason::Reason reason) |
michael@0 | 1916 | { |
michael@0 | 1917 | /*
michael@0 | 1918 | * If parallel threads are running, wait until they
michael@0 | 1919 | * have stopped before triggering the GC.
michael@0 | 1920 | */
michael@0 | 1921 | if (InParallelSection()) { |
michael@0 | 1922 | ForkJoinContext::current()->requestZoneGC(zone, reason); |
michael@0 | 1923 | return true; |
michael@0 | 1924 | } |
michael@0 | 1925 | |
michael@0 | 1926 | /* Zones in use by a thread with an exclusive context can't be collected. */ |
michael@0 | 1927 | if (zone->usedByExclusiveThread) |
michael@0 | 1928 | return false; |
michael@0 | 1929 | |
michael@0 | 1930 | JSRuntime *rt = zone->runtimeFromMainThread(); |
michael@0 | 1931 | |
michael@0 | 1932 | /* Don't trigger GCs when allocating under the interrupt callback lock. */ |
michael@0 | 1933 | if (rt->currentThreadOwnsInterruptLock()) |
michael@0 | 1934 | return false; |
michael@0 | 1935 | |
michael@0 | 1936 | /* GC is already running. */ |
michael@0 | 1937 | if (rt->isHeapCollecting()) |
michael@0 | 1938 | return false; |
michael@0 | 1939 | |
michael@0 | 1940 | if (rt->gcZeal() == ZealAllocValue) { |
michael@0 | 1941 | TriggerGC(rt, reason); |
michael@0 | 1942 | return true; |
michael@0 | 1943 | } |
michael@0 | 1944 | |
michael@0 | 1945 | if (rt->isAtomsZone(zone)) { |
michael@0 | 1946 | /* We can't do a zone GC of the atoms compartment. */ |
michael@0 | 1947 | TriggerGC(rt, reason); |
michael@0 | 1948 | return true; |
michael@0 | 1949 | } |
michael@0 | 1950 | |
michael@0 | 1951 | PrepareZoneForGC(zone); |
michael@0 | 1952 | RequestInterrupt(rt, reason); |
michael@0 | 1953 | return true; |
michael@0 | 1954 | } |
michael@0 | 1955 | |
michael@0 | 1956 | void |
michael@0 | 1957 | js::MaybeGC(JSContext *cx) |
michael@0 | 1958 | { |
michael@0 | 1959 | JSRuntime *rt = cx->runtime(); |
michael@0 | 1960 | JS_ASSERT(CurrentThreadCanAccessRuntime(rt)); |
michael@0 | 1961 | |
michael@0 | 1962 | if (rt->gcZeal() == ZealAllocValue || rt->gcZeal() == ZealPokeValue) { |
michael@0 | 1963 | JS::PrepareForFullGC(rt); |
michael@0 | 1964 | GC(rt, GC_NORMAL, JS::gcreason::MAYBEGC); |
michael@0 | 1965 | return; |
michael@0 | 1966 | } |
michael@0 | 1967 | |
michael@0 | 1968 | if (rt->gcIsNeeded) { |
michael@0 | 1969 | GCSlice(rt, GC_NORMAL, JS::gcreason::MAYBEGC); |
michael@0 | 1970 | return; |
michael@0 | 1971 | } |
michael@0 | 1972 | |
michael@0 | 1973 | double factor = rt->gcHighFrequencyGC ? 0.85 : 0.9; |
michael@0 | 1974 | Zone *zone = cx->zone(); |
michael@0 | 1975 | if (zone->gcBytes > 1024 * 1024 && |
michael@0 | 1976 | zone->gcBytes >= factor * zone->gcTriggerBytes && |
michael@0 | 1977 | rt->gcIncrementalState == NO_INCREMENTAL && |
michael@0 | 1978 | !rt->gcHelperThread.sweeping()) |
michael@0 | 1979 | { |
michael@0 | 1980 | PrepareZoneForGC(zone); |
michael@0 | 1981 | GCSlice(rt, GC_NORMAL, JS::gcreason::MAYBEGC); |
michael@0 | 1982 | return; |
michael@0 | 1983 | } |
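michael@0 |      | /*
michael@0 |      |  * For example, a zone with gcTriggerBytes = 20MB in high-frequency mode
michael@0 |      |  * (factor = 0.85) starts an eager MAYBEGC slice once its gcBytes reach
michael@0 |      |  * 17MB, somewhat before the hard 20MB trigger would fire.
michael@0 |      |  */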
michael@0 | 1984 | |
michael@0 | 1985 | #ifndef JS_MORE_DETERMINISTIC |
michael@0 | 1986 | /*
michael@0 | 1987 | * Access to the counters and, on 32-bit platforms, setting gcNextFullGCTime
michael@0 | 1988 | * below is not atomic, so a race condition could trigger or suppress the GC.
michael@0 | 1989 | * We tolerate this.
michael@0 | 1990 | */
michael@0 | 1991 | int64_t now = PRMJ_Now(); |
michael@0 | 1992 | if (rt->gcNextFullGCTime && rt->gcNextFullGCTime <= now) { |
michael@0 | 1993 | if (rt->gcChunkAllocationSinceLastGC || |
michael@0 | 1994 | rt->gcNumArenasFreeCommitted > rt->gcDecommitThreshold) |
michael@0 | 1995 | { |
michael@0 | 1996 | JS::PrepareForFullGC(rt); |
michael@0 | 1997 | GCSlice(rt, GC_SHRINK, JS::gcreason::MAYBEGC); |
michael@0 | 1998 | } else { |
michael@0 | 1999 | rt->gcNextFullGCTime = now + GC_IDLE_FULL_SPAN; |
michael@0 | 2000 | } |
michael@0 | 2001 | } |
michael@0 | 2002 | #endif |
michael@0 | 2003 | } |
michael@0 | 2004 | |
michael@0 | 2005 | static void |
michael@0 | 2006 | DecommitArenasFromAvailableList(JSRuntime *rt, Chunk **availableListHeadp) |
michael@0 | 2007 | { |
michael@0 | 2008 | Chunk *chunk = *availableListHeadp; |
michael@0 | 2009 | if (!chunk) |
michael@0 | 2010 | return; |
michael@0 | 2011 | |
michael@0 | 2012 | /*
michael@0 | 2013 | * Decommit is expensive, so we avoid holding the GC lock while calling it.
michael@0 | 2014 | *
michael@0 | 2015 | * We decommit from the tail of the list to minimize interference with the
michael@0 | 2016 | * main thread, which may start to allocate things at this point.
michael@0 | 2017 | *
michael@0 | 2018 | * An arena that is being decommitted outside the GC lock must not be
michael@0 | 2019 | * available for allocation either via the free list or via the
michael@0 | 2020 | * decommittedArenas bitmap. For that we simply fetch the arena from the
michael@0 | 2021 | * free list before the decommit, pretending it was allocated. If this
michael@0 | 2022 | * arena is also the single free arena in the chunk, then we must remove
michael@0 | 2023 | * the chunk from the available list before we release the lock, so the
michael@0 | 2024 | * allocation thread will not see chunks with no free arenas on the available list.
michael@0 | 2025 | *
michael@0 | 2026 | * After we retake the lock, we mark the arena as free and decommitted if
michael@0 | 2027 | * the decommit was successful. We must also add the chunk back to the
michael@0 | 2028 | * available list if we removed it previously, or when the main thread
michael@0 | 2029 | * has allocated all remaining free arenas in the chunk.
michael@0 | 2030 | *
michael@0 | 2031 | * We must also make sure that the aheader is not accessed again after we
michael@0 | 2032 | * decommit the arena.
michael@0 | 2033 | */
michael@0 | 2034 | JS_ASSERT(chunk->info.prevp == availableListHeadp); |
michael@0 | 2035 | while (Chunk *next = chunk->info.next) { |
michael@0 | 2036 | JS_ASSERT(next->info.prevp == &chunk->info.next); |
michael@0 | 2037 | chunk = next; |
michael@0 | 2038 | } |
michael@0 | 2039 | |
michael@0 | 2040 | for (;;) { |
michael@0 | 2041 | while (chunk->info.numArenasFreeCommitted != 0) { |
michael@0 | 2042 | ArenaHeader *aheader = chunk->fetchNextFreeArena(rt); |
michael@0 | 2043 | |
michael@0 | 2044 | Chunk **savedPrevp = chunk->info.prevp; |
michael@0 | 2045 | if (!chunk->hasAvailableArenas()) |
michael@0 | 2046 | chunk->removeFromAvailableList(); |
michael@0 | 2047 | |
michael@0 | 2048 | size_t arenaIndex = Chunk::arenaIndex(aheader->arenaAddress()); |
michael@0 | 2049 | bool ok; |
michael@0 | 2050 | { |
michael@0 | 2051 | /*
michael@0 | 2052 | * If the main thread is waiting for the decommit to finish,
michael@0 | 2053 | * skip the potentially expensive unlock/lock pair on the
michael@0 | 2054 | * contested lock.
michael@0 | 2055 | */
michael@0 | 2056 | Maybe<AutoUnlockGC> maybeUnlock; |
michael@0 | 2057 | if (!rt->isHeapBusy()) |
michael@0 | 2058 | maybeUnlock.construct(rt); |
michael@0 | 2059 | ok = MarkPagesUnused(rt, aheader->getArena(), ArenaSize); |
michael@0 | 2060 | } |
michael@0 | 2061 | |
michael@0 | 2062 | if (ok) { |
michael@0 | 2063 | ++chunk->info.numArenasFree; |
michael@0 | 2064 | chunk->decommittedArenas.set(arenaIndex); |
michael@0 | 2065 | } else { |
michael@0 | 2066 | chunk->addArenaToFreeList(rt, aheader); |
michael@0 | 2067 | } |
michael@0 | 2068 | JS_ASSERT(chunk->hasAvailableArenas()); |
michael@0 | 2069 | JS_ASSERT(!chunk->unused()); |
michael@0 | 2070 | if (chunk->info.numArenasFree == 1) { |
michael@0 | 2071 | /* |
michael@0 | 2072 | * Put the chunk back to the available list either at the |
michael@0 | 2073 | * point where it was before to preserve the available list |
michael@0 | 2074 | * that we enumerate, or, when the allocation thread has fully |
michael@0 | 2075 | * used all the previous chunks, at the beginning of the |
michael@0 | 2076 | * available list. |
michael@0 | 2077 | */ |
michael@0 | 2078 | Chunk **insertPoint = savedPrevp; |
michael@0 | 2079 | if (savedPrevp != availableListHeadp) { |
michael@0 | 2080 | Chunk *prev = Chunk::fromPointerToNext(savedPrevp); |
michael@0 | 2081 | if (!prev->hasAvailableArenas()) |
michael@0 | 2082 | insertPoint = availableListHeadp; |
michael@0 | 2083 | } |
michael@0 | 2084 | chunk->insertToAvailableList(insertPoint); |
michael@0 | 2085 | } else { |
michael@0 | 2086 | JS_ASSERT(chunk->info.prevp); |
michael@0 | 2087 | } |
michael@0 | 2088 | |
michael@0 | 2089 | if (rt->gcChunkAllocationSinceLastGC || !ok) { |
michael@0 | 2090 | /*
michael@0 | 2091 | * The allocator thread has started to get new chunks. We should
michael@0 | 2092 | * stop, so as not to decommit arenas in freshly allocated chunks.
michael@0 | 2093 | */
michael@0 | 2094 | return; |
michael@0 | 2095 | } |
michael@0 | 2096 | } |
michael@0 | 2097 | |
michael@0 | 2098 | /*
michael@0 | 2099 | * chunk->info.prevp becomes null when the allocator thread has consumed
michael@0 | 2100 | * all chunks from the available list.
michael@0 | 2101 | */
michael@0 | 2102 | JS_ASSERT_IF(chunk->info.prevp, *chunk->info.prevp == chunk); |
michael@0 | 2103 | if (chunk->info.prevp == availableListHeadp || !chunk->info.prevp) |
michael@0 | 2104 | break; |
michael@0 | 2105 | |
michael@0 | 2106 | /* |
michael@0 | 2107 | * prevp exists and is not the list head. It must point to the next |
michael@0 | 2108 | * field of the previous chunk. |
michael@0 | 2109 | */ |
michael@0 | 2110 | chunk = chunk->getPrevious(); |
michael@0 | 2111 | } |
michael@0 | 2112 | } |
michael@0 | 2113 | |
michael@0 | 2114 | static void |
michael@0 | 2115 | DecommitArenas(JSRuntime *rt) |
michael@0 | 2116 | { |
michael@0 | 2117 | DecommitArenasFromAvailableList(rt, &rt->gcSystemAvailableChunkListHead); |
michael@0 | 2118 | DecommitArenasFromAvailableList(rt, &rt->gcUserAvailableChunkListHead); |
michael@0 | 2119 | } |
michael@0 | 2120 | |
michael@0 | 2121 | /* Must be called with the GC lock taken. */ |
michael@0 | 2122 | static void |
michael@0 | 2123 | ExpireChunksAndArenas(JSRuntime *rt, bool shouldShrink) |
michael@0 | 2124 | { |
michael@0 | 2125 | if (Chunk *toFree = rt->gcChunkPool.expire(rt, shouldShrink)) { |
michael@0 | 2126 | AutoUnlockGC unlock(rt); |
michael@0 | 2127 | FreeChunkList(rt, toFree); |
michael@0 | 2128 | } |
michael@0 | 2129 | |
michael@0 | 2130 | if (shouldShrink) |
michael@0 | 2131 | DecommitArenas(rt); |
michael@0 | 2132 | } |
michael@0 | 2133 | |
michael@0 | 2134 | static void |
michael@0 | 2135 | SweepBackgroundThings(JSRuntime* rt, bool onBackgroundThread) |
michael@0 | 2136 | { |
michael@0 | 2137 | /*
michael@0 | 2138 | * We must finalize in the correct order; see the comments in
michael@0 | 2139 | * finalizeObjects.
michael@0 | 2140 | */
michael@0 | 2141 | FreeOp fop(rt, false); |
michael@0 | 2142 | for (int phase = 0 ; phase < BackgroundPhaseCount ; ++phase) { |
michael@0 | 2143 | for (Zone *zone = rt->gcSweepingZones; zone; zone = zone->gcNextGraphNode) { |
michael@0 | 2144 | for (int index = 0 ; index < BackgroundPhaseLength[phase] ; ++index) { |
michael@0 | 2145 | AllocKind kind = BackgroundPhases[phase][index]; |
michael@0 | 2146 | ArenaHeader *arenas = zone->allocator.arenas.arenaListsToSweep[kind]; |
michael@0 | 2147 | if (arenas) |
michael@0 | 2148 | ArenaLists::backgroundFinalize(&fop, arenas, onBackgroundThread); |
michael@0 | 2149 | } |
michael@0 | 2150 | } |
michael@0 | 2151 | } |
michael@0 | 2152 | |
michael@0 | 2153 | rt->gcSweepingZones = nullptr; |
michael@0 | 2154 | } |
michael@0 | 2155 | |
michael@0 | 2156 | #ifdef JS_THREADSAFE |
michael@0 | 2157 | static void |
michael@0 | 2158 | AssertBackgroundSweepingFinished(JSRuntime *rt) |
michael@0 | 2159 | { |
michael@0 | 2160 | JS_ASSERT(!rt->gcSweepingZones); |
michael@0 | 2161 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 2162 | for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) { |
michael@0 | 2163 | JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]); |
michael@0 | 2164 | JS_ASSERT(zone->allocator.arenas.doneBackgroundFinalize(AllocKind(i))); |
michael@0 | 2165 | } |
michael@0 | 2166 | } |
michael@0 | 2167 | } |
michael@0 | 2168 | |
michael@0 | 2169 | unsigned |
michael@0 | 2170 | js::GetCPUCount() |
michael@0 | 2171 | { |
michael@0 | 2172 | static unsigned ncpus = 0; |
michael@0 | 2173 | if (ncpus == 0) { |
michael@0 | 2174 | # ifdef XP_WIN |
michael@0 | 2175 | SYSTEM_INFO sysinfo; |
michael@0 | 2176 | GetSystemInfo(&sysinfo); |
michael@0 | 2177 | ncpus = unsigned(sysinfo.dwNumberOfProcessors); |
michael@0 | 2178 | # else |
michael@0 | 2179 | long n = sysconf(_SC_NPROCESSORS_ONLN); |
michael@0 | 2180 | ncpus = (n > 0) ? unsigned(n) : 1; |
michael@0 | 2181 | # endif |
michael@0 | 2182 | } |
michael@0 | 2183 | return ncpus; |
michael@0 | 2184 | } |
michael@0 | 2185 | #endif /* JS_THREADSAFE */ |
michael@0 | 2186 | |
michael@0 | 2187 | bool |
michael@0 | 2188 | GCHelperThread::init() |
michael@0 | 2189 | { |
michael@0 | 2190 | if (!rt->useHelperThreads()) { |
michael@0 | 2191 | backgroundAllocation = false; |
michael@0 | 2192 | return true; |
michael@0 | 2193 | } |
michael@0 | 2194 | |
michael@0 | 2195 | #ifdef JS_THREADSAFE |
michael@0 | 2196 | if (!(wakeup = PR_NewCondVar(rt->gcLock))) |
michael@0 | 2197 | return false; |
michael@0 | 2198 | if (!(done = PR_NewCondVar(rt->gcLock))) |
michael@0 | 2199 | return false; |
michael@0 | 2200 | |
michael@0 | 2201 | thread = PR_CreateThread(PR_USER_THREAD, threadMain, this, PR_PRIORITY_NORMAL, |
michael@0 | 2202 | PR_GLOBAL_THREAD, PR_JOINABLE_THREAD, 0); |
michael@0 | 2203 | if (!thread) |
michael@0 | 2204 | return false; |
michael@0 | 2205 | |
michael@0 | 2206 | backgroundAllocation = (GetCPUCount() >= 2); |
michael@0 | 2207 | #endif /* JS_THREADSAFE */ |
michael@0 | 2208 | return true; |
michael@0 | 2209 | } |
michael@0 | 2210 | |
michael@0 | 2211 | void |
michael@0 | 2212 | GCHelperThread::finish() |
michael@0 | 2213 | { |
michael@0 | 2214 | if (!rt->useHelperThreads() || !rt->gcLock) { |
michael@0 | 2215 | JS_ASSERT(state == IDLE); |
michael@0 | 2216 | return; |
michael@0 | 2217 | } |
michael@0 | 2218 | |
michael@0 | 2219 | #ifdef JS_THREADSAFE |
michael@0 | 2220 | PRThread *join = nullptr; |
michael@0 | 2221 | { |
michael@0 | 2222 | AutoLockGC lock(rt); |
michael@0 | 2223 | if (thread && state != SHUTDOWN) { |
michael@0 | 2224 | /* |
michael@0 | 2225 | * We cannot be in the ALLOCATING or CANCEL_ALLOCATION states as |
michael@0 | 2226 | * the allocations should have been stopped during the last GC. |
michael@0 | 2227 | */ |
michael@0 | 2228 | JS_ASSERT(state == IDLE || state == SWEEPING); |
michael@0 | 2229 | if (state == IDLE) |
michael@0 | 2230 | PR_NotifyCondVar(wakeup); |
michael@0 | 2231 | state = SHUTDOWN; |
michael@0 | 2232 | join = thread; |
michael@0 | 2233 | } |
michael@0 | 2234 | } |
michael@0 | 2235 | if (join) { |
michael@0 | 2236 | /* PR_DestroyThread is not necessary. */ |
michael@0 | 2237 | PR_JoinThread(join); |
michael@0 | 2238 | } |
michael@0 | 2239 | if (wakeup) |
michael@0 | 2240 | PR_DestroyCondVar(wakeup); |
michael@0 | 2241 | if (done) |
michael@0 | 2242 | PR_DestroyCondVar(done); |
michael@0 | 2243 | #endif /* JS_THREADSAFE */ |
michael@0 | 2244 | } |
michael@0 | 2245 | |
michael@0 | 2246 | #ifdef JS_THREADSAFE |
michael@0 | 2247 | #ifdef MOZ_NUWA_PROCESS |
michael@0 | 2248 | extern "C" { |
michael@0 | 2249 | MFBT_API bool IsNuwaProcess(); |
michael@0 | 2250 | MFBT_API void NuwaMarkCurrentThread(void (*recreate)(void *), void *arg); |
michael@0 | 2251 | } |
michael@0 | 2252 | #endif |
michael@0 | 2253 | |
michael@0 | 2254 | /* static */ |
michael@0 | 2255 | void |
michael@0 | 2256 | GCHelperThread::threadMain(void *arg) |
michael@0 | 2257 | { |
michael@0 | 2258 | PR_SetCurrentThreadName("JS GC Helper"); |
michael@0 | 2259 | |
michael@0 | 2260 | #ifdef MOZ_NUWA_PROCESS |
michael@0 | 2261 | if (IsNuwaProcess && IsNuwaProcess()) { |
michael@0 | 2262 | JS_ASSERT(NuwaMarkCurrentThread != nullptr); |
michael@0 | 2263 | NuwaMarkCurrentThread(nullptr, nullptr); |
michael@0 | 2264 | } |
michael@0 | 2265 | #endif |
michael@0 | 2266 | |
michael@0 | 2267 | static_cast<GCHelperThread *>(arg)->threadLoop(); |
michael@0 | 2268 | } |
michael@0 | 2269 | |
michael@0 | 2270 | void |
michael@0 | 2271 | GCHelperThread::wait(PRCondVar *which) |
michael@0 | 2272 | { |
michael@0 | 2273 | rt->gcLockOwner = nullptr; |
michael@0 | 2274 | PR_WaitCondVar(which, PR_INTERVAL_NO_TIMEOUT); |
michael@0 | 2275 | #ifdef DEBUG |
michael@0 | 2276 | rt->gcLockOwner = PR_GetCurrentThread(); |
michael@0 | 2277 | #endif |
michael@0 | 2278 | } |
michael@0 | 2279 | |
michael@0 | 2280 | void |
michael@0 | 2281 | GCHelperThread::threadLoop() |
michael@0 | 2282 | { |
michael@0 | 2283 | AutoLockGC lock(rt); |
michael@0 | 2284 | |
michael@0 | 2285 | TraceLogger *logger = TraceLoggerForCurrentThread(); |
michael@0 | 2286 | |
michael@0 | 2287 | /*
michael@0 | 2288 | * Even on the first iteration the state can be SHUTDOWN or SWEEPING if
michael@0 | 2289 | * the stop request, or the GC and its corresponding startBackgroundSweep
michael@0 | 2290 | * call, happen before this thread has a chance to run.
michael@0 | 2291 | */
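michael@0 |      | /*
michael@0 |      |  * A sketch of the helper thread states handled by the switch below:
michael@0 |      |  *
michael@0 |      |  *   IDLE       --startBackgroundSweep/startBackgroundShrink--> SWEEPING
michael@0 |      |  *   SWEEPING   --doSweep finished-->                           IDLE
michael@0 |      |  *   IDLE       --startBackgroundAllocationIfIdle-->            ALLOCATING
michael@0 |      |  *   ALLOCATING --pool satisfied or OOM-->                      IDLE
michael@0 |      |  *   ALLOCATING --waitBackgroundSweepOrAllocEnd-->              CANCEL_ALLOCATION
michael@0 |      |  *   CANCEL_ALLOCATION --acknowledged below-->                  IDLE
michael@0 |      |  *   IDLE/SWEEPING     --finish()-->                            SHUTDOWN
michael@0 |      |  */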
michael@0 | 2292 | for (;;) { |
michael@0 | 2293 | switch (state) { |
michael@0 | 2294 | case SHUTDOWN: |
michael@0 | 2295 | return; |
michael@0 | 2296 | case IDLE: |
michael@0 | 2297 | wait(wakeup); |
michael@0 | 2298 | break; |
michael@0 | 2299 | case SWEEPING: { |
michael@0 | 2300 | AutoTraceLog logSweeping(logger, TraceLogger::GCSweeping); |
michael@0 | 2301 | doSweep(); |
michael@0 | 2302 | if (state == SWEEPING) |
michael@0 | 2303 | state = IDLE; |
michael@0 | 2304 | PR_NotifyAllCondVar(done); |
michael@0 | 2305 | break; |
michael@0 | 2306 | } |
michael@0 | 2307 | case ALLOCATING: { |
michael@0 | 2308 | AutoTraceLog logAllocating(logger, TraceLogger::GCAllocation); |
michael@0 | 2309 | do { |
michael@0 | 2310 | Chunk *chunk; |
michael@0 | 2311 | { |
michael@0 | 2312 | AutoUnlockGC unlock(rt); |
michael@0 | 2313 | chunk = Chunk::allocate(rt); |
michael@0 | 2314 | } |
michael@0 | 2315 | |
michael@0 | 2316 | /* OOM stops the background allocation. */ |
michael@0 | 2317 | if (!chunk) |
michael@0 | 2318 | break; |
michael@0 | 2319 | JS_ASSERT(chunk->info.numArenasFreeCommitted == 0); |
michael@0 | 2320 | rt->gcChunkPool.put(chunk); |
michael@0 | 2321 | } while (state == ALLOCATING && rt->gcChunkPool.wantBackgroundAllocation(rt)); |
michael@0 | 2322 | if (state == ALLOCATING) |
michael@0 | 2323 | state = IDLE; |
michael@0 | 2324 | break; |
michael@0 | 2325 | } |
michael@0 | 2326 | case CANCEL_ALLOCATION: |
michael@0 | 2327 | state = IDLE; |
michael@0 | 2328 | PR_NotifyAllCondVar(done); |
michael@0 | 2329 | break; |
michael@0 | 2330 | } |
michael@0 | 2331 | } |
michael@0 | 2332 | } |
michael@0 | 2333 | #endif /* JS_THREADSAFE */ |
michael@0 | 2334 | |
michael@0 | 2335 | void |
michael@0 | 2336 | GCHelperThread::startBackgroundSweep(bool shouldShrink) |
michael@0 | 2337 | { |
michael@0 | 2338 | JS_ASSERT(rt->useHelperThreads()); |
michael@0 | 2339 | |
michael@0 | 2340 | #ifdef JS_THREADSAFE |
michael@0 | 2341 | AutoLockGC lock(rt); |
michael@0 | 2342 | JS_ASSERT(state == IDLE); |
michael@0 | 2343 | JS_ASSERT(!sweepFlag); |
michael@0 | 2344 | sweepFlag = true; |
michael@0 | 2345 | shrinkFlag = shouldShrink; |
michael@0 | 2346 | state = SWEEPING; |
michael@0 | 2347 | PR_NotifyCondVar(wakeup); |
michael@0 | 2348 | #endif /* JS_THREADSAFE */ |
michael@0 | 2349 | } |
michael@0 | 2350 | |
michael@0 | 2351 | /* Must be called with the GC lock taken. */ |
michael@0 | 2352 | void |
michael@0 | 2353 | GCHelperThread::startBackgroundShrink() |
michael@0 | 2354 | { |
michael@0 | 2355 | JS_ASSERT(rt->useHelperThreads()); |
michael@0 | 2356 | |
michael@0 | 2357 | #ifdef JS_THREADSAFE |
michael@0 | 2358 | switch (state) { |
michael@0 | 2359 | case IDLE: |
michael@0 | 2360 | JS_ASSERT(!sweepFlag); |
michael@0 | 2361 | shrinkFlag = true; |
michael@0 | 2362 | state = SWEEPING; |
michael@0 | 2363 | PR_NotifyCondVar(wakeup); |
michael@0 | 2364 | break; |
michael@0 | 2365 | case SWEEPING: |
michael@0 | 2366 | shrinkFlag = true; |
michael@0 | 2367 | break; |
michael@0 | 2368 | case ALLOCATING: |
michael@0 | 2369 | case CANCEL_ALLOCATION: |
michael@0 | 2370 | /* |
michael@0 | 2371 | * If we have started background allocation there is nothing to |
michael@0 | 2372 | * shrink. |
michael@0 | 2373 | */ |
michael@0 | 2374 | break; |
michael@0 | 2375 | case SHUTDOWN: |
michael@0 | 2376 | MOZ_ASSUME_UNREACHABLE("No shrink on shutdown"); |
michael@0 | 2377 | } |
michael@0 | 2378 | #endif /* JS_THREADSAFE */ |
michael@0 | 2379 | } |
michael@0 | 2380 | |
michael@0 | 2381 | void |
michael@0 | 2382 | GCHelperThread::waitBackgroundSweepEnd() |
michael@0 | 2383 | { |
michael@0 | 2384 | if (!rt->useHelperThreads()) { |
michael@0 | 2385 | JS_ASSERT(state == IDLE); |
michael@0 | 2386 | return; |
michael@0 | 2387 | } |
michael@0 | 2388 | |
michael@0 | 2389 | #ifdef JS_THREADSAFE |
michael@0 | 2390 | AutoLockGC lock(rt); |
michael@0 | 2391 | while (state == SWEEPING) |
michael@0 | 2392 | wait(done); |
michael@0 | 2393 | if (rt->gcIncrementalState == NO_INCREMENTAL) |
michael@0 | 2394 | AssertBackgroundSweepingFinished(rt); |
michael@0 | 2395 | #endif /* JS_THREADSAFE */ |
michael@0 | 2396 | } |
michael@0 | 2397 | |
michael@0 | 2398 | void |
michael@0 | 2399 | GCHelperThread::waitBackgroundSweepOrAllocEnd() |
michael@0 | 2400 | { |
michael@0 | 2401 | if (!rt->useHelperThreads()) { |
michael@0 | 2402 | JS_ASSERT(state == IDLE); |
michael@0 | 2403 | return; |
michael@0 | 2404 | } |
michael@0 | 2405 | |
michael@0 | 2406 | #ifdef JS_THREADSAFE |
michael@0 | 2407 | AutoLockGC lock(rt); |
michael@0 | 2408 | if (state == ALLOCATING) |
michael@0 | 2409 | state = CANCEL_ALLOCATION; |
michael@0 | 2410 | while (state == SWEEPING || state == CANCEL_ALLOCATION) |
michael@0 | 2411 | wait(done); |
michael@0 | 2412 | if (rt->gcIncrementalState == NO_INCREMENTAL) |
michael@0 | 2413 | AssertBackgroundSweepingFinished(rt); |
michael@0 | 2414 | #endif /* JS_THREADSAFE */ |
michael@0 | 2415 | } |
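michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * Sketch of the cancellation handshake above, assuming the helper is in |
michael@0 |      |  * the middle of background allocation: |
michael@0 |      |  * |
michael@0 |      |  *   main thread                          helper thread (ALLOCATING) |
michael@0 |      |  *   -----------                          --------------------------- |
michael@0 |      |  *   state = CANCEL_ALLOCATION;           allocation loop sees the change |
michael@0 |      |  *   while (state == CANCEL_ALLOCATION)   case CANCEL_ALLOCATION: |
michael@0 |      |  *       wait(done);                          state = IDLE; |
michael@0 |      |  *                                            PR_NotifyAllCondVar(done); |
michael@0 |      |  */ |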
michael@0 | 2416 | |
michael@0 | 2417 | /* Must be called with the GC lock taken. */ |
michael@0 | 2418 | inline void |
michael@0 | 2419 | GCHelperThread::startBackgroundAllocationIfIdle() |
michael@0 | 2420 | { |
michael@0 | 2421 | JS_ASSERT(rt->useHelperThreads()); |
michael@0 | 2422 | |
michael@0 | 2423 | #ifdef JS_THREADSAFE |
michael@0 | 2424 | if (state == IDLE) { |
michael@0 | 2425 | state = ALLOCATING; |
michael@0 | 2426 | PR_NotifyCondVar(wakeup); |
michael@0 | 2427 | } |
michael@0 | 2428 | #endif /* JS_THREADSAFE */ |
michael@0 | 2429 | } |
michael@0 | 2430 | |
michael@0 | 2431 | void |
michael@0 | 2432 | GCHelperThread::replenishAndFreeLater(void *ptr) |
michael@0 | 2433 | { |
michael@0 | 2434 | JS_ASSERT(freeCursor == freeCursorEnd); |
michael@0 | 2435 | do { |
michael@0 | 2436 | if (freeCursor && !freeVector.append(freeCursorEnd - FREE_ARRAY_LENGTH)) |
michael@0 | 2437 | break; |
michael@0 | 2438 | freeCursor = (void **) js_malloc(FREE_ARRAY_SIZE); |
michael@0 | 2439 | if (!freeCursor) { |
michael@0 | 2440 | freeCursorEnd = nullptr; |
michael@0 | 2441 | break; |
michael@0 | 2442 | } |
michael@0 | 2443 | freeCursorEnd = freeCursor + FREE_ARRAY_LENGTH; |
michael@0 | 2444 | *freeCursor++ = ptr; |
michael@0 | 2445 | return; |
michael@0 | 2446 | } while (false); |
michael@0 | 2447 | js_free(ptr); |
michael@0 | 2448 | } |
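michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * The deferred-free buffer is a bump pointer over malloc'd arrays of |
michael@0 |      |  * FREE_ARRAY_LENGTH slots; full arrays are parked in freeVector until the |
michael@0 |      |  * helper frees them in doSweep(). A sketch of the layout: |
michael@0 |      |  * |
michael@0 |      |  *   freeCursor --v                  v-- freeCursorEnd |
michael@0 |      |  *   [ p0 | p1 | ... | <unused> ... ]       current array |
michael@0 |      |  *   freeVector: [full array][full array]   already handed off |
michael@0 |      |  */ |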
michael@0 | 2449 | |
michael@0 | 2450 | #ifdef JS_THREADSAFE |
michael@0 | 2451 | /* Must be called with the GC lock taken. */ |
michael@0 | 2452 | void |
michael@0 | 2453 | GCHelperThread::doSweep() |
michael@0 | 2454 | { |
michael@0 | 2455 | if (sweepFlag) { |
michael@0 | 2456 | sweepFlag = false; |
michael@0 | 2457 | AutoUnlockGC unlock(rt); |
michael@0 | 2458 | |
michael@0 | 2459 | SweepBackgroundThings(rt, true); |
michael@0 | 2460 | |
michael@0 | 2461 | if (freeCursor) { |
michael@0 | 2462 | void **array = freeCursorEnd - FREE_ARRAY_LENGTH; |
michael@0 | 2463 | freeElementsAndArray(array, freeCursor); |
michael@0 | 2464 | freeCursor = freeCursorEnd = nullptr; |
michael@0 | 2465 | } else { |
michael@0 | 2466 | JS_ASSERT(!freeCursorEnd); |
michael@0 | 2467 | } |
michael@0 | 2468 | for (void ***iter = freeVector.begin(); iter != freeVector.end(); ++iter) { |
michael@0 | 2469 | void **array = *iter; |
michael@0 | 2470 | freeElementsAndArray(array, array + FREE_ARRAY_LENGTH); |
michael@0 | 2471 | } |
michael@0 | 2472 | freeVector.resize(0); |
michael@0 | 2473 | |
michael@0 | 2474 | rt->freeLifoAlloc.freeAll(); |
michael@0 | 2475 | } |
michael@0 | 2476 | |
michael@0 | 2477 | bool shrinking = shrinkFlag; |
michael@0 | 2478 | ExpireChunksAndArenas(rt, shrinking); |
michael@0 | 2479 | |
michael@0 | 2480 | /* |
michael@0 | 2481 | * The main thread may have called ShrinkGCBuffers while |
michael@0 | 2482 | * ExpireChunksAndArenas(rt, false) was running, so we recheck the flag |
michael@0 | 2483 | * afterwards. |
michael@0 | 2484 | */ |
michael@0 | 2485 | if (!shrinking && shrinkFlag) { |
michael@0 | 2486 | shrinkFlag = false; |
michael@0 | 2487 | ExpireChunksAndArenas(rt, true); |
michael@0 | 2488 | } |
michael@0 | 2489 | } |
michael@0 | 2490 | #endif /* JS_THREADSAFE */ |
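michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * Sketch of the interleaving the recheck in doSweep() closes, assuming |
michael@0 |      |  * (as the comment there implies) that the GC lock can be dropped inside |
michael@0 |      |  * ExpireChunksAndArenas: |
michael@0 |      |  * |
michael@0 |      |  *   helper: shrinking = shrinkFlag;            // reads false |
michael@0 |      |  *   helper: ExpireChunksAndArenas(rt, false);  // lock dropped inside |
michael@0 |      |  *   main:   startBackgroundShrink();           // sets shrinkFlag |
michael@0 |      |  *   helper: !shrinking && shrinkFlag           // recheck fires |
michael@0 |      |  *   helper: ExpireChunksAndArenas(rt, true);   // shrinking pass |
michael@0 |      |  */ |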
michael@0 | 2491 | |
michael@0 | 2492 | bool |
michael@0 | 2493 | GCHelperThread::onBackgroundThread() |
michael@0 | 2494 | { |
michael@0 | 2495 | #ifdef JS_THREADSAFE |
michael@0 | 2496 | return PR_GetCurrentThread() == getThread(); |
michael@0 | 2497 | #else |
michael@0 | 2498 | return false; |
michael@0 | 2499 | #endif |
michael@0 | 2500 | } |
michael@0 | 2501 | |
michael@0 | 2502 | static bool |
michael@0 | 2503 | ReleaseObservedTypes(JSRuntime *rt) |
michael@0 | 2504 | { |
michael@0 | 2505 | bool releaseTypes = rt->gcZeal() != 0; |
michael@0 | 2506 | |
michael@0 | 2507 | #ifndef JS_MORE_DETERMINISTIC |
michael@0 | 2508 | int64_t now = PRMJ_Now(); |
michael@0 | 2509 | if (now >= rt->gcJitReleaseTime) |
michael@0 | 2510 | releaseTypes = true; |
michael@0 | 2511 | if (releaseTypes) |
michael@0 | 2512 | rt->gcJitReleaseTime = now + JIT_SCRIPT_RELEASE_TYPES_INTERVAL; |
michael@0 | 2513 | #endif |
michael@0 | 2514 | |
michael@0 | 2515 | return releaseTypes; |
michael@0 | 2516 | } |
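michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * The effect is a rate limit on releasing type information. For example, |
michael@0 |      |  * with a hypothetical JIT_SCRIPT_RELEASE_TYPES_INTERVAL of 60 seconds: |
michael@0 |      |  * |
michael@0 |      |  *   GC at t=0s:  now >= gcJitReleaseTime -> release, next at t=60s |
michael@0 |      |  *   GC at t=20s: now <  gcJitReleaseTime -> keep types |
michael@0 |      |  *   GC at t=75s: now >= gcJitReleaseTime -> release, next at t=135s |
michael@0 |      |  * |
michael@0 |      |  * With gcZeal() set, every GC releases types regardless of the timer. |
michael@0 |      |  */ |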
michael@0 | 2517 | |
michael@0 | 2518 | /* |
michael@0 | 2519 | * It's simpler if we preserve the invariant that every zone has at least one |
michael@0 | 2520 | * compartment. If we know we're deleting the entire zone, then |
michael@0 | 2521 | * SweepCompartments is allowed to delete all compartments. In this case, |
michael@0 | 2522 | * |keepAtleastOne| is false. If some objects remain in the zone so that it |
michael@0 | 2523 | * cannot be deleted, then we set |keepAtleastOne| to true, which prohibits |
michael@0 | 2524 | * SweepCompartments from deleting every compartment. Instead, it preserves an |
michael@0 | 2525 | * arbitrary compartment in the zone. |
michael@0 | 2526 | */ |
michael@0 | 2527 | static void |
michael@0 | 2528 | SweepCompartments(FreeOp *fop, Zone *zone, bool keepAtleastOne, bool lastGC) |
michael@0 | 2529 | { |
michael@0 | 2530 | JSRuntime *rt = zone->runtimeFromMainThread(); |
michael@0 | 2531 | JSDestroyCompartmentCallback callback = rt->destroyCompartmentCallback; |
michael@0 | 2532 | |
michael@0 | 2533 | JSCompartment **read = zone->compartments.begin(); |
michael@0 | 2534 | JSCompartment **end = zone->compartments.end(); |
michael@0 | 2535 | JSCompartment **write = read; |
michael@0 | 2536 | bool foundOne = false; |
michael@0 | 2537 | while (read < end) { |
michael@0 | 2538 | JSCompartment *comp = *read++; |
michael@0 | 2539 | JS_ASSERT(!rt->isAtomsCompartment(comp)); |
michael@0 | 2540 | |
michael@0 | 2541 | /* |
michael@0 | 2542 | * Don't delete the last compartment if all the ones before it were |
michael@0 | 2543 | * deleted and keepAtleastOne is true. |
michael@0 | 2544 | */ |
michael@0 | 2545 | bool dontDelete = read == end && !foundOne && keepAtleastOne; |
michael@0 | 2546 | if ((!comp->marked && !dontDelete) || lastGC) { |
michael@0 | 2547 | if (callback) |
michael@0 | 2548 | callback(fop, comp); |
michael@0 | 2549 | if (comp->principals) |
michael@0 | 2550 | JS_DropPrincipals(rt, comp->principals); |
michael@0 | 2551 | js_delete(comp); |
michael@0 | 2552 | } else { |
michael@0 | 2553 | *write++ = comp; |
michael@0 | 2554 | foundOne = true; |
michael@0 | 2555 | } |
michael@0 | 2556 | } |
michael@0 | 2557 | zone->compartments.resize(write - zone->compartments.begin()); |
michael@0 | 2558 | JS_ASSERT_IF(keepAtleastOne, !zone->compartments.empty()); |
michael@0 | 2559 | } |
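michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * A sketch of the two call patterns in SweepZones below: a dying zone may |
michael@0 |      |  * drop every compartment, while a surviving zone must keep at least one: |
michael@0 |      |  * |
michael@0 |      |  *   SweepCompartments(fop, zone, false, lastGC); // zone is being deleted |
michael@0 |      |  *   JS_ASSERT(zone->compartments.empty()); |
michael@0 |      |  *   ... |
michael@0 |      |  *   SweepCompartments(fop, zone, true, lastGC);  // zone stays alive |
michael@0 |      |  */ |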
michael@0 | 2560 | |
michael@0 | 2561 | static void |
michael@0 | 2562 | SweepZones(FreeOp *fop, bool lastGC) |
michael@0 | 2563 | { |
michael@0 | 2564 | JSRuntime *rt = fop->runtime(); |
michael@0 | 2565 | JSZoneCallback callback = rt->destroyZoneCallback; |
michael@0 | 2566 | |
michael@0 | 2567 | /* Skip the atomsCompartment zone. */ |
michael@0 | 2568 | Zone **read = rt->zones.begin() + 1; |
michael@0 | 2569 | Zone **end = rt->zones.end(); |
michael@0 | 2570 | Zone **write = read; |
michael@0 | 2571 | JS_ASSERT(rt->zones.length() >= 1); |
michael@0 | 2572 | JS_ASSERT(rt->isAtomsZone(rt->zones[0])); |
michael@0 | 2573 | |
michael@0 | 2574 | while (read < end) { |
michael@0 | 2575 | Zone *zone = *read++; |
michael@0 | 2576 | |
michael@0 | 2577 | if (zone->wasGCStarted()) { |
michael@0 | 2578 | if ((zone->allocator.arenas.arenaListsAreEmpty() && !zone->hasMarkedCompartments()) || |
michael@0 | 2579 | lastGC) |
michael@0 | 2580 | { |
michael@0 | 2581 | zone->allocator.arenas.checkEmptyFreeLists(); |
michael@0 | 2582 | if (callback) |
michael@0 | 2583 | callback(zone); |
michael@0 | 2584 | SweepCompartments(fop, zone, false, lastGC); |
michael@0 | 2585 | JS_ASSERT(zone->compartments.empty()); |
michael@0 | 2586 | fop->delete_(zone); |
michael@0 | 2587 | continue; |
michael@0 | 2588 | } |
michael@0 | 2589 | SweepCompartments(fop, zone, true, lastGC); |
michael@0 | 2590 | } |
michael@0 | 2591 | *write++ = zone; |
michael@0 | 2592 | } |
michael@0 | 2593 | rt->zones.resize(write - rt->zones.begin()); |
michael@0 | 2594 | } |
michael@0 | 2595 | |
michael@0 | 2596 | static void |
michael@0 | 2597 | PurgeRuntime(JSRuntime *rt) |
michael@0 | 2598 | { |
michael@0 | 2599 | for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) |
michael@0 | 2600 | comp->purge(); |
michael@0 | 2601 | |
michael@0 | 2602 | rt->freeLifoAlloc.transferUnusedFrom(&rt->tempLifoAlloc); |
michael@0 | 2603 | rt->interpreterStack().purge(rt); |
michael@0 | 2604 | |
michael@0 | 2605 | rt->gsnCache.purge(); |
michael@0 | 2606 | rt->scopeCoordinateNameCache.purge(); |
michael@0 | 2607 | rt->newObjectCache.purge(); |
michael@0 | 2608 | rt->nativeIterCache.purge(); |
michael@0 | 2609 | rt->sourceDataCache.purge(); |
michael@0 | 2610 | rt->evalCache.clear(); |
michael@0 | 2611 | |
michael@0 | 2612 | if (!rt->hasActiveCompilations()) |
michael@0 | 2613 | rt->parseMapPool().purgeAll(); |
michael@0 | 2614 | } |
michael@0 | 2615 | |
michael@0 | 2616 | static bool |
michael@0 | 2617 | ShouldPreserveJITCode(JSCompartment *comp, int64_t currentTime) |
michael@0 | 2618 | { |
michael@0 | 2619 | JSRuntime *rt = comp->runtimeFromMainThread(); |
michael@0 | 2620 | if (rt->gcShouldCleanUpEverything) |
michael@0 | 2621 | return false; |
michael@0 | 2622 | |
michael@0 | 2623 | if (rt->alwaysPreserveCode) |
michael@0 | 2624 | return true; |
michael@0 | 2625 | if (comp->lastAnimationTime + PRMJ_USEC_PER_SEC >= currentTime) |
michael@0 | 2626 | return true; |
michael@0 | 2627 | |
michael@0 | 2628 | return false; |
michael@0 | 2629 | } |
michael@0 | 2630 | |
michael@0 | 2631 | #ifdef DEBUG |
michael@0 | 2632 | class CompartmentCheckTracer : public JSTracer |
michael@0 | 2633 | { |
michael@0 | 2634 | public: |
michael@0 | 2635 | CompartmentCheckTracer(JSRuntime *rt, JSTraceCallback callback) |
michael@0 | 2636 | : JSTracer(rt, callback) |
michael@0 | 2637 | {} |
michael@0 | 2638 | |
michael@0 | 2639 | Cell *src; |
michael@0 | 2640 | JSGCTraceKind srcKind; |
michael@0 | 2641 | Zone *zone; |
michael@0 | 2642 | JSCompartment *compartment; |
michael@0 | 2643 | }; |
michael@0 | 2644 | |
michael@0 | 2645 | static bool |
michael@0 | 2646 | InCrossCompartmentMap(JSObject *src, Cell *dst, JSGCTraceKind dstKind) |
michael@0 | 2647 | { |
michael@0 | 2648 | JSCompartment *srccomp = src->compartment(); |
michael@0 | 2649 | |
michael@0 | 2650 | if (dstKind == JSTRACE_OBJECT) { |
michael@0 | 2651 | Value key = ObjectValue(*static_cast<JSObject *>(dst)); |
michael@0 | 2652 | if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) { |
michael@0 | 2653 | if (*p->value().unsafeGet() == ObjectValue(*src)) |
michael@0 | 2654 | return true; |
michael@0 | 2655 | } |
michael@0 | 2656 | } |
michael@0 | 2657 | |
michael@0 | 2658 | /* |
michael@0 | 2659 | * If the cross-compartment edge is caused by the debugger, then we don't |
michael@0 | 2660 | * know the right hashtable key, so we have to iterate. |
michael@0 | 2661 | */ |
michael@0 | 2662 | for (JSCompartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) { |
michael@0 | 2663 | if (e.front().key().wrapped == dst && ToMarkable(e.front().value()) == src) |
michael@0 | 2664 | return true; |
michael@0 | 2665 | } |
michael@0 | 2666 | |
michael@0 | 2667 | return false; |
michael@0 | 2668 | } |
michael@0 | 2669 | |
michael@0 | 2670 | static void |
michael@0 | 2671 | CheckCompartment(CompartmentCheckTracer *trc, JSCompartment *thingCompartment, |
michael@0 | 2672 | Cell *thing, JSGCTraceKind kind) |
michael@0 | 2673 | { |
michael@0 | 2674 | JS_ASSERT(thingCompartment == trc->compartment || |
michael@0 | 2675 | trc->runtime()->isAtomsCompartment(thingCompartment) || |
michael@0 | 2676 | (trc->srcKind == JSTRACE_OBJECT && |
michael@0 | 2677 | InCrossCompartmentMap((JSObject *)trc->src, thing, kind))); |
michael@0 | 2678 | } |
michael@0 | 2679 | |
michael@0 | 2680 | static JSCompartment * |
michael@0 | 2681 | CompartmentOfCell(Cell *thing, JSGCTraceKind kind) |
michael@0 | 2682 | { |
michael@0 | 2683 | if (kind == JSTRACE_OBJECT) |
michael@0 | 2684 | return static_cast<JSObject *>(thing)->compartment(); |
michael@0 | 2685 | else if (kind == JSTRACE_SHAPE) |
michael@0 | 2686 | return static_cast<Shape *>(thing)->compartment(); |
michael@0 | 2687 | else if (kind == JSTRACE_BASE_SHAPE) |
michael@0 | 2688 | return static_cast<BaseShape *>(thing)->compartment(); |
michael@0 | 2689 | else if (kind == JSTRACE_SCRIPT) |
michael@0 | 2690 | return static_cast<JSScript *>(thing)->compartment(); |
michael@0 | 2691 | else |
michael@0 | 2692 | return nullptr; |
michael@0 | 2693 | } |
michael@0 | 2694 | |
michael@0 | 2695 | static void |
michael@0 | 2696 | CheckCompartmentCallback(JSTracer *trcArg, void **thingp, JSGCTraceKind kind) |
michael@0 | 2697 | { |
michael@0 | 2698 | CompartmentCheckTracer *trc = static_cast<CompartmentCheckTracer *>(trcArg); |
michael@0 | 2699 | Cell *thing = (Cell *)*thingp; |
michael@0 | 2700 | |
michael@0 | 2701 | JSCompartment *comp = CompartmentOfCell(thing, kind); |
michael@0 | 2702 | if (comp && trc->compartment) { |
michael@0 | 2703 | CheckCompartment(trc, comp, thing, kind); |
michael@0 | 2704 | } else { |
michael@0 | 2705 | JS_ASSERT(thing->tenuredZone() == trc->zone || |
michael@0 | 2706 | trc->runtime()->isAtomsZone(thing->tenuredZone())); |
michael@0 | 2707 | } |
michael@0 | 2708 | } |
michael@0 | 2709 | |
michael@0 | 2710 | static void |
michael@0 | 2711 | CheckForCompartmentMismatches(JSRuntime *rt) |
michael@0 | 2712 | { |
michael@0 | 2713 | if (rt->gcDisableStrictProxyCheckingCount) |
michael@0 | 2714 | return; |
michael@0 | 2715 | |
michael@0 | 2716 | CompartmentCheckTracer trc(rt, CheckCompartmentCallback); |
michael@0 | 2717 | for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) { |
michael@0 | 2718 | trc.zone = zone; |
michael@0 | 2719 | for (size_t thingKind = 0; thingKind < FINALIZE_LAST; thingKind++) { |
michael@0 | 2720 | for (CellIterUnderGC i(zone, AllocKind(thingKind)); !i.done(); i.next()) { |
michael@0 | 2721 | trc.src = i.getCell(); |
michael@0 | 2722 | trc.srcKind = MapAllocToTraceKind(AllocKind(thingKind)); |
michael@0 | 2723 | trc.compartment = CompartmentOfCell(trc.src, trc.srcKind); |
michael@0 | 2724 | JS_TraceChildren(&trc, trc.src, trc.srcKind); |
michael@0 | 2725 | } |
michael@0 | 2726 | } |
michael@0 | 2727 | } |
michael@0 | 2728 | } |
michael@0 | 2729 | #endif |
michael@0 | 2730 | |
michael@0 | 2731 | static bool |
michael@0 | 2732 | BeginMarkPhase(JSRuntime *rt) |
michael@0 | 2733 | { |
michael@0 | 2734 | int64_t currentTime = PRMJ_Now(); |
michael@0 | 2735 | |
michael@0 | 2736 | #ifdef DEBUG |
michael@0 | 2737 | if (rt->gcFullCompartmentChecks) |
michael@0 | 2738 | CheckForCompartmentMismatches(rt); |
michael@0 | 2739 | #endif |
michael@0 | 2740 | |
michael@0 | 2741 | rt->gcIsFull = true; |
michael@0 | 2742 | bool any = false; |
michael@0 | 2743 | |
michael@0 | 2744 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 2745 | /* Assert that zone state is as we expect */ |
michael@0 | 2746 | JS_ASSERT(!zone->isCollecting()); |
michael@0 | 2747 | JS_ASSERT(!zone->compartments.empty()); |
michael@0 | 2748 | for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) |
michael@0 | 2749 | JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]); |
michael@0 | 2750 | |
michael@0 | 2751 | /* Set up which zones will be collected. */ |
michael@0 | 2752 | if (zone->isGCScheduled()) { |
michael@0 | 2753 | if (!rt->isAtomsZone(zone)) { |
michael@0 | 2754 | any = true; |
michael@0 | 2755 | zone->setGCState(Zone::Mark); |
michael@0 | 2756 | } |
michael@0 | 2757 | } else { |
michael@0 | 2758 | rt->gcIsFull = false; |
michael@0 | 2759 | } |
michael@0 | 2760 | |
michael@0 | 2761 | zone->scheduledForDestruction = false; |
michael@0 | 2762 | zone->maybeAlive = false; |
michael@0 | 2763 | zone->setPreservingCode(false); |
michael@0 | 2764 | } |
michael@0 | 2765 | |
michael@0 | 2766 | for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) { |
michael@0 | 2767 | JS_ASSERT(c->gcLiveArrayBuffers.empty()); |
michael@0 | 2768 | c->marked = false; |
michael@0 | 2769 | if (ShouldPreserveJITCode(c, currentTime)) |
michael@0 | 2770 | c->zone()->setPreservingCode(true); |
michael@0 | 2771 | } |
michael@0 | 2772 | |
michael@0 | 2773 | /* |
michael@0 | 2774 | * Atoms are not in the cross-compartment map. So if there are any |
michael@0 | 2775 | * zones that are not being collected, we are not allowed to collect |
michael@0 | 2776 | * atoms. Otherwise, the non-collected zones could contain pointers |
michael@0 | 2777 | * to atoms that we would miss. |
michael@0 | 2778 | * |
michael@0 | 2779 | * keepAtoms() will only change on the main thread, which we are currently |
michael@0 | 2780 | * on. If the value of keepAtoms() changes between GC slices, then we'll |
michael@0 | 2781 | * cancel the incremental GC. See IsIncrementalGCSafe. |
michael@0 | 2782 | */ |
michael@0 | 2783 | if (rt->gcIsFull && !rt->keepAtoms()) { |
michael@0 | 2784 | Zone *atomsZone = rt->atomsCompartment()->zone(); |
michael@0 | 2785 | if (atomsZone->isGCScheduled()) { |
michael@0 | 2786 | JS_ASSERT(!atomsZone->isCollecting()); |
michael@0 | 2787 | atomsZone->setGCState(Zone::Mark); |
michael@0 | 2788 | any = true; |
michael@0 | 2789 | } |
michael@0 | 2790 | } |
michael@0 | 2791 | |
michael@0 | 2792 | /* Check that at least one zone is scheduled for collection. */ |
michael@0 | 2793 | if (!any) |
michael@0 | 2794 | return false; |
michael@0 | 2795 | |
michael@0 | 2796 | /* |
michael@0 | 2797 | * At the end of each incremental slice, we call prepareForIncrementalGC, |
michael@0 | 2798 | * which marks objects in all arenas that we're currently allocating |
michael@0 | 2799 | * into. This can cause leaks if unreachable objects are in these |
michael@0 | 2800 | * arenas. This purge call ensures that we only mark arenas that have had |
michael@0 | 2801 | * allocations after the incremental GC started. |
michael@0 | 2802 | */ |
michael@0 | 2803 | if (rt->gcIsIncremental) { |
michael@0 | 2804 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) |
michael@0 | 2805 | zone->allocator.arenas.purge(); |
michael@0 | 2806 | } |
michael@0 | 2807 | |
michael@0 | 2808 | rt->gcMarker.start(); |
michael@0 | 2809 | JS_ASSERT(!rt->gcMarker.callback); |
michael@0 | 2810 | JS_ASSERT(IS_GC_MARKING_TRACER(&rt->gcMarker)); |
michael@0 | 2811 | |
michael@0 | 2812 |     /* For a non-incremental GC the sweep itself discards the jit code, so we only discard it eagerly here for incremental GC. */ |
michael@0 | 2813 | if (rt->gcIsIncremental) { |
michael@0 | 2814 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 2815 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_MARK_DISCARD_CODE); |
michael@0 | 2816 | zone->discardJitCode(rt->defaultFreeOp()); |
michael@0 | 2817 | } |
michael@0 | 2818 | } |
michael@0 | 2819 | |
michael@0 | 2820 | GCMarker *gcmarker = &rt->gcMarker; |
michael@0 | 2821 | |
michael@0 | 2822 | rt->gcStartNumber = rt->gcNumber; |
michael@0 | 2823 | |
michael@0 | 2824 | /* |
michael@0 | 2825 | * We must purge the runtime at the beginning of an incremental GC. The |
michael@0 | 2826 | * danger if we purge later is that the snapshot invariant of incremental |
michael@0 | 2827 | * GC will be broken, as follows. If some object is reachable only through |
michael@0 | 2828 | * some cache (say the dtoaCache) then it will not be part of the snapshot. |
michael@0 | 2829 | * If we purge after root marking, then the mutator could obtain a pointer |
michael@0 | 2830 | * to the object and start using it. This object might never be marked, so |
michael@0 | 2831 | * a GC hazard would exist. |
michael@0 | 2832 | */ |
michael@0 | 2833 | { |
michael@0 | 2834 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_PURGE); |
michael@0 | 2835 | PurgeRuntime(rt); |
michael@0 | 2836 | } |
michael@0 | 2837 | |
michael@0 | 2838 | /* |
michael@0 | 2839 | * Mark phase. |
michael@0 | 2840 | */ |
michael@0 | 2841 | gcstats::AutoPhase ap1(rt->gcStats, gcstats::PHASE_MARK); |
michael@0 | 2842 | gcstats::AutoPhase ap2(rt->gcStats, gcstats::PHASE_MARK_ROOTS); |
michael@0 | 2843 | |
michael@0 | 2844 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 2845 | /* Unmark everything in the zones being collected. */ |
michael@0 | 2846 | zone->allocator.arenas.unmarkAll(); |
michael@0 | 2847 | } |
michael@0 | 2848 | |
michael@0 | 2849 | for (GCCompartmentsIter c(rt); !c.done(); c.next()) { |
michael@0 | 2850 | /* Reset weak map list for the compartments being collected. */ |
michael@0 | 2851 | WeakMapBase::resetCompartmentWeakMapList(c); |
michael@0 | 2852 | } |
michael@0 | 2853 | |
michael@0 | 2854 | if (rt->gcIsFull) |
michael@0 | 2855 | UnmarkScriptData(rt); |
michael@0 | 2856 | |
michael@0 | 2857 | MarkRuntime(gcmarker); |
michael@0 | 2858 | if (rt->gcIsIncremental) |
michael@0 | 2859 | BufferGrayRoots(gcmarker); |
michael@0 | 2860 | |
michael@0 | 2861 | /* |
michael@0 | 2862 | * This code ensures that if a zone is "dead", then it will be |
michael@0 | 2863 | * collected in this GC. A zone is considered dead if its maybeAlive |
michael@0 | 2864 | * flag is false. The maybeAlive flag is set if: |
michael@0 | 2865 | * (1) the zone has incoming cross-compartment edges, or |
michael@0 | 2866 | * (2) an object in the zone was marked during root marking, either |
michael@0 | 2867 | * as a black root or a gray root. |
michael@0 | 2868 |      * If maybeAlive is false, then we set the scheduledForDestruction flag. |
michael@0 | 2869 | * At any time later in the GC, if we try to mark an object whose |
michael@0 | 2870 | * zone is scheduled for destruction, we will assert. |
michael@0 | 2871 | * NOTE: Due to bug 811587, we only assert if gcManipulatingDeadCompartments |
michael@0 | 2872 | * is true (e.g., if we're doing a brain transplant). |
michael@0 | 2873 | * |
michael@0 | 2874 | * The purpose of this check is to ensure that a zone that we would |
michael@0 | 2875 | * normally destroy is not resurrected by a read barrier or an |
michael@0 | 2876 | * allocation. This might happen during a function like JS_TransplantObject, |
michael@0 | 2877 | * which iterates over all compartments, live or dead, and operates on their |
michael@0 | 2878 | * objects. See bug 803376 for details on this problem. To avoid the |
michael@0 | 2879 | * problem, we are very careful to avoid allocation and read barriers during |
michael@0 | 2880 | * JS_TransplantObject and the like. The code here ensures that we don't |
michael@0 | 2881 | * regress. |
michael@0 | 2882 | * |
michael@0 | 2883 | * Note that there are certain cases where allocations or read barriers in |
michael@0 | 2884 |      * a dead zone are difficult to avoid. We detect such cases (via the |
michael@0 | 2885 | * gcObjectsMarkedInDeadCompartment counter) and redo any ongoing GCs after |
michael@0 | 2886 | * the JS_TransplantObject function has finished. This ensures that the dead |
michael@0 | 2887 | * zones will be cleaned up. See AutoMarkInDeadZone and |
michael@0 | 2888 | * AutoMaybeTouchDeadZones for details. |
michael@0 | 2889 | */ |
michael@0 | 2890 | |
michael@0 | 2891 | /* Set the maybeAlive flag based on cross-compartment edges. */ |
michael@0 | 2892 | for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) { |
michael@0 | 2893 | for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) { |
michael@0 | 2894 | Cell *dst = e.front().key().wrapped; |
michael@0 | 2895 | dst->tenuredZone()->maybeAlive = true; |
michael@0 | 2896 | } |
michael@0 | 2897 | } |
michael@0 | 2898 | |
michael@0 | 2899 | /* |
michael@0 | 2900 | * For black roots, code in gc/Marking.cpp will already have set maybeAlive |
michael@0 | 2901 | * during MarkRuntime. |
michael@0 | 2902 | */ |
michael@0 | 2903 | |
michael@0 | 2904 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 2905 | if (!zone->maybeAlive && !rt->isAtomsZone(zone)) |
michael@0 | 2906 | zone->scheduledForDestruction = true; |
michael@0 | 2907 | } |
michael@0 | 2908 | rt->gcFoundBlackGrayEdges = false; |
michael@0 | 2909 | |
michael@0 | 2910 | return true; |
michael@0 | 2911 | } |
michael@0 | 2912 | |
michael@0 | 2913 | template <class CompartmentIterT> |
michael@0 | 2914 | static void |
michael@0 | 2915 | MarkWeakReferences(JSRuntime *rt, gcstats::Phase phase) |
michael@0 | 2916 | { |
michael@0 | 2917 | GCMarker *gcmarker = &rt->gcMarker; |
michael@0 | 2918 | JS_ASSERT(gcmarker->isDrained()); |
michael@0 | 2919 | |
michael@0 | 2920 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK); |
michael@0 | 2921 | gcstats::AutoPhase ap1(rt->gcStats, phase); |
michael@0 | 2922 | |
michael@0 | 2923 | for (;;) { |
michael@0 | 2924 | bool markedAny = false; |
michael@0 | 2925 | for (CompartmentIterT c(rt); !c.done(); c.next()) { |
michael@0 | 2926 | markedAny |= WatchpointMap::markCompartmentIteratively(c, gcmarker); |
michael@0 | 2927 | markedAny |= WeakMapBase::markCompartmentIteratively(c, gcmarker); |
michael@0 | 2928 | } |
michael@0 | 2929 | markedAny |= Debugger::markAllIteratively(gcmarker); |
michael@0 | 2930 | |
michael@0 | 2931 | if (!markedAny) |
michael@0 | 2932 | break; |
michael@0 | 2933 | |
michael@0 | 2934 | SliceBudget budget; |
michael@0 | 2935 | gcmarker->drainMarkStack(budget); |
michael@0 | 2936 | } |
michael@0 | 2937 | JS_ASSERT(gcmarker->isDrained()); |
michael@0 | 2938 | } |
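michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * The loop above is a fixed-point iteration: marking one weak map's value |
michael@0 |      |  * can make another map's key reachable, so passes repeat until a whole |
michael@0 |      |  * pass marks nothing. Schematically (markOnePass() is a hypothetical |
michael@0 |      |  * stand-in for the watchpoint/weakmap/debugger calls): |
michael@0 |      |  * |
michael@0 |      |  *   bool markedAny; |
michael@0 |      |  *   do { |
michael@0 |      |  *       markedAny = markOnePass(gcmarker); |
michael@0 |      |  *       gcmarker->drainMarkStack(budget); |
michael@0 |      |  *   } while (markedAny); |
michael@0 |      |  */ |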
michael@0 | 2939 | |
michael@0 | 2940 | static void |
michael@0 | 2941 | MarkWeakReferencesInCurrentGroup(JSRuntime *rt, gcstats::Phase phase) |
michael@0 | 2942 | { |
michael@0 | 2943 | MarkWeakReferences<GCCompartmentGroupIter>(rt, phase); |
michael@0 | 2944 | } |
michael@0 | 2945 | |
michael@0 | 2946 | template <class ZoneIterT, class CompartmentIterT> |
michael@0 | 2947 | static void |
michael@0 | 2948 | MarkGrayReferences(JSRuntime *rt) |
michael@0 | 2949 | { |
michael@0 | 2950 | GCMarker *gcmarker = &rt->gcMarker; |
michael@0 | 2951 | |
michael@0 | 2952 | { |
michael@0 | 2953 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK); |
michael@0 | 2954 | gcstats::AutoPhase ap1(rt->gcStats, gcstats::PHASE_SWEEP_MARK_GRAY); |
michael@0 | 2955 | gcmarker->setMarkColorGray(); |
michael@0 | 2956 | if (gcmarker->hasBufferedGrayRoots()) { |
michael@0 | 2957 | for (ZoneIterT zone(rt); !zone.done(); zone.next()) |
michael@0 | 2958 | gcmarker->markBufferedGrayRoots(zone); |
michael@0 | 2959 | } else { |
michael@0 | 2960 | JS_ASSERT(!rt->gcIsIncremental); |
michael@0 | 2961 | if (JSTraceDataOp op = rt->gcGrayRootTracer.op) |
michael@0 | 2962 | (*op)(gcmarker, rt->gcGrayRootTracer.data); |
michael@0 | 2963 | } |
michael@0 | 2964 | SliceBudget budget; |
michael@0 | 2965 | gcmarker->drainMarkStack(budget); |
michael@0 | 2966 | } |
michael@0 | 2967 | |
michael@0 | 2968 | MarkWeakReferences<CompartmentIterT>(rt, gcstats::PHASE_SWEEP_MARK_GRAY_WEAK); |
michael@0 | 2969 | |
michael@0 | 2970 | JS_ASSERT(gcmarker->isDrained()); |
michael@0 | 2971 | |
michael@0 | 2972 | gcmarker->setMarkColorBlack(); |
michael@0 | 2973 | } |
michael@0 | 2974 | |
michael@0 | 2975 | static void |
michael@0 | 2976 | MarkGrayReferencesInCurrentGroup(JSRuntime *rt) |
michael@0 | 2977 | { |
michael@0 | 2978 | MarkGrayReferences<GCZoneGroupIter, GCCompartmentGroupIter>(rt); |
michael@0 | 2979 | } |
michael@0 | 2980 | |
michael@0 | 2981 | #ifdef DEBUG |
michael@0 | 2982 | |
michael@0 | 2983 | static void |
michael@0 | 2984 | MarkAllWeakReferences(JSRuntime *rt, gcstats::Phase phase) |
michael@0 | 2985 | { |
michael@0 | 2986 | MarkWeakReferences<GCCompartmentsIter>(rt, phase); |
michael@0 | 2987 | } |
michael@0 | 2988 | |
michael@0 | 2989 | static void |
michael@0 | 2990 | MarkAllGrayReferences(JSRuntime *rt) |
michael@0 | 2991 | { |
michael@0 | 2992 | MarkGrayReferences<GCZonesIter, GCCompartmentsIter>(rt); |
michael@0 | 2993 | } |
michael@0 | 2994 | |
michael@0 | 2995 | class js::gc::MarkingValidator |
michael@0 | 2996 | { |
michael@0 | 2997 | public: |
michael@0 | 2998 | MarkingValidator(JSRuntime *rt); |
michael@0 | 2999 | ~MarkingValidator(); |
michael@0 | 3000 | void nonIncrementalMark(); |
michael@0 | 3001 | void validate(); |
michael@0 | 3002 | |
michael@0 | 3003 | private: |
michael@0 | 3004 | JSRuntime *runtime; |
michael@0 | 3005 | bool initialized; |
michael@0 | 3006 | |
michael@0 | 3007 | typedef HashMap<Chunk *, ChunkBitmap *, GCChunkHasher, SystemAllocPolicy> BitmapMap; |
michael@0 | 3008 | BitmapMap map; |
michael@0 | 3009 | }; |
michael@0 | 3010 | |
michael@0 | 3011 | js::gc::MarkingValidator::MarkingValidator(JSRuntime *rt) |
michael@0 | 3012 | : runtime(rt), |
michael@0 | 3013 | initialized(false) |
michael@0 | 3014 | {} |
michael@0 | 3015 | |
michael@0 | 3016 | js::gc::MarkingValidator::~MarkingValidator() |
michael@0 | 3017 | { |
michael@0 | 3018 | if (!map.initialized()) |
michael@0 | 3019 | return; |
michael@0 | 3020 | |
michael@0 | 3021 | for (BitmapMap::Range r(map.all()); !r.empty(); r.popFront()) |
michael@0 | 3022 | js_delete(r.front().value()); |
michael@0 | 3023 | } |
michael@0 | 3024 | |
michael@0 | 3025 | void |
michael@0 | 3026 | js::gc::MarkingValidator::nonIncrementalMark() |
michael@0 | 3027 | { |
michael@0 | 3028 | /* |
michael@0 | 3029 | * Perform a non-incremental mark for all collecting zones and record |
michael@0 | 3030 | * the results for later comparison. |
michael@0 | 3031 | * |
michael@0 | 3032 | * Currently this does not validate gray marking. |
michael@0 | 3033 | */ |
michael@0 | 3034 | |
michael@0 | 3035 | if (!map.init()) |
michael@0 | 3036 | return; |
michael@0 | 3037 | |
michael@0 | 3038 | GCMarker *gcmarker = &runtime->gcMarker; |
michael@0 | 3039 | |
michael@0 | 3040 | /* Save existing mark bits. */ |
michael@0 | 3041 | for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) { |
michael@0 | 3042 | ChunkBitmap *bitmap = &r.front()->bitmap; |
michael@0 | 3043 | ChunkBitmap *entry = js_new<ChunkBitmap>(); |
michael@0 | 3044 | if (!entry) |
michael@0 | 3045 | return; |
michael@0 | 3046 | |
michael@0 | 3047 | memcpy((void *)entry->bitmap, (void *)bitmap->bitmap, sizeof(bitmap->bitmap)); |
michael@0 | 3048 | if (!map.putNew(r.front(), entry)) |
michael@0 | 3049 | return; |
michael@0 | 3050 | } |
michael@0 | 3051 | |
michael@0 | 3052 | /* |
michael@0 | 3053 | * Temporarily clear the lists of live weakmaps and array buffers for the |
michael@0 | 3054 | * compartments we are collecting. |
michael@0 | 3055 | */ |
michael@0 | 3056 | |
michael@0 | 3057 | WeakMapVector weakmaps; |
michael@0 | 3058 | ArrayBufferVector arrayBuffers; |
michael@0 | 3059 | for (GCCompartmentsIter c(runtime); !c.done(); c.next()) { |
michael@0 | 3060 | if (!WeakMapBase::saveCompartmentWeakMapList(c, weakmaps) || |
michael@0 | 3061 | !ArrayBufferObject::saveArrayBufferList(c, arrayBuffers)) |
michael@0 | 3062 | { |
michael@0 | 3063 | return; |
michael@0 | 3064 | } |
michael@0 | 3065 | } |
michael@0 | 3066 | |
michael@0 | 3067 | /* |
michael@0 | 3068 | * After this point, the function should run to completion, so we shouldn't |
michael@0 | 3069 | * do anything fallible. |
michael@0 | 3070 | */ |
michael@0 | 3071 | initialized = true; |
michael@0 | 3072 | |
michael@0 | 3073 | for (GCCompartmentsIter c(runtime); !c.done(); c.next()) { |
michael@0 | 3074 | WeakMapBase::resetCompartmentWeakMapList(c); |
michael@0 | 3075 | ArrayBufferObject::resetArrayBufferList(c); |
michael@0 | 3076 | } |
michael@0 | 3077 | |
michael@0 | 3078 | /* Re-do all the marking, but non-incrementally. */ |
michael@0 | 3079 | js::gc::State state = runtime->gcIncrementalState; |
michael@0 | 3080 | runtime->gcIncrementalState = MARK_ROOTS; |
michael@0 | 3081 | |
michael@0 | 3082 | JS_ASSERT(gcmarker->isDrained()); |
michael@0 | 3083 | gcmarker->reset(); |
michael@0 | 3084 | |
michael@0 | 3085 | for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) |
michael@0 | 3086 | r.front()->bitmap.clear(); |
michael@0 | 3087 | |
michael@0 | 3088 | { |
michael@0 | 3089 | gcstats::AutoPhase ap1(runtime->gcStats, gcstats::PHASE_MARK); |
michael@0 | 3090 | gcstats::AutoPhase ap2(runtime->gcStats, gcstats::PHASE_MARK_ROOTS); |
michael@0 | 3091 | MarkRuntime(gcmarker, true); |
michael@0 | 3092 | } |
michael@0 | 3093 | |
michael@0 | 3094 | { |
michael@0 | 3095 | gcstats::AutoPhase ap1(runtime->gcStats, gcstats::PHASE_MARK); |
michael@0 | 3096 | SliceBudget budget; |
michael@0 | 3097 | runtime->gcIncrementalState = MARK; |
michael@0 | 3098 | runtime->gcMarker.drainMarkStack(budget); |
michael@0 | 3099 | } |
michael@0 | 3100 | |
michael@0 | 3101 | runtime->gcIncrementalState = SWEEP; |
michael@0 | 3102 | { |
michael@0 | 3103 | gcstats::AutoPhase ap(runtime->gcStats, gcstats::PHASE_SWEEP); |
michael@0 | 3104 | MarkAllWeakReferences(runtime, gcstats::PHASE_SWEEP_MARK_WEAK); |
michael@0 | 3105 | |
michael@0 | 3106 | /* Update zone state for gray marking. */ |
michael@0 | 3107 | for (GCZonesIter zone(runtime); !zone.done(); zone.next()) { |
michael@0 | 3108 | JS_ASSERT(zone->isGCMarkingBlack()); |
michael@0 | 3109 | zone->setGCState(Zone::MarkGray); |
michael@0 | 3110 | } |
michael@0 | 3111 | |
michael@0 | 3112 | MarkAllGrayReferences(runtime); |
michael@0 | 3113 | |
michael@0 | 3114 | /* Restore zone state. */ |
michael@0 | 3115 | for (GCZonesIter zone(runtime); !zone.done(); zone.next()) { |
michael@0 | 3116 | JS_ASSERT(zone->isGCMarkingGray()); |
michael@0 | 3117 | zone->setGCState(Zone::Mark); |
michael@0 | 3118 | } |
michael@0 | 3119 | } |
michael@0 | 3120 | |
michael@0 | 3121 | /* Take a copy of the non-incremental mark state and restore the original. */ |
michael@0 | 3122 | for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) { |
michael@0 | 3123 | Chunk *chunk = r.front(); |
michael@0 | 3124 | ChunkBitmap *bitmap = &chunk->bitmap; |
michael@0 | 3125 | ChunkBitmap *entry = map.lookup(chunk)->value(); |
michael@0 | 3126 | Swap(*entry, *bitmap); |
michael@0 | 3127 | } |
michael@0 | 3128 | |
michael@0 | 3129 | for (GCCompartmentsIter c(runtime); !c.done(); c.next()) { |
michael@0 | 3130 | WeakMapBase::resetCompartmentWeakMapList(c); |
michael@0 | 3131 | ArrayBufferObject::resetArrayBufferList(c); |
michael@0 | 3132 | } |
michael@0 | 3133 | WeakMapBase::restoreCompartmentWeakMapLists(weakmaps); |
michael@0 | 3134 | ArrayBufferObject::restoreArrayBufferLists(arrayBuffers); |
michael@0 | 3135 | |
michael@0 | 3136 | runtime->gcIncrementalState = state; |
michael@0 | 3137 | } |
michael@0 | 3138 | |
michael@0 | 3139 | void |
michael@0 | 3140 | js::gc::MarkingValidator::validate() |
michael@0 | 3141 | { |
michael@0 | 3142 | /* |
michael@0 | 3143 |      * Validates the incremental marking by comparing the mark bits of the |
michael@0 | 3144 |      * collecting zones to those previously recorded for a non-incremental mark. |
michael@0 | 3145 | */ |
michael@0 | 3146 | |
michael@0 | 3147 | if (!initialized) |
michael@0 | 3148 | return; |
michael@0 | 3149 | |
michael@0 | 3150 | for (GCChunkSet::Range r(runtime->gcChunkSet.all()); !r.empty(); r.popFront()) { |
michael@0 | 3151 | Chunk *chunk = r.front(); |
michael@0 | 3152 | BitmapMap::Ptr ptr = map.lookup(chunk); |
michael@0 | 3153 | if (!ptr) |
michael@0 | 3154 | continue; /* Allocated after we did the non-incremental mark. */ |
michael@0 | 3155 | |
michael@0 | 3156 | ChunkBitmap *bitmap = ptr->value(); |
michael@0 | 3157 | ChunkBitmap *incBitmap = &chunk->bitmap; |
michael@0 | 3158 | |
michael@0 | 3159 | for (size_t i = 0; i < ArenasPerChunk; i++) { |
michael@0 | 3160 | if (chunk->decommittedArenas.get(i)) |
michael@0 | 3161 | continue; |
michael@0 | 3162 | Arena *arena = &chunk->arenas[i]; |
michael@0 | 3163 | if (!arena->aheader.allocated()) |
michael@0 | 3164 | continue; |
michael@0 | 3165 | if (!arena->aheader.zone->isGCSweeping()) |
michael@0 | 3166 | continue; |
michael@0 | 3167 | if (arena->aheader.allocatedDuringIncremental) |
michael@0 | 3168 | continue; |
michael@0 | 3169 | |
michael@0 | 3170 | AllocKind kind = arena->aheader.getAllocKind(); |
michael@0 | 3171 | uintptr_t thing = arena->thingsStart(kind); |
michael@0 | 3172 | uintptr_t end = arena->thingsEnd(); |
michael@0 | 3173 | while (thing < end) { |
michael@0 | 3174 | Cell *cell = (Cell *)thing; |
michael@0 | 3175 | |
michael@0 | 3176 | /* |
michael@0 | 3177 | * If a non-incremental GC wouldn't have collected a cell, then |
michael@0 | 3178 | * an incremental GC won't collect it. |
michael@0 | 3179 | */ |
michael@0 | 3180 | JS_ASSERT_IF(bitmap->isMarked(cell, BLACK), incBitmap->isMarked(cell, BLACK)); |
michael@0 | 3181 | |
michael@0 | 3182 | /* |
michael@0 | 3183 | * If the cycle collector isn't allowed to collect an object |
michael@0 | 3184 | * after a non-incremental GC has run, then it isn't allowed to |
michael@0 | 3185 |                  * collect it after an incremental GC. |
michael@0 | 3186 | */ |
michael@0 | 3187 | JS_ASSERT_IF(!bitmap->isMarked(cell, GRAY), !incBitmap->isMarked(cell, GRAY)); |
michael@0 | 3188 | |
michael@0 | 3189 | thing += Arena::thingSize(kind); |
michael@0 | 3190 | } |
michael@0 | 3191 | } |
michael@0 | 3192 | } |
michael@0 | 3193 | } |
michael@0 | 3194 | |
michael@0 | 3195 | #endif |
michael@0 | 3196 | |
michael@0 | 3197 | static void |
michael@0 | 3198 | ComputeNonIncrementalMarkingForValidation(JSRuntime *rt) |
michael@0 | 3199 | { |
michael@0 | 3200 | #ifdef DEBUG |
michael@0 | 3201 | JS_ASSERT(!rt->gcMarkingValidator); |
michael@0 | 3202 | if (rt->gcIsIncremental && rt->gcValidate) |
michael@0 | 3203 | rt->gcMarkingValidator = js_new<MarkingValidator>(rt); |
michael@0 | 3204 | if (rt->gcMarkingValidator) |
michael@0 | 3205 | rt->gcMarkingValidator->nonIncrementalMark(); |
michael@0 | 3206 | #endif |
michael@0 | 3207 | } |
michael@0 | 3208 | |
michael@0 | 3209 | static void |
michael@0 | 3210 | ValidateIncrementalMarking(JSRuntime *rt) |
michael@0 | 3211 | { |
michael@0 | 3212 | #ifdef DEBUG |
michael@0 | 3213 | if (rt->gcMarkingValidator) |
michael@0 | 3214 | rt->gcMarkingValidator->validate(); |
michael@0 | 3215 | #endif |
michael@0 | 3216 | } |
michael@0 | 3217 | |
michael@0 | 3218 | static void |
michael@0 | 3219 | FinishMarkingValidation(JSRuntime *rt) |
michael@0 | 3220 | { |
michael@0 | 3221 | #ifdef DEBUG |
michael@0 | 3222 | js_delete(rt->gcMarkingValidator); |
michael@0 | 3223 | rt->gcMarkingValidator = nullptr; |
michael@0 | 3224 | #endif |
michael@0 | 3225 | } |
michael@0 | 3226 | |
michael@0 | 3227 | static void |
michael@0 | 3228 | AssertNeedsBarrierFlagsConsistent(JSRuntime *rt) |
michael@0 | 3229 | { |
michael@0 | 3230 | #ifdef DEBUG |
michael@0 | 3231 | bool anyNeedsBarrier = false; |
michael@0 | 3232 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) |
michael@0 | 3233 | anyNeedsBarrier |= zone->needsBarrier(); |
michael@0 | 3234 | JS_ASSERT(rt->needsBarrier() == anyNeedsBarrier); |
michael@0 | 3235 | #endif |
michael@0 | 3236 | } |
michael@0 | 3237 | |
michael@0 | 3238 | static void |
michael@0 | 3239 | DropStringWrappers(JSRuntime *rt) |
michael@0 | 3240 | { |
michael@0 | 3241 | /* |
michael@0 | 3242 | * String "wrappers" are dropped on GC because their presence would require |
michael@0 | 3243 | * us to sweep the wrappers in all compartments every time we sweep a |
michael@0 | 3244 | * compartment group. |
michael@0 | 3245 | */ |
michael@0 | 3246 | for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) { |
michael@0 | 3247 | for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) { |
michael@0 | 3248 | if (e.front().key().kind == CrossCompartmentKey::StringWrapper) |
michael@0 | 3249 | e.removeFront(); |
michael@0 | 3250 | } |
michael@0 | 3251 | } |
michael@0 | 3252 | } |
michael@0 | 3253 | |
michael@0 | 3254 | /* |
michael@0 | 3255 | * Group zones that must be swept at the same time. |
michael@0 | 3256 | * |
michael@0 | 3257 | * If compartment A has an edge to an unmarked object in compartment B, then we |
michael@0 | 3258 |  * must not sweep A in a later slice than we sweep B. That's because a write |
michael@0 | 3259 |  * barrier in A could lead to the unmarked object in B becoming marked. |
michael@0 | 3260 |  * However, if we had already swept that object, we would be in trouble. |
michael@0 | 3261 | * |
michael@0 | 3262 | * If we consider these dependencies as a graph, then all the compartments in |
michael@0 | 3263 | * any strongly-connected component of this graph must be swept in the same |
michael@0 | 3264 | * slice. |
michael@0 | 3265 | * |
michael@0 | 3266 | * Tarjan's algorithm is used to calculate the components. |
michael@0 | 3267 | */ |
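michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * For example (a sketch), read X -> Y as "X must be swept no later than |
michael@0 |      |  * Y". If zones A and B point at unmarked objects in each other, and B |
michael@0 |      |  * also points into C: |
michael@0 |      |  * |
michael@0 |      |  *   A <--> B ---> C |
michael@0 |      |  * |
michael@0 |      |  * then {A, B} form a strongly-connected component and are swept in the |
michael@0 |      |  * same slice, while {C} may be swept in a later slice. |
michael@0 |      |  */ |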
michael@0 | 3268 | |
michael@0 | 3269 | void |
michael@0 | 3270 | JSCompartment::findOutgoingEdges(ComponentFinder<JS::Zone> &finder) |
michael@0 | 3271 | { |
michael@0 | 3272 | for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) { |
michael@0 | 3273 | CrossCompartmentKey::Kind kind = e.front().key().kind; |
michael@0 | 3274 | JS_ASSERT(kind != CrossCompartmentKey::StringWrapper); |
michael@0 | 3275 | Cell *other = e.front().key().wrapped; |
michael@0 | 3276 | if (kind == CrossCompartmentKey::ObjectWrapper) { |
michael@0 | 3277 | /* |
michael@0 | 3278 |              * Add an edge to the wrapped object's compartment if the wrapped |
michael@0 | 3279 |              * object is not marked black, to indicate that the wrapper's |
michael@0 | 3280 |              * compartment must not be swept after the wrapped object's compartment. |
michael@0 | 3281 | */ |
michael@0 | 3282 | if (!other->isMarked(BLACK) || other->isMarked(GRAY)) { |
michael@0 | 3283 | JS::Zone *w = other->tenuredZone(); |
michael@0 | 3284 | if (w->isGCMarking()) |
michael@0 | 3285 | finder.addEdgeTo(w); |
michael@0 | 3286 | } |
michael@0 | 3287 | } else { |
michael@0 | 3288 | JS_ASSERT(kind == CrossCompartmentKey::DebuggerScript || |
michael@0 | 3289 | kind == CrossCompartmentKey::DebuggerSource || |
michael@0 | 3290 | kind == CrossCompartmentKey::DebuggerObject || |
michael@0 | 3291 | kind == CrossCompartmentKey::DebuggerEnvironment); |
michael@0 | 3292 | /* |
michael@0 | 3293 |              * Add an edge for debugger object wrappers, to ensure (in conjunction |
michael@0 | 3294 |              * with the call to Debugger::findCompartmentEdges below) that debugger |
michael@0 | 3295 | * and debuggee objects are always swept in the same group. |
michael@0 | 3296 | */ |
michael@0 | 3297 | JS::Zone *w = other->tenuredZone(); |
michael@0 | 3298 | if (w->isGCMarking()) |
michael@0 | 3299 | finder.addEdgeTo(w); |
michael@0 | 3300 | } |
michael@0 | 3301 | } |
michael@0 | 3302 | |
michael@0 | 3303 | Debugger::findCompartmentEdges(zone(), finder); |
michael@0 | 3304 | } |
michael@0 | 3305 | |
michael@0 | 3306 | void |
michael@0 | 3307 | Zone::findOutgoingEdges(ComponentFinder<JS::Zone> &finder) |
michael@0 | 3308 | { |
michael@0 | 3309 | /* |
michael@0 | 3310 | * Any compartment may have a pointer to an atom in the atoms |
michael@0 | 3311 | * compartment, and these aren't in the cross compartment map. |
michael@0 | 3312 | */ |
michael@0 | 3313 | JSRuntime *rt = runtimeFromMainThread(); |
michael@0 | 3314 | if (rt->atomsCompartment()->zone()->isGCMarking()) |
michael@0 | 3315 | finder.addEdgeTo(rt->atomsCompartment()->zone()); |
michael@0 | 3316 | |
michael@0 | 3317 | for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next()) |
michael@0 | 3318 | comp->findOutgoingEdges(finder); |
michael@0 | 3319 | } |
michael@0 | 3320 | |
michael@0 | 3321 | static void |
michael@0 | 3322 | FindZoneGroups(JSRuntime *rt) |
michael@0 | 3323 | { |
michael@0 | 3324 | ComponentFinder<Zone> finder(rt->mainThread.nativeStackLimit[StackForSystemCode]); |
michael@0 | 3325 | if (!rt->gcIsIncremental) |
michael@0 | 3326 | finder.useOneComponent(); |
michael@0 | 3327 | |
michael@0 | 3328 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3329 | JS_ASSERT(zone->isGCMarking()); |
michael@0 | 3330 | finder.addNode(zone); |
michael@0 | 3331 | } |
michael@0 | 3332 | rt->gcZoneGroups = finder.getResultsList(); |
michael@0 | 3333 | rt->gcCurrentZoneGroup = rt->gcZoneGroups; |
michael@0 | 3334 | rt->gcZoneGroupIndex = 0; |
michael@0 | 3335 | JS_ASSERT_IF(!rt->gcIsIncremental, !rt->gcCurrentZoneGroup->nextGroup()); |
michael@0 | 3336 | } |
michael@0 | 3337 | |
michael@0 | 3338 | static void |
michael@0 | 3339 | ResetGrayList(JSCompartment* comp); |
michael@0 | 3340 | |
michael@0 | 3341 | static void |
michael@0 | 3342 | GetNextZoneGroup(JSRuntime *rt) |
michael@0 | 3343 | { |
michael@0 | 3344 | rt->gcCurrentZoneGroup = rt->gcCurrentZoneGroup->nextGroup(); |
michael@0 | 3345 | ++rt->gcZoneGroupIndex; |
michael@0 | 3346 | if (!rt->gcCurrentZoneGroup) { |
michael@0 | 3347 | rt->gcAbortSweepAfterCurrentGroup = false; |
michael@0 | 3348 | return; |
michael@0 | 3349 | } |
michael@0 | 3350 | |
michael@0 | 3351 | if (!rt->gcIsIncremental) |
michael@0 | 3352 | ComponentFinder<Zone>::mergeGroups(rt->gcCurrentZoneGroup); |
michael@0 | 3353 | |
michael@0 | 3354 | if (rt->gcAbortSweepAfterCurrentGroup) { |
michael@0 | 3355 | JS_ASSERT(!rt->gcIsIncremental); |
michael@0 | 3356 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3357 | JS_ASSERT(!zone->gcNextGraphComponent); |
michael@0 | 3358 | JS_ASSERT(zone->isGCMarking()); |
michael@0 | 3359 | zone->setNeedsBarrier(false, Zone::UpdateIon); |
michael@0 | 3360 | zone->setGCState(Zone::NoGC); |
michael@0 | 3361 | zone->gcGrayRoots.clearAndFree(); |
michael@0 | 3362 | } |
michael@0 | 3363 | rt->setNeedsBarrier(false); |
michael@0 | 3364 | AssertNeedsBarrierFlagsConsistent(rt); |
michael@0 | 3365 | |
michael@0 | 3366 | for (GCCompartmentGroupIter comp(rt); !comp.done(); comp.next()) { |
michael@0 | 3367 | ArrayBufferObject::resetArrayBufferList(comp); |
michael@0 | 3368 | ResetGrayList(comp); |
michael@0 | 3369 | } |
michael@0 | 3370 | |
michael@0 | 3371 | rt->gcAbortSweepAfterCurrentGroup = false; |
michael@0 | 3372 | rt->gcCurrentZoneGroup = nullptr; |
michael@0 | 3373 | } |
michael@0 | 3374 | } |
michael@0 | 3375 | |
michael@0 | 3376 | /* |
michael@0 | 3377 | * Gray marking: |
michael@0 | 3378 | * |
michael@0 | 3379 | * At the end of collection, anything reachable from a gray root that has not |
michael@0 | 3380 | * otherwise been marked black must be marked gray. |
michael@0 | 3381 | * |
michael@0 | 3382 | * This means that when marking things gray we must not allow marking to leave |
michael@0 | 3383 | * the current compartment group, as that could result in things being marked |
michael@0 | 3384 |  * gray when they might subsequently be marked black. To achieve this, when we |
michael@0 | 3385 | * find a cross compartment pointer we don't mark the referent but add it to a |
michael@0 | 3386 | * singly-linked list of incoming gray pointers that is stored with each |
michael@0 | 3387 | * compartment. |
michael@0 | 3388 | * |
michael@0 | 3389 | * The list head is stored in JSCompartment::gcIncomingGrayPointers and contains |
michael@0 | 3390 | * cross compartment wrapper objects. The next pointer is stored in the second |
michael@0 | 3391 | * extra slot of the cross compartment wrapper. |
michael@0 | 3392 | * |
michael@0 | 3393 | * The list is created during gray marking when one of the |
michael@0 | 3394 | * MarkCrossCompartmentXXX functions is called for a pointer that leaves the |
michael@0 | 3395 |  * current compartment group. This calls DelayCrossCompartmentGrayMarking to |
michael@0 | 3396 | * push the referring object onto the list. |
michael@0 | 3397 | * |
michael@0 | 3398 | * The list is traversed and then unlinked in |
michael@0 | 3399 | * MarkIncomingCrossCompartmentPointers. |
michael@0 | 3400 | */ |
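michael@0 |      | |
michael@0 |      | /* |
michael@0 |      |  * Shape of the list (a sketch): each cross compartment wrapper doubles |
michael@0 |      |  * as a list node, with the next link stored in its gray-link extra slot: |
michael@0 |      |  * |
michael@0 |      |  *   comp->gcIncomingGrayPointers |
michael@0 |      |  *     | |
michael@0 |      |  *     v |
michael@0 |      |  *   [wrapper1] --slot--> [wrapper2] --slot--> nullptr |
michael@0 |      |  * |
michael@0 |      |  * grayLinkSlot() names the slot; NextIncomingCrossCompartmentPointer() |
michael@0 |      |  * walks the chain and optionally unlinks it as it goes. |
michael@0 |      |  */ |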
michael@0 | 3401 | |
michael@0 | 3402 | static bool |
michael@0 | 3403 | IsGrayListObject(JSObject *obj) |
michael@0 | 3404 | { |
michael@0 | 3405 | JS_ASSERT(obj); |
michael@0 | 3406 | return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj); |
michael@0 | 3407 | } |
michael@0 | 3408 | |
michael@0 | 3409 | /* static */ unsigned |
michael@0 | 3410 | ProxyObject::grayLinkSlot(JSObject *obj) |
michael@0 | 3411 | { |
michael@0 | 3412 | JS_ASSERT(IsGrayListObject(obj)); |
michael@0 | 3413 | return ProxyObject::EXTRA_SLOT + 1; |
michael@0 | 3414 | } |
michael@0 | 3415 | |
michael@0 | 3416 | #ifdef DEBUG |
michael@0 | 3417 | static void |
michael@0 | 3418 | AssertNotOnGrayList(JSObject *obj) |
michael@0 | 3419 | { |
michael@0 | 3420 | JS_ASSERT_IF(IsGrayListObject(obj), |
michael@0 | 3421 | obj->getReservedSlot(ProxyObject::grayLinkSlot(obj)).isUndefined()); |
michael@0 | 3422 | } |
michael@0 | 3423 | #endif |
michael@0 | 3424 | |
michael@0 | 3425 | static JSObject * |
michael@0 | 3426 | CrossCompartmentPointerReferent(JSObject *obj) |
michael@0 | 3427 | { |
michael@0 | 3428 | JS_ASSERT(IsGrayListObject(obj)); |
michael@0 | 3429 | return &obj->as<ProxyObject>().private_().toObject(); |
michael@0 | 3430 | } |
michael@0 | 3431 | |
michael@0 | 3432 | static JSObject * |
michael@0 | 3433 | NextIncomingCrossCompartmentPointer(JSObject *prev, bool unlink) |
michael@0 | 3434 | { |
michael@0 | 3435 | unsigned slot = ProxyObject::grayLinkSlot(prev); |
michael@0 | 3436 | JSObject *next = prev->getReservedSlot(slot).toObjectOrNull(); |
michael@0 | 3437 | JS_ASSERT_IF(next, IsGrayListObject(next)); |
michael@0 | 3438 | |
michael@0 | 3439 | if (unlink) |
michael@0 | 3440 | prev->setSlot(slot, UndefinedValue()); |
michael@0 | 3441 | |
michael@0 | 3442 | return next; |
michael@0 | 3443 | } |
michael@0 | 3444 | |
michael@0 | 3445 | void |
michael@0 | 3446 | js::DelayCrossCompartmentGrayMarking(JSObject *src) |
michael@0 | 3447 | { |
michael@0 | 3448 | JS_ASSERT(IsGrayListObject(src)); |
michael@0 | 3449 | |
michael@0 | 3450 | /* Called from MarkCrossCompartmentXXX functions. */ |
michael@0 | 3451 | unsigned slot = ProxyObject::grayLinkSlot(src); |
michael@0 | 3452 | JSObject *dest = CrossCompartmentPointerReferent(src); |
michael@0 | 3453 | JSCompartment *comp = dest->compartment(); |
michael@0 | 3454 | |
michael@0 | 3455 | if (src->getReservedSlot(slot).isUndefined()) { |
michael@0 | 3456 | src->setCrossCompartmentSlot(slot, ObjectOrNullValue(comp->gcIncomingGrayPointers)); |
michael@0 | 3457 | comp->gcIncomingGrayPointers = src; |
michael@0 | 3458 | } else { |
michael@0 | 3459 | JS_ASSERT(src->getReservedSlot(slot).isObjectOrNull()); |
michael@0 | 3460 | } |
michael@0 | 3461 | |
michael@0 | 3462 | #ifdef DEBUG |
michael@0 | 3463 | /* |
michael@0 | 3464 | * Assert that the object is in our list, also walking the list to check its |
michael@0 | 3465 | * integrity. |
michael@0 | 3466 | */ |
michael@0 | 3467 | JSObject *obj = comp->gcIncomingGrayPointers; |
michael@0 | 3468 | bool found = false; |
michael@0 | 3469 | while (obj) { |
michael@0 | 3470 | if (obj == src) |
michael@0 | 3471 | found = true; |
michael@0 | 3472 | obj = NextIncomingCrossCompartmentPointer(obj, false); |
michael@0 | 3473 | } |
michael@0 | 3474 | JS_ASSERT(found); |
michael@0 | 3475 | #endif |
michael@0 | 3476 | } |
michael@0 | 3477 | |
michael@0 | 3478 | static void |
michael@0 | 3479 | MarkIncomingCrossCompartmentPointers(JSRuntime *rt, const uint32_t color) |
michael@0 | 3480 | { |
michael@0 | 3481 | JS_ASSERT(color == BLACK || color == GRAY); |
michael@0 | 3482 | |
michael@0 | 3483 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_MARK); |
michael@0 | 3484 | static const gcstats::Phase statsPhases[] = { |
michael@0 | 3485 | gcstats::PHASE_SWEEP_MARK_INCOMING_BLACK, |
michael@0 | 3486 | gcstats::PHASE_SWEEP_MARK_INCOMING_GRAY |
michael@0 | 3487 | }; |
michael@0 | 3488 | gcstats::AutoPhase ap1(rt->gcStats, statsPhases[color]); |
michael@0 | 3489 | |
michael@0 | 3490 | bool unlinkList = color == GRAY; |
michael@0 | 3491 | |
michael@0 | 3492 | for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) { |
michael@0 | 3493 | JS_ASSERT_IF(color == GRAY, c->zone()->isGCMarkingGray()); |
michael@0 | 3494 | JS_ASSERT_IF(color == BLACK, c->zone()->isGCMarkingBlack()); |
michael@0 | 3495 | JS_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers)); |
michael@0 | 3496 | |
michael@0 | 3497 | for (JSObject *src = c->gcIncomingGrayPointers; |
michael@0 | 3498 | src; |
michael@0 | 3499 | src = NextIncomingCrossCompartmentPointer(src, unlinkList)) |
michael@0 | 3500 | { |
michael@0 | 3501 | JSObject *dst = CrossCompartmentPointerReferent(src); |
michael@0 | 3502 | JS_ASSERT(dst->compartment() == c); |
michael@0 | 3503 | |
michael@0 | 3504 | if (color == GRAY) { |
michael@0 | 3505 | if (IsObjectMarked(&src) && src->isMarked(GRAY)) |
michael@0 | 3506 | MarkGCThingUnbarriered(&rt->gcMarker, (void**)&dst, |
michael@0 | 3507 | "cross-compartment gray pointer"); |
michael@0 | 3508 | } else { |
michael@0 | 3509 | if (IsObjectMarked(&src) && !src->isMarked(GRAY)) |
michael@0 | 3510 | MarkGCThingUnbarriered(&rt->gcMarker, (void**)&dst, |
michael@0 | 3511 | "cross-compartment black pointer"); |
michael@0 | 3512 | } |
michael@0 | 3513 | } |
michael@0 | 3514 | |
michael@0 | 3515 | if (unlinkList) |
michael@0 | 3516 | c->gcIncomingGrayPointers = nullptr; |
michael@0 | 3517 | } |
michael@0 | 3518 | |
michael@0 | 3519 | SliceBudget budget; |
michael@0 | 3520 | rt->gcMarker.drainMarkStack(budget); |
michael@0 | 3521 | } |
michael@0 | 3522 | |
michael@0 | 3523 | static bool |
michael@0 | 3524 | RemoveFromGrayList(JSObject *wrapper) |
michael@0 | 3525 | { |
michael@0 | 3526 | if (!IsGrayListObject(wrapper)) |
michael@0 | 3527 | return false; |
michael@0 | 3528 | |
michael@0 | 3529 | unsigned slot = ProxyObject::grayLinkSlot(wrapper); |
michael@0 | 3530 | if (wrapper->getReservedSlot(slot).isUndefined()) |
michael@0 | 3531 | return false; /* Not on our list. */ |
michael@0 | 3532 | |
michael@0 | 3533 | JSObject *tail = wrapper->getReservedSlot(slot).toObjectOrNull(); |
michael@0 | 3534 | wrapper->setReservedSlot(slot, UndefinedValue()); |
michael@0 | 3535 | |
michael@0 | 3536 | JSCompartment *comp = CrossCompartmentPointerReferent(wrapper)->compartment(); |
michael@0 | 3537 | JSObject *obj = comp->gcIncomingGrayPointers; |
michael@0 | 3538 | if (obj == wrapper) { |
michael@0 | 3539 | comp->gcIncomingGrayPointers = tail; |
michael@0 | 3540 | return true; |
michael@0 | 3541 | } |
michael@0 | 3542 | |
michael@0 | 3543 | while (obj) { |
michael@0 | 3544 | unsigned slot = ProxyObject::grayLinkSlot(obj); |
michael@0 | 3545 | JSObject *next = obj->getReservedSlot(slot).toObjectOrNull(); |
michael@0 | 3546 | if (next == wrapper) { |
michael@0 | 3547 | obj->setCrossCompartmentSlot(slot, ObjectOrNullValue(tail)); |
michael@0 | 3548 | return true; |
michael@0 | 3549 | } |
michael@0 | 3550 | obj = next; |
michael@0 | 3551 | } |
michael@0 | 3552 | |
michael@0 | 3553 | MOZ_ASSUME_UNREACHABLE("object not found in gray link list"); |
michael@0 | 3554 | } |
michael@0 | 3555 | |
michael@0 | 3556 | static void |
michael@0 | 3557 | ResetGrayList(JSCompartment *comp) |
michael@0 | 3558 | { |
michael@0 | 3559 | JSObject *src = comp->gcIncomingGrayPointers; |
michael@0 | 3560 | while (src) |
michael@0 | 3561 | src = NextIncomingCrossCompartmentPointer(src, true); |
michael@0 | 3562 | comp->gcIncomingGrayPointers = nullptr; |
michael@0 | 3563 | } |
michael@0 | 3564 | |
michael@0 | 3565 | void |
michael@0 | 3566 | js::NotifyGCNukeWrapper(JSObject *obj) |
michael@0 | 3567 | { |
michael@0 | 3568 | /* |
michael@0 | 3569 |      * References to the target of the wrapper are being removed, so we no |
michael@0 | 3570 |      * longer have to remember to mark it. |
michael@0 | 3571 | */ |
michael@0 | 3572 | RemoveFromGrayList(obj); |
michael@0 | 3573 | } |
michael@0 | 3574 | |
michael@0 | 3575 | enum { |
michael@0 | 3576 | JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0, |
michael@0 | 3577 | JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1 |
michael@0 | 3578 | }; |
michael@0 | 3579 | |
michael@0 | 3580 | unsigned |
michael@0 | 3581 | js::NotifyGCPreSwap(JSObject *a, JSObject *b) |
michael@0 | 3582 | { |
michael@0 | 3583 | /* |
michael@0 | 3584 |      * Two objects in the same compartment are about to have their contents |
michael@0 | 3585 |      * swapped. If either of them is in our gray pointer list, we remove it |
michael@0 | 3586 |      * from the list, returning a bitset indicating what happened. |
michael@0 | 3587 | */ |
michael@0 | 3588 | return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) | |
michael@0 | 3589 | (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0); |
michael@0 | 3590 | } |
michael@0 | 3591 | |
michael@0 | 3592 | void |
michael@0 | 3593 | js::NotifyGCPostSwap(JSObject *a, JSObject *b, unsigned removedFlags) |
michael@0 | 3594 | { |
michael@0 | 3595 | /* |
michael@0 | 3596 |      * Two objects in the same compartment have had their contents swapped. If |
michael@0 | 3597 |      * either object was in our gray pointer list, we re-add it here. |
michael@0 | 3598 | */ |
michael@0 | 3599 | if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED) |
michael@0 | 3600 | DelayCrossCompartmentGrayMarking(b); |
michael@0 | 3601 | if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED) |
michael@0 | 3602 | DelayCrossCompartmentGrayMarking(a); |
michael@0 | 3603 | } |
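// [Editor's note] A sketch of how a caller is expected to bracket an
// object-contents swap with the two hooks above, feeding the bitset from the
// pre-call into the post-call. The surrounding function is hypothetical;
// only the NotifyGCPreSwap/NotifyGCPostSwap calls come from this file:
#if 0
static void SwapContents(JSObject *a, JSObject *b) {
    unsigned removed = js::NotifyGCPreSwap(a, b);  // Unlink from gray lists.
    // ... exchange the contents of a and b here ...
    js::NotifyGCPostSwap(a, b, removed);  // Re-add, with the sides swapped.
}
#endif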
michael@0 | 3604 | |
michael@0 | 3605 | static void |
michael@0 | 3606 | EndMarkingZoneGroup(JSRuntime *rt) |
michael@0 | 3607 | { |
michael@0 | 3608 | /* |
michael@0 | 3609 | * Mark any incoming black pointers from previously swept compartments |
michael@0 | 3610 | * whose referents are not marked. This can occur when gray cells become |
michael@0 | 3611 | * black by the action of UnmarkGray. |
michael@0 | 3612 | */ |
michael@0 | 3613 | MarkIncomingCrossCompartmentPointers(rt, BLACK); |
michael@0 | 3614 | |
michael@0 | 3615 | MarkWeakReferencesInCurrentGroup(rt, gcstats::PHASE_SWEEP_MARK_WEAK); |
michael@0 | 3616 | |
michael@0 | 3617 | /* |
michael@0 | 3618 | * Change state of current group to MarkGray to restrict marking to this |
michael@0 | 3619 | * group. Note that there may be pointers to the atoms compartment, and |
michael@0 | 3620 | * these will be marked through, as they are not marked with |
michael@0 | 3621 | * MarkCrossCompartmentXXX. |
michael@0 | 3622 | */ |
michael@0 | 3623 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3624 | JS_ASSERT(zone->isGCMarkingBlack()); |
michael@0 | 3625 | zone->setGCState(Zone::MarkGray); |
michael@0 | 3626 | } |
michael@0 | 3627 | |
michael@0 | 3628 | /* Mark incoming gray pointers from previously swept compartments. */ |
michael@0 | 3629 | rt->gcMarker.setMarkColorGray(); |
michael@0 | 3630 | MarkIncomingCrossCompartmentPointers(rt, GRAY); |
michael@0 | 3631 | rt->gcMarker.setMarkColorBlack(); |
michael@0 | 3632 | |
michael@0 | 3633 | /* Mark gray roots and mark transitively inside the current compartment group. */ |
michael@0 | 3634 | MarkGrayReferencesInCurrentGroup(rt); |
michael@0 | 3635 | |
michael@0 | 3636 | /* Restore marking state. */ |
michael@0 | 3637 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3638 | JS_ASSERT(zone->isGCMarkingGray()); |
michael@0 | 3639 | zone->setGCState(Zone::Mark); |
michael@0 | 3640 | } |
michael@0 | 3641 | |
michael@0 | 3642 | JS_ASSERT(rt->gcMarker.isDrained()); |
michael@0 | 3643 | } |
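// [Editor's note] Note the manual bracketing in EndMarkingZoneGroup: the
// marker is switched to gray, the gray pass runs, and the color is switched
// back to black. No early return intervenes there, so the manual restore is
// safe; where one could, the usual C++ idiom is a scoped guard. A minimal
// sketch with hypothetical names:
#if 0
struct Marker { bool gray = false; };

class AutoGrayColor {                   // Scoped switch of the mark color.
    Marker &marker;
  public:
    explicit AutoGrayColor(Marker &m) : marker(m) { marker.gray = true; }
    ~AutoGrayColor() { marker.gray = false; }  // Restored on every exit path.
};
#endif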
michael@0 | 3644 | |
michael@0 | 3645 | static void |
michael@0 | 3646 | BeginSweepingZoneGroup(JSRuntime *rt) |
michael@0 | 3647 | { |
michael@0 | 3648 | /* |
michael@0 | 3649 | * Begin sweeping the group of zones in gcCurrentZoneGroup, |
michael@0 | 3650 |      * performing actions that must be done before yielding to the caller. |
michael@0 | 3651 | */ |
michael@0 | 3652 | |
michael@0 | 3653 | bool sweepingAtoms = false; |
michael@0 | 3654 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3655 | /* Set the GC state to sweeping. */ |
michael@0 | 3656 | JS_ASSERT(zone->isGCMarking()); |
michael@0 | 3657 | zone->setGCState(Zone::Sweep); |
michael@0 | 3658 | |
michael@0 | 3659 | /* Purge the ArenaLists before sweeping. */ |
michael@0 | 3660 | zone->allocator.arenas.purge(); |
michael@0 | 3661 | |
michael@0 | 3662 | if (rt->isAtomsZone(zone)) |
michael@0 | 3663 | sweepingAtoms = true; |
michael@0 | 3664 | |
michael@0 | 3665 | if (rt->sweepZoneCallback) |
michael@0 | 3666 | rt->sweepZoneCallback(zone); |
michael@0 | 3667 | } |
michael@0 | 3668 | |
michael@0 | 3669 | ValidateIncrementalMarking(rt); |
michael@0 | 3670 | |
michael@0 | 3671 | FreeOp fop(rt, rt->gcSweepOnBackgroundThread); |
michael@0 | 3672 | |
michael@0 | 3673 | { |
michael@0 | 3674 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_START); |
michael@0 | 3675 | if (rt->gcFinalizeCallback) |
michael@0 | 3676 | rt->gcFinalizeCallback(&fop, JSFINALIZE_GROUP_START, !rt->gcIsFull /* unused */); |
michael@0 | 3677 | } |
michael@0 | 3678 | |
michael@0 | 3679 | if (sweepingAtoms) { |
michael@0 | 3680 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_ATOMS); |
michael@0 | 3681 | rt->sweepAtoms(); |
michael@0 | 3682 | } |
michael@0 | 3683 | |
michael@0 | 3684 | /* Prune out dead views from ArrayBuffer's view lists. */ |
michael@0 | 3685 | for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) |
michael@0 | 3686 | ArrayBufferObject::sweep(c); |
michael@0 | 3687 | |
michael@0 | 3688 | /* Collect watch points associated with unreachable objects. */ |
michael@0 | 3689 | WatchpointMap::sweepAll(rt); |
michael@0 | 3690 | |
michael@0 | 3691 | /* Detach unreachable debuggers and global objects from each other. */ |
michael@0 | 3692 | Debugger::sweepAll(&fop); |
michael@0 | 3693 | |
michael@0 | 3694 | { |
michael@0 | 3695 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_COMPARTMENTS); |
michael@0 | 3696 | |
michael@0 | 3697 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3698 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_DISCARD_CODE); |
michael@0 | 3699 | zone->discardJitCode(&fop); |
michael@0 | 3700 | } |
michael@0 | 3701 | |
michael@0 | 3702 | bool releaseTypes = ReleaseObservedTypes(rt); |
michael@0 | 3703 | for (GCCompartmentGroupIter c(rt); !c.done(); c.next()) { |
michael@0 | 3704 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3705 | c->sweep(&fop, releaseTypes && !c->zone()->isPreservingCode()); |
michael@0 | 3706 | } |
michael@0 | 3707 | |
michael@0 | 3708 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3709 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3710 | |
michael@0 | 3711 | // If there is an OOM while sweeping types, the type information |
michael@0 | 3712 | // will be deoptimized so that it is still correct (i.e. |
michael@0 | 3713 | // overapproximates the possible types in the zone), but the |
michael@0 | 3714 | // constraints might not have been triggered on the deoptimization |
michael@0 | 3715 | // or even copied over completely. In this case, destroy all JIT |
michael@0 | 3716 | // code and new script addendums in the zone, the only things whose |
michael@0 | 3717 | // correctness depends on the type constraints. |
michael@0 | 3718 | bool oom = false; |
michael@0 | 3719 | zone->sweep(&fop, releaseTypes && !zone->isPreservingCode(), &oom); |
michael@0 | 3720 | |
michael@0 | 3721 | if (oom) { |
michael@0 | 3722 | zone->setPreservingCode(false); |
michael@0 | 3723 | zone->discardJitCode(&fop); |
michael@0 | 3724 | zone->types.clearAllNewScriptAddendumsOnOOM(); |
michael@0 | 3725 | } |
michael@0 | 3726 | } |
michael@0 | 3727 | } |
michael@0 | 3728 | |
michael@0 | 3729 | /* |
michael@0 | 3730 | * Queue all GC things in all zones for sweeping, either in the |
michael@0 | 3731 | * foreground or on the background thread. |
michael@0 | 3732 | * |
michael@0 | 3733 | * Note that order is important here for the background case. |
michael@0 | 3734 | * |
michael@0 | 3735 | * Objects are finalized immediately but this may change in the future. |
michael@0 | 3736 | */ |
michael@0 | 3737 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3738 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3739 | zone->allocator.arenas.queueObjectsForSweep(&fop); |
michael@0 | 3740 | } |
michael@0 | 3741 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3742 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3743 | zone->allocator.arenas.queueStringsForSweep(&fop); |
michael@0 | 3744 | } |
michael@0 | 3745 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3746 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3747 | zone->allocator.arenas.queueScriptsForSweep(&fop); |
michael@0 | 3748 | } |
michael@0 | 3749 | #ifdef JS_ION |
michael@0 | 3750 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3751 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3752 | zone->allocator.arenas.queueJitCodeForSweep(&fop); |
michael@0 | 3753 | } |
michael@0 | 3754 | #endif |
michael@0 | 3755 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3756 | gcstats::AutoSCC scc(rt->gcStats, rt->gcZoneGroupIndex); |
michael@0 | 3757 | zone->allocator.arenas.queueShapesForSweep(&fop); |
michael@0 | 3758 | zone->allocator.arenas.gcShapeArenasToSweep = |
michael@0 | 3759 | zone->allocator.arenas.arenaListsToSweep[FINALIZE_SHAPE]; |
michael@0 | 3760 | } |
michael@0 | 3761 | |
michael@0 | 3762 | rt->gcSweepPhase = 0; |
michael@0 | 3763 | rt->gcSweepZone = rt->gcCurrentZoneGroup; |
michael@0 | 3764 | rt->gcSweepKindIndex = 0; |
michael@0 | 3765 | |
michael@0 | 3766 | { |
michael@0 | 3767 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_END); |
michael@0 | 3768 | if (rt->gcFinalizeCallback) |
michael@0 | 3769 | rt->gcFinalizeCallback(&fop, JSFINALIZE_GROUP_END, !rt->gcIsFull /* unused */); |
michael@0 | 3770 | } |
michael@0 | 3771 | } |
michael@0 | 3772 | |
michael@0 | 3773 | static void |
michael@0 | 3774 | EndSweepingZoneGroup(JSRuntime *rt) |
michael@0 | 3775 | { |
michael@0 | 3776 | /* Update the GC state for zones we have swept and unlink the list. */ |
michael@0 | 3777 | for (GCZoneGroupIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3778 | JS_ASSERT(zone->isGCSweeping()); |
michael@0 | 3779 | zone->setGCState(Zone::Finished); |
michael@0 | 3780 | } |
michael@0 | 3781 | |
michael@0 | 3782 | /* Reset the list of arenas marked as being allocated during sweep phase. */ |
michael@0 | 3783 | while (ArenaHeader *arena = rt->gcArenasAllocatedDuringSweep) { |
michael@0 | 3784 | rt->gcArenasAllocatedDuringSweep = arena->getNextAllocDuringSweep(); |
michael@0 | 3785 | arena->unsetAllocDuringSweep(); |
michael@0 | 3786 | } |
michael@0 | 3787 | } |
michael@0 | 3788 | |
michael@0 | 3789 | static void |
michael@0 | 3790 | BeginSweepPhase(JSRuntime *rt, bool lastGC) |
michael@0 | 3791 | { |
michael@0 | 3792 | /* |
michael@0 | 3793 | * Sweep phase. |
michael@0 | 3794 | * |
michael@0 | 3795 | * Finalize as we sweep, outside of rt->gcLock but with rt->isHeapBusy() |
michael@0 | 3796 | * true so that any attempt to allocate a GC-thing from a finalizer will |
michael@0 | 3797 | * fail, rather than nest badly and leave the unmarked newborn to be swept. |
michael@0 | 3798 | */ |
michael@0 | 3799 | |
michael@0 | 3800 | JS_ASSERT(!rt->gcAbortSweepAfterCurrentGroup); |
michael@0 | 3801 | |
michael@0 | 3802 | ComputeNonIncrementalMarkingForValidation(rt); |
michael@0 | 3803 | |
michael@0 | 3804 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP); |
michael@0 | 3805 | |
michael@0 | 3806 | #ifdef JS_THREADSAFE |
michael@0 | 3807 | rt->gcSweepOnBackgroundThread = !lastGC && rt->useHelperThreads(); |
michael@0 | 3808 | #endif |
michael@0 | 3809 | |
michael@0 | 3810 | #ifdef DEBUG |
michael@0 | 3811 | for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) { |
michael@0 | 3812 | JS_ASSERT(!c->gcIncomingGrayPointers); |
michael@0 | 3813 | for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) { |
michael@0 | 3814 | if (e.front().key().kind != CrossCompartmentKey::StringWrapper) |
michael@0 | 3815 | AssertNotOnGrayList(&e.front().value().get().toObject()); |
michael@0 | 3816 | } |
michael@0 | 3817 | } |
michael@0 | 3818 | #endif |
michael@0 | 3819 | |
michael@0 | 3820 | DropStringWrappers(rt); |
michael@0 | 3821 | FindZoneGroups(rt); |
michael@0 | 3822 | EndMarkingZoneGroup(rt); |
michael@0 | 3823 | BeginSweepingZoneGroup(rt); |
michael@0 | 3824 | } |
michael@0 | 3825 | |
michael@0 | 3826 | bool |
michael@0 | 3827 | ArenaLists::foregroundFinalize(FreeOp *fop, AllocKind thingKind, SliceBudget &sliceBudget) |
michael@0 | 3828 | { |
michael@0 | 3829 | if (!arenaListsToSweep[thingKind]) |
michael@0 | 3830 | return true; |
michael@0 | 3831 | |
michael@0 | 3832 | ArenaList &dest = arenaLists[thingKind]; |
michael@0 | 3833 | return FinalizeArenas(fop, &arenaListsToSweep[thingKind], dest, thingKind, sliceBudget); |
michael@0 | 3834 | } |
michael@0 | 3835 | |
michael@0 | 3836 | static bool |
michael@0 | 3837 | DrainMarkStack(JSRuntime *rt, SliceBudget &sliceBudget, gcstats::Phase phase) |
michael@0 | 3838 | { |
michael@0 | 3839 | /* Run a marking slice and return whether the stack is now empty. */ |
michael@0 | 3840 | gcstats::AutoPhase ap(rt->gcStats, phase); |
michael@0 | 3841 | return rt->gcMarker.drainMarkStack(sliceBudget); |
michael@0 | 3842 | } |
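// [Editor's note] SliceBudget is what makes DrainMarkStack and the sweep
// loops interruptible: work is counted or timed in steps, and a loop bails
// out once the budget is spent, returning "not finished" so the caller can
// yield to the mutator. A self-contained analogue using std::chrono
// (hypothetical names, not the SliceBudget API):
#if 0
#include <chrono>
#include <cstdint>
#include <deque>

// Process items until done or until `budgetMs` elapses.
// Returns true when the queue is fully drained within budget.
static bool DrainWithTimeBudget(std::deque<int> &work, int64_t budgetMs) {
    using Clock = std::chrono::steady_clock;
    auto deadline = Clock::now() + std::chrono::milliseconds(budgetMs);
    while (!work.empty()) {
        work.pop_front();               // Stand-in for marking one cell.
        if (Clock::now() >= deadline)
            return false;               // Over budget: yield to the mutator.
    }
    return true;                        // Drained: this slice finished.
}
#endif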
michael@0 | 3843 | |
michael@0 | 3844 | static bool |
michael@0 | 3845 | SweepPhase(JSRuntime *rt, SliceBudget &sliceBudget) |
michael@0 | 3846 | { |
michael@0 | 3847 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP); |
michael@0 | 3848 | FreeOp fop(rt, rt->gcSweepOnBackgroundThread); |
michael@0 | 3849 | |
michael@0 | 3850 | bool finished = DrainMarkStack(rt, sliceBudget, gcstats::PHASE_SWEEP_MARK); |
michael@0 | 3851 | if (!finished) |
michael@0 | 3852 | return false; |
michael@0 | 3853 | |
michael@0 | 3854 | for (;;) { |
michael@0 | 3855 | /* Finalize foreground finalized things. */ |
michael@0 | 3856 | for (; rt->gcSweepPhase < FinalizePhaseCount ; ++rt->gcSweepPhase) { |
michael@0 | 3857 | gcstats::AutoPhase ap(rt->gcStats, FinalizePhaseStatsPhase[rt->gcSweepPhase]); |
michael@0 | 3858 | |
michael@0 | 3859 | for (; rt->gcSweepZone; rt->gcSweepZone = rt->gcSweepZone->nextNodeInGroup()) { |
michael@0 | 3860 | Zone *zone = rt->gcSweepZone; |
michael@0 | 3861 | |
michael@0 | 3862 | while (rt->gcSweepKindIndex < FinalizePhaseLength[rt->gcSweepPhase]) { |
michael@0 | 3863 | AllocKind kind = FinalizePhases[rt->gcSweepPhase][rt->gcSweepKindIndex]; |
michael@0 | 3864 | |
michael@0 | 3865 | if (!zone->allocator.arenas.foregroundFinalize(&fop, kind, sliceBudget)) |
michael@0 | 3866 | return false; /* Yield to the mutator. */ |
michael@0 | 3867 | |
michael@0 | 3868 | ++rt->gcSweepKindIndex; |
michael@0 | 3869 | } |
michael@0 | 3870 | rt->gcSweepKindIndex = 0; |
michael@0 | 3871 | } |
michael@0 | 3872 | rt->gcSweepZone = rt->gcCurrentZoneGroup; |
michael@0 | 3873 | } |
michael@0 | 3874 | |
michael@0 | 3875 | /* Remove dead shapes from the shape tree, but don't finalize them yet. */ |
michael@0 | 3876 | { |
michael@0 | 3877 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP_SHAPE); |
michael@0 | 3878 | |
michael@0 | 3879 | for (; rt->gcSweepZone; rt->gcSweepZone = rt->gcSweepZone->nextNodeInGroup()) { |
michael@0 | 3880 | Zone *zone = rt->gcSweepZone; |
michael@0 | 3881 | while (ArenaHeader *arena = zone->allocator.arenas.gcShapeArenasToSweep) { |
michael@0 | 3882 | for (CellIterUnderGC i(arena); !i.done(); i.next()) { |
michael@0 | 3883 | Shape *shape = i.get<Shape>(); |
michael@0 | 3884 | if (!shape->isMarked()) |
michael@0 | 3885 | shape->sweep(); |
michael@0 | 3886 | } |
michael@0 | 3887 | |
michael@0 | 3888 | zone->allocator.arenas.gcShapeArenasToSweep = arena->next; |
michael@0 | 3889 | sliceBudget.step(Arena::thingsPerArena(Arena::thingSize(FINALIZE_SHAPE))); |
michael@0 | 3890 | if (sliceBudget.isOverBudget()) |
michael@0 | 3891 | return false; /* Yield to the mutator. */ |
michael@0 | 3892 | } |
michael@0 | 3893 | } |
michael@0 | 3894 | } |
michael@0 | 3895 | |
michael@0 | 3896 | EndSweepingZoneGroup(rt); |
michael@0 | 3897 | GetNextZoneGroup(rt); |
michael@0 | 3898 | if (!rt->gcCurrentZoneGroup) |
michael@0 | 3899 | return true; /* We're finished. */ |
michael@0 | 3900 | EndMarkingZoneGroup(rt); |
michael@0 | 3901 | BeginSweepingZoneGroup(rt); |
michael@0 | 3902 | } |
michael@0 | 3903 | } |
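// [Editor's note] The rt->gcSweepPhase / rt->gcSweepZone / rt->gcSweepKindIndex
// triple persists the position of the nested loops in SweepPhase across
// slices: on re-entry the for-loops resume at the saved cursors instead of
// restarting. A minimal analogue with two saved cursors (hypothetical names):
#if 0
struct Sweeper {
    int phase = 0;                      // Cursors saved across slices.
    int index = 0;

    // Returns true when all work is done, false when yielding.
    bool sweepSlice(int budget, int numPhases, int numKinds) {
        for (; phase < numPhases; ++phase) {
            for (; index < numKinds; ++index) {
                if (budget-- <= 0)
                    return false;       // Yield; cursors keep our place.
                // ... finalize the (phase, index) work item here ...
            }
            index = 0;                  // Reset the inner cursor per phase.
        }
        return true;
    }
};
#endif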
michael@0 | 3904 | |
michael@0 | 3905 | static void |
michael@0 | 3906 | EndSweepPhase(JSRuntime *rt, JSGCInvocationKind gckind, bool lastGC) |
michael@0 | 3907 | { |
michael@0 | 3908 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_SWEEP); |
michael@0 | 3909 | FreeOp fop(rt, rt->gcSweepOnBackgroundThread); |
michael@0 | 3910 | |
michael@0 | 3911 | JS_ASSERT_IF(lastGC, !rt->gcSweepOnBackgroundThread); |
michael@0 | 3912 | |
michael@0 | 3913 | JS_ASSERT(rt->gcMarker.isDrained()); |
michael@0 | 3914 | rt->gcMarker.stop(); |
michael@0 | 3915 | |
michael@0 | 3916 | /* |
michael@0 | 3917 |      * Recalculate whether the GC was full or not, as this may have changed |
michael@0 | 3918 |      * due to newly created zones. It can only change from full to not full. |
michael@0 | 3919 | */ |
michael@0 | 3920 | if (rt->gcIsFull) { |
michael@0 | 3921 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 3922 | if (!zone->isCollecting()) { |
michael@0 | 3923 | rt->gcIsFull = false; |
michael@0 | 3924 | break; |
michael@0 | 3925 | } |
michael@0 | 3926 | } |
michael@0 | 3927 | } |
michael@0 | 3928 | |
michael@0 | 3929 | /* |
michael@0 | 3930 | * If we found any black->gray edges during marking, we completely clear the |
michael@0 | 3931 |      * mark bits of all uncollected zones, or if a reset has occurred, zones that |
michael@0 | 3932 | * will no longer be collected. This is safe, although it may |
michael@0 | 3933 | * prevent the cycle collector from collecting some dead objects. |
michael@0 | 3934 | */ |
michael@0 | 3935 | if (rt->gcFoundBlackGrayEdges) { |
michael@0 | 3936 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 3937 | if (!zone->isCollecting()) |
michael@0 | 3938 | zone->allocator.arenas.unmarkAll(); |
michael@0 | 3939 | } |
michael@0 | 3940 | } |
michael@0 | 3941 | |
michael@0 | 3942 | { |
michael@0 | 3943 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_DESTROY); |
michael@0 | 3944 | |
michael@0 | 3945 | /* |
michael@0 | 3946 | * Sweep script filenames after sweeping functions in the generic loop |
michael@0 | 3947 | * above. In this way when a scripted function's finalizer destroys the |
michael@0 | 3948 | * script and calls rt->destroyScriptHook, the hook can still access the |
michael@0 | 3949 | * script's filename. See bug 323267. |
michael@0 | 3950 | */ |
michael@0 | 3951 | if (rt->gcIsFull) |
michael@0 | 3952 | SweepScriptData(rt); |
michael@0 | 3953 | |
michael@0 | 3954 | /* Clear out any small pools that we're hanging on to. */ |
michael@0 | 3955 | if (JSC::ExecutableAllocator *execAlloc = rt->maybeExecAlloc()) |
michael@0 | 3956 | execAlloc->purge(); |
michael@0 | 3957 | |
michael@0 | 3958 | /* |
michael@0 | 3959 | * This removes compartments from rt->compartment, so we do it last to make |
michael@0 | 3960 | * sure we don't miss sweeping any compartments. |
michael@0 | 3961 | */ |
michael@0 | 3962 | if (!lastGC) |
michael@0 | 3963 | SweepZones(&fop, lastGC); |
michael@0 | 3964 | |
michael@0 | 3965 | if (!rt->gcSweepOnBackgroundThread) { |
michael@0 | 3966 | /* |
michael@0 | 3967 | * Destroy arenas after we finished the sweeping so finalizers can |
michael@0 | 3968 | * safely use IsAboutToBeFinalized(). This is done on the |
michael@0 | 3969 | * GCHelperThread if possible. We acquire the lock only because |
michael@0 | 3970 | * Expire needs to unlock it for other callers. |
michael@0 | 3971 | */ |
michael@0 | 3972 | AutoLockGC lock(rt); |
michael@0 | 3973 | ExpireChunksAndArenas(rt, gckind == GC_SHRINK); |
michael@0 | 3974 | } |
michael@0 | 3975 | } |
michael@0 | 3976 | |
michael@0 | 3977 | { |
michael@0 | 3978 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_FINALIZE_END); |
michael@0 | 3979 | |
michael@0 | 3980 | if (rt->gcFinalizeCallback) |
michael@0 | 3981 | rt->gcFinalizeCallback(&fop, JSFINALIZE_COLLECTION_END, !rt->gcIsFull); |
michael@0 | 3982 | |
michael@0 | 3983 | /* If we finished a full GC, then the gray bits are correct. */ |
michael@0 | 3984 | if (rt->gcIsFull) |
michael@0 | 3985 | rt->gcGrayBitsValid = true; |
michael@0 | 3986 | } |
michael@0 | 3987 | |
michael@0 | 3988 | /* Set up list of zones for sweeping of background things. */ |
michael@0 | 3989 | JS_ASSERT(!rt->gcSweepingZones); |
michael@0 | 3990 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 3991 | zone->gcNextGraphNode = rt->gcSweepingZones; |
michael@0 | 3992 | rt->gcSweepingZones = zone; |
michael@0 | 3993 | } |
michael@0 | 3994 | |
michael@0 | 3995 | /* If not sweeping on background thread then we must do it here. */ |
michael@0 | 3996 | if (!rt->gcSweepOnBackgroundThread) { |
michael@0 | 3997 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_DESTROY); |
michael@0 | 3998 | |
michael@0 | 3999 | SweepBackgroundThings(rt, false); |
michael@0 | 4000 | |
michael@0 | 4001 | rt->freeLifoAlloc.freeAll(); |
michael@0 | 4002 | |
michael@0 | 4003 | /* Ensure the compartments get swept if it's the last GC. */ |
michael@0 | 4004 | if (lastGC) |
michael@0 | 4005 | SweepZones(&fop, lastGC); |
michael@0 | 4006 | } |
michael@0 | 4007 | |
michael@0 | 4008 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4009 | zone->setGCLastBytes(zone->gcBytes, gckind); |
michael@0 | 4010 | if (zone->isCollecting()) { |
michael@0 | 4011 | JS_ASSERT(zone->isGCFinished()); |
michael@0 | 4012 | zone->setGCState(Zone::NoGC); |
michael@0 | 4013 | } |
michael@0 | 4014 | |
michael@0 | 4015 | #ifdef DEBUG |
michael@0 | 4016 | JS_ASSERT(!zone->isCollecting()); |
michael@0 | 4017 | JS_ASSERT(!zone->wasGCStarted()); |
michael@0 | 4018 | |
michael@0 | 4019 | for (unsigned i = 0 ; i < FINALIZE_LIMIT ; ++i) { |
michael@0 | 4020 | JS_ASSERT_IF(!IsBackgroundFinalized(AllocKind(i)) || |
michael@0 | 4021 | !rt->gcSweepOnBackgroundThread, |
michael@0 | 4022 | !zone->allocator.arenas.arenaListsToSweep[i]); |
michael@0 | 4023 | } |
michael@0 | 4024 | #endif |
michael@0 | 4025 | } |
michael@0 | 4026 | |
michael@0 | 4027 | #ifdef DEBUG |
michael@0 | 4028 | for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) { |
michael@0 | 4029 | JS_ASSERT(!c->gcIncomingGrayPointers); |
michael@0 | 4030 | JS_ASSERT(c->gcLiveArrayBuffers.empty()); |
michael@0 | 4031 | |
michael@0 | 4032 | for (JSCompartment::WrapperEnum e(c); !e.empty(); e.popFront()) { |
michael@0 | 4033 | if (e.front().key().kind != CrossCompartmentKey::StringWrapper) |
michael@0 | 4034 | AssertNotOnGrayList(&e.front().value().get().toObject()); |
michael@0 | 4035 | } |
michael@0 | 4036 | } |
michael@0 | 4037 | #endif |
michael@0 | 4038 | |
michael@0 | 4039 | FinishMarkingValidation(rt); |
michael@0 | 4040 | |
michael@0 | 4041 | rt->gcLastGCTime = PRMJ_Now(); |
michael@0 | 4042 | } |
michael@0 | 4043 | |
michael@0 | 4044 | namespace { |
michael@0 | 4045 | |
michael@0 | 4046 | /* ...while this class is to be used only for garbage collection. */ |
michael@0 | 4047 | class AutoGCSession |
michael@0 | 4048 | { |
michael@0 | 4049 | JSRuntime *runtime; |
michael@0 | 4050 | AutoTraceSession session; |
michael@0 | 4051 | bool canceled; |
michael@0 | 4052 | |
michael@0 | 4053 | public: |
michael@0 | 4054 | explicit AutoGCSession(JSRuntime *rt); |
michael@0 | 4055 | ~AutoGCSession(); |
michael@0 | 4056 | |
michael@0 | 4057 | void cancel() { canceled = true; } |
michael@0 | 4058 | }; |
michael@0 | 4059 | |
michael@0 | 4060 | } /* anonymous namespace */ |
michael@0 | 4061 | |
michael@0 | 4062 | /* Start a new heap session. */ |
michael@0 | 4063 | AutoTraceSession::AutoTraceSession(JSRuntime *rt, js::HeapState heapState) |
michael@0 | 4064 | : lock(rt), |
michael@0 | 4065 | runtime(rt), |
michael@0 | 4066 | prevState(rt->heapState) |
michael@0 | 4067 | { |
michael@0 | 4068 | JS_ASSERT(!rt->noGCOrAllocationCheck); |
michael@0 | 4069 | JS_ASSERT(!rt->isHeapBusy()); |
michael@0 | 4070 | JS_ASSERT(heapState != Idle); |
michael@0 | 4071 | #ifdef JSGC_GENERATIONAL |
michael@0 | 4072 | JS_ASSERT_IF(heapState == MajorCollecting, rt->gcNursery.isEmpty()); |
michael@0 | 4073 | #endif |
michael@0 | 4074 | |
michael@0 | 4075 | // Threads with an exclusive context can hit refillFreeList while holding |
michael@0 | 4076 | // the exclusive access lock. To avoid deadlocking when we try to acquire |
michael@0 | 4077 | // this lock during GC and the other thread is waiting, make sure we hold |
michael@0 | 4078 | // the exclusive access lock during GC sessions. |
michael@0 | 4079 | JS_ASSERT(rt->currentThreadHasExclusiveAccess()); |
michael@0 | 4080 | |
michael@0 | 4081 | if (rt->exclusiveThreadsPresent()) { |
michael@0 | 4082 | // Lock the worker thread state when changing the heap state in the |
michael@0 | 4083 | // presence of exclusive threads, to avoid racing with refillFreeList. |
michael@0 | 4084 | #ifdef JS_THREADSAFE |
michael@0 | 4085 | AutoLockWorkerThreadState lock; |
michael@0 | 4086 | rt->heapState = heapState; |
michael@0 | 4087 | #else |
michael@0 | 4088 | MOZ_CRASH(); |
michael@0 | 4089 | #endif |
michael@0 | 4090 | } else { |
michael@0 | 4091 | rt->heapState = heapState; |
michael@0 | 4092 | } |
michael@0 | 4093 | } |
michael@0 | 4094 | |
michael@0 | 4095 | AutoTraceSession::~AutoTraceSession() |
michael@0 | 4096 | { |
michael@0 | 4097 | JS_ASSERT(runtime->isHeapBusy()); |
michael@0 | 4098 | |
michael@0 | 4099 | if (runtime->exclusiveThreadsPresent()) { |
michael@0 | 4100 | #ifdef JS_THREADSAFE |
michael@0 | 4101 | AutoLockWorkerThreadState lock; |
michael@0 | 4102 | runtime->heapState = prevState; |
michael@0 | 4103 | |
michael@0 | 4104 | // Notify any worker threads waiting for the trace session to end. |
michael@0 | 4105 | WorkerThreadState().notifyAll(GlobalWorkerThreadState::PRODUCER); |
michael@0 | 4106 | #else |
michael@0 | 4107 | MOZ_CRASH(); |
michael@0 | 4108 | #endif |
michael@0 | 4109 | } else { |
michael@0 | 4110 | runtime->heapState = prevState; |
michael@0 | 4111 | } |
michael@0 | 4112 | } |
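// [Editor's note] AutoTraceSession is an RAII save/restore of rt->heapState:
// the constructor records the previous state and installs the new one, the
// destructor restores it, and both lock only when other threads could
// observe the change. The bare pattern, with hypothetical names and a plain
// std::mutex standing in for the worker-thread lock:
#if 0
#include <mutex>

enum class HeapState { Idle, Tracing, MajorCollecting };

struct Runtime {
    HeapState heapState = HeapState::Idle;
    std::mutex stateLock;
};

class AutoHeapSession {
    Runtime &rt;
    HeapState prev;
  public:
    AutoHeapSession(Runtime &rt, HeapState s) : rt(rt) {
        std::lock_guard<std::mutex> guard(rt.stateLock);
        prev = rt.heapState;            // Save the previous state...
        rt.heapState = s;               // ...and enter the session.
    }
    ~AutoHeapSession() {
        std::lock_guard<std::mutex> guard(rt.stateLock);
        rt.heapState = prev;            // Restored on every exit path.
    }
};
#endif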
michael@0 | 4113 | |
michael@0 | 4114 | AutoGCSession::AutoGCSession(JSRuntime *rt) |
michael@0 | 4115 | : runtime(rt), |
michael@0 | 4116 | session(rt, MajorCollecting), |
michael@0 | 4117 | canceled(false) |
michael@0 | 4118 | { |
michael@0 | 4119 | runtime->gcIsNeeded = false; |
michael@0 | 4120 | runtime->gcInterFrameGC = true; |
michael@0 | 4121 | |
michael@0 | 4122 | runtime->gcNumber++; |
michael@0 | 4123 | |
michael@0 | 4124 | // It's ok if threads other than the main thread have suppressGC set, as |
michael@0 | 4125 | // they are operating on zones which will not be collected from here. |
michael@0 | 4126 | JS_ASSERT(!runtime->mainThread.suppressGC); |
michael@0 | 4127 | } |
michael@0 | 4128 | |
michael@0 | 4129 | AutoGCSession::~AutoGCSession() |
michael@0 | 4130 | { |
michael@0 | 4131 | if (canceled) |
michael@0 | 4132 | return; |
michael@0 | 4133 | |
michael@0 | 4134 | #ifndef JS_MORE_DETERMINISTIC |
michael@0 | 4135 | runtime->gcNextFullGCTime = PRMJ_Now() + GC_IDLE_FULL_SPAN; |
michael@0 | 4136 | #endif |
michael@0 | 4137 | |
michael@0 | 4138 | runtime->gcChunkAllocationSinceLastGC = false; |
michael@0 | 4139 | |
michael@0 | 4140 | #ifdef JS_GC_ZEAL |
michael@0 | 4141 | /* Keeping these around after a GC is dangerous. */ |
michael@0 | 4142 | runtime->gcSelectedForMarking.clearAndFree(); |
michael@0 | 4143 | #endif |
michael@0 | 4144 | |
michael@0 | 4145 |     /* Clear gcMallocBytes for all zones. */ |
michael@0 | 4146 | for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4147 | zone->resetGCMallocBytes(); |
michael@0 | 4148 | zone->unscheduleGC(); |
michael@0 | 4149 | } |
michael@0 | 4150 | |
michael@0 | 4151 | runtime->resetGCMallocBytes(); |
michael@0 | 4152 | } |
michael@0 | 4153 | |
michael@0 | 4154 | AutoCopyFreeListToArenas::AutoCopyFreeListToArenas(JSRuntime *rt, ZoneSelector selector) |
michael@0 | 4155 | : runtime(rt), |
michael@0 | 4156 | selector(selector) |
michael@0 | 4157 | { |
michael@0 | 4158 | for (ZonesIter zone(rt, selector); !zone.done(); zone.next()) |
michael@0 | 4159 | zone->allocator.arenas.copyFreeListsToArenas(); |
michael@0 | 4160 | } |
michael@0 | 4161 | |
michael@0 | 4162 | AutoCopyFreeListToArenas::~AutoCopyFreeListToArenas() |
michael@0 | 4163 | { |
michael@0 | 4164 | for (ZonesIter zone(runtime, selector); !zone.done(); zone.next()) |
michael@0 | 4165 | zone->allocator.arenas.clearFreeListsInArenas(); |
michael@0 | 4166 | } |
michael@0 | 4167 | |
michael@0 | 4168 | class AutoCopyFreeListToArenasForGC |
michael@0 | 4169 | { |
michael@0 | 4170 | JSRuntime *runtime; |
michael@0 | 4171 | |
michael@0 | 4172 | public: |
michael@0 | 4173 | AutoCopyFreeListToArenasForGC(JSRuntime *rt) : runtime(rt) { |
michael@0 | 4174 | JS_ASSERT(rt->currentThreadHasExclusiveAccess()); |
michael@0 | 4175 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) |
michael@0 | 4176 | zone->allocator.arenas.copyFreeListsToArenas(); |
michael@0 | 4177 | } |
michael@0 | 4178 | ~AutoCopyFreeListToArenasForGC() { |
michael@0 | 4179 | for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) |
michael@0 | 4180 | zone->allocator.arenas.clearFreeListsInArenas(); |
michael@0 | 4181 | } |
michael@0 | 4182 | }; |
michael@0 | 4183 | |
michael@0 | 4184 | static void |
michael@0 | 4185 | IncrementalCollectSlice(JSRuntime *rt, |
michael@0 | 4186 | int64_t budget, |
michael@0 | 4187 | JS::gcreason::Reason gcReason, |
michael@0 | 4188 | JSGCInvocationKind gcKind); |
michael@0 | 4189 | |
michael@0 | 4190 | static void |
michael@0 | 4191 | ResetIncrementalGC(JSRuntime *rt, const char *reason) |
michael@0 | 4192 | { |
michael@0 | 4193 | switch (rt->gcIncrementalState) { |
michael@0 | 4194 | case NO_INCREMENTAL: |
michael@0 | 4195 | return; |
michael@0 | 4196 | |
michael@0 | 4197 | case MARK: { |
michael@0 | 4198 | /* Cancel any ongoing marking. */ |
michael@0 | 4199 | AutoCopyFreeListToArenasForGC copy(rt); |
michael@0 | 4200 | |
michael@0 | 4201 | rt->gcMarker.reset(); |
michael@0 | 4202 | rt->gcMarker.stop(); |
michael@0 | 4203 | |
michael@0 | 4204 | for (GCCompartmentsIter c(rt); !c.done(); c.next()) { |
michael@0 | 4205 | ArrayBufferObject::resetArrayBufferList(c); |
michael@0 | 4206 | ResetGrayList(c); |
michael@0 | 4207 | } |
michael@0 | 4208 | |
michael@0 | 4209 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 4210 | JS_ASSERT(zone->isGCMarking()); |
michael@0 | 4211 | zone->setNeedsBarrier(false, Zone::UpdateIon); |
michael@0 | 4212 | zone->setGCState(Zone::NoGC); |
michael@0 | 4213 | } |
michael@0 | 4214 | rt->setNeedsBarrier(false); |
michael@0 | 4215 | AssertNeedsBarrierFlagsConsistent(rt); |
michael@0 | 4216 | |
michael@0 | 4217 | rt->gcIncrementalState = NO_INCREMENTAL; |
michael@0 | 4218 | |
michael@0 | 4219 | JS_ASSERT(!rt->gcStrictCompartmentChecking); |
michael@0 | 4220 | |
michael@0 | 4221 | break; |
michael@0 | 4222 | } |
michael@0 | 4223 | |
michael@0 | 4224 | case SWEEP: |
michael@0 | 4225 | rt->gcMarker.reset(); |
michael@0 | 4226 | |
michael@0 | 4227 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) |
michael@0 | 4228 | zone->scheduledForDestruction = false; |
michael@0 | 4229 | |
michael@0 | 4230 | /* Finish sweeping the current zone group, then abort. */ |
michael@0 | 4231 | rt->gcAbortSweepAfterCurrentGroup = true; |
michael@0 | 4232 | IncrementalCollectSlice(rt, SliceBudget::Unlimited, JS::gcreason::RESET, GC_NORMAL); |
michael@0 | 4233 | |
michael@0 | 4234 | { |
michael@0 | 4235 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_WAIT_BACKGROUND_THREAD); |
michael@0 | 4236 | rt->gcHelperThread.waitBackgroundSweepOrAllocEnd(); |
michael@0 | 4237 | } |
michael@0 | 4238 | break; |
michael@0 | 4239 | |
michael@0 | 4240 | default: |
michael@0 | 4241 | MOZ_ASSUME_UNREACHABLE("Invalid incremental GC state"); |
michael@0 | 4242 | } |
michael@0 | 4243 | |
michael@0 | 4244 | rt->gcStats.reset(reason); |
michael@0 | 4245 | |
michael@0 | 4246 | #ifdef DEBUG |
michael@0 | 4247 | for (CompartmentsIter c(rt, SkipAtoms); !c.done(); c.next()) |
michael@0 | 4248 | JS_ASSERT(c->gcLiveArrayBuffers.empty()); |
michael@0 | 4249 | |
michael@0 | 4250 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4251 | JS_ASSERT(!zone->needsBarrier()); |
michael@0 | 4252 | for (unsigned i = 0; i < FINALIZE_LIMIT; ++i) |
michael@0 | 4253 | JS_ASSERT(!zone->allocator.arenas.arenaListsToSweep[i]); |
michael@0 | 4254 | } |
michael@0 | 4255 | #endif |
michael@0 | 4256 | } |
michael@0 | 4257 | |
michael@0 | 4258 | namespace { |
michael@0 | 4259 | |
michael@0 | 4260 | class AutoGCSlice { |
michael@0 | 4261 | public: |
michael@0 | 4262 | AutoGCSlice(JSRuntime *rt); |
michael@0 | 4263 | ~AutoGCSlice(); |
michael@0 | 4264 | |
michael@0 | 4265 | private: |
michael@0 | 4266 | JSRuntime *runtime; |
michael@0 | 4267 | }; |
michael@0 | 4268 | |
michael@0 | 4269 | } /* anonymous namespace */ |
michael@0 | 4270 | |
michael@0 | 4271 | AutoGCSlice::AutoGCSlice(JSRuntime *rt) |
michael@0 | 4272 | : runtime(rt) |
michael@0 | 4273 | { |
michael@0 | 4274 | /* |
michael@0 | 4275 | * During incremental GC, the compartment's active flag determines whether |
michael@0 | 4276 | * there are stack frames active for any of its scripts. Normally this flag |
michael@0 | 4277 | * is set at the beginning of the mark phase. During incremental GC, we also |
michael@0 | 4278 | * set it at the start of every phase. |
michael@0 | 4279 | */ |
michael@0 | 4280 | for (ActivationIterator iter(rt); !iter.done(); ++iter) |
michael@0 | 4281 | iter->compartment()->zone()->active = true; |
michael@0 | 4282 | |
michael@0 | 4283 | for (GCZonesIter zone(rt); !zone.done(); zone.next()) { |
michael@0 | 4284 | /* |
michael@0 | 4285 | * Clear needsBarrier early so we don't do any write barriers during |
michael@0 | 4286 | * GC. We don't need to update the Ion barriers (which is expensive) |
michael@0 | 4287 | * because Ion code doesn't run during GC. If need be, we'll update the |
michael@0 | 4288 | * Ion barriers in ~AutoGCSlice. |
michael@0 | 4289 | */ |
michael@0 | 4290 | if (zone->isGCMarking()) { |
michael@0 | 4291 | JS_ASSERT(zone->needsBarrier()); |
michael@0 | 4292 | zone->setNeedsBarrier(false, Zone::DontUpdateIon); |
michael@0 | 4293 | } else { |
michael@0 | 4294 | JS_ASSERT(!zone->needsBarrier()); |
michael@0 | 4295 | } |
michael@0 | 4296 | } |
michael@0 | 4297 | rt->setNeedsBarrier(false); |
michael@0 | 4298 | AssertNeedsBarrierFlagsConsistent(rt); |
michael@0 | 4299 | } |
michael@0 | 4300 | |
michael@0 | 4301 | AutoGCSlice::~AutoGCSlice() |
michael@0 | 4302 | { |
michael@0 | 4303 | /* We can't use GCZonesIter if this is the end of the last slice. */ |
michael@0 | 4304 | bool haveBarriers = false; |
michael@0 | 4305 | for (ZonesIter zone(runtime, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4306 | if (zone->isGCMarking()) { |
michael@0 | 4307 | zone->setNeedsBarrier(true, Zone::UpdateIon); |
michael@0 | 4308 | zone->allocator.arenas.prepareForIncrementalGC(runtime); |
michael@0 | 4309 | haveBarriers = true; |
michael@0 | 4310 | } else { |
michael@0 | 4311 | zone->setNeedsBarrier(false, Zone::UpdateIon); |
michael@0 | 4312 | } |
michael@0 | 4313 | } |
michael@0 | 4314 | runtime->setNeedsBarrier(haveBarriers); |
michael@0 | 4315 | AssertNeedsBarrierFlagsConsistent(runtime); |
michael@0 | 4316 | } |
michael@0 | 4317 | |
michael@0 | 4318 | static void |
michael@0 | 4319 | PushZealSelectedObjects(JSRuntime *rt) |
michael@0 | 4320 | { |
michael@0 | 4321 | #ifdef JS_GC_ZEAL |
michael@0 | 4322 | /* Push selected objects onto the mark stack and clear the list. */ |
michael@0 | 4323 | for (JSObject **obj = rt->gcSelectedForMarking.begin(); |
michael@0 | 4324 | obj != rt->gcSelectedForMarking.end(); obj++) |
michael@0 | 4325 | { |
michael@0 | 4326 | MarkObjectUnbarriered(&rt->gcMarker, obj, "selected obj"); |
michael@0 | 4327 | } |
michael@0 | 4328 | #endif |
michael@0 | 4329 | } |
michael@0 | 4330 | |
michael@0 | 4331 | static void |
michael@0 | 4332 | IncrementalCollectSlice(JSRuntime *rt, |
michael@0 | 4333 | int64_t budget, |
michael@0 | 4334 | JS::gcreason::Reason reason, |
michael@0 | 4335 | JSGCInvocationKind gckind) |
michael@0 | 4336 | { |
michael@0 | 4337 | JS_ASSERT(rt->currentThreadHasExclusiveAccess()); |
michael@0 | 4338 | |
michael@0 | 4339 | AutoCopyFreeListToArenasForGC copy(rt); |
michael@0 | 4340 | AutoGCSlice slice(rt); |
michael@0 | 4341 | |
michael@0 | 4342 | bool lastGC = (reason == JS::gcreason::DESTROY_RUNTIME); |
michael@0 | 4343 | |
michael@0 | 4344 | gc::State initialState = rt->gcIncrementalState; |
michael@0 | 4345 | |
michael@0 | 4346 | int zeal = 0; |
michael@0 | 4347 | #ifdef JS_GC_ZEAL |
michael@0 | 4348 | if (reason == JS::gcreason::DEBUG_GC && budget != SliceBudget::Unlimited) { |
michael@0 | 4349 | /* |
michael@0 | 4350 | * Do the incremental collection type specified by zeal mode if the |
michael@0 | 4351 | * collection was triggered by RunDebugGC() and incremental GC has not |
michael@0 | 4352 | * been cancelled by ResetIncrementalGC. |
michael@0 | 4353 | */ |
michael@0 | 4354 | zeal = rt->gcZeal(); |
michael@0 | 4355 | } |
michael@0 | 4356 | #endif |
michael@0 | 4357 | |
michael@0 | 4358 | JS_ASSERT_IF(rt->gcIncrementalState != NO_INCREMENTAL, rt->gcIsIncremental); |
michael@0 | 4359 | rt->gcIsIncremental = budget != SliceBudget::Unlimited; |
michael@0 | 4360 | |
michael@0 | 4361 | if (zeal == ZealIncrementalRootsThenFinish || zeal == ZealIncrementalMarkAllThenFinish) { |
michael@0 | 4362 | /* |
michael@0 | 4363 |          * Yields between slices occur at predetermined points in these modes; |
michael@0 | 4364 | * the budget is not used. |
michael@0 | 4365 | */ |
michael@0 | 4366 | budget = SliceBudget::Unlimited; |
michael@0 | 4367 | } |
michael@0 | 4368 | |
michael@0 | 4369 | SliceBudget sliceBudget(budget); |
michael@0 | 4370 | |
michael@0 | 4371 | if (rt->gcIncrementalState == NO_INCREMENTAL) { |
michael@0 | 4372 | rt->gcIncrementalState = MARK_ROOTS; |
michael@0 | 4373 | rt->gcLastMarkSlice = false; |
michael@0 | 4374 | } |
michael@0 | 4375 | |
michael@0 | 4376 | if (rt->gcIncrementalState == MARK) |
michael@0 | 4377 | AutoGCRooter::traceAllWrappers(&rt->gcMarker); |
michael@0 | 4378 | |
michael@0 | 4379 | switch (rt->gcIncrementalState) { |
michael@0 | 4380 | |
michael@0 | 4381 | case MARK_ROOTS: |
michael@0 | 4382 | if (!BeginMarkPhase(rt)) { |
michael@0 | 4383 | rt->gcIncrementalState = NO_INCREMENTAL; |
michael@0 | 4384 | return; |
michael@0 | 4385 | } |
michael@0 | 4386 | |
michael@0 | 4387 | if (!lastGC) |
michael@0 | 4388 | PushZealSelectedObjects(rt); |
michael@0 | 4389 | |
michael@0 | 4390 | rt->gcIncrementalState = MARK; |
michael@0 | 4391 | |
michael@0 | 4392 | if (rt->gcIsIncremental && zeal == ZealIncrementalRootsThenFinish) |
michael@0 | 4393 | break; |
michael@0 | 4394 | |
michael@0 | 4395 | /* fall through */ |
michael@0 | 4396 | |
michael@0 | 4397 | case MARK: { |
michael@0 | 4398 | /* If we needed delayed marking for gray roots, then collect until done. */ |
michael@0 | 4399 | if (!rt->gcMarker.hasBufferedGrayRoots()) { |
michael@0 | 4400 | sliceBudget.reset(); |
michael@0 | 4401 | rt->gcIsIncremental = false; |
michael@0 | 4402 | } |
michael@0 | 4403 | |
michael@0 | 4404 | bool finished = DrainMarkStack(rt, sliceBudget, gcstats::PHASE_MARK); |
michael@0 | 4405 | if (!finished) |
michael@0 | 4406 | break; |
michael@0 | 4407 | |
michael@0 | 4408 | JS_ASSERT(rt->gcMarker.isDrained()); |
michael@0 | 4409 | |
michael@0 | 4410 | if (!rt->gcLastMarkSlice && rt->gcIsIncremental && |
michael@0 | 4411 | ((initialState == MARK && zeal != ZealIncrementalRootsThenFinish) || |
michael@0 | 4412 | zeal == ZealIncrementalMarkAllThenFinish)) |
michael@0 | 4413 | { |
michael@0 | 4414 | /* |
michael@0 | 4415 | * Yield with the aim of starting the sweep in the next |
michael@0 | 4416 | * slice. We will need to mark anything new on the stack |
michael@0 | 4417 | * when we resume, so we stay in MARK state. |
michael@0 | 4418 | */ |
michael@0 | 4419 | rt->gcLastMarkSlice = true; |
michael@0 | 4420 | break; |
michael@0 | 4421 | } |
michael@0 | 4422 | |
michael@0 | 4423 | rt->gcIncrementalState = SWEEP; |
michael@0 | 4424 | |
michael@0 | 4425 | /* |
michael@0 | 4426 | * This runs to completion, but we don't continue if the budget is |
michael@0 | 4427 |          * now exhausted. |
michael@0 | 4428 | */ |
michael@0 | 4429 | BeginSweepPhase(rt, lastGC); |
michael@0 | 4430 | if (sliceBudget.isOverBudget()) |
michael@0 | 4431 | break; |
michael@0 | 4432 | |
michael@0 | 4433 | /* |
michael@0 | 4434 | * Always yield here when running in incremental multi-slice zeal |
michael@0 | 4435 |          * mode, so RunDebugGC can reset the slice budget. |
michael@0 | 4436 | */ |
michael@0 | 4437 | if (rt->gcIsIncremental && zeal == ZealIncrementalMultipleSlices) |
michael@0 | 4438 | break; |
michael@0 | 4439 | |
michael@0 | 4440 | /* fall through */ |
michael@0 | 4441 | } |
michael@0 | 4442 | |
michael@0 | 4443 | case SWEEP: { |
michael@0 | 4444 | bool finished = SweepPhase(rt, sliceBudget); |
michael@0 | 4445 | if (!finished) |
michael@0 | 4446 | break; |
michael@0 | 4447 | |
michael@0 | 4448 | EndSweepPhase(rt, gckind, lastGC); |
michael@0 | 4449 | |
michael@0 | 4450 | if (rt->gcSweepOnBackgroundThread) |
michael@0 | 4451 | rt->gcHelperThread.startBackgroundSweep(gckind == GC_SHRINK); |
michael@0 | 4452 | |
michael@0 | 4453 | rt->gcIncrementalState = NO_INCREMENTAL; |
michael@0 | 4454 | break; |
michael@0 | 4455 | } |
michael@0 | 4456 | |
michael@0 | 4457 | default: |
michael@0 | 4458 | JS_ASSERT(false); |
michael@0 | 4459 | } |
michael@0 | 4460 | } |
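// [Editor's note] The switch in IncrementalCollectSlice is a resumable state
// machine: the state lives on the runtime, each case falls through to the
// next when it completes within budget, and a `break` parks the machine
// until the next slice re-enters the switch at the saved state. Reduced to
// its skeleton (hypothetical names):
#if 0
enum State { IDLE, MARK_STATE, SWEEP_STATE };

struct Collector {
    State state = IDLE;
    long marked = 0, swept = 0;

    // Stand-ins: do up to `n` units; return true when the phase is done.
    bool markSome(long n)  { marked += n; return marked >= 100; }
    bool sweepSome(long n) { swept += n; return swept >= 100; }

    void slice(long budget) {
        switch (state) {
          case IDLE:
            state = MARK_STATE;
            /* fall through */
          case MARK_STATE:
            if (!markSome(budget))
                return;                 // Out of budget: resume here later.
            state = SWEEP_STATE;
            /* fall through */
          case SWEEP_STATE:
            if (!sweepSome(budget))
                return;
            state = IDLE;               // Collection complete.
        }
    }
};
#endif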
michael@0 | 4461 | |
michael@0 | 4462 | IncrementalSafety |
michael@0 | 4463 | gc::IsIncrementalGCSafe(JSRuntime *rt) |
michael@0 | 4464 | { |
michael@0 | 4465 | JS_ASSERT(!rt->mainThread.suppressGC); |
michael@0 | 4466 | |
michael@0 | 4467 | if (rt->keepAtoms()) |
michael@0 | 4468 | return IncrementalSafety::Unsafe("keepAtoms set"); |
michael@0 | 4469 | |
michael@0 | 4470 | if (!rt->gcIncrementalEnabled) |
michael@0 | 4471 | return IncrementalSafety::Unsafe("incremental permanently disabled"); |
michael@0 | 4472 | |
michael@0 | 4473 | return IncrementalSafety::Safe(); |
michael@0 | 4474 | } |
michael@0 | 4475 | |
michael@0 | 4476 | static void |
michael@0 | 4477 | BudgetIncrementalGC(JSRuntime *rt, int64_t *budget) |
michael@0 | 4478 | { |
michael@0 | 4479 | IncrementalSafety safe = IsIncrementalGCSafe(rt); |
michael@0 | 4480 | if (!safe) { |
michael@0 | 4481 | ResetIncrementalGC(rt, safe.reason()); |
michael@0 | 4482 | *budget = SliceBudget::Unlimited; |
michael@0 | 4483 | rt->gcStats.nonincremental(safe.reason()); |
michael@0 | 4484 | return; |
michael@0 | 4485 | } |
michael@0 | 4486 | |
michael@0 | 4487 | if (rt->gcMode() != JSGC_MODE_INCREMENTAL) { |
michael@0 | 4488 | ResetIncrementalGC(rt, "GC mode change"); |
michael@0 | 4489 | *budget = SliceBudget::Unlimited; |
michael@0 | 4490 | rt->gcStats.nonincremental("GC mode"); |
michael@0 | 4491 | return; |
michael@0 | 4492 | } |
michael@0 | 4493 | |
michael@0 | 4494 | if (rt->isTooMuchMalloc()) { |
michael@0 | 4495 | *budget = SliceBudget::Unlimited; |
michael@0 | 4496 | rt->gcStats.nonincremental("malloc bytes trigger"); |
michael@0 | 4497 | } |
michael@0 | 4498 | |
michael@0 | 4499 | bool reset = false; |
michael@0 | 4500 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4501 | if (zone->gcBytes >= zone->gcTriggerBytes) { |
michael@0 | 4502 | *budget = SliceBudget::Unlimited; |
michael@0 | 4503 | rt->gcStats.nonincremental("allocation trigger"); |
michael@0 | 4504 | } |
michael@0 | 4505 | |
michael@0 | 4506 | if (rt->gcIncrementalState != NO_INCREMENTAL && |
michael@0 | 4507 | zone->isGCScheduled() != zone->wasGCStarted()) |
michael@0 | 4508 | { |
michael@0 | 4509 | reset = true; |
michael@0 | 4510 | } |
michael@0 | 4511 | |
michael@0 | 4512 | if (zone->isTooMuchMalloc()) { |
michael@0 | 4513 | *budget = SliceBudget::Unlimited; |
michael@0 | 4514 | rt->gcStats.nonincremental("malloc bytes trigger"); |
michael@0 | 4515 | } |
michael@0 | 4516 | } |
michael@0 | 4517 | |
michael@0 | 4518 | if (reset) |
michael@0 | 4519 | ResetIncrementalGC(rt, "zone change"); |
michael@0 | 4520 | } |
michael@0 | 4521 | |
michael@0 | 4522 | /* |
michael@0 | 4523 | * Run one GC "cycle" (either a slice of incremental GC or an entire |
michael@0 | 4524 |  * non-incremental GC). We disable inlining to ensure that the bottom of the |
michael@0 | 4525 | * stack with possible GC roots recorded in MarkRuntime excludes any pointers we |
michael@0 | 4526 | * use during the marking implementation. |
michael@0 | 4527 | * |
michael@0 | 4528 | * Returns true if we "reset" an existing incremental GC, which would force us |
michael@0 | 4529 | * to run another cycle. |
michael@0 | 4530 | */ |
michael@0 | 4531 | static MOZ_NEVER_INLINE bool |
michael@0 | 4532 | GCCycle(JSRuntime *rt, bool incremental, int64_t budget, |
michael@0 | 4533 | JSGCInvocationKind gckind, JS::gcreason::Reason reason) |
michael@0 | 4534 | { |
michael@0 | 4535 | AutoGCSession gcsession(rt); |
michael@0 | 4536 | |
michael@0 | 4537 | /* |
michael@0 | 4538 |      * As we are about to purge caches and clear the mark bits, we must wait |
michael@0 | 4539 |      * for any background finalization to finish. We must also wait for the |
michael@0 | 4540 | * background allocation to finish so we can avoid taking the GC lock |
michael@0 | 4541 | * when manipulating the chunks during the GC. |
michael@0 | 4542 | */ |
michael@0 | 4543 | { |
michael@0 | 4544 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_WAIT_BACKGROUND_THREAD); |
michael@0 | 4545 | rt->gcHelperThread.waitBackgroundSweepOrAllocEnd(); |
michael@0 | 4546 | } |
michael@0 | 4547 | |
michael@0 | 4548 | State prevState = rt->gcIncrementalState; |
michael@0 | 4549 | |
michael@0 | 4550 | if (!incremental) { |
michael@0 | 4551 | /* If non-incremental GC was requested, reset incremental GC. */ |
michael@0 | 4552 | ResetIncrementalGC(rt, "requested"); |
michael@0 | 4553 | rt->gcStats.nonincremental("requested"); |
michael@0 | 4554 | budget = SliceBudget::Unlimited; |
michael@0 | 4555 | } else { |
michael@0 | 4556 | BudgetIncrementalGC(rt, &budget); |
michael@0 | 4557 | } |
michael@0 | 4558 | |
michael@0 | 4559 | /* The GC was reset, so we need a do-over. */ |
michael@0 | 4560 | if (prevState != NO_INCREMENTAL && rt->gcIncrementalState == NO_INCREMENTAL) { |
michael@0 | 4561 | gcsession.cancel(); |
michael@0 | 4562 | return true; |
michael@0 | 4563 | } |
michael@0 | 4564 | |
michael@0 | 4565 | IncrementalCollectSlice(rt, budget, reason, gckind); |
michael@0 | 4566 | return false; |
michael@0 | 4567 | } |
michael@0 | 4568 | |
michael@0 | 4569 | #ifdef JS_GC_ZEAL |
michael@0 | 4570 | static bool |
michael@0 | 4571 | IsDeterministicGCReason(JS::gcreason::Reason reason) |
michael@0 | 4572 | { |
michael@0 | 4573 | if (reason > JS::gcreason::DEBUG_GC && |
michael@0 | 4574 | reason != JS::gcreason::CC_FORCED && reason != JS::gcreason::SHUTDOWN_CC) |
michael@0 | 4575 | { |
michael@0 | 4576 | return false; |
michael@0 | 4577 | } |
michael@0 | 4578 | |
michael@0 | 4579 | if (reason == JS::gcreason::MAYBEGC) |
michael@0 | 4580 | return false; |
michael@0 | 4581 | |
michael@0 | 4582 | return true; |
michael@0 | 4583 | } |
michael@0 | 4584 | #endif |
michael@0 | 4585 | |
michael@0 | 4586 | static bool |
michael@0 | 4587 | ShouldCleanUpEverything(JSRuntime *rt, JS::gcreason::Reason reason, JSGCInvocationKind gckind) |
michael@0 | 4588 | { |
michael@0 | 4589 | // During shutdown, we must clean everything up, for the sake of leak |
michael@0 | 4590 | // detection. When a runtime has no contexts, or we're doing a GC before a |
michael@0 | 4591 | // shutdown CC, those are strong indications that we're shutting down. |
michael@0 | 4592 | return reason == JS::gcreason::DESTROY_RUNTIME || |
michael@0 | 4593 | reason == JS::gcreason::SHUTDOWN_CC || |
michael@0 | 4594 | gckind == GC_SHRINK; |
michael@0 | 4595 | } |
michael@0 | 4596 | |
michael@0 | 4597 | namespace { |
michael@0 | 4598 | |
michael@0 | 4599 | #ifdef JSGC_GENERATIONAL |
michael@0 | 4600 | class AutoDisableStoreBuffer |
michael@0 | 4601 | { |
michael@0 | 4602 | JSRuntime *runtime; |
michael@0 | 4603 | bool prior; |
michael@0 | 4604 | |
michael@0 | 4605 | public: |
michael@0 | 4606 | AutoDisableStoreBuffer(JSRuntime *rt) : runtime(rt) { |
michael@0 | 4607 | prior = rt->gcStoreBuffer.isEnabled(); |
michael@0 | 4608 | rt->gcStoreBuffer.disable(); |
michael@0 | 4609 | } |
michael@0 | 4610 | ~AutoDisableStoreBuffer() { |
michael@0 | 4611 | if (prior) |
michael@0 | 4612 | runtime->gcStoreBuffer.enable(); |
michael@0 | 4613 | } |
michael@0 | 4614 | }; |
michael@0 | 4615 | #else |
michael@0 | 4616 | struct AutoDisableStoreBuffer |
michael@0 | 4617 | { |
michael@0 | 4618 | AutoDisableStoreBuffer(JSRuntime *) {} |
michael@0 | 4619 | }; |
michael@0 | 4620 | #endif |
michael@0 | 4621 | |
michael@0 | 4622 | } /* anonymous namespace */ |
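// [Editor's note] The #ifdef above compiles AutoDisableStoreBuffer to a real
// RAII guard when generational GC is built in, and to an empty struct
// otherwise, so call sites like the one in Collect stay unconditional. The
// same idiom in isolation (hypothetical names and feature macro):
#if 0
struct Buffer {
    bool on = true;
    void disable() { on = false; }
    void enable()  { on = true; }
};

#ifdef HAVE_FEATURE                     /* Hypothetical feature macro. */
class AutoDisableBuffer {
    Buffer &buf;
    bool prior;
  public:
    explicit AutoDisableBuffer(Buffer &b) : buf(b), prior(b.on) { b.disable(); }
    ~AutoDisableBuffer() { if (prior) buf.enable(); }
};
#else
struct AutoDisableBuffer {              /* No-op stand-in. */
    explicit AutoDisableBuffer(Buffer &) {}
};
#endif
#endif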
michael@0 | 4623 | |
michael@0 | 4624 | static void |
michael@0 | 4625 | Collect(JSRuntime *rt, bool incremental, int64_t budget, |
michael@0 | 4626 | JSGCInvocationKind gckind, JS::gcreason::Reason reason) |
michael@0 | 4627 | { |
michael@0 | 4628 | /* GC shouldn't be running in parallel execution mode */ |
michael@0 | 4629 | JS_ASSERT(!InParallelSection()); |
michael@0 | 4630 | |
michael@0 | 4631 | JS_AbortIfWrongThread(rt); |
michael@0 | 4632 | |
michael@0 | 4633 | /* If we attempt to invoke the GC while we are running in the GC, assert. */ |
michael@0 | 4634 | JS_ASSERT(!rt->isHeapBusy()); |
michael@0 | 4635 | |
michael@0 | 4636 | if (rt->mainThread.suppressGC) |
michael@0 | 4637 | return; |
michael@0 | 4638 | |
michael@0 | 4639 | TraceLogger *logger = TraceLoggerForMainThread(rt); |
michael@0 | 4640 | AutoTraceLog logGC(logger, TraceLogger::GC); |
michael@0 | 4641 | |
michael@0 | 4642 | #ifdef JS_GC_ZEAL |
michael@0 | 4643 | if (rt->gcDeterministicOnly && !IsDeterministicGCReason(reason)) |
michael@0 | 4644 | return; |
michael@0 | 4645 | #endif |
michael@0 | 4646 | |
michael@0 | 4647 | JS_ASSERT_IF(!incremental || budget != SliceBudget::Unlimited, JSGC_INCREMENTAL); |
michael@0 | 4648 | |
michael@0 | 4649 | AutoStopVerifyingBarriers av(rt, reason == JS::gcreason::SHUTDOWN_CC || |
michael@0 | 4650 | reason == JS::gcreason::DESTROY_RUNTIME); |
michael@0 | 4651 | |
michael@0 | 4652 | RecordNativeStackTopForGC(rt); |
michael@0 | 4653 | |
michael@0 | 4654 | int zoneCount = 0; |
michael@0 | 4655 | int compartmentCount = 0; |
michael@0 | 4656 | int collectedCount = 0; |
michael@0 | 4657 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4658 | if (rt->gcMode() == JSGC_MODE_GLOBAL) |
michael@0 | 4659 | zone->scheduleGC(); |
michael@0 | 4660 | |
michael@0 | 4661 | /* This is a heuristic to avoid resets. */ |
michael@0 | 4662 | if (rt->gcIncrementalState != NO_INCREMENTAL && zone->needsBarrier()) |
michael@0 | 4663 | zone->scheduleGC(); |
michael@0 | 4664 | |
michael@0 | 4665 | zoneCount++; |
michael@0 | 4666 | if (zone->isGCScheduled()) |
michael@0 | 4667 | collectedCount++; |
michael@0 | 4668 | } |
michael@0 | 4669 | |
michael@0 | 4670 | for (CompartmentsIter c(rt, WithAtoms); !c.done(); c.next()) |
michael@0 | 4671 | compartmentCount++; |
michael@0 | 4672 | |
michael@0 | 4673 | rt->gcShouldCleanUpEverything = ShouldCleanUpEverything(rt, reason, gckind); |
michael@0 | 4674 | |
michael@0 | 4675 | bool repeat = false; |
michael@0 | 4676 | do { |
michael@0 | 4677 | MinorGC(rt, reason); |
michael@0 | 4678 | |
michael@0 | 4679 | /* |
michael@0 | 4680 | * Marking can trigger many incidental post barriers, some of them for |
michael@0 | 4681 | * objects which are not going to be live after the GC. |
michael@0 | 4682 | */ |
michael@0 | 4683 | AutoDisableStoreBuffer adsb(rt); |
michael@0 | 4684 | |
michael@0 | 4685 | gcstats::AutoGCSlice agc(rt->gcStats, collectedCount, zoneCount, compartmentCount, reason); |
michael@0 | 4686 | |
michael@0 | 4687 | /* |
michael@0 | 4688 | * Let the API user decide to defer a GC if it wants to (unless this |
michael@0 | 4689 | * is the last context). Invoke the callback regardless. |
michael@0 | 4690 | */ |
michael@0 | 4691 | if (rt->gcIncrementalState == NO_INCREMENTAL) { |
michael@0 | 4692 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_GC_BEGIN); |
michael@0 | 4693 | if (JSGCCallback callback = rt->gcCallback) |
michael@0 | 4694 | callback(rt, JSGC_BEGIN, rt->gcCallbackData); |
michael@0 | 4695 | } |
michael@0 | 4696 | |
michael@0 | 4697 | rt->gcPoke = false; |
michael@0 | 4698 | bool wasReset = GCCycle(rt, incremental, budget, gckind, reason); |
michael@0 | 4699 | |
michael@0 | 4700 | if (rt->gcIncrementalState == NO_INCREMENTAL) { |
michael@0 | 4701 | gcstats::AutoPhase ap(rt->gcStats, gcstats::PHASE_GC_END); |
michael@0 | 4702 | if (JSGCCallback callback = rt->gcCallback) |
michael@0 | 4703 | callback(rt, JSGC_END, rt->gcCallbackData); |
michael@0 | 4704 | } |
michael@0 | 4705 | |
michael@0 | 4706 | /* Need to re-schedule all zones for GC. */ |
michael@0 | 4707 | if (rt->gcPoke && rt->gcShouldCleanUpEverything) |
michael@0 | 4708 | JS::PrepareForFullGC(rt); |
michael@0 | 4709 | |
michael@0 | 4710 | /* |
michael@0 | 4711 | * If we reset an existing GC, we need to start a new one. Also, we |
michael@0 | 4712 | * repeat GCs that happen during shutdown (the gcShouldCleanUpEverything |
michael@0 | 4713 | * case) until we can be sure that no additional garbage is created |
michael@0 | 4714 | * (which typically happens if roots are dropped during finalizers). |
michael@0 | 4715 | */ |
michael@0 | 4716 | repeat = (rt->gcPoke && rt->gcShouldCleanUpEverything) || wasReset; |
michael@0 | 4717 | } while (repeat); |
michael@0 | 4718 | |
michael@0 | 4719 | if (rt->gcIncrementalState == NO_INCREMENTAL) { |
michael@0 | 4720 | #ifdef JS_THREADSAFE |
michael@0 | 4721 | EnqueuePendingParseTasksAfterGC(rt); |
michael@0 | 4722 | #endif |
michael@0 | 4723 | } |
michael@0 | 4724 | } |
michael@0 | 4725 | |
michael@0 | 4726 | void |
michael@0 | 4727 | js::GC(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason) |
michael@0 | 4728 | { |
michael@0 | 4729 | Collect(rt, false, SliceBudget::Unlimited, gckind, reason); |
michael@0 | 4730 | } |
michael@0 | 4731 | |
michael@0 | 4732 | void |
michael@0 | 4733 | js::GCSlice(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason, int64_t millis) |
michael@0 | 4734 | { |
michael@0 | 4735 | int64_t sliceBudget; |
michael@0 | 4736 | if (millis) |
michael@0 | 4737 | sliceBudget = SliceBudget::TimeBudget(millis); |
michael@0 | 4738 | else if (rt->gcHighFrequencyGC && rt->gcDynamicMarkSlice) |
michael@0 | 4739 | sliceBudget = rt->gcSliceBudget * IGC_MARK_SLICE_MULTIPLIER; |
michael@0 | 4740 | else |
michael@0 | 4741 | sliceBudget = rt->gcSliceBudget; |
michael@0 | 4742 | |
michael@0 | 4743 | Collect(rt, true, sliceBudget, gckind, reason); |
michael@0 | 4744 | } |
michael@0 | 4745 | |
michael@0 | 4746 | void |
michael@0 | 4747 | js::GCFinalSlice(JSRuntime *rt, JSGCInvocationKind gckind, JS::gcreason::Reason reason) |
michael@0 | 4748 | { |
michael@0 | 4749 | Collect(rt, true, SliceBudget::Unlimited, gckind, reason); |
michael@0 | 4750 | } |
michael@0 | 4751 | |
michael@0 | 4752 | static bool |
michael@0 | 4753 | ZonesSelected(JSRuntime *rt) |
michael@0 | 4754 | { |
michael@0 | 4755 | for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
michael@0 | 4756 | if (zone->isGCScheduled()) |
michael@0 | 4757 | return true; |
michael@0 | 4758 | } |
michael@0 | 4759 | return false; |
michael@0 | 4760 | } |
michael@0 | 4761 | |
michael@0 | 4762 | void |
michael@0 | 4763 | js::GCDebugSlice(JSRuntime *rt, bool limit, int64_t objCount) |
michael@0 | 4764 | { |
michael@0 | 4765 | int64_t budget = limit ? SliceBudget::WorkBudget(objCount) : SliceBudget::Unlimited; |
michael@0 | 4766 | if (!ZonesSelected(rt)) { |
michael@0 | 4767 | if (JS::IsIncrementalGCInProgress(rt)) |
michael@0 | 4768 | JS::PrepareForIncrementalGC(rt); |
michael@0 | 4769 | else |
michael@0 | 4770 | JS::PrepareForFullGC(rt); |
michael@0 | 4771 | } |
michael@0 | 4772 | Collect(rt, true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC); |
michael@0 | 4773 | } |
michael@0 | 4774 | |
michael@0 | 4775 | /* Schedule a full GC unless a zone will already be collected. */ |
michael@0 | 4776 | void |
michael@0 | 4777 | js::PrepareForDebugGC(JSRuntime *rt) |
michael@0 | 4778 | { |
michael@0 | 4779 | if (!ZonesSelected(rt)) |
michael@0 | 4780 | JS::PrepareForFullGC(rt); |
michael@0 | 4781 | } |
michael@0 | 4782 | |
michael@0 | 4783 | JS_FRIEND_API(void) |
michael@0 | 4784 | JS::ShrinkGCBuffers(JSRuntime *rt) |
michael@0 | 4785 | { |
michael@0 | 4786 | AutoLockGC lock(rt); |
michael@0 | 4787 | JS_ASSERT(!rt->isHeapBusy()); |
michael@0 | 4788 | |
michael@0 | 4789 | if (!rt->useHelperThreads()) |
michael@0 | 4790 | ExpireChunksAndArenas(rt, true); |
michael@0 | 4791 | else |
michael@0 | 4792 | rt->gcHelperThread.startBackgroundShrink(); |
michael@0 | 4793 | } |
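/*
 * A usage sketch, assuming an embedder that receives some memory-pressure
 * signal: releasing free GC buffers on that signal is one natural use of the
 * function above. OnMemoryPressure is a hypothetical hook name.
 */
static void
OnMemoryPressure(JSRuntime *rt)
{
    JS::ShrinkGCBuffers(rt);  // expire free chunks/arenas, possibly on the helper thread
}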
michael@0 | 4794 | |
michael@0 | 4795 | void |
michael@0 | 4796 | js::MinorGC(JSRuntime *rt, JS::gcreason::Reason reason) |
michael@0 | 4797 | { |
michael@0 | 4798 | #ifdef JSGC_GENERATIONAL |
michael@0 | 4799 | TraceLogger *logger = TraceLoggerForMainThread(rt); |
michael@0 | 4800 | AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC); |
michael@0 | 4801 | rt->gcNursery.collect(rt, reason, nullptr); |
michael@0 | 4802 | JS_ASSERT_IF(!rt->mainThread.suppressGC, rt->gcNursery.isEmpty()); |
michael@0 | 4803 | #endif |
michael@0 | 4804 | } |
michael@0 | 4805 | |
michael@0 | 4806 | void |
michael@0 | 4807 | js::MinorGC(JSContext *cx, JS::gcreason::Reason reason) |
michael@0 | 4808 | { |
michael@0 | 4809 | // Alternative to the runtime-taking form above; this form additionally |
michael@0 | 4810 | // allows marking type objects as needing pretenuring. |
michael@0 | 4811 | #ifdef JSGC_GENERATIONAL |
michael@0 | 4812 | TraceLogger *logger = TraceLoggerForMainThread(cx->runtime()); |
michael@0 | 4813 | AutoTraceLog logMinorGC(logger, TraceLogger::MinorGC); |
michael@0 | 4814 | |
michael@0 | 4815 | Nursery::TypeObjectList pretenureTypes; |
michael@0 | 4816 | JSRuntime *rt = cx->runtime(); |
michael@0 | 4817 | rt->gcNursery.collect(cx->runtime(), reason, &pretenureTypes); |
michael@0 | 4818 | for (size_t i = 0; i < pretenureTypes.length(); i++) { |
michael@0 | 4819 | if (pretenureTypes[i]->canPreTenure()) |
michael@0 | 4820 | pretenureTypes[i]->setShouldPreTenure(cx); |
michael@0 | 4821 | } |
michael@0 | 4822 | JS_ASSERT_IF(!rt->mainThread.suppressGC, rt->gcNursery.isEmpty()); |
michael@0 | 4823 | #endif |
michael@0 | 4824 | } |
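/*
 * A small sketch contrasting the two MinorGC entry points above, assuming a
 * caller on the main thread with a JSContext. EvictNurseryExample is an
 * illustrative name only.
 */
static void
EvictNurseryExample(JSContext *cx)
{
    // Runtime form: evicts the nursery without pretenuring feedback.
    js::MinorGC(cx->runtime(), JS::gcreason::EVICT_NURSERY);

    // Context form: additionally lets type objects be marked for pretenuring.
    js::MinorGC(cx, JS::gcreason::EVICT_NURSERY);
}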
michael@0 | 4825 | |
michael@0 | 4826 | void |
michael@0 | 4827 | js::gc::GCIfNeeded(JSContext *cx) |
michael@0 | 4828 | { |
michael@0 | 4829 | JSRuntime *rt = cx->runtime(); |
michael@0 | 4830 | |
michael@0 | 4831 | #ifdef JSGC_GENERATIONAL |
michael@0 | 4832 | /* |
michael@0 | 4833 | * If the store buffer is about to overflow, perform a minor GC first so |
michael@0 | 4834 | * that the correct reason is seen in the logs. |
michael@0 | 4835 | */ |
michael@0 | 4836 | if (rt->gcStoreBuffer.isAboutToOverflow()) |
michael@0 | 4837 | MinorGC(cx, JS::gcreason::FULL_STORE_BUFFER); |
michael@0 | 4838 | #endif |
michael@0 | 4839 | |
michael@0 | 4840 | if (rt->gcIsNeeded) |
michael@0 | 4841 | GCSlice(rt, GC_NORMAL, rt->gcTriggerReason); |
michael@0 | 4842 | } |
michael@0 | 4843 | |
michael@0 | 4844 | void |
michael@0 | 4845 | js::gc::FinishBackgroundFinalize(JSRuntime *rt) |
michael@0 | 4846 | { |
michael@0 | 4847 | rt->gcHelperThread.waitBackgroundSweepEnd(); |
michael@0 | 4848 | } |
michael@0 | 4849 | |
michael@0 | 4850 | AutoFinishGC::AutoFinishGC(JSRuntime *rt) |
michael@0 | 4851 | { |
michael@0 | 4852 | if (JS::IsIncrementalGCInProgress(rt)) { |
michael@0 | 4853 | JS::PrepareForIncrementalGC(rt); |
michael@0 | 4854 | JS::FinishIncrementalGC(rt, JS::gcreason::API); |
michael@0 | 4855 | } |
michael@0 | 4856 | |
michael@0 | 4857 | gc::FinishBackgroundFinalize(rt); |
michael@0 | 4858 | } |
michael@0 | 4859 | |
michael@0 | 4860 | AutoPrepareForTracing::AutoPrepareForTracing(JSRuntime *rt, ZoneSelector selector) |
michael@0 | 4861 | : finish(rt), |
michael@0 | 4862 | session(rt), |
michael@0 | 4863 | copy(rt, selector) |
michael@0 | 4864 | { |
michael@0 | 4865 | RecordNativeStackTopForGC(rt); |
michael@0 | 4866 | } |
michael@0 | 4867 | |
michael@0 | 4868 | JSCompartment * |
michael@0 | 4869 | js::NewCompartment(JSContext *cx, Zone *zone, JSPrincipals *principals, |
michael@0 | 4870 | const JS::CompartmentOptions &options) |
michael@0 | 4871 | { |
michael@0 | 4872 | JSRuntime *rt = cx->runtime(); |
michael@0 | 4873 | JS_AbortIfWrongThread(rt); |
michael@0 | 4874 | |
michael@0 | 4875 | ScopedJSDeletePtr<Zone> zoneHolder; |
michael@0 | 4876 | if (!zone) { |
michael@0 | 4877 | zone = cx->new_<Zone>(rt); |
michael@0 | 4878 | if (!zone) |
michael@0 | 4879 | return nullptr; |
michael@0 | 4880 | |
michael@0 | 4881 | zoneHolder.reset(zone); |
michael@0 | 4882 | |
michael@0 | 4883 | zone->setGCLastBytes(8192, GC_NORMAL); |
michael@0 | 4884 | |
michael@0 | 4885 | const JSPrincipals *trusted = rt->trustedPrincipals(); |
michael@0 | 4886 | zone->isSystem = principals && principals == trusted; |
michael@0 | 4887 | } |
michael@0 | 4888 | |
michael@0 | 4889 | ScopedJSDeletePtr<JSCompartment> compartment(cx->new_<JSCompartment>(zone, options)); |
michael@0 | 4890 | if (!compartment || !compartment->init(cx)) |
michael@0 | 4891 | return nullptr; |
michael@0 | 4892 | |
michael@0 | 4893 | // Set up the principals. |
michael@0 | 4894 | JS_SetCompartmentPrincipals(compartment, principals); |
michael@0 | 4895 | |
michael@0 | 4896 | AutoLockGC lock(rt); |
michael@0 | 4897 | |
michael@0 | 4898 | if (!zone->compartments.append(compartment.get())) { |
michael@0 | 4899 | js_ReportOutOfMemory(cx); |
michael@0 | 4900 | return nullptr; |
michael@0 | 4901 | } |
michael@0 | 4902 | |
michael@0 | 4903 | if (zoneHolder && !rt->zones.append(zone)) { |
michael@0 | 4904 | js_ReportOutOfMemory(cx); |
michael@0 | 4905 | return nullptr; |
michael@0 | 4906 | } |
michael@0 | 4907 | |
michael@0 | 4908 | zoneHolder.forget(); |
michael@0 | 4909 | return compartment.forget(); |
michael@0 | 4910 | } |
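/*
 * A minimal sketch of creating a compartment in a fresh zone with the
 * function above: passing nullptr for the zone makes NewCompartment allocate
 * one. The wrapper name and default options are assumptions for illustration.
 */
static JSCompartment *
NewCompartmentInFreshZone(JSContext *cx, JSPrincipals *principals)
{
    JS::CompartmentOptions options;  // default options, assumed adequate here
    return js::NewCompartment(cx, nullptr, principals, options);  // may return nullptr on OOM
}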
michael@0 | 4911 | |
michael@0 | 4912 | void |
michael@0 | 4913 | gc::MergeCompartments(JSCompartment *source, JSCompartment *target) |
michael@0 | 4914 | { |
michael@0 | 4915 | // The source compartment must be specifically flagged as mergeable. This |
michael@0 | 4916 | // also implies that the compartment is not visible to the debugger. |
michael@0 | 4917 | JS_ASSERT(source->options_.mergeable()); |
michael@0 | 4918 | |
michael@0 | 4919 | JSRuntime *rt = source->runtimeFromMainThread(); |
michael@0 | 4920 | |
michael@0 | 4921 | AutoPrepareForTracing prepare(rt, SkipAtoms); |
michael@0 | 4922 | |
michael@0 | 4923 | // Clean up tables and other state in the source compartment that will be |
michael@0 | 4924 | // meaningless after merging into the target compartment. |
michael@0 | 4925 | |
michael@0 | 4926 | source->clearTables(); |
michael@0 | 4927 | |
michael@0 | 4928 | // Fix up compartment pointers in source to refer to target. |
michael@0 | 4929 | |
michael@0 | 4930 | for (CellIter iter(source->zone(), FINALIZE_SCRIPT); !iter.done(); iter.next()) { |
michael@0 | 4931 | JSScript *script = iter.get<JSScript>(); |
michael@0 | 4932 | JS_ASSERT(script->compartment() == source); |
michael@0 | 4933 | script->compartment_ = target; |
michael@0 | 4934 | } |
michael@0 | 4935 | |
michael@0 | 4936 | for (CellIter iter(source->zone(), FINALIZE_BASE_SHAPE); !iter.done(); iter.next()) { |
michael@0 | 4937 | BaseShape *base = iter.get<BaseShape>(); |
michael@0 | 4938 | JS_ASSERT(base->compartment() == source); |
michael@0 | 4939 | base->compartment_ = target; |
michael@0 | 4940 | } |
michael@0 | 4941 | |
michael@0 | 4942 | // Fix up zone pointers in source's zone to refer to target's zone. |
michael@0 | 4943 | |
michael@0 | 4944 | for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) { |
michael@0 | 4945 | for (ArenaIter aiter(source->zone(), AllocKind(thingKind)); !aiter.done(); aiter.next()) { |
michael@0 | 4946 | ArenaHeader *aheader = aiter.get(); |
michael@0 | 4947 | aheader->zone = target->zone(); |
michael@0 | 4948 | } |
michael@0 | 4949 | } |
michael@0 | 4950 | |
michael@0 | 4951 | // The source should be the only compartment in its zone. |
michael@0 | 4952 | for (CompartmentsInZoneIter c(source->zone()); !c.done(); c.next()) |
michael@0 | 4953 | JS_ASSERT(c.get() == source); |
michael@0 | 4954 | |
michael@0 | 4955 | // Merge the allocator in source's zone into target's zone. |
michael@0 | 4956 | target->zone()->allocator.arenas.adoptArenas(rt, &source->zone()->allocator.arenas); |
michael@0 | 4957 | target->zone()->gcBytes += source->zone()->gcBytes; |
michael@0 | 4958 | source->zone()->gcBytes = 0; |
michael@0 | 4959 | |
michael@0 | 4960 | // Merge other info in source's zone into target's zone. |
michael@0 | 4961 | target->zone()->types.typeLifoAlloc.transferFrom(&source->zone()->types.typeLifoAlloc); |
michael@0 | 4962 | } |
michael@0 | 4963 | |
michael@0 | 4964 | void |
michael@0 | 4965 | gc::RunDebugGC(JSContext *cx) |
michael@0 | 4966 | { |
michael@0 | 4967 | #ifdef JS_GC_ZEAL |
michael@0 | 4968 | JSRuntime *rt = cx->runtime(); |
michael@0 | 4969 | int type = rt->gcZeal(); |
michael@0 | 4970 | |
michael@0 | 4971 | if (rt->mainThread.suppressGC) |
michael@0 | 4972 | return; |
michael@0 | 4973 | |
michael@0 | 4974 | if (type == js::gc::ZealGenerationalGCValue) |
michael@0 | 4975 | return MinorGC(rt, JS::gcreason::DEBUG_GC); |
michael@0 | 4976 | |
michael@0 | 4977 | PrepareForDebugGC(cx->runtime()); |
michael@0 | 4978 | |
michael@0 | 4979 | if (type == ZealIncrementalRootsThenFinish || |
michael@0 | 4980 | type == ZealIncrementalMarkAllThenFinish || |
michael@0 | 4981 | type == ZealIncrementalMultipleSlices) |
michael@0 | 4982 | { |
michael@0 | 4983 | js::gc::State initialState = rt->gcIncrementalState; |
michael@0 | 4984 | int64_t budget; |
michael@0 | 4985 | if (type == ZealIncrementalMultipleSlices) { |
michael@0 | 4986 | /* |
michael@0 | 4987 | * Start with a small slice limit and double it every slice. This |
michael@0 | 4988 | * ensures that we get multiple slices and that the collection runs |
michael@0 | 4989 | * to completion. |
michael@0 | 4990 | */ |
michael@0 | 4991 | if (initialState == NO_INCREMENTAL) |
michael@0 | 4992 | rt->gcIncrementalLimit = rt->gcZealFrequency / 2; |
michael@0 | 4993 | else |
michael@0 | 4994 | rt->gcIncrementalLimit *= 2; |
michael@0 | 4995 | budget = SliceBudget::WorkBudget(rt->gcIncrementalLimit); |
michael@0 | 4996 | } else { |
michael@0 | 4997 | // This triggers incremental GC but is actually ignored by IncrementalMarkSlice. |
michael@0 | 4998 | budget = SliceBudget::WorkBudget(1); |
michael@0 | 4999 | } |
michael@0 | 5000 | |
michael@0 | 5001 | Collect(rt, true, budget, GC_NORMAL, JS::gcreason::DEBUG_GC); |
michael@0 | 5002 | |
michael@0 | 5003 | /* |
michael@0 | 5004 | * For multi-slice zeal, reset the slice size when we get to the sweep |
michael@0 | 5005 | * phase. |
michael@0 | 5006 | */ |
michael@0 | 5007 | if (type == ZealIncrementalMultipleSlices && |
michael@0 | 5008 | initialState == MARK && rt->gcIncrementalState == SWEEP) |
michael@0 | 5009 | { |
michael@0 | 5010 | rt->gcIncrementalLimit = rt->gcZealFrequency / 2; |
michael@0 | 5011 | } |
michael@0 | 5012 | } else { |
michael@0 | 5013 | Collect(rt, false, SliceBudget::Unlimited, GC_NORMAL, JS::gcreason::DEBUG_GC); |
michael@0 | 5014 | } |
michael@0 | 5015 | |
michael@0 | 5016 | #endif |
michael@0 | 5017 | } |
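/*
 * A worked illustration of the ZealIncrementalMultipleSlices budget schedule
 * above: the limit starts at gcZealFrequency / 2 and doubles every slice, so
 * a frequency of 100 yields work budgets 50, 100, 200, ... until the GC
 * finishes. ZealBudgetSchedule is a hypothetical helper for exposition only.
 */
static void
ZealBudgetSchedule(int64_t zealFrequency, int64_t *budgets, int slices)
{
    int64_t limit = zealFrequency / 2;  // initial limit, as in RunDebugGC above
    for (int i = 0; i < slices; i++) {
        budgets[i] = limit;
        limit *= 2;                     // mirrors rt->gcIncrementalLimit *= 2
    }
}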
michael@0 | 5018 | |
michael@0 | 5019 | void |
michael@0 | 5020 | gc::SetDeterministicGC(JSContext *cx, bool enabled) |
michael@0 | 5021 | { |
michael@0 | 5022 | #ifdef JS_GC_ZEAL |
michael@0 | 5023 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5024 | rt->gcDeterministicOnly = enabled; |
michael@0 | 5025 | #endif |
michael@0 | 5026 | } |
michael@0 | 5027 | |
michael@0 | 5028 | void |
michael@0 | 5029 | gc::SetValidateGC(JSContext *cx, bool enabled) |
michael@0 | 5030 | { |
michael@0 | 5031 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5032 | rt->gcValidate = enabled; |
michael@0 | 5033 | } |
michael@0 | 5034 | |
michael@0 | 5035 | void |
michael@0 | 5036 | gc::SetFullCompartmentChecks(JSContext *cx, bool enabled) |
michael@0 | 5037 | { |
michael@0 | 5038 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5039 | rt->gcFullCompartmentChecks = enabled; |
michael@0 | 5040 | } |
michael@0 | 5041 | |
michael@0 | 5042 | #ifdef DEBUG |
michael@0 | 5043 | |
michael@0 | 5044 | /* Should only be called manually under gdb */ |
michael@0 | 5045 | void PreventGCDuringInteractiveDebug() |
michael@0 | 5046 | { |
michael@0 | 5047 | TlsPerThreadData.get()->suppressGC++; |
michael@0 | 5048 | } |
michael@0 | 5049 | |
michael@0 | 5050 | #endif |
michael@0 | 5051 | |
michael@0 | 5052 | void |
michael@0 | 5053 | js::ReleaseAllJITCode(FreeOp *fop) |
michael@0 | 5054 | { |
michael@0 | 5055 | #ifdef JS_ION |
michael@0 | 5056 | |
michael@0 | 5057 | # ifdef JSGC_GENERATIONAL |
michael@0 | 5058 | /* |
michael@0 | 5059 | * Scripts can entrain nursery things, inserting references to the script |
michael@0 | 5060 | * into the store buffer. Clear the store buffer before discarding scripts. |
michael@0 | 5061 | */ |
michael@0 | 5062 | MinorGC(fop->runtime(), JS::gcreason::EVICT_NURSERY); |
michael@0 | 5063 | # endif |
michael@0 | 5064 | |
michael@0 | 5065 | for (ZonesIter zone(fop->runtime(), SkipAtoms); !zone.done(); zone.next()) { |
michael@0 | 5066 | if (!zone->jitZone()) |
michael@0 | 5067 | continue; |
michael@0 | 5068 | |
michael@0 | 5069 | # ifdef DEBUG |
michael@0 | 5070 | /* Assert no baseline scripts are marked as active. */ |
michael@0 | 5071 | for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) { |
michael@0 | 5072 | JSScript *script = i.get<JSScript>(); |
michael@0 | 5073 | JS_ASSERT_IF(script->hasBaselineScript(), !script->baselineScript()->active()); |
michael@0 | 5074 | } |
michael@0 | 5075 | # endif |
michael@0 | 5076 | |
michael@0 | 5077 | /* Mark baseline scripts on the stack as active. */ |
michael@0 | 5078 | jit::MarkActiveBaselineScripts(zone); |
michael@0 | 5079 | |
michael@0 | 5080 | jit::InvalidateAll(fop, zone); |
michael@0 | 5081 | |
michael@0 | 5082 | for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) { |
michael@0 | 5083 | JSScript *script = i.get<JSScript>(); |
michael@0 | 5084 | jit::FinishInvalidation<SequentialExecution>(fop, script); |
michael@0 | 5085 | jit::FinishInvalidation<ParallelExecution>(fop, script); |
michael@0 | 5086 | |
michael@0 | 5087 | /* |
michael@0 | 5088 | * Discard baseline script if it's not marked as active. Note that |
michael@0 | 5089 | * this also resets the active flag. |
michael@0 | 5090 | */ |
michael@0 | 5091 | jit::FinishDiscardBaselineScript(fop, script); |
michael@0 | 5092 | } |
michael@0 | 5093 | |
michael@0 | 5094 | zone->jitZone()->optimizedStubSpace()->free(); |
michael@0 | 5095 | } |
michael@0 | 5096 | #endif |
michael@0 | 5097 | } |
michael@0 | 5098 | |
michael@0 | 5099 | /* |
michael@0 | 5100 | * There are three possible PCCount profiling states: |
michael@0 | 5101 | * |
michael@0 | 5102 | * 1. None: Neither scripts nor the runtime have count information. |
michael@0 | 5103 | * 2. Profile: Active scripts have count information, the runtime does not. |
michael@0 | 5104 | * 3. Query: Scripts do not have count information, the runtime does. |
michael@0 | 5105 | * |
michael@0 | 5106 | * When starting to profile scripts, counting begins immediately, with all JIT |
michael@0 | 5107 | * code discarded and recompiled with counts as necessary. Active interpreter |
michael@0 | 5108 | * frames will not begin profiling until they begin executing another script |
michael@0 | 5109 | * (via a call or return). |
michael@0 | 5110 | * |
michael@0 | 5111 | * The API functions below manage transitions between these states, |
michael@0 | 5112 | * according to the following table. |
michael@0 | 5113 | * |
michael@0 | 5114 | *                                 Old State |
michael@0 | 5115 | *                         ------------------------------ |
michael@0 | 5116 | * Function                None       Profile    Query |
michael@0 | 5117 | * --------                -------    -------    ------- |
michael@0 | 5118 | * StartPCCountProfiling   Profile    Profile    Profile |
michael@0 | 5119 | * StopPCCountProfiling    None       Query      Query |
michael@0 | 5120 | * PurgePCCounts           None       None       None |
michael@0 | 5121 | */ |
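/*
 * A usage sketch of the state machine described above, assuming a caller
 * holding a JSContext; MeasureAndQuery is an illustrative name. The three
 * calls are the friend-API functions defined below.
 */
static void
MeasureAndQuery(JSContext *cx)
{
    js::StartPCCountProfiling(cx);  // None/Profile/Query -> Profile
    // ... run scripts so counts accumulate ...
    js::StopPCCountProfiling(cx);   // Profile -> Query: counts move to the runtime
    // ... inspect the runtime's scriptAndCountsVector ...
    js::PurgePCCounts(cx);          // Query -> None
}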
michael@0 | 5122 | |
michael@0 | 5123 | static void |
michael@0 | 5124 | ReleaseScriptCounts(FreeOp *fop) |
michael@0 | 5125 | { |
michael@0 | 5126 | JSRuntime *rt = fop->runtime(); |
michael@0 | 5127 | JS_ASSERT(rt->scriptAndCountsVector); |
michael@0 | 5128 | |
michael@0 | 5129 | ScriptAndCountsVector &vec = *rt->scriptAndCountsVector; |
michael@0 | 5130 | |
michael@0 | 5131 | for (size_t i = 0; i < vec.length(); i++) |
michael@0 | 5132 | vec[i].scriptCounts.destroy(fop); |
michael@0 | 5133 | |
michael@0 | 5134 | fop->delete_(rt->scriptAndCountsVector); |
michael@0 | 5135 | rt->scriptAndCountsVector = nullptr; |
michael@0 | 5136 | } |
michael@0 | 5137 | |
michael@0 | 5138 | JS_FRIEND_API(void) |
michael@0 | 5139 | js::StartPCCountProfiling(JSContext *cx) |
michael@0 | 5140 | { |
michael@0 | 5141 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5142 | |
michael@0 | 5143 | if (rt->profilingScripts) |
michael@0 | 5144 | return; |
michael@0 | 5145 | |
michael@0 | 5146 | if (rt->scriptAndCountsVector) |
michael@0 | 5147 | ReleaseScriptCounts(rt->defaultFreeOp()); |
michael@0 | 5148 | |
michael@0 | 5149 | ReleaseAllJITCode(rt->defaultFreeOp()); |
michael@0 | 5150 | |
michael@0 | 5151 | rt->profilingScripts = true; |
michael@0 | 5152 | } |
michael@0 | 5153 | |
michael@0 | 5154 | JS_FRIEND_API(void) |
michael@0 | 5155 | js::StopPCCountProfiling(JSContext *cx) |
michael@0 | 5156 | { |
michael@0 | 5157 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5158 | |
michael@0 | 5159 | if (!rt->profilingScripts) |
michael@0 | 5160 | return; |
michael@0 | 5161 | JS_ASSERT(!rt->scriptAndCountsVector); |
michael@0 | 5162 | |
michael@0 | 5163 | ReleaseAllJITCode(rt->defaultFreeOp()); |
michael@0 | 5164 | |
michael@0 | 5165 | ScriptAndCountsVector *vec = cx->new_<ScriptAndCountsVector>(SystemAllocPolicy()); |
michael@0 | 5166 | if (!vec) |
michael@0 | 5167 | return; |
michael@0 | 5168 | |
michael@0 | 5169 | for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) { |
michael@0 | 5170 | for (CellIter i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) { |
michael@0 | 5171 | JSScript *script = i.get<JSScript>(); |
michael@0 | 5172 | if (script->hasScriptCounts() && script->types) { |
michael@0 | 5173 | ScriptAndCounts sac; |
michael@0 | 5174 | sac.script = script; |
michael@0 | 5175 | sac.scriptCounts.set(script->releaseScriptCounts()); |
michael@0 | 5176 | if (!vec->append(sac)) |
michael@0 | 5177 | sac.scriptCounts.destroy(rt->defaultFreeOp()); |
michael@0 | 5178 | } |
michael@0 | 5179 | } |
michael@0 | 5180 | } |
michael@0 | 5181 | |
michael@0 | 5182 | rt->profilingScripts = false; |
michael@0 | 5183 | rt->scriptAndCountsVector = vec; |
michael@0 | 5184 | } |
michael@0 | 5185 | |
michael@0 | 5186 | JS_FRIEND_API(void) |
michael@0 | 5187 | js::PurgePCCounts(JSContext *cx) |
michael@0 | 5188 | { |
michael@0 | 5189 | JSRuntime *rt = cx->runtime(); |
michael@0 | 5190 | |
michael@0 | 5191 | if (!rt->scriptAndCountsVector) |
michael@0 | 5192 | return; |
michael@0 | 5193 | JS_ASSERT(!rt->profilingScripts); |
michael@0 | 5194 | |
michael@0 | 5195 | ReleaseScriptCounts(rt->defaultFreeOp()); |
michael@0 | 5196 | } |
michael@0 | 5197 | |
michael@0 | 5198 | void |
michael@0 | 5199 | js::PurgeJITCaches(Zone *zone) |
michael@0 | 5200 | { |
michael@0 | 5201 | #ifdef JS_ION |
michael@0 | 5202 | for (CellIterUnderGC i(zone, FINALIZE_SCRIPT); !i.done(); i.next()) { |
michael@0 | 5203 | JSScript *script = i.get<JSScript>(); |
michael@0 | 5204 | |
michael@0 | 5205 | /* Discard Ion caches. */ |
michael@0 | 5206 | jit::PurgeCaches(script); |
michael@0 | 5207 | } |
michael@0 | 5208 | #endif |
michael@0 | 5209 | } |
michael@0 | 5210 | |
michael@0 | 5211 | void |
michael@0 | 5212 | ArenaLists::normalizeBackgroundFinalizeState(AllocKind thingKind) |
michael@0 | 5213 | { |
michael@0 | 5214 | volatile uintptr_t *bfs = &backgroundFinalizeState[thingKind]; |
michael@0 | 5215 | switch (*bfs) { |
michael@0 | 5216 | case BFS_DONE: |
michael@0 | 5217 | break; |
michael@0 | 5218 | case BFS_JUST_FINISHED: |
michael@0 | 5219 | // No allocations between the end of the last sweep and now. |
michael@0 | 5220 | // Transferring over arenas is a kind of allocation. |
michael@0 | 5221 | *bfs = BFS_DONE; |
michael@0 | 5222 | break; |
michael@0 | 5223 | default: |
michael@0 | 5224 | JS_ASSERT(!"Background finalization in progress, but it should not be."); |
michael@0 | 5225 | break; |
michael@0 | 5226 | } |
michael@0 | 5227 | } |
michael@0 | 5228 | |
michael@0 | 5229 | void |
michael@0 | 5230 | ArenaLists::adoptArenas(JSRuntime *rt, ArenaLists *fromArenaLists) |
michael@0 | 5231 | { |
michael@0 | 5232 | // The other parallel threads have all completed now, and GC |
michael@0 | 5233 | // should be inactive, but still take the lock as a kind of read |
michael@0 | 5234 | // fence. |
michael@0 | 5235 | AutoLockGC lock(rt); |
michael@0 | 5236 | |
michael@0 | 5237 | fromArenaLists->purge(); |
michael@0 | 5238 | |
michael@0 | 5239 | for (size_t thingKind = 0; thingKind != FINALIZE_LIMIT; thingKind++) { |
michael@0 | 5240 | #ifdef JS_THREADSAFE |
michael@0 | 5241 | // When we enter a parallel section, we join the background |
michael@0 | 5242 | // thread, and we do not run GC while in the parallel section, |
michael@0 | 5243 | // so no finalizer should be active! |
michael@0 | 5244 | normalizeBackgroundFinalizeState(AllocKind(thingKind)); |
michael@0 | 5245 | fromArenaLists->normalizeBackgroundFinalizeState(AllocKind(thingKind)); |
michael@0 | 5246 | #endif |
michael@0 | 5247 | ArenaList *fromList = &fromArenaLists->arenaLists[thingKind]; |
michael@0 | 5248 | ArenaList *toList = &arenaLists[thingKind]; |
michael@0 | 5249 | while (fromList->head != nullptr) { |
michael@0 | 5250 | // Remove entry from |fromList| |
michael@0 | 5251 | ArenaHeader *fromHeader = fromList->head; |
michael@0 | 5252 | fromList->head = fromHeader->next; |
michael@0 | 5253 | fromHeader->next = nullptr; |
michael@0 | 5254 | |
michael@0 | 5255 | // During parallel execution, we sometimes keep empty arenas |
michael@0 | 5256 | // on the lists rather than sending them back to the chunk. |
michael@0 | 5257 | // Therefore, if fromHeader is empty, send it back to the |
michael@0 | 5258 | // chunk now. Otherwise, attach to |toList|. |
michael@0 | 5259 | if (fromHeader->isEmpty()) |
michael@0 | 5260 | fromHeader->chunk()->releaseArena(fromHeader); |
michael@0 | 5261 | else |
michael@0 | 5262 | toList->insert(fromHeader); |
michael@0 | 5263 | } |
michael@0 | 5264 | fromList->cursor = &fromList->head; |
michael@0 | 5265 | } |
michael@0 | 5266 | } |
michael@0 | 5267 | |
michael@0 | 5268 | bool |
michael@0 | 5269 | ArenaLists::containsArena(JSRuntime *rt, ArenaHeader *needle) |
michael@0 | 5270 | { |
michael@0 | 5271 | AutoLockGC lock(rt); |
michael@0 | 5272 | size_t allocKind = needle->getAllocKind(); |
michael@0 | 5273 | for (ArenaHeader *aheader = arenaLists[allocKind].head; |
michael@0 | 5274 | aheader != nullptr; |
michael@0 | 5275 | aheader = aheader->next) |
michael@0 | 5276 | { |
michael@0 | 5277 | if (aheader == needle) |
michael@0 | 5278 | return true; |
michael@0 | 5279 | } |
michael@0 | 5280 | return false; |
michael@0 | 5281 | } |
michael@0 | 5282 | |
michael@0 | 5283 | |
michael@0 | 5284 | AutoMaybeTouchDeadZones::AutoMaybeTouchDeadZones(JSContext *cx) |
michael@0 | 5285 | : runtime(cx->runtime()), |
michael@0 | 5286 | markCount(runtime->gcObjectsMarkedInDeadZones), |
michael@0 | 5287 | inIncremental(JS::IsIncrementalGCInProgress(runtime)), |
michael@0 | 5288 | manipulatingDeadZones(runtime->gcManipulatingDeadZones) |
michael@0 | 5289 | { |
michael@0 | 5290 | runtime->gcManipulatingDeadZones = true; |
michael@0 | 5291 | } |
michael@0 | 5292 | |
michael@0 | 5293 | AutoMaybeTouchDeadZones::AutoMaybeTouchDeadZones(JSObject *obj) |
michael@0 | 5294 | : runtime(obj->compartment()->runtimeFromMainThread()), |
michael@0 | 5295 | markCount(runtime->gcObjectsMarkedInDeadZones), |
michael@0 | 5296 | inIncremental(JS::IsIncrementalGCInProgress(runtime)), |
michael@0 | 5297 | manipulatingDeadZones(runtime->gcManipulatingDeadZones) |
michael@0 | 5298 | { |
michael@0 | 5299 | runtime->gcManipulatingDeadZones = true; |
michael@0 | 5300 | } |
michael@0 | 5301 | |
michael@0 | 5302 | AutoMaybeTouchDeadZones::~AutoMaybeTouchDeadZones() |
michael@0 | 5303 | { |
michael@0 | 5304 | runtime->gcManipulatingDeadZones = manipulatingDeadZones; |
michael@0 | 5305 | |
michael@0 | 5306 | if (inIncremental && runtime->gcObjectsMarkedInDeadZones != markCount) { |
michael@0 | 5307 | JS::PrepareForFullGC(runtime); |
michael@0 | 5308 | js::GC(runtime, GC_NORMAL, JS::gcreason::TRANSPLANT); |
michael@0 | 5309 | } |
michael@0 | 5310 | } |
michael@0 | 5311 | |
michael@0 | 5312 | AutoSuppressGC::AutoSuppressGC(ExclusiveContext *cx) |
michael@0 | 5313 | : suppressGC_(cx->perThreadData->suppressGC) |
michael@0 | 5314 | { |
michael@0 | 5315 | suppressGC_++; |
michael@0 | 5316 | } |
michael@0 | 5317 | |
michael@0 | 5318 | AutoSuppressGC::AutoSuppressGC(JSCompartment *comp) |
michael@0 | 5319 | : suppressGC_(comp->runtimeFromMainThread()->mainThread.suppressGC) |
michael@0 | 5320 | { |
michael@0 | 5321 | suppressGC_++; |
michael@0 | 5322 | } |
michael@0 | 5323 | |
michael@0 | 5324 | AutoSuppressGC::AutoSuppressGC(JSRuntime *rt) |
michael@0 | 5325 | : suppressGC_(rt->mainThread.suppressGC) |
michael@0 | 5326 | { |
michael@0 | 5327 | suppressGC_++; |
michael@0 | 5328 | } |
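/*
 * A minimal sketch of the RAII suppression helpers above: while the guard is
 * alive, GC on this thread is suppressed. TouchHeapWithoutGC is an
 * illustrative name only.
 */
static void
TouchHeapWithoutGC(JSContext *cx)
{
    js::AutoSuppressGC suppress(cx);
    // ... code that must not trigger a collection ...
}   // suppression ends when |suppress| is destroyed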
michael@0 | 5329 | |
michael@0 | 5330 | bool |
michael@0 | 5331 | js::UninlinedIsInsideNursery(JSRuntime *rt, const void *thing) |
michael@0 | 5332 | { |
michael@0 | 5333 | return IsInsideNursery(rt, thing); |
michael@0 | 5334 | } |
michael@0 | 5335 | |
michael@0 | 5336 | #ifdef DEBUG |
michael@0 | 5337 | AutoDisableProxyCheck::AutoDisableProxyCheck(JSRuntime *rt |
michael@0 | 5338 | MOZ_GUARD_OBJECT_NOTIFIER_PARAM_IN_IMPL) |
michael@0 | 5339 | : count(rt->gcDisableStrictProxyCheckingCount) |
michael@0 | 5340 | { |
michael@0 | 5341 | MOZ_GUARD_OBJECT_NOTIFIER_INIT; |
michael@0 | 5342 | count++; |
michael@0 | 5343 | } |
michael@0 | 5344 | |
michael@0 | 5345 | JS_FRIEND_API(void) |
michael@0 | 5346 | JS::AssertGCThingMustBeTenured(JSObject *obj) |
michael@0 | 5347 | { |
michael@0 | 5348 | JS_ASSERT((!IsNurseryAllocable(obj->tenuredGetAllocKind()) || obj->getClass()->finalize) && |
michael@0 | 5349 | obj->isTenured()); |
michael@0 | 5350 | } |
michael@0 | 5351 | |
michael@0 | 5352 | JS_FRIEND_API(size_t) |
michael@0 | 5353 | JS::GetGCNumber() |
michael@0 | 5354 | { |
michael@0 | 5355 | JSRuntime *rt = js::TlsPerThreadData.get()->runtimeFromMainThread(); |
michael@0 | 5356 | if (!rt) |
michael@0 | 5357 | return 0; |
michael@0 | 5358 | return rt->gcNumber; |
michael@0 | 5359 | } |
michael@0 | 5360 | |
michael@0 | 5361 | JS::AutoAssertNoGC::AutoAssertNoGC() |
michael@0 | 5362 | : runtime(nullptr), gcNumber(0) |
michael@0 | 5363 | { |
michael@0 | 5364 | js::PerThreadData *data = js::TlsPerThreadData.get(); |
michael@0 | 5365 | if (data) { |
michael@0 | 5366 | /* |
michael@0 | 5367 | * GCs from off-thread will always assert, so off-thread is implicitly |
michael@0 | 5368 | * AutoAssertNoGC. We still need to allow AutoAssertNoGC to be used in |
michael@0 | 5369 | * code that works from both threads, however. We also use this to |
michael@0 | 5370 | * annotate the off-thread run loops. |
michael@0 | 5371 | */ |
michael@0 | 5372 | runtime = data->runtimeIfOnOwnerThread(); |
michael@0 | 5373 | if (runtime) |
michael@0 | 5374 | gcNumber = runtime->gcNumber; |
michael@0 | 5375 | } |
michael@0 | 5376 | } |
michael@0 | 5377 | |
michael@0 | 5378 | JS::AutoAssertNoGC::AutoAssertNoGC(JSRuntime *rt) |
michael@0 | 5379 | : runtime(rt), gcNumber(rt->gcNumber) |
michael@0 | 5380 | { |
michael@0 | 5381 | } |
michael@0 | 5382 | |
michael@0 | 5383 | JS::AutoAssertNoGC::~AutoAssertNoGC() |
michael@0 | 5384 | { |
michael@0 | 5385 | if (runtime) |
michael@0 | 5386 | MOZ_ASSERT(gcNumber == runtime->gcNumber, "GC ran inside an AutoAssertNoGC scope."); |
michael@0 | 5387 | } |
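/*
 * A debug-only usage sketch for the assertion scope above: the constructor
 * snapshots rt->gcNumber and the destructor asserts it is unchanged.
 * ReadBarePointers is an illustrative name only.
 */
static void
ReadBarePointers(JSRuntime *rt)
{
    JS::AutoAssertNoGC nogc(rt);
    // ... use raw GC-thing pointers; a GC here would fire the assert ...
}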
michael@0 | 5388 | #endif |