js/src/gc/Barrier.h

author       Michael Schloh von Bennewitz <michael@schloh.com>
date         Wed, 31 Dec 2014 06:09:35 +0100
changeset    0:6474c204b198
permissions  -rw-r--r--

Cloned upstream origin tor-browser at tor-browser-31.3.0esr-4.5-1-build1,
revision ID fc1c9ff7c1b2defdbc039f12214767608f46423f, for hacking purposes.

/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sts=4 et sw=4 tw=99:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#ifndef gc_Barrier_h
#define gc_Barrier_h

#include "NamespaceImports.h"

#include "gc/Heap.h"
#ifdef JSGC_GENERATIONAL
# include "gc/StoreBuffer.h"
#endif
#include "js/HashTable.h"
#include "js/Id.h"
#include "js/RootingAPI.h"

/*
 * A write barrier is a mechanism used by incremental or generational GCs to
 * ensure that every value that needs to be marked is marked. In general, the
 * write barrier should be invoked whenever a write can cause the set of things
 * traced through by the GC to change. This includes:
 *   - writes to object properties
 *   - writes to array slots
 *   - writes to fields like JSObject::shape_ that we trace through
 *   - writes to fields in private data, like JSGenerator::obj
 *   - writes to non-markable fields like JSObject::private that point to
 *     markable data
 * The last category is the trickiest. Even though the private pointer does not
 * point to a GC thing, changing the private pointer may change the set of
 * objects that are traced by the GC. Therefore it needs a write barrier.
 *
 * Every barriered write should have the following form:
 *     <pre-barrier>
 *     obj->field = value; // do the actual write
 *     <post-barrier>
 * The pre-barrier is used for incremental GC and the post-barrier is for
 * generational GC.
 *
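 * As an illustrative sketch (not part of the original comment), for a
 * hypothetical barriered type T that provides the writeBarrierPre/Post hooks
 * declared later in this file, the expanded form would read:
 *
 *     T::writeBarrierPre(obj->field);          // pre: mark the old value
 *     obj->field = value;                      // the actual write
 *     T::writeBarrierPost(value, &obj->field); // post: remember the new edge
 *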
 * PRE-BARRIER
 *
 * To understand the pre-barrier, let's consider how incremental GC works. The
 * GC itself is divided into "slices". Between each slice, JS code is allowed
 * to run. Each slice should be short so that the user doesn't notice the
 * interruptions. In our GC, the structure of the slices is as follows:
 *
 * 1. ... JS work, which leads to a request to do GC ...
 * 2. [first GC slice, which performs all root marking and possibly more marking]
 * 3. ... more JS work is allowed to run ...
 * 4. [GC mark slice, which runs entirely in drainMarkStack]
 * 5. ... more JS work ...
 * 6. [GC mark slice, which runs entirely in drainMarkStack]
 * 7. ... more JS work ...
 * 8. [GC marking finishes; sweeping done non-incrementally; GC is done]
 * 9. ... JS continues uninterrupted now that the GC is finished ...
 *
 * Of course, there may be a different number of slices depending on how much
 * marking is to be done.
 *
 * The danger inherent in this scheme is that the JS code in steps 3, 5, and 7
 * might change the heap in a way that causes the GC to collect an object that
 * is actually reachable. The write barrier prevents this from happening. We
 * use a variant of incremental GC called "snapshot at the beginning." This
 * approach guarantees the invariant that if an object is reachable in step 2,
 * then we will mark it eventually. The name comes from the idea that we take a
 * theoretical "snapshot" of all reachable objects in step 2; all objects in
 * that snapshot should eventually be marked. (Note that the write barrier
 * verifier code takes an actual snapshot.)
 *
 * The basic correctness invariant of a snapshot-at-the-beginning collector is
 * that any object reachable at the end of the GC (step 9) must either:
 *   (1) have been reachable at the beginning (step 2), and thus in the
 *       snapshot, or
 *   (2) have been newly allocated, in steps 3, 5, or 7.
 * To deal with case (2), any objects allocated during an incremental GC are
 * automatically marked black.
 *
 * This strategy is actually somewhat conservative: if an object becomes
 * unreachable between steps 2 and 8, it would be safe to collect it. We won't,
 * mainly for simplicity. (Also, note that the snapshot is entirely
 * theoretical. We don't actually do anything special in step 2 that we
 * wouldn't do in a non-incremental GC.)
 *
 * It's the pre-barrier's job to maintain the snapshot invariant. Consider the
 * write "obj->field = value". Let the prior value of obj->field be value0.
 * Since it's possible that value0 may have been what obj->field contained in
 * step 2, when the snapshot was taken, the barrier marks value0. Note that it
 * only does this if we're in the middle of an incremental GC. Since this is
 * rare, the cost of the write barrier is usually just an extra branch.
 *
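 * A minimal sketch of that branch (illustrative only; the real hooks, such as
 * BarrieredCell::writeBarrierPre, appear later in this file):
 *
 *     static void ExamplePreBarrier(JSObject *value0) {
 *         JS::shadow::Zone *zone = value0 ? value0->shadowZone() : nullptr;
 *         if (zone && zone->needsBarrier())    // only during incremental GC
 *             MarkObjectUnbarriered(zone->barrierTracer(), &value0, "pre barrier");
 *     }
 *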
 * In practice, we implement the pre-barrier differently based on the type of
 * value0. E.g., see JSObject::writeBarrierPre, which is used if obj->field is
 * a JSObject *. It takes value0 as a parameter.
 *
 * POST-BARRIER
 *
 * For generational GC, we want to be able to quickly collect the nursery in a
 * minor collection. Part of the way this is achieved is to only mark the
 * nursery itself; tenured things, which may form the majority of the heap, are
 * not traced through or marked. This leads to the problem of what to do about
 * tenured objects that have pointers into the nursery: if such things are not
 * marked, they may be discarded while there are still live objects which
 * reference them. The solution is to maintain information about these
 * pointers, and mark their targets when we start a minor collection.
 *
 * The pointers can be thought of as edges in the object graph, and the set of
 * edges from the tenured generation into the nursery is known as the
 * remembered set. Post-barriers are used to track this remembered set.
 *
 * Whenever a slot which could contain such a pointer is written, we use a
 * write barrier to check whether the edge being created points from tenured
 * memory into the nursery (i.e. belongs in the remembered set), and if so we
 * insert it into the store buffer, which is the collector's representation of
 * the remembered set. This means that when we come to do a minor collection we
 * can examine the contents of the store buffer and mark any edge targets that
 * are in the nursery.
 *
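 * An illustrative sketch of such a check (IsInsideNursery and putCell here are
 * hypothetical stand-ins for the real store buffer interface):
 *
 *     static void ExamplePostBarrier(JSRuntime *rt, js::gc::Cell **edge) {
 *         if (IsInsideNursery(rt, *edge))      // target lives in the nursery
 *             rt->gcStoreBuffer.putCell(edge); // remember the edge for minor GC
 *     }
 *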
 * IMPLEMENTATION DETAILS
 *
 * Since it would be awkward to change every write to memory into a function
 * call, this file contains a bunch of C++ classes and templates that use
 * operator overloading to take care of barriers automatically. In many cases,
 * all that's necessary to make some field be barriered is to replace
 *     Type *field;
 * with
 *     HeapPtr<Type> field;
 * There are also special classes HeapValue and HeapId, which barrier js::Value
 * and jsid, respectively.
 *
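 * For example (an illustrative declaration, not engine code):
 *
 *     class ExampleHolder {
 *         HeapPtr<Shape> shape_;   // pre- and post-barriered pointer
 *         HeapValue reservedSlot_; // pre- and post-barriered js::Value
 *     };
 *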
 * One additional note: not all object writes need to be barriered. Writes to
 * newly allocated objects do not need a pre-barrier. In these cases, we use
 * the "obj->field.init(value)" method instead of "obj->field = value". We use
 * the init naming idiom in many places to signify that a field is being
 * assigned for the first time.
 *
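 * For example (illustrative):
 *
 *     obj->field.init(value); // first assignment: no pre-barrier needed
 *     obj->field = value;     // subsequent writes: fully barriered
 *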
 * For each of pointers, Values, and jsids, this file implements four classes,
 * illustrated here for the pointer (Ptr) classes:
 *
 * BarrieredPtr             abstract base class which provides common operations
 *    |  |  |
 *    |  | EncapsulatedPtr  provides pre-barriers only
 *    |  |
 *    | HeapPtr             provides pre- and post-barriers
 *    |
 * RelocatablePtr           provides pre- and post-barriers and is relocatable
 *
 * These classes are designed to be used by the internals of the JS engine.
 * Barriers designed to be used externally are provided in
 * js/public/RootingAPI.h.
 */

namespace js {

class PropertyName;

#ifdef DEBUG
bool
RuntimeFromMainThreadIsHeapMajorCollecting(JS::shadow::Zone *shadowZone);
#endif

namespace gc {

template <typename T>
void
MarkUnbarriered(JSTracer *trc, T **thingp, const char *name);

// Direct value access used by the write barriers and the jits.
void
MarkValueUnbarriered(JSTracer *trc, Value *v, const char *name);

// These two declarations are also present in gc/Marking.h, via the DeclMarker
// macro. Not great, but hard to avoid.
void
MarkObjectUnbarriered(JSTracer *trc, JSObject **obj, const char *name);
void
MarkStringUnbarriered(JSTracer *trc, JSString **str, const char *name);

// Note that some subclasses (e.g. ObjectImpl) specialize some of these
// methods.
template <typename T>
class BarrieredCell : public gc::Cell
{
  public:
    MOZ_ALWAYS_INLINE JS::Zone *zone() const { return tenuredZone(); }
    MOZ_ALWAYS_INLINE JS::shadow::Zone *shadowZone() const { return JS::shadow::Zone::asShadowZone(zone()); }
    MOZ_ALWAYS_INLINE JS::Zone *zoneFromAnyThread() const { return tenuredZoneFromAnyThread(); }
    MOZ_ALWAYS_INLINE JS::shadow::Zone *shadowZoneFromAnyThread() const {
        return JS::shadow::Zone::asShadowZone(zoneFromAnyThread());
    }

    static MOZ_ALWAYS_INLINE void readBarrier(T *thing) {
#ifdef JSGC_INCREMENTAL
        JS::shadow::Zone *shadowZone = thing->shadowZoneFromAnyThread();
        if (shadowZone->needsBarrier()) {
            MOZ_ASSERT(!RuntimeFromMainThreadIsHeapMajorCollecting(shadowZone));
            T *tmp = thing;
            js::gc::MarkUnbarriered<T>(shadowZone->barrierTracer(), &tmp, "read barrier");
            JS_ASSERT(tmp == thing);
        }
#endif
    }

    static MOZ_ALWAYS_INLINE bool needWriteBarrierPre(JS::Zone *zone) {
#ifdef JSGC_INCREMENTAL
        return JS::shadow::Zone::asShadowZone(zone)->needsBarrier();
#else
        return false;
#endif
    }

    static MOZ_ALWAYS_INLINE bool isNullLike(T *thing) { return !thing; }

    static MOZ_ALWAYS_INLINE void writeBarrierPre(T *thing) {
#ifdef JSGC_INCREMENTAL
        if (isNullLike(thing) || !thing->shadowRuntimeFromAnyThread()->needsBarrier())
            return;

        JS::shadow::Zone *shadowZone = thing->shadowZoneFromAnyThread();
        if (shadowZone->needsBarrier()) {
            MOZ_ASSERT(!RuntimeFromMainThreadIsHeapMajorCollecting(shadowZone));
            T *tmp = thing;
            js::gc::MarkUnbarriered<T>(shadowZone->barrierTracer(), &tmp, "write barrier");
            JS_ASSERT(tmp == thing);
        }
#endif
    }

    static void writeBarrierPost(T *thing, void *addr) {}
    static void writeBarrierPostRelocate(T *thing, void *addr) {}
    static void writeBarrierPostRemove(T *thing, void *addr) {}
};

} // namespace gc

// Note: the following Zone-getting functions must be equivalent to the zone()
// and shadowZone() functions implemented by the subclasses of BarrieredCell.

JS::Zone *
ZoneOfObject(const JSObject &obj);

static inline JS::shadow::Zone *
ShadowZoneOfObject(JSObject *obj)
{
    return JS::shadow::Zone::asShadowZone(ZoneOfObject(*obj));
}

static inline JS::shadow::Zone *
ShadowZoneOfString(JSString *str)
{
    return JS::shadow::Zone::asShadowZone(reinterpret_cast<const js::gc::Cell *>(str)->tenuredZone());
}

MOZ_ALWAYS_INLINE JS::Zone *
ZoneOfValue(const JS::Value &value)
{
    JS_ASSERT(value.isMarkable());
    if (value.isObject())
        return ZoneOfObject(value.toObject());
    return static_cast<js::gc::Cell *>(value.toGCThing())->tenuredZone();
}

JS::Zone *
ZoneOfObjectFromAnyThread(const JSObject &obj);

static inline JS::shadow::Zone *
ShadowZoneOfObjectFromAnyThread(JSObject *obj)
{
    return JS::shadow::Zone::asShadowZone(ZoneOfObjectFromAnyThread(*obj));
}

static inline JS::shadow::Zone *
ShadowZoneOfStringFromAnyThread(JSString *str)
{
    return JS::shadow::Zone::asShadowZone(
        reinterpret_cast<const js::gc::Cell *>(str)->tenuredZoneFromAnyThread());
}

MOZ_ALWAYS_INLINE JS::Zone *
ZoneOfValueFromAnyThread(const JS::Value &value)
{
    JS_ASSERT(value.isMarkable());
    if (value.isObject())
        return ZoneOfObjectFromAnyThread(value.toObject());
    return static_cast<js::gc::Cell *>(value.toGCThing())->tenuredZoneFromAnyThread();
}

/*
 * Base class for barriered pointer types.
 */
template <class T, typename Unioned = uintptr_t>
class BarrieredPtr
{
  protected:
    union {
        T *value;
        Unioned other;
    };

    BarrieredPtr(T *v) : value(v) {}
    ~BarrieredPtr() { pre(); }

  public:
    void init(T *v) {
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        this->value = v;
    }

    /* Use this if the automatic coercion to T* isn't working. */
    T *get() const { return value; }

    /*
     * Use these if you want to change the value without invoking the barrier.
     * Obviously this is dangerous unless you know the barrier is not needed.
     */
    T **unsafeGet() { return &value; }
    void unsafeSet(T *v) { value = v; }

    Unioned *unsafeGetUnioned() { return &other; }

    T &operator*() const { return *value; }
    T *operator->() const { return value; }

    operator T*() const { return value; }

  protected:
    void pre() { T::writeBarrierPre(value); }
};

/*
 * EncapsulatedPtr only automatically handles pre-barriers. Post-barriers must
 * be manually implemented when using this class. HeapPtr and RelocatablePtr
 * should be used in all cases that do not require explicit low-level control
 * of moving behavior, e.g. for HashMap keys.
 */
template <class T, typename Unioned = uintptr_t>
class EncapsulatedPtr : public BarrieredPtr<T, Unioned>
{
  public:
    EncapsulatedPtr() : BarrieredPtr<T, Unioned>(nullptr) {}
    EncapsulatedPtr(T *v) : BarrieredPtr<T, Unioned>(v) {}
    explicit EncapsulatedPtr(const EncapsulatedPtr<T, Unioned> &v)
      : BarrieredPtr<T, Unioned>(v.value) {}

    /* Use to set the pointer to nullptr. */
    void clear() {
        this->pre();
        this->value = nullptr;
    }

    EncapsulatedPtr<T, Unioned> &operator=(T *v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        this->value = v;
        return *this;
    }

    EncapsulatedPtr<T, Unioned> &operator=(const EncapsulatedPtr<T> &v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v.value));
        this->value = v.value;
        return *this;
    }
};
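
// Illustrative sketch (not part of the original header): EncapsulatedPtr works
// as a hash table key because rekeying only needs an unbarriered store; the
// map below is hypothetical, and EncapsulatedPtrHasher is defined later in
// this file.
//
//     HashMap<EncapsulatedPtr<JSObject>, uint32_t,
//             EncapsulatedPtrHasher<JSObject>, SystemAllocPolicy> exampleMap;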

/*
 * A pre- and post-barriered heap pointer, for use inside the JS engine.
 *
 * Not to be confused with JS::Heap<T>. This is a different class from the
 * external interface and implements substantially different semantics.
 *
 * The post-barriers implemented by this class are faster than those
 * implemented by RelocatablePtr<T> or JS::Heap<T> at the cost of not
 * automatically handling deletion or movement. It should generally only be
 * stored in memory that has GC lifetime. HeapPtr must not be used in contexts
 * where it may be implicitly moved or deleted, e.g. most containers.
 */
template <class T, class Unioned = uintptr_t>
class HeapPtr : public BarrieredPtr<T, Unioned>
{
  public:
    HeapPtr() : BarrieredPtr<T, Unioned>(nullptr) {}
    explicit HeapPtr(T *v) : BarrieredPtr<T, Unioned>(v) { post(); }
    explicit HeapPtr(const HeapPtr<T, Unioned> &v) : BarrieredPtr<T, Unioned>(v) { post(); }

    void init(T *v) {
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        this->value = v;
        post();
    }

    HeapPtr<T, Unioned> &operator=(T *v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        this->value = v;
        post();
        return *this;
    }

    HeapPtr<T, Unioned> &operator=(const HeapPtr<T, Unioned> &v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v.value));
        this->value = v.value;
        post();
        return *this;
    }

  protected:
    void post() { T::writeBarrierPost(this->value, (void *)&this->value); }

    /* Make this friend so it can access pre() and post(). */
    template <class T1, class T2>
    friend inline void
    BarrieredSetPair(Zone *zone,
                     HeapPtr<T1> &v1, T1 *val1,
                     HeapPtr<T2> &v2, T2 *val2);

  private:
    /*
     * Unlike RelocatablePtr<T>, HeapPtr<T> must be managed with GC lifetimes.
     * Specifically, the memory used by the pointer itself must be live until
     * at least the next minor GC. For that reason, move semantics are invalid
     * and are deleted here. Please note that not all containers support move
     * semantics, so this does not completely prevent invalid uses.
     */
    HeapPtr(HeapPtr<T> &&) MOZ_DELETE;
    HeapPtr<T, Unioned> &operator=(HeapPtr<T, Unioned> &&) MOZ_DELETE;
};

/*
 * FixedHeapPtr is designed for one very narrow case: replacing immutable raw
 * pointers to GC-managed things, implicitly converting to a handle type for
 * ease of use. Pointers encapsulated by this type must:
 *
 *   be immutable (no incremental write barriers),
 *   never point into the nursery (no generational write barriers), and
 *   be traced via MarkRuntime (we use fromMarkedLocation).
 *
 * In short: you *really* need to know what you're doing before you use this
 * class!
 */
template <class T>
class FixedHeapPtr
{
    T *value;

  public:
    operator T*() const { return value; }
    T * operator->() const { return value; }

    operator Handle<T*>() const {
        return Handle<T*>::fromMarkedLocation(&value);
    }

    void init(T *ptr) {
        value = ptr;
    }
};
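
// Illustrative sketch (hypothetical names): FixedHeapPtr fits engine-lifetime
// data such as permanent atoms, which are immutable, never in the nursery, and
// traced from the runtime's roots:
//
//     FixedHeapPtr<PropertyName> exampleName;
//     exampleName.init(somePermanentAtom);   // no barrier is ever applied
//     Handle<PropertyName*> h = exampleName; // implicit handle conversion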

/*
 * A pre- and post-barriered heap pointer, for use inside the JS engine.
 *
 * Unlike HeapPtr<T>, it can be used in memory that is not managed by the GC,
 * i.e. in C++ containers. It is, however, somewhat slower, so it should only
 * be used in contexts where this ability is necessary.
 */
template <class T>
class RelocatablePtr : public BarrieredPtr<T>
{
  public:
    RelocatablePtr() : BarrieredPtr<T>(nullptr) {}
    explicit RelocatablePtr(T *v) : BarrieredPtr<T>(v) {
        if (v)
            post();
    }

    /*
     * For RelocatablePtr, move semantics are equivalent to copy semantics. In
     * C++, a copy constructor taking const-ref is the way to get a single
     * function that will be used for both lvalue and rvalue copies, so we can
     * simply omit the rvalue variant.
     */
    RelocatablePtr(const RelocatablePtr<T> &v) : BarrieredPtr<T>(v) {
        if (this->value)
            post();
    }

    ~RelocatablePtr() {
        if (this->value)
            relocate();
    }

    RelocatablePtr<T> &operator=(T *v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v));
        if (v) {
            this->value = v;
            post();
        } else if (this->value) {
            relocate();
            this->value = v;
        }
        return *this;
    }

    RelocatablePtr<T> &operator=(const RelocatablePtr<T> &v) {
        this->pre();
        JS_ASSERT(!IsPoisonedPtr<T>(v.value));
        if (v.value) {
            this->value = v.value;
            post();
        } else if (this->value) {
            relocate();
            this->value = v.value;
        }
        return *this;
    }

  protected:
    void post() {
#ifdef JSGC_GENERATIONAL
        JS_ASSERT(this->value);
        T::writeBarrierPostRelocate(this->value, &this->value);
#endif
    }

    void relocate() {
#ifdef JSGC_GENERATIONAL
        JS_ASSERT(this->value);
        T::writeBarrierPostRemove(this->value, &this->value);
#endif
    }
};
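
// Illustrative sketch: because the destructor and assignment operators remove
// stale store buffer entries, RelocatablePtr values may live in movable C++
// storage; the container below is hypothetical usage, not engine code:
//
//     Vector<RelocatablePtr<JSObject>, 0, SystemAllocPolicy> exampleVec;
//     // entries may move as exampleVec grows; the barriers stay consistent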

/*
 * This is a hack for RegExpStatics::updateFromMatch. It allows us to do two
 * barriers with only one branch to check if we're in an incremental GC.
 */
template <class T1, class T2>
static inline void
BarrieredSetPair(Zone *zone,
                 HeapPtr<T1> &v1, T1 *val1,
                 HeapPtr<T2> &v2, T2 *val2)
{
    if (T1::needWriteBarrierPre(zone)) {
        v1.pre();
        v2.pre();
    }
    v1.unsafeSet(val1);
    v2.unsafeSet(val2);
    v1.post();
    v2.post();
}
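
// Illustrative call (hypothetical fields): update two barriered pointers with
// a single incremental-GC check instead of two:
//
//     BarrieredSetPair(zone, holder->exampleShape, newShape,
//                            holder->exampleBase, newBase);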

class Shape;
class BaseShape;
namespace types { struct TypeObject; }

typedef BarrieredPtr<JSObject> BarrieredPtrObject;
typedef BarrieredPtr<JSScript> BarrieredPtrScript;

typedef EncapsulatedPtr<JSObject> EncapsulatedPtrObject;
typedef EncapsulatedPtr<JSScript> EncapsulatedPtrScript;

typedef RelocatablePtr<JSObject> RelocatablePtrObject;
typedef RelocatablePtr<JSScript> RelocatablePtrScript;

typedef HeapPtr<JSObject> HeapPtrObject;
typedef HeapPtr<JSFunction> HeapPtrFunction;
typedef HeapPtr<JSString> HeapPtrString;
typedef HeapPtr<PropertyName> HeapPtrPropertyName;
typedef HeapPtr<JSScript> HeapPtrScript;
typedef HeapPtr<Shape> HeapPtrShape;
typedef HeapPtr<BaseShape> HeapPtrBaseShape;
typedef HeapPtr<types::TypeObject> HeapPtrTypeObject;

/* Useful for hashtables with a HeapPtr as key. */

template <class T>
struct HeapPtrHasher
{
    typedef HeapPtr<T> Key;
    typedef T *Lookup;

    static HashNumber hash(Lookup obj) { return DefaultHasher<T *>::hash(obj); }
    static bool match(const Key &k, Lookup l) { return k.get() == l; }
    static void rekey(Key &k, const Key& newKey) { k.unsafeSet(newKey); }
};

/* Specialized hashing policy for HeapPtrs. */
template <class T>
struct DefaultHasher< HeapPtr<T> > : HeapPtrHasher<T> { };

template <class T>
struct EncapsulatedPtrHasher
{
    typedef EncapsulatedPtr<T> Key;
    typedef T *Lookup;

    static HashNumber hash(Lookup obj) { return DefaultHasher<T *>::hash(obj); }
    static bool match(const Key &k, Lookup l) { return k.get() == l; }
    static void rekey(Key &k, const Key& newKey) { k.unsafeSet(newKey); }
};

template <class T>
struct DefaultHasher< EncapsulatedPtr<T> > : EncapsulatedPtrHasher<T> { };

bool
StringIsPermanentAtom(JSString *str);

/*
 * Base class for barriered value types.
 */
class BarrieredValue : public ValueOperations<BarrieredValue>
{
  protected:
    Value value;

    /*
     * Ensure that BarrieredValue is not constructable, except by our
     * implementations.
     */
    BarrieredValue() MOZ_DELETE;

    BarrieredValue(const Value &v) : value(v) {
        JS_ASSERT(!IsPoisonedValue(v));
    }

    ~BarrieredValue() {
        pre();
    }

  public:
    void init(const Value &v) {
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
    }
    void init(JSRuntime *rt, const Value &v) {
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
    }

    bool operator==(const BarrieredValue &v) const { return value == v.value; }
    bool operator!=(const BarrieredValue &v) const { return value != v.value; }

    const Value &get() const { return value; }
    Value *unsafeGet() { return &value; }
    operator const Value &() const { return value; }

    JSGCTraceKind gcKind() const { return value.gcKind(); }

    uint64_t asRawBits() const { return value.asRawBits(); }

    static void writeBarrierPre(const Value &v) {
#ifdef JSGC_INCREMENTAL
        if (v.isMarkable() && shadowRuntimeFromAnyThread(v)->needsBarrier())
            writeBarrierPre(ZoneOfValueFromAnyThread(v), v);
#endif
    }

    static void writeBarrierPre(Zone *zone, const Value &v) {
#ifdef JSGC_INCREMENTAL
        if (v.isString() && StringIsPermanentAtom(v.toString()))
            return;
        JS::shadow::Zone *shadowZone = JS::shadow::Zone::asShadowZone(zone);
        if (shadowZone->needsBarrier()) {
            JS_ASSERT_IF(v.isMarkable(), shadowRuntimeFromMainThread(v)->needsBarrier());
            Value tmp(v);
            js::gc::MarkValueUnbarriered(shadowZone->barrierTracer(), &tmp, "write barrier");
            JS_ASSERT(tmp == v);
        }
#endif
    }

  protected:
    void pre() { writeBarrierPre(value); }
    void pre(Zone *zone) { writeBarrierPre(zone, value); }

    static JSRuntime *runtimeFromMainThread(const Value &v) {
        JS_ASSERT(v.isMarkable());
        return static_cast<js::gc::Cell *>(v.toGCThing())->runtimeFromMainThread();
    }
    static JSRuntime *runtimeFromAnyThread(const Value &v) {
        JS_ASSERT(v.isMarkable());
        return static_cast<js::gc::Cell *>(v.toGCThing())->runtimeFromAnyThread();
    }
    static JS::shadow::Runtime *shadowRuntimeFromMainThread(const Value &v) {
        return reinterpret_cast<JS::shadow::Runtime*>(runtimeFromMainThread(v));
    }
    static JS::shadow::Runtime *shadowRuntimeFromAnyThread(const Value &v) {
        return reinterpret_cast<JS::shadow::Runtime*>(runtimeFromAnyThread(v));
    }

  private:
    friend class ValueOperations<BarrieredValue>;
    const Value * extract() const { return &value; }
};

// Like EncapsulatedPtr, but specialized for Value.
// See the comments on that class for details.
class EncapsulatedValue : public BarrieredValue
{
  public:
    EncapsulatedValue(const Value &v) : BarrieredValue(v) {}
    EncapsulatedValue(const EncapsulatedValue &v) : BarrieredValue(v) {}

    EncapsulatedValue &operator=(const Value &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        return *this;
    }

    EncapsulatedValue &operator=(const EncapsulatedValue &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v));
        value = v.get();
        return *this;
    }
};

// Like HeapPtr, but specialized for Value.
// See the comments on that class for details.
class HeapValue : public BarrieredValue
{
  public:
    explicit HeapValue()
      : BarrieredValue(UndefinedValue())
    {
        post();
    }

    explicit HeapValue(const Value &v)
      : BarrieredValue(v)
    {
        JS_ASSERT(!IsPoisonedValue(v));
        post();
    }

    explicit HeapValue(const HeapValue &v)
      : BarrieredValue(v.value)
    {
        JS_ASSERT(!IsPoisonedValue(v.value));
        post();
    }

    ~HeapValue() {
        pre();
    }

    void init(const Value &v) {
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post();
    }

    void init(JSRuntime *rt, const Value &v) {
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post(rt);
    }

    HeapValue &operator=(const Value &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post();
        return *this;
    }

    HeapValue &operator=(const HeapValue &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v.value));
        value = v.value;
        post();
        return *this;
    }

#ifdef DEBUG
    bool preconditionForSet(Zone *zone);
#endif

    /*
     * This is a faster version of operator=. Normally, operator= has to
     * determine the compartment of the value before it can decide whether to
     * do the barrier. If you already know the compartment, it's faster to pass
     * it in.
     */
    void set(Zone *zone, const Value &v) {
        JS::shadow::Zone *shadowZone = JS::shadow::Zone::asShadowZone(zone);
        JS_ASSERT(preconditionForSet(zone));
        pre(zone);
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post(shadowZone->runtimeFromAnyThread());
    }

    static void writeBarrierPost(const Value &value, Value *addr) {
#ifdef JSGC_GENERATIONAL
        if (value.isMarkable())
            shadowRuntimeFromAnyThread(value)->gcStoreBufferPtr()->putValue(addr);
#endif
    }

    static void writeBarrierPost(JSRuntime *rt, const Value &value, Value *addr) {
#ifdef JSGC_GENERATIONAL
        if (value.isMarkable()) {
            JS::shadow::Runtime *shadowRuntime = JS::shadow::Runtime::asShadowRuntime(rt);
            shadowRuntime->gcStoreBufferPtr()->putValue(addr);
        }
#endif
    }

  private:
    void post() {
        writeBarrierPost(value, &value);
    }

    void post(JSRuntime *rt) {
        writeBarrierPost(rt, value, &value);
    }

    HeapValue(HeapValue &&) MOZ_DELETE;
    HeapValue &operator=(HeapValue &&) MOZ_DELETE;
};

// Like RelocatablePtr, but specialized for Value.
// See the comments on that class for details.
class RelocatableValue : public BarrieredValue
{
  public:
    explicit RelocatableValue() : BarrieredValue(UndefinedValue()) {}

    explicit RelocatableValue(const Value &v)
      : BarrieredValue(v)
    {
        if (v.isMarkable())
            post();
    }

    RelocatableValue(const RelocatableValue &v)
      : BarrieredValue(v.value)
    {
        JS_ASSERT(!IsPoisonedValue(v.value));
        if (v.value.isMarkable())
            post();
    }

    ~RelocatableValue()
    {
        if (value.isMarkable())
            relocate(runtimeFromAnyThread(value));
    }

    RelocatableValue &operator=(const Value &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v));
        if (v.isMarkable()) {
            value = v;
            post();
        } else if (value.isMarkable()) {
            JSRuntime *rt = runtimeFromAnyThread(value);
            relocate(rt);
            value = v;
        } else {
            value = v;
        }
        return *this;
    }

    RelocatableValue &operator=(const RelocatableValue &v) {
        pre();
        JS_ASSERT(!IsPoisonedValue(v.value));
        if (v.value.isMarkable()) {
            value = v.value;
            post();
        } else if (value.isMarkable()) {
            JSRuntime *rt = runtimeFromAnyThread(value);
            relocate(rt);
            value = v.value;
        } else {
            value = v.value;
        }
        return *this;
    }

  private:
    void post() {
#ifdef JSGC_GENERATIONAL
        JS_ASSERT(value.isMarkable());
        shadowRuntimeFromAnyThread(value)->gcStoreBufferPtr()->putRelocatableValue(&value);
#endif
    }

    void relocate(JSRuntime *rt) {
#ifdef JSGC_GENERATIONAL
        JS::shadow::Runtime *shadowRuntime = JS::shadow::Runtime::asShadowRuntime(rt);
        shadowRuntime->gcStoreBufferPtr()->removeRelocatableValue(&value);
#endif
    }
};

// A pre- and post-barriered Value that is specialized to be aware that it
// resides in a slots or elements vector. This allows it to be relocated in
// memory, but with substantially less overhead than a RelocatablePtr.
class HeapSlot : public BarrieredValue
{
  public:
    enum Kind {
        Slot = 0,
        Element = 1
    };

    explicit HeapSlot() MOZ_DELETE;

    explicit HeapSlot(JSObject *obj, Kind kind, uint32_t slot, const Value &v)
      : BarrieredValue(v)
    {
        JS_ASSERT(!IsPoisonedValue(v));
        post(obj, kind, slot, v);
    }

    explicit HeapSlot(JSObject *obj, Kind kind, uint32_t slot, const HeapSlot &s)
      : BarrieredValue(s.value)
    {
        JS_ASSERT(!IsPoisonedValue(s.value));
        post(obj, kind, slot, s);
    }

    ~HeapSlot() {
        pre();
    }

    void init(JSObject *owner, Kind kind, uint32_t slot, const Value &v) {
        value = v;
        post(owner, kind, slot, v);
    }

    void init(JSRuntime *rt, JSObject *owner, Kind kind, uint32_t slot, const Value &v) {
        value = v;
        post(rt, owner, kind, slot, v);
    }

#ifdef DEBUG
    bool preconditionForSet(JSObject *owner, Kind kind, uint32_t slot);
    bool preconditionForSet(Zone *zone, JSObject *owner, Kind kind, uint32_t slot);
    static void preconditionForWriteBarrierPost(JSObject *obj, Kind kind, uint32_t slot,
                                                Value target);
#endif

    void set(JSObject *owner, Kind kind, uint32_t slot, const Value &v) {
        JS_ASSERT(preconditionForSet(owner, kind, slot));
        pre();
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post(owner, kind, slot, v);
    }

    void set(Zone *zone, JSObject *owner, Kind kind, uint32_t slot, const Value &v) {
        JS_ASSERT(preconditionForSet(zone, owner, kind, slot));
        JS::shadow::Zone *shadowZone = JS::shadow::Zone::asShadowZone(zone);
        pre(zone);
        JS_ASSERT(!IsPoisonedValue(v));
        value = v;
        post(shadowZone->runtimeFromAnyThread(), owner, kind, slot, v);
    }

    static void writeBarrierPost(JSObject *obj, Kind kind, uint32_t slot, Value target)
    {
#ifdef JSGC_GENERATIONAL
        js::gc::Cell *cell = reinterpret_cast<js::gc::Cell*>(obj);
        writeBarrierPost(cell->runtimeFromAnyThread(), obj, kind, slot, target);
#endif
    }

    static void writeBarrierPost(JSRuntime *rt, JSObject *obj, Kind kind, uint32_t slot,
                                 Value target)
    {
#ifdef DEBUG
        preconditionForWriteBarrierPost(obj, kind, slot, target);
#endif
#ifdef JSGC_GENERATIONAL
        if (target.isObject()) {
            JS::shadow::Runtime *shadowRuntime = JS::shadow::Runtime::asShadowRuntime(rt);
            shadowRuntime->gcStoreBufferPtr()->putSlot(obj, kind, slot, 1);
        }
#endif
    }

  private:
    void post(JSObject *owner, Kind kind, uint32_t slot, Value target) {
        HeapSlot::writeBarrierPost(owner, kind, slot, target);
    }

    void post(JSRuntime *rt, JSObject *owner, Kind kind, uint32_t slot, Value target) {
        HeapSlot::writeBarrierPost(rt, owner, kind, slot, target);
    }
};

static inline const Value *
Valueify(const BarrieredValue *array)
{
    JS_STATIC_ASSERT(sizeof(HeapValue) == sizeof(Value));
    JS_STATIC_ASSERT(sizeof(HeapSlot) == sizeof(Value));
    return (const Value *)array;
}

static inline HeapValue *
HeapValueify(Value *v)
{
    JS_STATIC_ASSERT(sizeof(HeapValue) == sizeof(Value));
    JS_STATIC_ASSERT(sizeof(HeapSlot) == sizeof(Value));
    return (HeapValue *)v;
}

class HeapSlotArray
{
    HeapSlot *array;

  public:
    HeapSlotArray(HeapSlot *array) : array(array) {}

    operator const Value *() const { return Valueify(array); }
    operator HeapSlot *() const { return array; }

    HeapSlotArray operator +(int offset) const { return HeapSlotArray(array + offset); }
    HeapSlotArray operator +(uint32_t offset) const { return HeapSlotArray(array + offset); }
};

/*
 * Base class for barriered jsid types.
 */
class BarrieredId
{
  protected:
    jsid value;

  private:
    BarrieredId(const BarrieredId &v) MOZ_DELETE;

  protected:
    explicit BarrieredId(jsid id) : value(id) {}
    ~BarrieredId() { pre(); }

  public:
    bool operator==(jsid id) const { return value == id; }
    bool operator!=(jsid id) const { return value != id; }

    jsid get() const { return value; }
    jsid *unsafeGet() { return &value; }
    void unsafeSet(jsid newId) { value = newId; }
    operator jsid() const { return value; }

  protected:
    void pre() {
#ifdef JSGC_INCREMENTAL
        if (JSID_IS_OBJECT(value)) {
            JSObject *obj = JSID_TO_OBJECT(value);
            JS::shadow::Zone *shadowZone = ShadowZoneOfObjectFromAnyThread(obj);
            if (shadowZone->needsBarrier()) {
                js::gc::MarkObjectUnbarriered(shadowZone->barrierTracer(), &obj, "write barrier");
                JS_ASSERT(obj == JSID_TO_OBJECT(value));
            }
        } else if (JSID_IS_STRING(value)) {
            JSString *str = JSID_TO_STRING(value);
            JS::shadow::Zone *shadowZone = ShadowZoneOfStringFromAnyThread(str);
            if (shadowZone->needsBarrier()) {
                js::gc::MarkStringUnbarriered(shadowZone->barrierTracer(), &str, "write barrier");
                JS_ASSERT(str == JSID_TO_STRING(value));
            }
        }
#endif
    }
};

// Like EncapsulatedPtr, but specialized for jsid.
// See the comments on that class for details.
class EncapsulatedId : public BarrieredId
{
  public:
    explicit EncapsulatedId(jsid id) : BarrieredId(id) {}
    explicit EncapsulatedId() : BarrieredId(JSID_VOID) {}

    EncapsulatedId &operator=(const EncapsulatedId &v) {
        if (v.value != value)
            pre();
        JS_ASSERT(!IsPoisonedId(v.value));
        value = v.value;
        return *this;
    }
};

// Like RelocatablePtr, but specialized for jsid.
// See the comments on that class for details.
class RelocatableId : public BarrieredId
{
  public:
    explicit RelocatableId() : BarrieredId(JSID_VOID) {}
    explicit inline RelocatableId(jsid id) : BarrieredId(id) {}
    ~RelocatableId() { pre(); }

    bool operator==(jsid id) const { return value == id; }
    bool operator!=(jsid id) const { return value != id; }

    jsid get() const { return value; }
    operator jsid() const { return value; }

    jsid *unsafeGet() { return &value; }

    RelocatableId &operator=(jsid id) {
        if (id != value)
            pre();
        JS_ASSERT(!IsPoisonedId(id));
        value = id;
        return *this;
    }

    RelocatableId &operator=(const RelocatableId &v) {
        if (v.value != value)
            pre();
        JS_ASSERT(!IsPoisonedId(v.value));
        value = v.value;
        return *this;
    }
};

// Like HeapPtr, but specialized for jsid.
// See the comments on that class for details.
class HeapId : public BarrieredId
{
  public:
    explicit HeapId() : BarrieredId(JSID_VOID) {}

    explicit HeapId(jsid id)
      : BarrieredId(id)
    {
        JS_ASSERT(!IsPoisonedId(id));
        post();
    }

    ~HeapId() { pre(); }

    void init(jsid id) {
        JS_ASSERT(!IsPoisonedId(id));
        value = id;
        post();
    }

    HeapId &operator=(jsid id) {
        if (id != value)
            pre();
        JS_ASSERT(!IsPoisonedId(id));
        value = id;
        post();
        return *this;
    }

    HeapId &operator=(const HeapId &v) {
        if (v.value != value)
            pre();
        JS_ASSERT(!IsPoisonedId(v.value));
        value = v.value;
        post();
        return *this;
    }

  private:
    void post() {}

    HeapId(const HeapId &v) MOZ_DELETE;

    HeapId(HeapId &&) MOZ_DELETE;
    HeapId &operator=(HeapId &&) MOZ_DELETE;
};

/*
 * Incremental GC requires that weak pointers have read barriers. This is
 * mostly an issue for empty shapes stored in JSCompartment. The problem
 * happens when, during an incremental GC, some JS code stores one of the
 * compartment's empty shapes into an object already marked black. Normally,
 * this would not be a problem, because the empty shape would have been part of
 * the initial snapshot when the GC started. However, since this is a weak
 * pointer, it isn't. So we may collect the empty shape even though a live
 * object points to it. To fix this, we mark these empty shapes black whenever
 * they get read out.
 */
template <class T>
class ReadBarriered
{
    T *value;

  public:
    ReadBarriered() : value(nullptr) {}
    ReadBarriered(T *value) : value(value) {}
    ReadBarriered(const Rooted<T*> &rooted) : value(rooted) {}

    T *get() const {
        if (!value)
            return nullptr;
        T::readBarrier(value);
        return value;
    }

    operator T*() const { return get(); }

    T &operator*() const { return *get(); }
    T *operator->() const { return get(); }

    T **unsafeGet() { return &value; }
    T * const * unsafeGet() const { return &value; }

    void set(T *v) { value = v; }

    operator bool() { return !!value; }
};
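
// Illustrative sketch (hypothetical field): a weakly held shape read through
// the barrier, so that it is marked if read during an incremental GC:
//
//     ReadBarriered<Shape> exampleInitialShape;
//     Shape *shape = exampleInitialShape; // operator T*() applies readBarrier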

class ReadBarrieredValue
{
    Value value;

  public:
    ReadBarrieredValue() : value(UndefinedValue()) {}
    ReadBarrieredValue(const Value &value) : value(value) {}

    inline const Value &get() const;
    Value *unsafeGet() { return &value; }
    inline operator const Value &() const;

    inline JSObject &toObject() const;
};

/*
 * Operations on a Heap thing inside the GC need to strip the barriers from
 * pointer operations. This template helps do that in contexts where the type
 * is templatized.
 */
template <typename T> struct Unbarriered {};
template <typename S> struct Unbarriered< EncapsulatedPtr<S> > { typedef S *type; };
template <typename S> struct Unbarriered< RelocatablePtr<S> > { typedef S *type; };
template <> struct Unbarriered<EncapsulatedValue> { typedef Value type; };
template <> struct Unbarriered<RelocatableValue> { typedef Value type; };
template <typename S> struct Unbarriered< DefaultHasher< EncapsulatedPtr<S> > > {
    typedef DefaultHasher<S *> type;
};

} /* namespace js */

#endif /* gc_Barrier_h */
