JEMALLOC(3)                        User Manual                        JEMALLOC(3)

jemalloc @jemalloc_version@
Author: Jason Evans

NAME

jemalloc - general purpose memory allocation functions

LIBRARY

This manual describes jemalloc @jemalloc_version@.  More information can be found at the jemalloc website.

SYNOPSIS

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

Standard API

    void *malloc(size_t size);
    void *calloc(size_t number, size_t size);
    int posix_memalign(void **ptr, size_t alignment, size_t size);
    void *aligned_alloc(size_t alignment, size_t size);
    void *realloc(void *ptr, size_t size);
    void free(void *ptr);

Non-standard API

    size_t malloc_usable_size(const void *ptr);
    void malloc_stats_print(void (*write_cb)(void *, const char *),
        void *cbopaque, const char *opts);
    int mallctl(const char *name, void *oldp, size_t *oldlenp,
        void *newp, size_t newlen);
    int mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp);
    int mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
        size_t *oldlenp, void *newp, size_t newlen);
    void (*malloc_message)(void *cbopaque, const char *s);
    const char *malloc_conf;

Experimental API

    int allocm(void **ptr, size_t *rsize, size_t size, int flags);
    int rallocm(void **ptr, size_t *rsize, size_t size, size_t extra,
        int flags);
    int sallocm(const void *ptr, size_t *rsize, int flags);
    int dallocm(void *ptr, int flags);
    int nallocm(size_t *rsize, size_t size, int flags);
DESCRIPTION

Standard API

The malloc() function allocates size bytes of uninitialized memory.  The allocated space is suitably aligned (after possible pointer coercion) for storage of any type of object.

The calloc() function allocates space for number objects, each size bytes in length.  The result is identical to calling malloc() with an argument of number * size, with the exception that the allocated memory is explicitly initialized to zero bytes.

The posix_memalign() function allocates size bytes of memory such that the allocation's base address is an even multiple of alignment, and returns the allocation in the value pointed to by ptr.  The requested alignment must be a power of 2 at least as large as sizeof(void *).

The aligned_alloc() function allocates size bytes of memory such that the allocation's base address is an even multiple of alignment.  The requested alignment must be a power of 2.  Behavior is undefined if size is not an integral multiple of alignment.

The realloc() function changes the size of the previously allocated memory referenced by ptr to size bytes.  The contents of the memory are unchanged up to the lesser of the new and old sizes.  If the new size is larger, the contents of the newly allocated portion of the memory are undefined.  Upon success, the memory referenced by ptr is freed and a pointer to the newly allocated memory is returned.  Note that realloc() may move the memory allocation, resulting in a different return value than ptr.  If ptr is NULL, the realloc() function behaves identically to malloc() for the specified size.

The free() function causes the allocated memory referenced by ptr to be made available for future allocations.  If ptr is NULL, no action occurs.
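For example, a minimal use of the standard API might look like the following sketch (error handling shortened for brevity):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        void *buf;

        /* Request 4096 bytes aligned to a 64-byte boundary. */
        if (posix_memalign(&buf, 64, 4096) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return (1);
        }
        memset(buf, 0, 4096);

        /* Grow a separate allocation; realloc() may move it. */
        char *s = malloc(16);
        if (s == NULL)
            return (1);
        strcpy(s, "hello");
        char *t = realloc(s, 1024);
        if (t == NULL) {
            free(s);    /* The original buffer is left intact on failure. */
            return (1);
        }
        s = t;

        free(s);
        free(buf);
        return (0);
    }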
Non-standard API

The malloc_usable_size() function returns the usable size of the allocation pointed to by ptr.  The return value may be larger than the size that was requested during allocation.  The malloc_usable_size() function is not a mechanism for in-place realloc(); rather it is provided solely as a tool for introspection purposes.  Any discrepancy between the requested allocation size and the size reported by malloc_usable_size() should not be depended on, since such behavior is entirely implementation-dependent.

The malloc_stats_print() function writes human-readable summary statistics via the write_cb callback function pointer and cbopaque data passed to write_cb, or via malloc_message() if write_cb is NULL.  This function can be called repeatedly.  General information that never changes during execution can be omitted by specifying "g" as a character within the opts string.  Note that malloc_message() uses the mallctl*() functions internally, so inconsistent statistics can be reported if multiple threads use these functions simultaneously.  If --enable-stats is specified during configuration, "m" and "a" can be specified to omit merged arena and per arena statistics, respectively; "b" and "l" can be specified to omit per size class statistics for bins and large objects, respectively.  Unrecognized characters are silently ignored.  Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations.

The mallctl() function provides a general interface for introspecting the memory allocator, as well as setting modifiable parameters and triggering actions.  The period-separated name argument specifies a location in a tree-structured namespace; see the MALLCTL NAMESPACE section for documentation on the tree contents.  To read a value, pass a pointer via oldp to adequate space to contain the value, and a pointer to its length via oldlenp; otherwise pass NULL and NULL.  Similarly, to write a value, pass a pointer to the value via newp, and its length via newlen; otherwise pass NULL and 0.
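For example, a read of a statistic and a write of a thread-local setting through mallctl() might look like this sketch (assuming statistics support and thread caching were configured; error checking condensed):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    void
    print_allocated(void)
    {
        uint64_t epoch = 1;
        size_t sz;

        /* Refresh the cached statistics before reading them. */
        sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sz);

        /* Read the total number of bytes allocated by the application. */
        size_t allocated;
        sz = sizeof(allocated);
        if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0)
            printf("allocated: %zu\n", allocated);

        /* Disable the calling thread's tcache (write-only style update). */
        bool enabled = false;
        mallctl("thread.tcache.enabled", NULL, NULL, &enabled,
            sizeof(enabled));
    }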
The mallctlnametomib() function provides a way to avoid repeated name lookups for applications that repeatedly query the same portion of the namespace, by translating a name to a "Management Information Base" (MIB) that can be passed repeatedly to mallctlbymib().  Upon successful return from mallctlnametomib(), mibp contains an array of *miblenp integers, where *miblenp is the lesser of the number of components in name and the input value of *miblenp.  Thus it is possible to pass a *miblenp that is smaller than the number of period-separated name components, which results in a partial MIB that can be used as the basis for constructing a complete MIB.  For name components that are integers (e.g. the 2 in "arenas.bin.2.size"), the corresponding MIB component will always be that integer.  Therefore, it is legitimate to construct code like the following:
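For instance (a sketch; error handling omitted), a loop over all bin size classes can translate the name once and then reuse the partial MIB:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    void
    print_bin_sizes(void)
    {
        unsigned nbins, i;
        size_t mib[4];
        size_t len, miblen;

        len = sizeof(nbins);
        mallctl("arenas.nbins", &nbins, &len, NULL, 0);

        /* Translate the name prefix once; mib[2] is filled in per bin. */
        miblen = 4;
        mallctlnametomib("arenas.bin.0.size", mib, &miblen);
        for (i = 0; i < nbins; i++) {
            size_t bin_size;

            mib[2] = i;
            len = sizeof(bin_size);
            mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0);
            /* Do something with bin_size... */
            printf("bin %u size: %zu\n", i, bin_size);
        }
    }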
Experimental API

The experimental API is subject to change or removal without regard for backward compatibility.  If --disable-experimental is specified during configuration, the experimental API is omitted.

The allocm(), rallocm(), sallocm(), dallocm(), and nallocm() functions all have a flags argument that can be used to specify options.  The functions only check the options that are contextually relevant.  Use bitwise or (|) operations to specify one or more of the following:

ALLOCM_LG_ALIGN(la)
    Align the memory allocation to start at an address that is a multiple of (1 << la).  This macro does not validate that la is within the valid range.

ALLOCM_ALIGN(a)
    Align the memory allocation to start at an address that is a multiple of a, where a is a power of two.  This macro does not validate that a is a power of 2.

ALLOCM_ZERO
    Initialize newly allocated memory to contain zero bytes.  In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes.  If this option is absent, newly allocated memory is uninitialized.

ALLOCM_NO_MOVE
    For reallocation, fail rather than moving the object.  This constraint can apply to both growth and shrinkage.

ALLOCM_ARENA(a)
    Use the arena specified by the index a.  This macro does not validate that a specifies an arena in the valid range.

The allocm() function allocates at least size bytes of memory, sets *ptr to the base address of the allocation, and sets *rsize to the real size of the allocation if rsize is not NULL.  Behavior is undefined if size is 0.

The rallocm() function resizes the allocation at *ptr to be at least size bytes, sets *ptr to the base address of the allocation if it moved, and sets *rsize to the real size of the allocation if rsize is not NULL.  If extra is non-zero, an attempt is made to resize the allocation to be at least (size + extra) bytes, though inability to allocate the extra byte(s) will not by itself result in failure.  Behavior is undefined if size is 0, or if (size + extra > SIZE_T_MAX).

The sallocm() function sets *rsize to the real size of the allocation.

The dallocm() function causes the memory referenced by ptr to be made available for future allocations.

The nallocm() function allocates no memory, but it performs the same size computation as the allocm() function, and if rsize is not NULL it sets *rsize to the real size of the allocation that would result from the equivalent allocm() function call.  Behavior is undefined if size is 0.
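As an illustration, an aligned, zeroed allocation followed by an in-place resize attempt might be written along these lines (a sketch; real code should check every return value against ALLOCM_SUCCESS):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int
    grow_in_place_example(void)
    {
        void *p;
        size_t rsize;

        /* Allocate at least 1000 zeroed bytes, aligned to 4 KiB. */
        if (allocm(&p, &rsize, 1000, ALLOCM_LG_ALIGN(12) | ALLOCM_ZERO)
            != ALLOCM_SUCCESS)
            return (-1);

        /* Try to grow to 2000 bytes without moving the object. */
        if (rallocm(&p, &rsize, 2000, 0, ALLOCM_NO_MOVE) != ALLOCM_SUCCESS) {
            /* Could not grow in place; the object is unchanged. */
        }

        dallocm(p, 0);
        return (0);
    }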
TUNING

Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile- or run-time.

The string pointed to by the global variable malloc_conf, the "name" of the file referenced by the symbolic link named /etc/malloc.conf, and the value of the environment variable MALLOC_CONF will be interpreted, in that order, from left to right as options.

An options string is a comma-separated list of option:value pairs.  There is one key corresponding to each "opt.*" mallctl (see the MALLCTL NAMESPACE section for options documentation).  For example, abort:true,narenas:1 sets the "opt.abort" and "opt.narenas" options.  Some options have boolean values (true/false), others have integer values (base 8, 10, or 16, depending on prefix), and yet others have raw string values.
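For instance, an application can embed default options by providing the malloc_conf definition in its own source, in the same spirit as the one-line settings shown in the EXAMPLES section (a sketch; the chosen options are only examples):

    #include <jemalloc/jemalloc.h>

    /*
     * Compile-time defaults: keep warnings non-fatal and cap the number
     * of arenas at one.  /etc/malloc.conf and MALLOC_CONF are interpreted
     * after malloc_conf, so they can still override these settings.
     */
    const char *malloc_conf = "abort:false,narenas:1";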
IMPLEMENTATION NOTES

Traditionally, allocators have used sbrk(2) to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory.  If --enable-dss is specified during configuration, this allocator uses both mmap(2) and sbrk(2), in that order of preference; otherwise only mmap(2) is used.

This allocator uses multiple arenas in order to reduce lock contention for threaded programs on multi-processor systems.  This works well with regard to threading scalability, but incurs some costs.  There is a small fixed per-arena overhead, and additionally, arenas manage memory completely independently of each other, which means a small fixed increase in overall memory fragmentation.  These overheads are not generally an issue, given the number of arenas normally used.  Note that using substantially more arenas than the default is not likely to improve performance, mainly due to reduced cache performance.  However, it may make sense to reduce the number of arenas if an application does not make much use of the allocation functions.

In addition to multiple arenas, unless --disable-tcache is specified during configuration, this allocator supports thread-specific caching for small and large objects, in order to make it possible to completely avoid synchronization for most allocation requests.  Such caching allows very fast allocation in the common case, but it increases memory usage and fragmentation, since a bounded number of objects can remain allocated in each thread cache.

Memory is conceptually broken into equal-sized chunks, where the chunk size is a power of two that is greater than the page size.  Chunks are always aligned to multiples of the chunk size.  This alignment makes it possible to find metadata for user objects very quickly.

User objects are broken into three categories according to size: small, large, and huge.  Small objects are smaller than one page.  Large objects are smaller than the chunk size.  Huge objects are a multiple of the chunk size.  Small and large objects are managed by arenas; huge objects are managed separately in a single data structure that is shared by all threads.  Huge objects are used by applications infrequently enough that this single data structure is not a scalability issue.

Each chunk that is managed by an arena tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object).  The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time.

Small objects are managed in groups by page runs.  Each run maintains a frontier and free list to track which regions are in use.  Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least sizeof(double).  All other small object size classes are multiples of the quantum, spaced such that internal fragmentation is limited to approximately 25% for all but the smallest size classes.  Allocation requests that are larger than the maximum small size class, but small enough to fit in an arena-managed chunk (see the "opt.lg_chunk" option), are rounded up to the nearest run size.  Allocation requests that are too large to fit in an arena-managed chunk are rounded up to the nearest multiple of the chunk size.

Allocations are packed tightly together, which can be an issue for multi-threaded applications.  If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating.

Assuming 4 MiB chunks, 4 KiB pages, and a 16-byte quantum on a 64-bit system, the size classes in each category are as shown in the following table.

Size classes:

    Category   Spacing   Size
    Small      lg        [8]
               16        [16, 32, 48, ..., 128]
               32        [160, 192, 224, 256]
               64        [320, 384, 448, 512]
               128       [640, 768, 896, 1024]
               256       [1280, 1536, 1792, 2048]
               512       [2560, 3072, 3584]
    Large      4 KiB     [4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB]
    Huge       4 MiB     [4 MiB, 8 MiB, 12 MiB, ...]
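As noted above regarding cacheline sharing, allocations that must not share a cacheline can simply be requested with cacheline alignment; for example (a sketch assuming a 64-byte cacheline, which should be adjusted for the target CPU):

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    #define CACHELINE 64    /* Assumed cacheline size. */

    /* Allocate a per-thread counter that will not share a cacheline. */
    static unsigned long *
    new_padded_counter(void)
    {
        void *p;

        if (posix_memalign(&p, CACHELINE, CACHELINE) != 0)
            return (NULL);
        return ((unsigned long *)p);
    }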
MALLCTL NAMESPACE

The following names are defined in the namespace accessible via the mallctl*() functions.  Value types are specified in parentheses, their readable/writable statuses are encoded as rw, r-, -w, or --, and required build configuration flags follow, if any.  A name element encoded as <i> or <j> indicates an integer component, where the integer varies from 0 to some upper value that must be determined via introspection.  In the case of "stats.arenas.<i>.*", <i> equal to "arenas.narenas" can be used to access the summation of statistics from all arenas.  Take special note of the "epoch" mallctl, which controls refreshing of cached dynamic statistics.

"version" (const char *) r-
    Return the jemalloc version string.

"epoch" (uint64_t) rw
    If a value is passed in, refresh the data from which the mallctl*() functions report values, and increment the epoch.  Return the current epoch.  This is useful for detecting whether another thread caused a refresh.

"config.debug" (bool) r-
    --enable-debug was specified during build configuration.

"config.dss" (bool) r-
    --enable-dss was specified during build configuration.

"config.fill" (bool) r-
    --enable-fill was specified during build configuration.

"config.lazy_lock" (bool) r-
    --enable-lazy-lock was specified during build configuration.

"config.mremap" (bool) r-
    --enable-mremap was specified during build configuration.

"config.munmap" (bool) r-
    --enable-munmap was specified during build configuration.

"config.prof" (bool) r-
    --enable-prof was specified during build configuration.

"config.prof_libgcc" (bool) r-
    --disable-prof-libgcc was not specified during build configuration.

"config.prof_libunwind" (bool) r-
    --enable-prof-libunwind was specified during build configuration.

"config.stats" (bool) r-
    --enable-stats was specified during build configuration.

"config.tcache" (bool) r-
    --disable-tcache was not specified during build configuration.

"config.tls" (bool) r-
    --disable-tls was not specified during build configuration.
"config.utrace" (bool) r-
    --enable-utrace was specified during build configuration.

"config.valgrind" (bool) r-
    --enable-valgrind was specified during build configuration.

"config.xmalloc" (bool) r-
    --enable-xmalloc was specified during build configuration.

"opt.abort" (bool) r-
    Abort-on-warning enabled/disabled.  If true, most warnings are fatal.  The process will call abort(3) in these cases.  This option is disabled by default unless --enable-debug is specified during configuration, in which case it is enabled by default.

"opt.lg_chunk" (size_t) r-
    Virtual memory chunk size (log base 2).  The default chunk size is 4 MiB (2^22).

"opt.dss" (const char *) r-
    dss (sbrk(2)) allocation precedence as related to mmap(2) allocation.  The following settings are supported: "disabled", "primary", and "secondary" (default).

"opt.narenas" (size_t) r-
    Maximum number of arenas to use for automatic multiplexing of threads and arenas.  The default is four times the number of CPUs, or one if there is a single CPU.

"opt.lg_dirty_mult" (ssize_t) r-
    Per-arena minimum ratio (log base 2) of active to dirty pages.  Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater), before informing the kernel about some of those pages via madvise(2) or a similar system call.  This provides the kernel with sufficient information to recycle dirty pages if physical memory becomes scarce and the pages remain unused.  The default minimum ratio is 8:1 (2^3:1); an option value of -1 will disable dirty page purging.

"opt.stats_print" (bool) r-
    Enable/disable statistics printing at exit.  If enabled, the malloc_stats_print() function is called at program exit via an atexit(3) function.  If --enable-stats is specified during configuration, this has the potential to cause deadlock for a multi-threaded process that exits while one or more threads are executing in the memory allocation functions.  Therefore, this option should only be used with care; it is primarily intended as a performance tuning aid during application development.  This option is disabled by default.

"opt.junk" (bool) r- [--enable-fill]
    Junk filling enabled/disabled.  If enabled, each byte of uninitialized allocated memory will be initialized to 0xa5.  All deallocated memory will be initialized to 0x5a.  This is intended for debugging and will impact performance negatively.  This option is disabled by default unless --enable-debug is specified during configuration, in which case it is enabled by default unless running inside Valgrind.
"opt.quarantine" (size_t) r- [--enable-fill]
    Per thread quarantine size in bytes.  If non-zero, each thread maintains a FIFO object quarantine that stores up to the specified number of bytes of memory.  The quarantined memory is not freed until it is released from quarantine, though it is immediately junk-filled if the "opt.junk" option is enabled.  This feature is of particular use in combination with Valgrind, which can detect attempts to access quarantined objects.  This is intended for debugging and will impact performance negatively.  The default quarantine size is 0 unless running inside Valgrind, in which case the default is 16 MiB.

"opt.redzone" (bool) r- [--enable-fill]
    Redzones enabled/disabled.  If enabled, small allocations have redzones before and after them.  Furthermore, if the "opt.junk" option is enabled, the redzones are checked for corruption during deallocation.  However, the primary intended purpose of this feature is to be used in combination with Valgrind, which needs redzones in order to do effective buffer overflow/underflow detection.  This option is intended for debugging and will impact performance negatively.  This option is disabled by default unless running inside Valgrind.

"opt.zero" (bool) r- [--enable-fill]
    Zero filling enabled/disabled.  If enabled, each byte of uninitialized allocated memory will be initialized to 0.  Note that this initialization only happens once for each byte, so realloc() and rallocm() calls do not zero memory that was previously allocated.  This is intended for debugging and will impact performance negatively.  This option is disabled by default.

"opt.utrace" (bool) r- [--enable-utrace]
    Allocation tracing based on utrace(2) enabled/disabled.  This option is disabled by default.

"opt.valgrind" (bool) r- [--enable-valgrind]
    Valgrind support enabled/disabled.  This option is vestigial because jemalloc auto-detects whether it is running inside Valgrind.  This option is disabled by default, unless running inside Valgrind.

"opt.xmalloc" (bool) r- [--enable-xmalloc]
    Abort-on-out-of-memory enabled/disabled.  If enabled, rather than returning failure for any allocation function, display a diagnostic message on STDERR_FILENO and cause the program to drop core (using abort(3)).  If an application is designed to depend on this behavior, set the option at compile time by including the following in the source code:

        malloc_conf = "xmalloc:true";

    This option is disabled by default.
"opt.tcache" (bool) r- [--enable-tcache]
    Thread-specific caching enabled/disabled.  When there are multiple threads, each thread uses a thread-specific cache for objects up to a certain size.  Thread-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use.  See the "opt.lg_tcache_max" option for related tuning information.  This option is enabled by default unless running inside Valgrind.

"opt.lg_tcache_max" (size_t) r- [--enable-tcache]
    Maximum size class (log base 2) to cache in the thread-specific cache.  At a minimum, all small size classes are cached, and at a maximum all large size classes are cached.  The default maximum is 32 KiB (2^15).

"opt.prof" (bool) r- [--enable-prof]
    Memory profiling enabled/disabled.  If enabled, profile memory allocation activity.  See the "opt.prof_active" option for on-the-fly activation/deactivation.  See the "opt.lg_prof_sample" option for probabilistic sampling control.  See the "opt.prof_accum" option for control of cumulative sample reporting.  See the "opt.lg_prof_interval" option for information on interval-triggered profile dumping, the "opt.prof_gdump" option for information on high-water-triggered profile dumping, and the "opt.prof_final" option for final profile dumping.  Profile output is compatible with the included pprof Perl script, which originates from the gperftools package.

"opt.prof_prefix" (const char *) r- [--enable-prof]
    Filename prefix for profile dumps.  If the prefix is set to the empty string, no automatic dumps will occur; this is primarily useful for disabling the automatic final heap dump (which also disables leak reporting, if enabled).  The default prefix is jeprof.

"opt.prof_active" (bool) r- [--enable-prof]
    Profiling activated/deactivated.  This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the "opt.prof" option) but inactive, then toggle profiling at any time during program execution with the "prof.active" mallctl.  This option is enabled by default.
"opt.lg_prof_sample" (ssize_t) r- [--enable-prof]
    Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity.  Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead.  The default sample interval is 512 KiB (2^19 B).

"opt.prof_accum" (bool) r- [--enable-prof]
    Reporting of cumulative object/byte counts in profile dumps enabled/disabled.  If this option is enabled, every unique backtrace must be stored for the duration of execution.  Depending on the application, this can impose a large memory overhead, and the cumulative counts are not always of interest.  This option is disabled by default.

"opt.lg_prof_interval" (ssize_t) r- [--enable-prof]
    Average interval (log base 2) between memory profile dumps, as measured in bytes of allocation activity.  The actual interval between dumps may be sporadic because decentralized allocation counters are used to avoid synchronization bottlenecks.  Profiles are dumped to files named according to the pattern <prefix>.<pid>.<seq>.i<iseq>.heap, where <prefix> is controlled by the "opt.prof_prefix" option.  By default, interval-triggered profile dumping is disabled (encoded as -1).

"opt.prof_gdump" (bool) r- [--enable-prof]
    Trigger a memory profile dump every time the total virtual memory exceeds the previous maximum.  Profiles are dumped to files named according to the pattern <prefix>.<pid>.<seq>.u<useq>.heap, where <prefix> is controlled by the "opt.prof_prefix" option.  This option is disabled by default.

"opt.prof_final" (bool) r- [--enable-prof]
    Use an atexit(3) function to dump final memory usage to a file named according to the pattern <prefix>.<pid>.<seq>.f.heap, where <prefix> is controlled by the "opt.prof_prefix" option.  This option is enabled by default.

"opt.prof_leak" (bool) r- [--enable-prof]
    Leak reporting enabled/disabled.  If enabled, use an atexit(3) function to report memory leaks detected by allocation sampling.  See the "opt.prof" option for information on analyzing heap profile output.  This option is disabled by default.

"thread.arena" (unsigned) rw
    Get or set the arena associated with the calling thread.  If the specified arena was not initialized beforehand (see the "arenas.initialized" mallctl), it will be automatically initialized as a side effect of calling this interface.
"thread.allocated" (uint64_t) r- [--enable-stats]
    Get the total number of bytes ever allocated by the calling thread.  This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases.

"thread.allocatedp" (uint64_t *) r- [--enable-stats]
    Get a pointer to the value that is returned by the "thread.allocated" mallctl.  This is useful for avoiding the overhead of repeated mallctl*() calls.

"thread.deallocated" (uint64_t) r- [--enable-stats]
    Get the total number of bytes ever deallocated by the calling thread.  This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases.

"thread.deallocatedp" (uint64_t *) r- [--enable-stats]
    Get a pointer to the value that is returned by the "thread.deallocated" mallctl.  This is useful for avoiding the overhead of repeated mallctl*() calls.

"thread.tcache.enabled" (bool) rw [--enable-tcache]
    Enable/disable calling thread's tcache.  The tcache is implicitly flushed as a side effect of becoming disabled (see "thread.tcache.flush").

"thread.tcache.flush" (void) -- [--enable-tcache]
    Flush calling thread's tcache.  This interface releases all cached objects and internal data structures associated with the calling thread's thread-specific cache.  Ordinarily, this interface need not be called, since automatic periodic incremental garbage collection occurs, and the thread cache is automatically discarded when a thread exits.  However, garbage collection is triggered by allocation activity, so it is possible for a thread that stops allocating/deallocating to retain its cache indefinitely, in which case the developer may find manual flushing useful.

"arena.<i>.purge" (unsigned) --
    Purge unused dirty pages for arena <i>, or for all arenas if <i> equals "arenas.narenas".

"arena.<i>.dss" (const char *) rw
    Set the precedence of dss allocation as related to mmap allocation for arena <i>, or for all arenas if <i> equals "arenas.narenas".  See "opt.dss" for supported settings.

"arenas.narenas" (unsigned) r-
    Current limit on number of arenas.

"arenas.initialized" (bool *) r-
    An array of "arenas.narenas" booleans.  Each boolean indicates whether the corresponding arena is initialized.
"arenas.quantum" (size_t) r-
    Quantum size.

"arenas.page" (size_t) r-
    Page size.

"arenas.tcache_max" (size_t) r- [--enable-tcache]
    Maximum thread-cached size class.

"arenas.nbins" (unsigned) r-
    Number of bin size classes.

"arenas.nhbins" (unsigned) r- [--enable-tcache]
    Total number of thread cache bin size classes.

"arenas.bin.<i>.size" (size_t) r-
    Maximum size supported by size class.

"arenas.bin.<i>.nregs" (uint32_t) r-
    Number of regions per page run.

"arenas.bin.<i>.run_size" (size_t) r-
    Number of bytes per page run.

"arenas.nlruns" (size_t) r-
    Total number of large size classes.

"arenas.lrun.<i>.size" (size_t) r-
    Maximum size supported by this large size class.

"arenas.purge" (unsigned) -w
    Purge unused dirty pages for the specified arena, or for all arenas if none is specified.

"arenas.extend" (unsigned) r-
    Extend the array of arenas by appending a new arena, and returning the new arena index.

"prof.active" (bool) rw [--enable-prof]
    Control whether sampling is currently active.  See the "opt.prof_active" option for additional information.

"prof.dump" (const char *) -w [--enable-prof]
    Dump a memory profile to the specified file, or if NULL is specified, to a file according to the pattern <prefix>.<pid>.<seq>.m<mseq>.heap, where <prefix> is controlled by the "opt.prof_prefix" option.

"prof.interval" (uint64_t) r- [--enable-prof]
    Average number of bytes allocated between interval-based profile dumps.  See the "opt.lg_prof_interval" option for additional information.

"stats.cactive" (size_t *) r- [--enable-stats]
    Pointer to a counter that contains an approximate count of the current number of bytes in active pages.  The estimate may be high, but never low, because each arena rounds up to the nearest multiple of the chunk size when computing its contribution to the counter.  Note that the "epoch" mallctl has no bearing on this counter.  Furthermore, counter consistency is maintained via atomic operations, so it is necessary to use an atomic operation in order to guarantee a consistent read when dereferencing the pointer.
"stats.allocated" (size_t) r- [--enable-stats]
    Total number of bytes allocated by the application.

"stats.active" (size_t) r- [--enable-stats]
    Total number of bytes in active pages allocated by the application.  This is a multiple of the page size, and greater than or equal to "stats.allocated".  This does not include "stats.arenas.<i>.pdirty" and pages entirely devoted to allocator metadata.

"stats.mapped" (size_t) r- [--enable-stats]
    Total number of bytes in chunks mapped on behalf of the application.  This is a multiple of the chunk size, and is at least as large as "stats.active".  This does not include inactive chunks.

"stats.chunks.current" (size_t) r- [--enable-stats]
    Total number of chunks actively mapped on behalf of the application.  This does not include inactive chunks.

"stats.chunks.total" (uint64_t) r- [--enable-stats]
    Cumulative number of chunks allocated.

"stats.chunks.high" (size_t) r- [--enable-stats]
    Maximum number of active chunks at any time thus far.

"stats.huge.allocated" (size_t) r- [--enable-stats]
    Number of bytes currently allocated by huge objects.

"stats.huge.nmalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of huge allocation requests.

"stats.huge.ndalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of huge deallocation requests.

"stats.arenas.<i>.dss" (const char *) r-
    dss (sbrk(2)) allocation precedence as related to mmap(2) allocation.  See "opt.dss" for details.

"stats.arenas.<i>.nthreads" (unsigned) r-
    Number of threads currently assigned to arena.

"stats.arenas.<i>.pactive" (size_t) r-
    Number of pages in active runs.

"stats.arenas.<i>.pdirty" (size_t) r-
    Number of pages within unused runs that are potentially dirty, and for which madvise(..., MADV_DONTNEED) or similar has not been called.
"stats.arenas.<i>.mapped" (size_t) r- [--enable-stats]
    Number of mapped bytes.

"stats.arenas.<i>.npurge" (uint64_t) r- [--enable-stats]
    Number of dirty page purge sweeps performed.

"stats.arenas.<i>.nmadvise" (uint64_t) r- [--enable-stats]
    Number of madvise(..., MADV_DONTNEED) or similar calls made to purge dirty pages.

"stats.arenas.<i>.npurged" (uint64_t) r- [--enable-stats]
    Number of pages purged.

"stats.arenas.<i>.small.allocated" (size_t) r- [--enable-stats]
    Number of bytes currently allocated by small objects.

"stats.arenas.<i>.small.nmalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of allocation requests served by small bins.

"stats.arenas.<i>.small.ndalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of small objects returned to bins.

"stats.arenas.<i>.small.nrequests" (uint64_t) r- [--enable-stats]
    Cumulative number of small allocation requests.

"stats.arenas.<i>.large.allocated" (size_t) r- [--enable-stats]
    Number of bytes currently allocated by large objects.

"stats.arenas.<i>.large.nmalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of large allocation requests served directly by the arena.

"stats.arenas.<i>.large.ndalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of large deallocation requests served directly by the arena.

"stats.arenas.<i>.large.nrequests" (uint64_t) r- [--enable-stats]
    Cumulative number of large allocation requests.

"stats.arenas.<i>.bins.<j>.allocated" (size_t) r- [--enable-stats]
    Current number of bytes allocated by bin.

"stats.arenas.<i>.bins.<j>.nmalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of allocations served by bin.

"stats.arenas.<i>.bins.<j>.ndalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of allocations returned to bin.

"stats.arenas.<i>.bins.<j>.nrequests" (uint64_t) r- [--enable-stats]
    Cumulative number of allocation requests.
"stats.arenas.<i>.bins.<j>.nfills" (uint64_t) r- [--enable-stats --enable-tcache]
    Cumulative number of tcache fills.

"stats.arenas.<i>.bins.<j>.nflushes" (uint64_t) r- [--enable-stats --enable-tcache]
    Cumulative number of tcache flushes.

"stats.arenas.<i>.bins.<j>.nruns" (uint64_t) r- [--enable-stats]
    Cumulative number of runs created.

"stats.arenas.<i>.bins.<j>.nreruns" (uint64_t) r- [--enable-stats]
    Cumulative number of times the current run from which to allocate changed.

"stats.arenas.<i>.bins.<j>.curruns" (size_t) r- [--enable-stats]
    Current number of runs.

"stats.arenas.<i>.lruns.<j>.nmalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of allocation requests for this size class served directly by the arena.

"stats.arenas.<i>.lruns.<j>.ndalloc" (uint64_t) r- [--enable-stats]
    Cumulative number of deallocation requests for this size class served directly by the arena.

"stats.arenas.<i>.lruns.<j>.nrequests" (uint64_t) r- [--enable-stats]
    Cumulative number of allocation requests for this size class.

"stats.arenas.<i>.lruns.<j>.curruns" (size_t) r- [--enable-stats]
    Current number of runs for this size class.
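As an example of combining several of these mallctls, the "thread.allocatedp" and "thread.deallocatedp" pointers let a thread track its own allocation volume without repeated mallctl() calls.  A sketch (requires --enable-stats):

    #include <inttypes.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /*
     * Cache the per-thread counter pointers once, then read them directly
     * on the hot path.
     */
    void
    report_thread_usage(void)
    {
        uint64_t *allocatedp, *deallocatedp;
        size_t sz = sizeof(uint64_t *);

        if (mallctl("thread.allocatedp", &allocatedp, &sz, NULL, 0) != 0 ||
            mallctl("thread.deallocatedp", &deallocatedp, &sz, NULL, 0) != 0)
            return;

        printf("thread allocated %" PRIu64 ", deallocated %" PRIu64 " bytes\n",
            *allocatedp, *deallocatedp);
    }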
DEBUGGING MALLOC PROBLEMS

When debugging, it is a good idea to configure/build jemalloc with the --enable-debug and --enable-fill options, and recompile the program with suitable options and symbols for debugger support.  When so configured, jemalloc incorporates a wide variety of run-time assertions that catch application errors such as double-free, write-after-free, etc.

Programs often accidentally depend on "uninitialized" memory actually being filled with zero bytes.  Junk filling (see the "opt.junk" option) tends to expose such bugs in the form of obviously incorrect results and/or coredumps.  Conversely, zero filling (see the "opt.zero" option) eliminates the symptoms of such bugs.  Between these two options, it is usually possible to quickly detect, diagnose, and eliminate such bugs.

This implementation does not provide much detail about the problems it detects, because the performance impact for storing such information would be prohibitive.  However, jemalloc does integrate with the most excellent Valgrind tool if the --enable-valgrind configuration option is enabled.

DIAGNOSTIC MESSAGES

If any of the memory allocation/deallocation functions detect an error or warning condition, a message will be printed to file descriptor STDERR_FILENO.  Errors will result in the process dumping core.  If the "opt.abort" option is set, most warnings are treated as errors.

The malloc_message variable allows the programmer to override the function which emits the text strings forming the errors and warnings if for some reason the STDERR_FILENO file descriptor is not suitable for this.  malloc_message() takes the cbopaque pointer argument that is NULL unless overridden by the arguments in a call to malloc_stats_print(), followed by a string pointer.  Please note that doing anything which tries to allocate memory in this function is likely to result in a crash or deadlock.

All messages are prefixed by "<jemalloc>: ".
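For instance, an application that cannot rely on STDERR_FILENO might redirect messages to its own logger roughly as follows (a sketch; write_to_log() is a hypothetical allocation-free logging routine supplied by the application):

    #include <stddef.h>
    #include <string.h>
    #include <jemalloc/jemalloc.h>

    /* Hypothetical logging hook; must not allocate memory. */
    extern void write_to_log(const char *s, size_t len);

    static void
    log_malloc_message(void *cbopaque, const char *s)
    {
        (void)cbopaque;   /* NULL unless set via malloc_stats_print(). */
        write_to_log(s, strlen(s));
    }

    static void
    install_hook(void)
    {
        malloc_message = log_malloc_message;
    }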
RETURN VALUES

Standard API

The malloc() and calloc() functions return a pointer to the allocated memory if successful; otherwise a NULL pointer is returned and errno is set to ENOMEM.

The posix_memalign() function returns the value 0 if successful; otherwise it returns an error value.  The posix_memalign() function will fail if:

EINVAL
    The alignment parameter is not a power of 2 at least as large as sizeof(void *).

ENOMEM
    Memory allocation error.

The aligned_alloc() function returns a pointer to the allocated memory if successful; otherwise a NULL pointer is returned and errno is set.  The aligned_alloc() function will fail if:

EINVAL
    The alignment parameter is not a power of 2.

ENOMEM
    Memory allocation error.

The realloc() function returns a pointer, possibly identical to ptr, to the allocated memory if successful; otherwise a NULL pointer is returned, and errno is set to ENOMEM if the error was the result of an allocation failure.  The realloc() function always leaves the original buffer intact when an error occurs.

The free() function returns no value.

Non-standard API

The malloc_usable_size() function returns the usable size of the allocation pointed to by ptr.

The mallctl(), mallctlnametomib(), and mallctlbymib() functions return 0 on success; otherwise they return an error value.  The functions will fail if:

EINVAL
    newp is not NULL, and newlen is too large or too small.  Alternatively, *oldlenp is too large or too small; in this case as much data as possible are read despite the error.

ENOMEM
    *oldlenp is too short to hold the requested value.

ENOENT
    name or mib specifies an unknown/invalid value.

EPERM
    Attempt to read or write void value, or attempt to write read-only value.

EAGAIN
    A memory allocation failure occurred.

EFAULT
    An interface with side effects failed in some way not directly related to mallctl*() read/write processing.

Experimental API

The allocm(), rallocm(), sallocm(), dallocm(), and nallocm() functions return ALLOCM_SUCCESS on success; otherwise they return an error value.  The allocm(), rallocm(), and nallocm() functions will fail if:

ALLOCM_ERR_OOM
    Out of memory.  Insufficient contiguous memory was available to service the allocation request.  The allocm() function additionally sets *ptr to NULL, whereas the rallocm() function leaves *ptr unmodified.

The rallocm() function will also fail if:

ALLOCM_ERR_NOT_MOVED
    ALLOCM_NO_MOVE was specified, but the reallocation request could not be serviced without moving the object.

ENVIRONMENT

The following environment variable affects the execution of the allocation functions:

MALLOC_CONF
    If the environment variable MALLOC_CONF is set, the characters it contains will be interpreted as options.

EXAMPLES

To dump core whenever a problem occurs:

    ln -s 'abort:true' /etc/malloc.conf

To specify in the source a chunk size that is 16 MiB:

    malloc_conf = "lg_chunk:24";

SEE ALSO

madvise(2), mmap(2), sbrk(2), utrace(2), alloca(3), atexit(3), getpagesize(3)

STANDARDS

The malloc(), calloc(), realloc(), and free() functions conform to ISO/IEC 9899:1990 ("ISO C90").

The posix_memalign() function conforms to IEEE Std 1003.1-2001 ("POSIX.1").