Chapter 21. The bitmap_allocator

Table of Contents

Design
Implementation
    Free List Store
    Super Block
    Super Block Data Layout
    Maximum Wasted Percentage
    allocate
    deallocate
    Questions
        1
        2
        3
    Locality
    Overhead and Grow Policy

As the name suggests, this allocator uses a bitmap for its book-keeping, tracking which memory locations are in use and which are free.
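
The idea can be illustrated with a minimal sketch (this is not the library's code; the function name and word size are assumptions for illustration, and the 1 = free / 0 = allocated convention is the one described in the next paragraph): one bit per block, scanned a machine word at a time so that a free block can be located quickly.

    // Minimal sketch of bitmap book-keeping: one bit per block.
    // Convention as described below: 1 = free, 0 = allocated.
    #include <cstdint>
    #include <cstddef>

    // Return the index of a free block, or -1 if every bit is 0 (all allocated).
    long find_free_block(const std::uint64_t* bitmap, std::size_t words)
    {
        for (std::size_t w = 0; w < words; ++w)
            if (bitmap[w] != 0)                       // at least one free block in this word
                return static_cast<long>(w) * 64
                     + __builtin_ctzll(bitmap[w]);    // position of the lowest set (free) bit
        return -1;
    }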

This allocator uses a single bit to record whether each block is allocated: a bit value of 1 indicates a free block, while 0 indicates an allocated one. This convention makes it easy to scan a collection of bits for a free block. A bitmapped strategy of this kind works best for single-object allocations, and with the STL's type-parameterized allocators there is no need to choose the size of the block represented by a single bit: it is simply the size of the type on which the allocator is parameterized. As a result, performance is close to optimal. Hence this allocator is best suited to node-based containers, which call the allocate function with an argument of 1.
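
For example, the allocator is declared in <ext/bitmap_allocator.h> in the __gnu_cxx namespace, so a node-based container can be told to use it directly; every node the list creates is then obtained through a call to allocate(1) on the rebound allocator.

    // Using the bitmap allocator with a node-based container.
    #include <ext/bitmap_allocator.h>
    #include <list>
    #include <iostream>

    int main()
    {
        std::list<int, __gnu_cxx::bitmap_allocator<int> > numbers;

        for (int i = 0; i < 1000; ++i)   // each push_back allocates exactly one node
            numbers.push_back(i);

        std::cout << "size: " << numbers.size() << '\n';
    }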

The bitmapped allocator's internal pool grows exponentially: the number of blocks acquired from the Free List Store doubles each time the allocator runs out of memory.
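
A small sketch of what that policy means in practice (the starting block count of 64 used here is an assumption for illustration, not necessarily the library's actual value):

    // Illustration of an exponentially growing pool: the request made to the
    // Free List Store doubles every time the pool is exhausted.
    #include <cstddef>
    #include <iostream>

    int main()
    {
        std::size_t blocks = 64;                 // assumed initial request size
        for (int refill = 0; refill < 5; ++refill)
        {
            std::cout << "refill " << refill << ": acquire "
                      << blocks << " blocks\n";
            blocks *= 2;                         // double on every exhaustion
        }
    }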

The macro __GTHREADS determines whether mutex protection is used around every allocation and deallocation. Its state is picked up automatically from the gthr abstraction layer.
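
A simplified sketch, assuming a plain std::mutex as a stand-in for the allocator's internal lock (the names pool_mutex, POOL_LOCK and allocate_one_block are hypothetical), of how such conditional protection is typically arranged:

    // Locking is compiled in only when __GTHREADS is defined by the gthr layer.
    #ifdef __GTHREADS
    # include <mutex>
    static std::mutex pool_mutex;            // stand-in for the allocator's internal lock
    # define POOL_LOCK() std::lock_guard<std::mutex> guard(pool_mutex)
    #else
    # define POOL_LOCK() (void)0             // single-threaded build: no locking overhead
    #endif

    void* allocate_one_block()               // hypothetical helper, for illustration only
    {
        POOL_LOCK();                         // held for the rest of the call in threaded builds
        // ... search the bitmap for a free block and flip its bit ...
        return nullptr;                      // placeholder
    }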