The devector
class template,
in addition to the standard vector features, also offers a few extras. The small buffer
optimization allows some elements to be stored next to the devector
internals, without allocating
extra memory. If the size of the devector
exceeds the size of the small buffer, the elements previously held in the small
buffer are copied to a dynamically allocated buffer, and the small buffer remains
unused: the memory is always contiguous. The size of the small buffer is
set by the SmallBufferPolicy
template argument. If it's set to 0, no small buffer is used.
devector<int, devector_small_buffer_policy<16>> small_devector{1, 2, 3, 4};

small_devector.push_back(5);
small_devector.push_back(6);
small_devector.push_back(7); // no dynamic memory allocated
The small buffer optimization makes the devector
a good candidate for a buffer. It is safer and more flexible than a C-style
array or std::array
, because it manages its own size and
memory, and it's just as performant: while the small buffer is in use, no dynamic
memory allocation happens.
In the above example, if push_front
were used instead of push_back
,
an allocation would happen: push operations do not shift elements, to avoid breaking
the promise of reserve
. If
you want shifting, use insert
instead.
In contrast with the standard vector, devector
allows reserving at construction time. This is especially important when both
front and back pushes are anticipated. In the following example, only a single
allocation is required, and no reallocation happens:
devector<int> reserved_devector(32, 16, reserve_only_tag{});

for (int i = 0; i < 32; ++i) {
  reserved_devector.push_front(i);
}

for (int i = 0; i < 16; ++i) {
  reserved_devector.push_back(i);
}
The pace at which the standard vector grows its buffer is not fixed by the standard;
it's an implementation detail. If you want platform-independent, custom
behavior for devector
, specify
a GrowthPolicy
:
struct custom_growth_policy {
  static constexpr unsigned growth_factor = 2u;
  static constexpr unsigned initial_size = 16u;

  static unsigned new_capacity(unsigned capacity) {
    return (capacity) ? capacity * growth_factor : initial_size;
  }

  static bool should_shrink(unsigned size, unsigned capacity, unsigned small_buffer_size) {
    (void)capacity;
    return size <= small_buffer_size;
  }
};

using namespace boost::container;
devector<int, devector_small_buffer_policy<0>, custom_growth_policy> custom_growth_devector;
Here, the initial capacity of a non-empty devector will be at least 16 (initial_size
). For example, on the first
push_back
, memory for 16
elements is allocated. According to the specified growth policy, the
internal storage is doubled every time the devector
runs out of memory (growth_factor
).
When shrink_to_fit
is called,
the should_shrink
method
of the growth policy is consulted first. A shrink can happen only if it returns
true. Here, the policy allows shrinking only if the contents of the devector
fit in the small buffer.
Among the several use cases, a simple one is reversing an input range in one go while doing as few allocations as possible. In the following example, input lines are pushed to the front of the buffer, thus automatically forming a reverse order.
devector<std::string> reversed_lines;
std::string line;

std::getline(std::cin, line);
while (std::cin) {
  reversed_lines.push_front(line);
  std::getline(std::cin, line);
}

std::cout << "Reversed lines:\n";
std::copy(
  reversed_lines.begin(), reversed_lines.end(),
  std::ostream_iterator<std::string>(std::cout, "\n")
);
The devector
is more trusting
than the standard containers: it allows the programmer to use (potentially)
unsafe methods. The simplest are the unsafe_push_front
and unsafe_push_back
methods.
They are similar to the safe versions, but here, the caller must make sure
the required free capacity is present before calling them. The benefit is avoiding
a check, thus sparing a few instructions.
devector<int> dv;
dv.reserve_front(2);
dv.reserve_back(2); // the previous reserve_front is still in effect

dv.unsafe_push_front(2);
dv.unsafe_push_front(1);
dv.unsafe_push_back(3);
dv.unsafe_push_back(4);
Another unsafe, but efficient construct is placing uninitialized elements
into the sequence. In the example below, data is read from a socket and copied
directly to the devector
buffer:
// int sockfd = socket(...); connect(...);
devector<char> buffer(256, unsafe_uninitialized_tag{});
ssize_t recvSize = recv(sockfd, buffer.data(), buffer.size(), 0);

if (recvSize >= 0) {
  buffer.unsafe_uninitialized_resize_back(recvSize);
  // process contents of buffer
} else {
  /* handle error */
}
The constructor used here does not initialize the elements: that is done
later by recv
. After writing
is done, the superfluous, still uninitialized elements are dropped from the
back by unsafe_uninitialized_resize_back
.
This is particularly important for non-trivial value types: the devector
does not know which elements are
initialized, thus, for example, upon destruction, the destructor is called
for every supposed element, even if one is not really there.
When used with a non-trivial type and the small buffer optimization, this can
be even more efficient (and more flexible) than using std::array
,
because no unnecessary initialization or assignment is made. Non-trivial
types can be put into the right uninitialized slot by placement
new.
Most of the time, standard vector can be easily replaced by devector
. If a custom allocator is needed,
however, extra template arguments must be specified:
using namespace boost::container; // assumed here and below

std::vector<int> regular_vector;
devector<int> regular_devector;

std::vector<int, custom_allocator<int>> custom_alloc_vector;
devector<
  int,
  devector_small_buffer_policy<0>,
  devector_growth_policy,
  custom_allocator<int>
> custom_alloc_devector;
The batch_deque
class template,
contrary to the standard deque, allows custom-sized segments, configurable
via a policy template. This gives greater control over the implementation
and ensures portable behavior:
using namespace boost::container; // assumed here and below

batch_deque<char, batch_deque_policy<256>> deque;
Assuming the user can more or less guess the average required buffer size, the right segment size can be specified to minimize the number of memory allocations.
Since the segment size is configured by the user, it makes sense to be able
to do batch operations over the elements of each segment. In the example
below, the deque is used as a flexible output buffer. The write
function is called for each segment, instead of for each element, resulting
in better performance. Scatter/gather I/O is possible in a similar way.
for (auto it = deque.segment_begin(); it != deque.segment_end(); ++it) {
  write(fd, it.data(), it.data_size());
}
Reference stability is an important feature of the batch_deque
.
Inserting elements at the front or at the back of the sequence never invalidates
any references. Moreover, a special insert operation is provided, stable_insert
, which takes an iterator
as a hint and inserts a sequence of elements (not necessarily directly) before
the element pointed to by the hint, in a way that doesn't invalidate references.
If needed, it default constructs elements to avoid a gap in the sequence.
batch_deque<int> deque{1, 2, 3, 4, 5, 6, 7, 8};
auto four_it = deque.begin() + 3;
int& four = *four_it;

std::vector<int> input(100, 9);
deque.stable_insert(four_it, input.begin(), input.end());

BOOST_ASSERT(four == 4); // stable_insert ensures reference stability
This feature can be useful if reference stability and relative order are required, but the exact number of elements is flexible, e.g., when the sequence stores cached objects.