In the examples below, the following includes and namespace alias are assumed:
#include <boost/double_ended/devector.hpp>
#include <boost/double_ended/batch_deque.hpp>

namespace de = boost::double_ended;
The devector class template offers, in addition to the standard vector features, a few extras. The small buffer optimization allows some elements to be stored next to the devector internals, without allocating memory on the heap. If the size of the devector exceeds the size of the small buffer, elements previously held in the small buffer are copied to a dynamically allocated buffer, and the small buffer remains unused: the memory is always contiguous. The size of the small buffer can be set by the SmallBufferPolicy template argument. By default, it is set to 0, which means no small buffer is used.
de::devector<int, de::small_buffer_size<16>> small_devector;

BOOST_ASSERT(small_devector.capacity() == 16); // but no dynamic memory was allocated

small_devector.push_back(2);
small_devector.push_back(3);
small_devector.push_back(4);

small_devector.push_front(1); // allocates
Here, while the push_back calls make use of the small buffer, push_front results in an allocation: push operations do not shift elements, to avoid breaking the promise of reserve.
In contrast with the standard vector, devector allows reserving capacity at construction time. This is especially important when both front and back pushes are expected and the small buffer should be used. In the following example, only a single allocation is required, and no reallocation happens:
de::devector<int> reserved_devector(32, 16, de::reserve_only_tag{});

for (int i = 0; i < 32; ++i)
{
  reserved_devector.push_front(i);
}

for (int i = 0; i < 16; ++i)
{
  reserved_devector.push_back(i);
}
The small buffer optimization makes the devector a good buffer candidate. It is safer and more flexible than a C-style array or std::array, because it manages its size and memory, and it is just as performant: while the small buffer is in use, no dynamic memory allocation happens.
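As a minimal sketch (the char element type and the small buffer size of 64 are arbitrary, illustrative choices):

de::devector<char, de::small_buffer_size<64>> scratch;

scratch.push_back('H');
scratch.push_back('i');
scratch.push_back('!');
// everything above stays within the small buffer: no heap allocation

BOOST_ASSERT(scratch.size() == 3);
BOOST_ASSERT(scratch.capacity() == 64);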
The pace at which the standard vector grows its buffer is not fixed by the standard; it is an implementation detail. If you want platform-independent, custom behavior for devector, specify a GrowthPolicy:
struct custom_growth_policy
{
  using size_type = unsigned int;

  static constexpr unsigned growth_factor = 2u;
  static constexpr unsigned initial_size = 16u;

  static unsigned new_capacity(unsigned capacity)
  {
    return (capacity) ? capacity * growth_factor : initial_size;
  }

  static bool should_shrink(unsigned size, unsigned capacity, unsigned small_buffer_size)
  {
    (void)capacity;
    return size <= small_buffer_size;
  }
};

de::devector<int, de::small_buffer_size<0>, custom_growth_policy> custom_growth_devector;
With this policy, when an element is pushed to a devector that has no free capacity, memory for 16 elements is allocated (initial_size), and the size of this storage is doubled (growth_factor) each time it runs out. When devector::shrink_to_fit is called, the should_shrink method of the growth policy is consulted first. The actual shrink is executed only if it returns true. In this example, the policy allows shrinking only if the contents of the devector fit in the small buffer.
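A minimal sketch of this interaction, reusing custom_growth_policy from above but assuming a non-zero small buffer (8 elements, chosen for illustration) so that shrinking back to it is possible:

de::devector<int, de::small_buffer_size<8>, custom_growth_policy> dv;

for (int i = 0; i < 32; ++i) { dv.push_back(i); } // grows past the small buffer

while (dv.size() > 4) { dv.pop_back(); } // the remaining contents would fit in the small buffer

dv.shrink_to_fit(); // should_shrink(4, capacity, 8) returns true: the heap buffer is released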
Among the several use cases, a simple one is reversing an input range in one go while doing as few allocations as possible. In the following example, input lines are pushed to the front of the buffer, thus automatically forming a reverse order.
de::devector<std::string> reversed_lines;
reversed_lines.reserve_front(24);

std::string line;
while (std::getline(std::cin, line))
{
  reversed_lines.push_front(line);
}

std::cout << "Reversed lines:\n";
for (auto&& line : reversed_lines)
{
  std::cout << line << "\n";
}
The devector is more trusting than the standard containers: it allows the programmer to use (potentially) unsafe methods. The simplest are the unsafe_push_front and unsafe_push_back methods. They are similar to the safe versions, but require the caller to make sure the required free capacity is present. The benefit: a check is avoided, sparing a few instructions.
de::devector<int> dv;
dv.reserve_front(2);
dv.reserve_back(2); // the previous reserve_front is still in effect

dv.unsafe_push_front(2);
dv.unsafe_push_front(1);
dv.unsafe_push_back(3);
dv.unsafe_push_back(4);
Another unsafe but efficient construct is placing uninitialized elements into the sequence. In the example below, data is read from a socket and copied directly into the devector buffer:
// int sockfd = socket(...); connect(...);
de::devector<char> buffer(256, de::unsafe_uninitialized_tag{});

long recvSize = recv(sockfd, buffer.data(), buffer.size(), 0);

if (recvSize >= 0)
{
  buffer.unsafe_uninitialized_resize_back(recvSize);
  // process contents of buffer
}
else { /* handle error */ }
The constructor used here does not initialize the elements: that is done later by recv. After the write is complete, the superfluous, still uninitialized elements are dropped from the back by unsafe_uninitialized_resize_back.

This is particularly important for non-trivial value types: the devector does not know which elements are initialized, so, for example, upon destruction the destructor is called for every element it is supposed to hold, even if one was never actually constructed.

When used with a non-trivial type and the small buffer optimization, this can be even more efficient (and more flexible) than using std::array, because no unnecessary initialization or assignment is made. Non-trivial types can be put into the right uninitialized slot by placement new.
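A minimal sketch of this, assuming a std::string element type and small, illustrative sizes (placement new is declared in <new>):

de::devector<std::string, de::small_buffer_size<4>> names(3, de::unsafe_uninitialized_tag{});

// every slot must be constructed before the devector is read or destroyed
new (names.data() + 0) std::string("alpha");
new (names.data() + 1) std::string("beta");
new (names.data() + 2) std::string("gamma");

BOOST_ASSERT(names[1] == "beta");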
Most of the time, the standard vector can easily be replaced by devector. If a custom allocator is needed, however, extra template arguments must be specified:
std::vector<int> regular_vector;
de::devector<int> regular_devector;

std::vector<int, custom_allocator<int>> custom_alloc_vector;

de::devector<
  int,
  de::small_buffer_size<0>,
  de::devector_growth_policy,
  custom_allocator<int>
> custom_alloc_devector;
The batch_deque class template, contrary to the standard deque, allows custom-sized segments, configurable via a policy template. This gives greater control over the implementation and ensures portable behavior:
de::batch_deque<char, de::batch_deque_policy<256>> deque;
Assuming the user can more or less guess the average required buffer size, the right segment size can be specified to minimize the number of memory allocations.
Since the segment size is configured by the user, it makes sense to be able to do batch operations over the elements of each segment. In the example below, the deque is used as a flexible output buffer. The write function is called for each segment, instead of for each element, resulting in better performance. Scatter/gather I/O is possible in a similar way.
for (auto it = deque.segment_begin(); it != deque.segment_end(); ++it)
{
  write(fd, it.data(), it.data_size());
}
Reference stability is an important feature of the batch_deque. Inserting elements at the front or the back of the sequence never invalidates any references. Moreover, a special insert operation is provided, stable_insert, which takes an iterator as a hint and inserts a sequence of elements (not necessarily directly) before the element pointed to by the hint, without invalidating references. If needed, it default-constructs elements to avoid a gap in the sequence.
de::batch_deque<int> deque{1, 2, 3, 4, 5, 6, 7, 8};

auto four_it = deque.begin() + 3;
int& four = *four_it;

std::vector<int> input(100, 9);
deque.stable_insert(four_it, input.begin(), input.end());

BOOST_ASSERT(four == 4); // stable_insert ensures reference stability
This feature can be useful if reference stability and relative order are required, but the exact number of elements is flexible, e.g. when the sequence stores cached objects.
The double_ended containers play well with the Boost.Serialization library. Any container can be serialized, as long as its value type can be serialized as well.
#include <sstream>

#include <boost/assert.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

#include <boost/double_ended/serialize_devector.hpp>
#include <boost/double_ended/serialize_batch_deque.hpp>

namespace de = boost::double_ended;

int main()
{
  const de::batch_deque<int> bd{1, 2, 3, 4, 5, 6, 7, 8};
  const de::devector<int> dv{9, 10, 11, 12, 13, 14, 15, 16};

  std::stringstream stream;

  {
    boost::archive::text_oarchive out(stream);
    out << bd;
    out << dv;
  }

  de::batch_deque<int> new_bd;
  de::devector<int> new_dv;

  {
    boost::archive::text_iarchive in(stream);
    in >> new_bd;
    in >> new_dv;
  }

  BOOST_ASSERT(bd == new_bd);
  BOOST_ASSERT(dv == new_dv);

  return 0;
}