Boost.Capy

Boost.Capy is the foundation that C++20 coroutines need for buffer-oriented I/O. It solves the completion-context problem: ensuring your coroutine always resumes on your designated executor in a single-threaded or multi-threaded environment, while providing buffer sequences, stream concepts, synchronization, and test mocks that I/O libraries require.

What This Library Does

  • IoAwaitable protocol — automatic executor affinity through every co_await

  • Lazy coroutine tasks with forward-propagating stop tokens and cancellation

  • Buffer sequences — const_buffer, mutable_buffer, dynamic buffers, and algorithms for scatter/gather I/O

  • Stream concepts — ReadStream, WriteStream, ReadSource, WriteSink for generic buffer-oriented operations

  • Concurrent composition via when_all, when_any with structured error propagation

  • Execution contexts — thread pool with service management

  • Strand for safe concurrent access without mutexes

  • Async synchronization — async_mutex, async_event

  • Frame allocation recycling for zero steady-state allocations

  • Test utilities — mock streams, mock sources/sinks, and error injection

What This Library Does Not Do

  • Networking primitives — no sockets, HTTP, or protocol implementations

  • Platform-specific event loops — integrate with io_uring, IOCP, or your platform’s I/O framework

  • The sender/receiver model — Capy uses the IoAwaitable protocol, not std::execution

Target Audience

Library authors — Use stream concepts (ReadStream, WriteStream, ReadSource, WriteSink), algorithms (read, write), buffer sequences, and the IoAwaitable protocol to build composable I/O frameworks without being tied to a particular implementation.

Application developers — Program against task<T>, when_all, stream concepts, and buffer sequences. Test async logic with mock streams.

Migration from callbacks — Coroutine-native model with explicit executor propagation. No thread-local state or intermediate adapters.

High-performance systems — Frame allocation recycling (zero steady-state allocations), scatter/gather buffer sequences, type erasure only at boundaries.

Design Philosophy

Lazy by default. Tasks suspend immediately on creation. This enables structured composition where parent coroutines naturally await their children. Eager execution is available through run_async.

Affinity through the protocol. The executor propagates through await_suspend parameters, not through thread-local storage or global state. This makes the data flow explicit and testable.

Type erasure at boundaries. Tasks use type-erased executors (executor_ref) internally, paying the indirection cost once rather than templating everything. For I/O-bound code, this cost is negligible.

Composition over inheritance. Buffer types, stream concepts, and awaitables are designed to compose cleanly rather than requiring deep class hierarchies.

Requirements

Assumed Knowledge:

  • C++20 language features (concepts, ranges, coroutines syntax)

  • Basic understanding of concurrent programming

  • Familiarity with system::error_code error handling

Compiler Support:

  • GCC 13+

  • Clang 17+

  • MSVC toolset 14.34+ (Visual Studio 2022 17.4 or later)

Dependencies:

  • A standards-compliant C++20 standard library with <coroutine> support.

  • Boost: Assert, Compat, Config, Core, Mp11, Predef, System, Throw_exception, Variant2, Winapi

Linking:

Capy is a compiled library; programs must link against it and against its compiled Boost dependencies, such as Boost.System.

Code Convention: Examples in this documentation assume these declarations are in effect unless otherwise noted:

#include <boost/capy.hpp>
using namespace boost::capy;

Quick Example

#include <boost/capy/task.hpp>
#include <boost/capy/ex/run_async.hpp>
#include <boost/capy/ex/thread_pool.hpp>
#include <iostream>

using boost::capy::task;
using boost::capy::run_async;
using boost::capy::thread_pool;

task<int> compute()
{
    co_return 42;
}

task<void> run()
{
    int result = co_await compute();
    std::cout << "Result: " << result << "\n";
}

int main()
{
    thread_pool pool(1);
    run_async(pool.get_executor())(run());
    // Pool destructor waits for completion
}

The key insight: both run() and compute() execute on the same executor because affinity propagates automatically through every co_await.

Next Steps