🟥 Concurrently read and write a byte stream staying inside fixed buffers
Async Streams read and write data concurrently from async sources to destinations.
Reads, writes and transforms happen in parallel when sources and destinations are asynchronous. This library does not allocate any memory: all buffers are supplied by the caller.
Async Streams are largely inspired by node.js Streams, a very powerful tool to process large amounts of data concurrently.
The basic idea behind an async stream is to create a Source / Sink abstraction (also called Readable and Writable) and process small buffers of data at a time.
The state machine that coordinates this interaction handles data buffering and, more importantly, back-pressure: the readable side is paused when downstream consumers cannot keep up, for example when the request queue is full or no buffers are available.
By implementing streams on top of async operations it's possible to run many of them concurrently and very efficiently. When properly implemented, an async pipeline can for example read from disk, compress the data and write it to a socket, all concurrently.
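To make the Source / Sink idea concrete, here is a minimal, self-contained C++ sketch of the concept. It is not the SC:: API: the `Readable`, `Writable` and `pump` names are hypothetical and the example is synchronous, but it shows the essential contract, i.e. the source fills small caller provided buffers, the sink consumes them, and the coordinator pauses the source when the sink cannot keep up (back-pressure).

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <functional>
#include <span>

// Hypothetical Source abstraction: fills a caller provided buffer, returning bytes produced (0 == end).
struct Readable
{
    std::function<size_t(std::span<std::byte>)> fill;
    bool paused = false;
};

// Hypothetical Sink abstraction: consumes a buffer; returning false means "pause the source".
struct Writable
{
    std::function<bool(std::span<const std::byte>)> write;
};

// The coordinator: pumps small chunks until the source ends or asks to pause (back-pressure).
void pump(Readable& source, Writable& sink, std::span<std::byte> buffer)
{
    while (not source.paused)
    {
        const size_t produced = source.fill(buffer);
        if (produced == 0)
            break; // end of stream
        if (not sink.write(buffer.subspan(0, produced)))
            source.paused = true; // back-pressure: stop reading until the sink is ready again
    }
}

int main()
{
    const char* message = "hello streams";
    size_t      offset  = 0;

    Readable source;
    source.fill = [&](std::span<std::byte> chunk) -> size_t {
        const size_t remaining = std::strlen(message) - offset;
        const size_t toCopy    = remaining < chunk.size() ? remaining : chunk.size();
        std::memcpy(chunk.data(), message + offset, toCopy);
        offset += toCopy;
        return toCopy;
    };

    Writable sink;
    sink.write = [](std::span<const std::byte> data) {
        std::fwrite(data.data(), 1, data.size(), stdout);
        return true; // this toy sink is always ready
    };

    std::array<std::byte, 4> buffer; // small fixed buffer, reused for every chunk
    pump(source, sink, buffer);
    std::fputc('\n', stdout);
    return 0;
}
```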
For now, the most notable differences from node.js streams are:
This is the list of implemented stream types:
| Async Stream | Description |
|---|---|
| AsyncReadableStream | Async source abstraction emitting data events in caller provided byte buffers. |
| AsyncWritableStream | Async destination abstraction where bytes can be written to. |
| AsyncPipeline | Pipes reads on SC::AsyncReadableStream to SC::AsyncWritableStream. |
| ReadableFileStream | Uses an SC::AsyncFileRead to stream data from a file. |
| WritableFileStream | Uses an SC::AsyncFileWrite to stream data to a file. |
| ReadableSocketStream | Uses an SC::AsyncSocketReceive to stream data from a socket. |
| WritableSocketStream | Uses an SC::AsyncSocketSend to stream data to a socket. |
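As a rough mental model of what a ReadableFileStream piped through AsyncPipeline into a WritableFileStream achieves, the blocking sketch below copies a file through a single small, caller provided buffer. It uses plain C standard I/O rather than the SC:: API, and the file names are placeholders; the real streams perform the same transfer, but every read and write becomes an async request driven by the event loop.

```cpp
#include <cstddef>
#include <cstdio>

// Copy a file through one fixed buffer, reused for every chunk (no allocation).
bool copyFile(const char* sourcePath, const char* destinationPath)
{
    std::FILE* source = std::fopen(sourcePath, "rb");
    if (source == nullptr)
        return false;
    std::FILE* destination = std::fopen(destinationPath, "wb");
    if (destination == nullptr)
    {
        std::fclose(source);
        return false;
    }
    char   buffer[1024]; // fixed buffer supplied by the caller
    bool   success = true;
    size_t numRead = 0;
    while ((numRead = std::fread(buffer, 1, sizeof(buffer), source)) > 0)
    {
        if (std::fwrite(buffer, 1, numRead, destination) != numRead)
        {
            success = false;
            break;
        }
    }
    std::fclose(destination);
    std::fclose(source);
    return success;
}

int main()
{
    // Placeholder file names, used only to exercise the sketch.
    return copyFile("input.bin", "output.bin") ? 0 : 1;
}
```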
🟥 Draft
Async Streams are for now in Draft state. Their API may also evolve a bit to become less verbose, and there is a lack of nice examples aside from the tests.
Async Streams support reading from an async source and placing such reads in a bounded request queue that pauses the stream when it becomes full or when there are no available buffers. Data is pushed downstream to listeners of data events, which are either transform streams or writable streams. Writers will eventually emit a `drain` event to signal that they can accept more data. This event can be used to resume readable streams that may have been paused. AsyncPipeline doesn't use the `drain` event; it simply resumes readable streams after every successful write. This works because the Readable will pause when running out of buffers, allowing it to resume when a new one is made available.
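The following sketch illustrates that pause / resume protocol. All names are hypothetical and, unlike the real library, it uses std::deque and std::vector for brevity: the readable pauses itself when its bounded request queue fills up, the writable emits a drain event when it can accept more data, and a pipeline can simply resume the readable after every successful write.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <utility>
#include <vector>

// Stand-in for one queued read, carrying a caller provided buffer.
struct Request
{
    std::vector<char> bytes;
};

// Hypothetical readable: pauses itself when its bounded request queue fills up.
struct Readable
{
    static constexpr size_t maxQueuedRequests = 4;
    std::deque<Request>     requests;
    bool                    paused = false;

    void push(Request request)
    {
        requests.push_back(std::move(request));
        if (requests.size() >= maxQueuedRequests)
            paused = true; // back-pressure: stop issuing new reads
    }
    void resume() { paused = false; }
};

// Hypothetical writable: emits a "drain" event when it can accept more data.
struct Writable
{
    std::function<void()> onDrain;

    void write(const Request&)
    {
        // ... the async write would complete here ...
        if (onDrain)
            onDrain(); // signal listeners that more data can be written
    }
};

int main()
{
    Readable readable;
    Writable writable;
    // A pipeline can simply resume the readable after every successful write.
    writable.onDrain = [&readable] { readable.resume(); };

    for (int i = 0; i < 10; ++i)
    {
        if (readable.paused)
        {
            writable.write(readable.requests.front()); // the drain event resumes the readable
            readable.requests.pop_front();
        }
        readable.push(Request{std::vector<char>(16)}); // a new buffer becomes available
    }
    return 0;
}
```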
Async streams do not allocate any memory, but use caller provided buffers for handling data and request queues.
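One way to picture this zero-allocation constraint is a caller owned buffer pool, sketched below with hypothetical names (not the SC:: API): all storage is created by the caller up front, the stream only borrows slots from it, and an exhausted pool is exactly the condition that pauses a readable.

```cpp
#include <array>
#include <cstddef>
#include <span>

// Hypothetical fixed-size buffer pool: the caller owns all storage, the stream never allocates.
template <size_t NumBuffers, size_t BufferSize>
struct BufferPool
{
    std::array<std::array<char, BufferSize>, NumBuffers> storage;
    std::array<bool, NumBuffers>                         inUse{};

    // Borrow a free buffer; an empty span means the pool is exhausted and the stream should pause.
    std::span<char> acquire()
    {
        for (size_t i = 0; i < NumBuffers; ++i)
        {
            if (not inUse[i])
            {
                inUse[i] = true;
                return storage[i];
            }
        }
        return {};
    }

    // Return a previously borrowed buffer, making it available again (the stream can resume).
    void release(std::span<char> buffer)
    {
        for (size_t i = 0; i < NumBuffers; ++i)
            if (buffer.data() == storage[i].data())
                inUse[i] = false;
    }
};

int main()
{
    BufferPool<4, 1024> pool;             // all memory lives in the caller's frame
    std::span<char>     chunk = pool.acquire();
    // ... an async read would fill 'chunk' and hand it downstream ...
    pool.release(chunk);
    return chunk.empty() ? 1 : 0;
}
```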
This is the list of recorded videos showing some of the internal design thinking that went into this library:
Some relevant blog posts are:
🟨 MVP features
🟩 Usable features:
🟦 Complete Features:
💡 Unplanned Features: