Sane C++ Libraries
C++ Platform Abstraction Libraries
Http

🟥 HTTP parser, server and client

SaneCppHttp.h is a library implementing a hand-written HTTP/1.1 parser, server and client.

Dependencies

Dependency Graph

Features

  • HTTP 1.1 Parser
  • HTTP 1.1 Server
  • HTTP 1.1 Client

Status

🟥 Draft
In its current state the library is able to host a simple static website, but it cannot be used for any internet-facing application.
Additionally, its API will change heavily as it is undergoing a major re-design to make it fully allocation free and to extend it to support more of the HTTP 1.1 standard.

Description

The HTTP parser is an incremental parser that emits events as soon as a valid element has been successfully parsed. This allows handling incomplete responses without having to hold them entirely in memory.

The HTTP server is for now just a basic implementation and it's missing many important features. It is however capable of serving files through http while staying inside a few user-provided fixed buffers, without allocating any kind of dynamic memory.

The HTTP client follows the same fixed-memory approach used by the server side:

  • caller-provided connection storage through SC::HttpAsyncClientConnection
  • request headers built inside fixed header memory
  • fixed-span or streamed request bodies with explicit Content-Length
  • incremental response-header parsing into fixed buffers
  • streamed response bodies exposed through SC::AsyncReadableStream
  • optional sequential keep-alive reuse for requests targeting the same origin

The expected client lifecycle is stream-first:

  1. Create SC::HttpAsyncClientConnection<...> storage and initialize SC::HttpAsyncClient
  2. Call start(loop, method, url) and configure the active SC::HttpAsyncClientRequest inside onPrepareRequest, or use get / put / post
  3. Send headers through HttpAsyncClientRequest::sendHeaders() and write any manual request body through HttpAsyncClientRequest::getWritableStream()
  4. Handle onResponse once headers are parsed and attach to HttpAsyncClientResponse::getReadableStream()
  5. Consume the response body incrementally and use the response readable stream eventEnd as the completion signal

Current client limitations:

  • http only, no https
  • one in-flight request at a time
  • no HTTP pipelining
  • no chunked transfer encoding support
  • response bodies must use Content-Length, unless HTTP semantics guarantee an empty body

Videos

This is the list of videos that have been recorded, showing some of the internal thinking that has gone into this library:

Blog

Some relevant blog posts are:

HttpAsyncServer

Async Http Server.

This class implements a fully asynchronous http server that stays inside 5 fixed memory regions passed during init.

Usage:

See also
SC::HttpAsyncFileServer, SC::HttpConnectionsPool
constexpr int MAX_CONNECTIONS = 3;     // Max number of concurrent http connections
constexpr int REQUEST_SLICES  = 2;     // Number of slices of the request buffer for each connection
constexpr int REQUEST_SIZE = 1 * 1024; // How many bytes are allocated to stream data for each connection
constexpr int HEADER_SIZE  = 8 * 1024; // How many bytes are dedicated to hold request and response headers
// The size of the header and request memory, and the length of the read/write queues, are fixed here, but the
// user can set any arbitrary size for such queues by doing the same as in the HttpAsyncConnection constructor.
using HttpConnectionType = HttpAsyncConnection<REQUEST_SLICES, REQUEST_SLICES, HEADER_SIZE, REQUEST_SIZE>;
// 1. Memory to hold all http connections (single array for simplicity).
// WebServerExample (SCExample) shows how to leverage virtual memory to handle a dynamic number of clients
HttpConnectionType connections[MAX_CONNECTIONS];
// Initialize and start the http server
HttpAsyncServer httpServer;
const uint16_t  serverPort = report.mapPort(6152);
SC_TEST_EXPECT(httpServer.init(Span<HttpConnectionType>(connections)));
SC_TEST_EXPECT(httpServer.start(eventLoop, "127.0.0.1", serverPort));
struct ServerContext
{
    int numRequests;
} serverContext = {0};
// Handle the request and answer accordingly
httpServer.onRequest = [this, &serverContext](HttpConnection& client)
{
    HttpRequest&  request  = client.request;
    HttpResponse& response = client.response;
    if (request.getParser().method != HttpParser::Method::HttpGET)
    {
        SC_TEST_EXPECT(response.startResponse(405));
        SC_TEST_EXPECT(response.addHeader("Allow", "GET"));
        SC_TEST_EXPECT(response.sendHeaders());
        SC_TEST_EXPECT(response.end());
        return;
    }
    if (request.getURL() != "/index.html" and request.getURL() != "/")
    {
        SC_TEST_EXPECT(response.startResponse(404));
        SC_TEST_EXPECT(response.sendHeaders());
        SC_TEST_EXPECT(response.end());
        return;
    }
    serverContext.numRequests++;
    SC_TEST_EXPECT(response.startResponse(200));
    SC_TEST_EXPECT(response.addHeader("Connection", "Closed"));
    SC_TEST_EXPECT(response.addHeader("Content-Type", "text/html"));
    SC_TEST_EXPECT(response.addHeader("Server", "SC"));
    SC_TEST_EXPECT(response.addHeader("Date", "Mon, 27 Aug 2023 16:37:00 GMT"));
    SC_TEST_EXPECT(response.addHeader("Last-Modified", "Wed, 27 Aug 2023 16:37:00 GMT"));
    const char sampleHtml[] = "<html>\r\n"
                              "<body bgcolor=\"#000000\" text=\"#ffffff\">\r\n"
                              "<h1>This is a title {}!</h1>\r\n"
                              "We must start from somewhere\r\n"
                              "</body>\r\n"
                              "</html>\r\n";
    // Create a "user provided" dynamically allocated string, to show this is possible
    String content;
    SC_TEST_EXPECT(StringBuilder::format(content, sampleHtml, serverContext.numRequests));
    SmallString<16> contentLength;
    SC_TEST_EXPECT(StringBuilder::format(contentLength, "{}", content.view().sizeInBytes()));
    SC_TEST_EXPECT(response.addHeader("Content-Length", contentLength.view()));
    SC_TEST_EXPECT(response.sendHeaders());
    // Note that the system takes ownership of the dynamically allocated user provided string
    // through type erasure and it will invoke its destructor after the write operation finishes,
    // freeing user memory as expected.
    // This write operation succeeds because EXTRA_SLICES allocates one more slot buffer exactly
    // to hold this user provided buffer, which is not part of the "re-usable" buffers created
    // at the beginning of this sample.
    SC_TEST_EXPECT(response.getWritableStream().write(move(content)));
    SC_TEST_EXPECT(response.end());
};

HttpAsyncFileServer

Http file server statically serves files from a directory.

This class registers the onRequest callback provided by HttpAsyncServer to serve files from a given directory.

Example using compile time set buffers for connections:

constexpr int MAX_CONNECTIONS = 1;     // Max number of concurrent http connections (1 disables keep-alive)
constexpr int REQUEST_SLICES  = 2;     // Number of slices of the request buffer for each connection
constexpr int REQUEST_SIZE = 1 * 1024; // How many bytes are allocated to stream data for each connection
constexpr int HEADER_SIZE  = 8 * 1024; // How many bytes are dedicated to hold request and response headers
constexpr int NUM_FS_THREADS = 4;      // Number of threads in the thread pool for async file stream operations
// This class fixes buffer sizes at compile time for simplicity, but it's possible to size them at runtime
using HttpConnectionType = HttpAsyncConnection<REQUEST_SLICES, REQUEST_SLICES, HEADER_SIZE, REQUEST_SIZE>;
// 1. Memory to hold all http connections (single array for simplicity).
// WebServerExample (SCExample) shows how to leverage virtual memory to handle a dynamic number of clients
HttpConnectionType connections[MAX_CONNECTIONS];
// 2. Memory used by the async file streams started by the file server.
HttpAsyncFileServer::StreamQueue<REQUEST_SLICES> streams[MAX_CONNECTIONS];
// Initialize and start the http and the file server
HttpAsyncServer     httpServer;
HttpAsyncFileServer fileServer;
const uint16_t serverPort = report.mapPort(8090);
ThreadPool     threadPool;
if (eventLoop.needsThreadPoolForFileOperations()) // no thread pool needed for io_uring
{
    SC_TEST_EXPECT(threadPool.create(NUM_FS_THREADS));
}
SC_TEST_EXPECT(httpServer.init(Span<HttpConnectionType>(connections)));
SC_TEST_EXPECT(httpServer.start(eventLoop, "127.0.0.1", serverPort));
SC_TEST_EXPECT(fileServer.init(threadPool, eventLoop, webServerFolder));
fileServer.setUseAsyncFileSend(useAsyncFileSend);
// Forward all http requests to the file server in order to serve files
httpServer.onRequest = [&](HttpConnection& connection)
{ SC_ASSERT_RELEASE(fileServer.handleRequest(streams[connection.getConnectionID().getIndex()], connection)); };

Example using dynamically allocated buffers for connections:

HttpAsyncServer     httpServer;
HttpAsyncFileServer fileServer;
ThreadPool          threadPool;
static constexpr size_t MAX_CONNECTIONS = 1000000;      // Reserve space for max 1 million connections
static constexpr size_t MAX_READ_QUEUE  = 10;           // Max number of read queue buffers for each connection
static constexpr size_t MAX_WRITE_QUEUE = 10;           // Max number of write queue buffers for each connection
static constexpr size_t MAX_BUFFERS     = 10;           // Max number of buffers for each connection
static constexpr size_t MAX_REQUEST_SIZE = 1024 * 1024; // Max number of bytes to stream data for each connection
static constexpr size_t MAX_HEADER_SIZE  = 32 * 1024;   // Max number of bytes to hold request and response headers
static constexpr size_t NUM_FS_THREADS   = 4;           // Number of threads for async file stream operations
VirtualArray<HttpConnection> clients = {MAX_CONNECTIONS};
// For simplicity just hardcode a read queue of 3 for file streams
VirtualArray<HttpAsyncFileServer::StreamQueue<3>> fileStreams = {MAX_CONNECTIONS};
VirtualArray<AsyncReadableStream::Request> allReadQueues  = {MAX_CONNECTIONS * MAX_READ_QUEUE};
VirtualArray<AsyncWritableStream::Request> allWriteQueues = {MAX_CONNECTIONS * MAX_WRITE_QUEUE};
VirtualArray<AsyncBufferView>              allBuffers     = {MAX_CONNECTIONS * MAX_BUFFERS};
VirtualArray<char>                         allHeaders     = {MAX_CONNECTIONS * MAX_HEADER_SIZE};
VirtualArray<char>                         allStreams     = {MAX_CONNECTIONS * MAX_REQUEST_SIZE};
Result start()
{
    SC_TRY(assignConnectionMemory(static_cast<size_t>(modelState.maxClients)));
    // Optimization: only create a thread pool for FS operations if needed (i.e. when async backend != io_uring)
    if (eventLoop->needsThreadPoolForFileOperations())
    {
        SC_TRY(threadPool.create(NUM_FS_THREADS));
    }
    // Initialize and start http and file servers, delegating requests to the latter in order to serve files
    SC_TRY(httpServer.init(clients.toSpan()));
    SC_TRY(httpServer.start(*eventLoop, modelState.interface.view(), static_cast<uint16_t>(modelState.port)));
    SC_TRY(fileServer.init(threadPool, *eventLoop, modelState.directory.view()));
    httpServer.onRequest = [&](HttpConnection& connection)
    {
        HttpAsyncFileServer::Stream& stream = fileStreams.toSpan()[connection.getConnectionID().getIndex()];
        SC_ASSERT_RELEASE(fileServer.handleRequest(stream, connection));
    };
    return Result(true);
}
Result assignConnectionMemory(size_t numClients)
{
    SC_TRY(clients.resize(numClients));
    SC_TRY(fileStreams.resize(numClients));
    SC_TRY(allReadQueues.resize(numClients * modelState.asyncConfiguration.readQueueSize));
    SC_TRY(allWriteQueues.resize(numClients * modelState.asyncConfiguration.writeQueueSize));
    SC_TRY(allBuffers.resize(numClients * modelState.asyncConfiguration.buffersQueueSize));
    SC_TRY(allHeaders.resize(numClients * modelState.asyncConfiguration.headerBytesLength));
    SC_TRY(allStreams.resize(numClients * modelState.asyncConfiguration.streamBytesLength));
    HttpConnectionsPool::Memory memory;
    memory.allBuffers    = allBuffers;
    memory.allReadQueue  = allReadQueues;
    memory.allWriteQueue = allWriteQueues;
    memory.allHeaders    = allHeaders;
    memory.allStreams    = allStreams;
    SC_TRY(memory.assignTo(modelState.asyncConfiguration, clients.toSpan()));
    return Result(true);
}
Result runtimeResize()
{
    const size_t numClients =
        max(static_cast<size_t>(modelState.maxClients), httpServer.getConnections().getHighestActiveConnection());
    SC_TRY(assignConnectionMemory(numClients));
    SC_TRY(httpServer.resize(clients.toSpan()));
    return Result(true);
}

HttpAsyncClient

SC::HttpAsyncClient supports both convenience helpers for fixed in-memory request bodies and the lower-level start() flow for streamed or manually written request bodies. The API reference below includes small examples for both styles of usage.

Asynchronous HTTP/1.1 client using caller-provided fixed storage.

HttpAsyncClient processes a single request at a time and can sequentially reuse the same connection when keep-alive is enabled and the next request targets the same host and port.

Use the convenience wrappers (get, put, post, postMultipart) when the request body is already available in memory. Use start() when the request must be customized inside onPrepareRequest, for example to stream the request body with HttpAsyncClientRequest::setBody(AsyncReadableStream&, uint64_t) or to write it manually through HttpAsyncClientRequest::getWritableStream().

onResponse is called after response headers have been parsed. The response body is then read incrementally from HttpAsyncClientResponse::getReadableStream(), and the readable stream eventEnd signals the end of the response body.

Example without a streamed request body:

client.onResponse = [this, &ctx](HttpAsyncClientResponse& response)
{
    ctx.collector.attach(response,
                         [this, &ctx](HttpAsyncClientResponse& completedResponse)
                         {
                             ctx.collector.detach();
                             SC_TEST_EXPECT(completedResponse.getParser().statusCode == 200);
                             SC_TEST_EXPECT(StringView(ctx.collector.view()) == "hello");
                             SC_TEST_EXPECT(ctx.httpServer.stop());
                         });
};
client.onError = [this](Result result) { SC_TEST_EXPECT(result); };
SC_TEST_EXPECT(client.get(loop, url.view()));

Example streaming the request body:

client.onPrepareRequest = [this, &bodyStream](HttpAsyncClientRequest& request)
{
    request.setBody(bodyStream, 11);
    SC_TEST_EXPECT(request.sendHeaders());
};
client.onResponse = [this, &ctx](HttpAsyncClientResponse& response)
{
    ctx.collector.attach(response,
                         [this, &ctx](HttpAsyncClientResponse& completedResponse)
                         {
                             ctx.collector.detach();
                             SC_TEST_EXPECT(completedResponse.getParser().statusCode == 201);
                             String content;
                             SC_TEST_EXPECT(ctx.fs.read("client-put-stream.txt", content));
                             SC_TEST_EXPECT(content == "ChunkedBody");
                             SC_TEST_EXPECT(ctx.fs.removeFile("client-put-stream.txt"));
                             SC_TEST_EXPECT(ctx.httpServer.stop());
                         });
};
client.onError = [this](Result result) { SC_TEST_EXPECT(result); };
SC_TEST_EXPECT(client.start(loop, HttpParser::Method::HttpPUT, url.view()));

Examples

Roadmap

🟨 MVP

  • HTTP 1.1 Chunked Encoding

🟩 Usable Features:

  • Connection Upgrade
  • Multipart streamed encoding

🟦 Complete Features:

  • HTTPS
  • Support all HTTP verbs / methods

💡 Unplanned Features:

  • Http 2.0
  • Http 3.0

Statistics

Type      Lines Of Code   Comments   Sum
Headers   451             325        776
Sources   2607            490        3097
Sum       3058            815        3873