NewsWorld
AI-powered predictive news aggregation · © 2026 NewsWorld. All rights reserved.
We deserve a better streams API for JavaScript
Hacker News
Published about 4 hours ago


Hacker News · Feb 27, 2026 · Collected from RSS

Summary

Article URL: https://blog.cloudflare.com/a-better-web-streams-api/
Comments URL: https://news.ycombinator.com/item?id=47180569
Points: 78 · Comments: 25

Full Article

2026-02-27 · 24 min read

Handling data in streams is fundamental to how we build applications. To make streaming work everywhere, the WHATWG Streams Standard (informally known as "Web streams") was designed to establish a common API that works across browsers and servers. It shipped in browsers, was adopted by Cloudflare Workers, Node.js, Deno, and Bun, and became the foundation for APIs like fetch(). It's a significant undertaking, and the people who designed it were solving hard problems with the constraints and tools they had at the time.

But after years of building on Web streams — implementing them in both Node.js and Cloudflare Workers, debugging production issues for customers and runtimes, and helping developers work through far too many common pitfalls — I've come to believe that the standard API has fundamental usability and performance issues that cannot be fixed with incremental improvements alone. The problems aren't bugs; they're consequences of design decisions that may have made sense a decade ago but don't align with how JavaScript developers write code today.

This post explores some of the fundamental issues I see with Web streams and presents an alternative approach, built around JavaScript language primitives, that demonstrates something better is possible. In benchmarks, this alternative runs anywhere from 2x to 120x faster than Web streams in every runtime I've tested (including Cloudflare Workers, Node.js, Deno, Bun, and every major browser). The improvements are not due to clever optimizations but to fundamentally different design choices that more effectively leverage modern JavaScript language features. I'm not here to disparage the work that came before — I'm here to start a conversation about what could come next.
Where we're coming from

The Streams Standard was developed between 2014 and 2016 with an ambitious goal: to provide "APIs for creating, composing, and consuming streams of data that map efficiently to low-level I/O primitives." Before Web streams, the web platform had no standard way to work with streaming data.

Node.js already had its own streaming API at the time, one that had been ported to also work in browsers, but the WHATWG chose not to use it as a starting point, since it is chartered to consider only the needs of Web browsers. Server-side runtimes adopted Web streams only later, after Cloudflare Workers and Deno each emerged with first-class Web streams support and cross-runtime compatibility became a priority.

The design of Web streams predates async iteration in JavaScript. The for await...of syntax didn't land until ES2018, two years after the Streams Standard was initially finalized. This timing meant the API couldn't leverage what would eventually become the idiomatic way to consume asynchronous sequences in JavaScript. Instead, the spec introduced its own reader/writer acquisition model — and that decision rippled through every aspect of the API.

Excessive ceremony for common operations

The most common task with streams is reading them to completion. Here's what that looks like with Web streams:

```javascript
// First, we acquire a reader that gives an exclusive lock
// on the stream...
const reader = stream.getReader();
const chunks = [];
try {
  // Second, we repeatedly call read() and await the returned
  // promise, which either yields a chunk of data or indicates
  // we're done.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
} finally {
  // Finally, we release the lock on the stream.
  reader.releaseLock();
}
```

You might assume this pattern is inherent to streaming. It isn't. The reader acquisition, the lock management, and the { value, done } protocol are all design choices, not requirements.
They are artifacts of how and when the Web streams spec was written. Async iteration exists precisely to handle sequences that arrive over time, but it did not yet exist when the streams specification was written. The complexity here is pure API overhead, not fundamental necessity.

Consider the alternative, now that Web streams do support for await...of:

```javascript
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
```

This is better in that there is far less boilerplate, but it doesn't solve everything. Async iteration was retrofitted onto an API that wasn't designed for it, and it shows. Features like BYOB (bring your own buffer) reads aren't accessible through iteration. The underlying complexity of readers, locks, and controllers is still there, just hidden. When something does go wrong, or when additional features of the API are needed, developers find themselves back in the weeds of the original API: trying to understand why their stream is "locked", why releaseLock() didn't do what they expected, or hunting down bottlenecks in code they don't control.

The locking problem

Web streams use a locking model to prevent multiple consumers from interleaving reads. When you call getReader(), the stream becomes locked. While locked, nothing else can read from the stream directly, pipe it, or even cancel it — only the code actually holding the reader can.

This sounds reasonable until you see how easily it goes wrong:

```javascript
async function peekFirstChunk(stream) {
  const reader = stream.getReader();
  const { value } = await reader.read();
  // Oops — forgot to call reader.releaseLock()
  // And the reader is no longer available when we return
  return value;
}

const first = await peekFirstChunk(stream);
// TypeError: Cannot obtain lock — stream is permanently locked
for await (const chunk of stream) { /* never runs */ }
```

Forgetting releaseLock() permanently breaks the stream.
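For contrast, here is a corrected sketch of a peek helper (peekFirstChunkSafely is a hypothetical name, not from this post). The fix is the try/finally: the lock is always released, even on an early return or a throw.

```javascript
// A corrected version of the buggy peek pattern: release the lock in
// a finally block so the stream stays usable no matter how we exit.
async function peekFirstChunkSafely(stream) {
  const reader = stream.getReader();
  try {
    const { value } = await reader.read();
    return value; // note: the first chunk is consumed, not put back
  } finally {
    reader.releaseLock(); // the stream can be read or piped again
  }
}
```

With the lock released, a later for await...of loop over the same stream runs normally; it just starts from the second chunk, because peeking consumed the first.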
The locked property tells you that a stream is locked, but not why, by whom, or whether the lock is even still usable. Piping internally acquires locks, making streams unusable during pipe operations in ways that aren't obvious.

The semantics of releasing a lock with a pending read were also unclear for years. If you called read() but didn't await it, then called releaseLock(), what happened? The spec was recently clarified to cancel pending reads on lock release — but implementations varied, and code that relied on the previously unspecified behavior can break.

That said, it's important to recognize that locking in itself is not bad. It serves a real purpose: ensuring that applications consume and produce data in an orderly way. The key problem was the original manual management of locks through APIs like getReader() and releaseLock(). With the automatic lock and reader management that came with async iteration, dealing with locks became much easier from the user's point of view.

For implementers, the locking model adds a fair amount of non-trivial internal bookkeeping. Every operation must check lock state, readers must be tracked, and the interplay between locks, cancellation, and error states creates a matrix of edge cases that must all be handled correctly.

BYOB: complexity without payoff

BYOB (bring your own buffer) reads were designed to let developers reuse memory buffers when reading from streams — an important optimization intended for high-throughput scenarios. The idea is sound: instead of allocating new buffers for each chunk, you provide your own buffer and the stream fills it.

In practice (and yes, there are always exceptions), BYOB is rarely used to any measurable benefit. The API is substantially more complex than default reads, requiring a separate reader type (ReadableStreamBYOBReader) and other specialized classes (e.g. ReadableStreamBYOBRequest), careful buffer lifecycle management, and an understanding of ArrayBuffer detachment semantics. When you pass a buffer to a BYOB read, the buffer becomes detached — transferred to the stream — and you get back a different view over potentially different memory. This transfer-based model is error-prone and confusing:

```javascript
const reader = stream.getReader({ mode: 'byob' });
const buffer = new ArrayBuffer(1024);
let view = new Uint8Array(buffer);

const result = await reader.read(view);
// 'view' should now be detached and unusable
// (it isn't always, in every implementation)

// result.value is a NEW view, possibly over different memory
view = result.value; // Must reassign
```

BYOB also can't be used with async iteration or TransformStreams, so developers who want zero-copy reads are forced back into the manual reader loop.

For implementers, BYOB adds significant complexity. The stream must track pending BYOB requests, handle partial fills, manage buffer detachment correctly, and coordinate between the BYOB reader and the underlying source. The Web Platform Tests for readable byte streams include dedicated test files just for BYOB edge cases: detached buffers, bad views, respond-after-enqueue ordering, and more.

BYOB ends up being complex for both users and implementers, yet sees little adoption in practice. Most developers stick with default reads and accept the allocation overhead. Most userland implementations of custom ReadableStream instances don't bother with all the ceremony required to correctly support both default and BYOB reads in a single stream — and for good reason. It's difficult to get right, and most consuming code falls back on the default read path anyway.

The example below shows what a "correct" implementation would need to do. It's big, complex, and error-prone — not a level of complexity the typical developer wants to deal with:

```ts
new ReadableStream({
  type: 'bytes',
  async pull(controller: ReadableByteStreamController) {
    if (offset >= totalBytes) {
      controller.close();
      return;
    }

    // Check for BYOB request FIRST
    const byobRequest = controller.byobRequest;
    if (byobRequest) {
      // === BYOB PATH ===
      // Consumer provided a buffer - we MUST fill it (or part of it)
      const view = byobRequest.view!;
      const bytesAvailable = totalBytes - offset;
      const bytesToWrite = Math.min(view.byteLength,
```
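The dual-path shape above can be distilled into a smaller, self-contained sketch. This is an illustration, not the article's implementation; makeByteStream and readAllBYOB are hypothetical names. It shows the two branches a byte source must handle, and the view reassignment a BYOB consumer has to perform after every read.

```javascript
// A byte source that serves both the default and BYOB read paths.
function makeByteStream(data) {
  let offset = 0;
  return new ReadableStream({
    type: 'bytes',
    pull(controller) {
      if (offset >= data.length) {
        controller.close();
        return;
      }
      const byobRequest = controller.byobRequest;
      if (byobRequest) {
        // BYOB path: copy into the consumer-supplied view in place,
        // then report how many bytes we actually wrote.
        const view = byobRequest.view;
        const n = Math.min(view.byteLength, data.length - offset);
        new Uint8Array(view.buffer, view.byteOffset, n)
          .set(data.subarray(offset, offset + n));
        offset += n;
        byobRequest.respond(n);
      } else {
        // Default path: enqueue a freshly allocated chunk.
        const chunk = data.slice(offset, offset + 4);
        offset += chunk.length;
        controller.enqueue(chunk);
      }
    },
  });
}

// Consuming with a BYOB reader: each read() detaches the buffer we
// passed in, so the view must be reassigned from result.value.
async function readAllBYOB(stream) {
  const reader = stream.getReader({ mode: 'byob' });
  const out = [];
  let view = new Uint8Array(new ArrayBuffer(4));
  try {
    while (true) {
      const { value, done } = await reader.read(view);
      if (done) break;
      out.push(...value);
      view = new Uint8Array(value.buffer); // reuse the transferred buffer
    }
  } finally {
    reader.releaseLock();
  }
  return out;
}
```

Even in this trimmed form, the source has to branch on byobRequest and the consumer has to juggle buffer transfers, which is exactly the ceremony most userland code skips.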



Related Articles

Hacker News · about 2 hours ago
Vibe coded Lovable-hosted app littered with basic flaws exposed 18K users

Article URL: https://www.theregister.com/2026/02/27/lovable_app_vulnerabilities/ Comments URL: https://news.ycombinator.com/item?id=47182659 Points: 32 # Comments: 3

Hacker News · about 2 hours ago
NASA announces major overhaul of Artemis program amid safety concerns, delays

Article URL: https://www.cbsnews.com/news/nasa-artemis-moon-program-overhaul/ Comments URL: https://news.ycombinator.com/item?id=47182483 Points: 19 # Comments: 8

Hacker News · about 3 hours ago
We gave terabytes of CI logs to an LLM

Article URL: https://www.mendral.com/blog/llms-are-good-at-sql Comments URL: https://news.ycombinator.com/item?id=47181801 Points: 51 # Comments: 33

Hacker News · about 3 hours ago
Show HN: Badge that shows how well your codebase fits in an LLM's context window

Small codebases were always a good thing. With coding agents, there's now a huge advantage to having a codebase small enough that an agent can hold the full thing in context. Repo Tokens is a GitHub Action that counts your codebase's size in tokens (using tiktoken) and updates a badge in your README. The badge color reflects what percentage of an LLM's context window the codebase fills: green for under 30%, yellow for 50-70%, red for 70%+. Context window size is configurable and defaults to 200k (size of Claude models). It's a composite action. Installs tiktoken, runs ~60 lines of inline Python, takes about 10 seconds. The action updates the README but doesn't commit, so your workflow controls the git strategy. The idea is to make token size a visible metric, like bundle size badges for JS libraries. Hopefully a small nudge to keep codebases lean and agent-friendly. GitHub: https://github.com/qwibitai/nanoclaw/tree/main/repo-tokens Comments URL: https://news.ycombinator.com/item?id=47181471 Points: 17 # Comments: 9

Hacker News · about 3 hours ago
Tenth Circuit: 4th Amendment Doesn't Support Broad Search of Protesters' Devices

Article URL: https://www.eff.org/deeplinks/2026/02/victory-tenth-circuit-finds-fourth-amendment-doesnt-support-broad-search-0 Comments URL: https://news.ycombinator.com/item?id=47181391 Points: 43 # Comments: 3

Hacker News · about 3 hours ago
The Pentagon is making a mistake by threatening Anthropic

Article URL: https://www.understandingai.org/p/the-pentagon-is-making-a-mistake Comments URL: https://news.ycombinator.com/item?id=47181380 Points: 149 # Comments: 105