stream: accept ArrayBuffer in CompressionStream and DecompressionStream #61913
suuuuuuminnnnnn wants to merge 6 commits into nodejs:main
Conversation
Force-pushed from 30ebba1 to 91009bf
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main   #61913      +/-   ##
==========================================
+ Coverage   89.75%   89.76%    +0.01%
==========================================
  Files         674      674
  Lines      204416   204893      +477
  Branches    39285    39378       +93
==========================================
+ Hits       183472   183930      +458
- Misses      13227    13232        +5
- Partials     7717     7731       +14
```
Thanks @MattiasBuelens and @Renegade334 — applied both suggestions.
I kept the PR scoped to the
Failed to start CI
⚠ No approving reviews found
✘ Refusing to run CI on potentially unsafe PR
https://github.com/nodejs/node/actions/runs/22255119973
I think this is pretty much good to go.
One request, though: although testing the compression/decompression stream behaviour is appropriate, the underlying change is to the adapter behaviour, which should also be tested.
Could you please add something like the following to `test/parallel/test-whatwg-webstreams-adapters-to-writablestream.js`:

```js
{
  const duplex = new PassThrough();
  const writableStream = newWritableStreamFromStreamWritable(duplex);
  const ec = new TextEncoder();
  const arrayBuffer = ec.encode('hello').buffer;
  writableStream
    .getWriter()
    .write(arrayBuffer)
    .then(common.mustCall());
  duplex.on('data', common.mustCall((chunk) => {
    assert(chunk instanceof Buffer);
    assert(chunk.equals(Buffer.from('hello')));
  }));
}

{
  const duplex = new PassThrough({ objectMode: true });
  const writableStream = newWritableStreamFromStreamWritable(duplex);
  const ec = new TextEncoder();
  const arrayBuffer = ec.encode('hello').buffer;
  writableStream
    .getWriter()
    .write(arrayBuffer)
    .then(common.mustCall());
  duplex.on('data', common.mustCall((chunk) => {
    assert(chunk instanceof ArrayBuffer);
    assert.strictEqual(chunk, arrayBuffer);
  }));
}
```

to test the `ArrayBuffer` conversion behaviour and the honouring of `objectMode`.
Per the WHATWG Compression Streams spec, `CompressionStream` and `DecompressionStream` must accept `BufferSource` (`ArrayBuffer | ArrayBufferView`) as valid chunk types.

Currently, writing a plain `ArrayBuffer` to the writable side throws `ERR_INVALID_ARG_TYPE`, because the adapter in `newWritableStreamFromStreamWritable` passes chunks directly to `stream.write()`, which only accepts `ArrayBufferView` types.

This change normalizes plain `ArrayBuffer` chunks to a `Uint8Array` view before forwarding them to the underlying Node.js stream.
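The normalization described above can be sketched roughly as follows. This is a minimal illustration, not the adapter's actual code; `normalizeChunk` is a hypothetical helper name used here only to show the idea.

```javascript
// Hedged sketch: wrap a plain ArrayBuffer in a Uint8Array view before it
// reaches the underlying stream's write(). normalizeChunk is a hypothetical
// name for illustration, not a function in lib/internal/webstreams/adapters.js.
function normalizeChunk(chunk) {
  if (chunk instanceof ArrayBuffer) {
    // A view over the same memory; no bytes are copied.
    return new Uint8Array(chunk);
  }
  // ArrayBufferView chunks (and objectMode values) pass through unchanged.
  return chunk;
}

const arrayBuffer = new TextEncoder().encode('hello').buffer;
const view = normalizeChunk(arrayBuffer);
console.log(view instanceof Uint8Array); // true
console.log(Buffer.from(view).toString()); // 'hello'
```

Creating the view rather than copying keeps the change cheap: the `Uint8Array` shares the `ArrayBuffer`'s memory, so `stream.write()` sees an acceptable `ArrayBufferView` without any allocation of new backing storage.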
Changes included:

- Normalize `ArrayBuffer` chunks to a `Uint8Array` view in `lib/internal/webstreams/adapters.js`
- Test coverage for `ArrayBuffer` input
- `BufferSource` test (if passing)

Fixes: #43433