
Streams in Node.js
Part 2: Types & Advanced Operations
March 28, 2025

Source Code

1. Types of Streams
Node.js provides four fundamental types of streams:

• Readable: Sources from which data can be read (files, HTTP requests)
• Writable: Destinations to which data can be written (files, HTTP responses)
• Duplex: Both readable and writable (TCP sockets)
• Transform: Modify data as it passes through (compression, encryption)

// Basic examples of stream types

// Readable Stream
const fs = require("fs");
const readableStream = fs.createReadStream("file.txt");
readableStream.on("data", (chunk) => {
  console.log(`Received ${chunk.length} bytes`);
});

// Writable Stream
const writableStream = fs.createWriteStream("output.txt");
writableStream.write("Hello World\n");
writableStream.end();

// Transform Stream
const { Transform } = require("stream");
const upperCaseTransform = new Transform({
  transform(chunk, encoding, callback) {
    // Convert data to uppercase
    callback(null, chunk.toString().toUpperCase());
  },
});
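
The examples above stop at Transform; for the remaining type, here is a minimal Duplex sketch. The echo behavior and the name echoStream are illustrative assumptions, not from the original examples:

// Duplex Stream: independent readable and writable sides (e.g. TCP sockets)
const { Duplex } = require("stream");
const echoStream = new Duplex({
  write(chunk, encoding, callback) {
    // Expose whatever is written on the readable side
    this.push(chunk);
    callback();
  },
  read(size) {
    // Data is pushed from write(), so there is nothing to pull here
  },
});
echoStream.on("data", (chunk) => {
  console.log(`Echo: ${chunk.toString()}`);
});
echoStream.write("ping");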

2. Stream Operations
2.1. The Pipe Method
The most common way to connect streams:

const fs = require('fs');
const zlib = require('zlib');

// Creating a pipeline using pipe()
fs.createReadStream('file.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('file.txt.gz'))
  .on('finish', () => {
    console.log('Compression completed');
  });

2.2. Pipeline: Improved Error Handling


The pipeline() function enhances error handling and resource cleanup:

const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');

// Using pipeline for better error handling
pipeline(
  fs.createReadStream('input.txt'),
  zlib.createGzip(),
  fs.createWriteStream('output.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);
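
Since Node.js 15 the same function is also exported in promise form from stream/promises. A brief sketch of the same compression step, assuming an async context; the function name compressFile is illustrative:

const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

// Same pipeline, but awaitable and without a callback argument
async function compressFile() {
  try {
    await pipeline(
      fs.createReadStream('input.txt'),
      zlib.createGzip(),
      fs.createWriteStream('output.gz')
    );
    console.log('Pipeline succeeded');
  } catch (err) {
    console.error('Pipeline failed', err);
  }
}

compressFile();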

3. Practical Use Cases


3.1. Processing Large Files
Streams excel when working with files that exceed available memory:

const fs = require('fs');
const csv = require('csv-parser'); // third-party package: npm install csv-parser

// Process a large CSV file line by line
fs.createReadStream('huge-data.csv')
  .pipe(csv())
  .on('data', (row) => {
    // Process each row without loading the entire file
    console.log(row);
  })
  .on('end', () => {
    console.log('Processing complete');
  });

3.2. HTTP Streaming


Efficiently serve large files or video content:

const http = require('http');
const fs = require('fs');

const server = http.createServer((req, res) => {
  if (req.url === '/video' && req.method === 'GET') {
    const videoPath = './video.mp4';
    const stat = fs.statSync(videoPath);

    res.writeHead(200, {
      'Content-Length': stat.size,
      'Content-Type': 'video/mp4'
    });

    // Stream the file directly to the response
    fs.createReadStream(videoPath).pipe(res);
  } else {
    res.writeHead(404);
    res.end('Resource not found');
  }
});

server.listen(3000, () => {
  console.log('Server running at http://localhost:3000/');
});
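
In practice, browsers and video players usually request partial content with a Range header. The following is only a hedged sketch of how the same createReadStream call could serve byte ranges; the 206 handling and the simple header parsing are assumptions, not part of the original example:

const http = require('http');
const fs = require('fs');

const videoPath = './video.mp4';

const server = http.createServer((req, res) => {
  const { size } = fs.statSync(videoPath);
  const range = req.headers.range;

  if (!range) {
    // No Range header: stream the whole file, as in the example above
    res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
    fs.createReadStream(videoPath).pipe(res);
    return;
  }

  // Parse "bytes=start-end"; fall back to the end of the file when "end" is omitted
  const [startStr, endStr] = range.replace('bytes=', '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4'
  });

  // Stream only the requested byte range
  fs.createReadStream(videoPath, { start, end }).pipe(res);
});

server.listen(3000);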

4. Best Practices
4.1. Managing Backpressure
Prevent memory overflow when the readable side produces data faster than the writable side can consume it:

const fs = require('fs');

const readableStream = fs.createReadStream('large-file.dat');
const writableStream = fs.createWriteStream('destination.dat');

readableStream.on('data', (chunk) => {
  // write() returns false when internal buffer is full
  const canWrite = writableStream.write(chunk);

  if (!canWrite) {
    // Pause the readable stream until the writable drains
    readableStream.pause();

    // Resume when the writable can accept more data
    writableStream.once('drain', () => {
      readableStream.resume();
    });
  }
});

readableStream.on('end', () => {
  writableStream.end();
});
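
Note that pipe() and pipeline(), shown earlier, apply this pause/resume logic automatically; the manual version above is mainly useful when you need custom flow control. A minimal equivalent using pipe():

const fs = require('fs');

// pipe() handles backpressure internally, pausing the source while the destination drains
fs.createReadStream('large-file.dat').pipe(fs.createWriteStream('destination.dat'));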

4.2. Custom Transform Streams


Create specialized processors for your data:

const { Transform } = require('stream');

// Stream to filter lines containing a keyword
class LineFilter extends Transform {
  constructor(keyword) {
    super();
    this.keyword = keyword;
    this.incomplete = '';
  }

  _transform(chunk, encoding, callback) {
    // Convert chunk to string and combine with previous data
    const data = this.incomplete + chunk.toString();
    // Split by lines
    const lines = data.split('\n');
    // Save the last line for the next chunk
    this.incomplete = lines.pop();

    // Filter and send lines containing the keyword
    for (const line of lines) {
      if (line.includes(this.keyword)) {
        this.push(line + '\n');
      }
    }
    callback();
  }

  _flush(callback) {
    // Process any remaining data
    if (this.incomplete && this.incomplete.includes(this.keyword)) {
      this.push(this.incomplete + '\n');
    }
    callback();
  }
}
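
A possible way to use LineFilter inside a pipeline; the file names and the keyword 'ERROR' are illustrative assumptions:

const fs = require('fs');
const { pipeline } = require('stream');

// Keep only the log lines that mention the keyword
pipeline(
  fs.createReadStream('app.log'),
  new LineFilter('ERROR'),
  fs.createWriteStream('errors.log'),
  (err) => {
    if (err) {
      console.error('Filtering failed', err);
    } else {
      console.log('Filtering complete');
    }
  }
);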

5. Conclusions
5.1. Powerful Data Processing Pipelines
Node.js streams provide a versatile framework for building efficient data processing pipelines. By connecting different stream types through piping or the pipeline API, developers can create complex data
workflows that process information incrementally. This architecture naturally fits many real-world problems, from ETL processes to real-time data transformations. The ability to compose streams together like building blocks makes it possible to create maintainable solutions that can evolve with changing requirements.

5.2. Enhanced Application Robustness


The pipeline API and proper handling of backpressure significantly improve application reliability. These techniques ensure that data flows smoothly between streams without overwhelming memory resources. By implementing error handling at each stage of the stream pipeline, applications can gracefully recover from failures and ensure proper resource cleanup. These practices are essential for building production-grade systems that can handle unexpected conditions and maintain performance under varying loads.

5.3. Domain-Specific Solutions


The ability to create custom Transform streams unlocks Node.js's streaming capabilities for specific application domains. By extending the standard stream classes, developers can implement specialized data processing logic that maintains all the benefits of the streaming architecture. This approach enables the creation of reusable components that can be integrated into larger stream pipelines, promoting code reuse and separation of concerns. Custom streams represent the full potential of Node.js's streaming model when applied to unique business problems.

6. References
• Node.js. (2023). Stream | Node.js v18.x Documentation. Link

• NodeSource. (2022). Understanding Streams in Node.js. Link

• Alapont, R. (2023). Streamlining Your Code: Best Practices for Node.js Streams. Link

• Alapont, R. (2023). Error Handling in Node.js Streams: Best Practices. Link

• Clarion Technologies. (2022). Node.js for Real-Time Data Streaming. Link

• Translated, edited, and written in collaboration with AI.

7. Explore My Other Posts

Enjoyed This Content?


Don't miss my previous post about:
Node.js Streams: Part 1 - Introduction & Memory Efficiency
Learn the fundamentals of Node.js Streams and discover how they can dramatically reduce memory usage when processing large files.
