# NodeJS Streaming & Cluster

### Streams

Streams let Node.js process data piece by piece instead of loading it into memory all at once. There are four fundamental stream types:

1. **Readable Streams**
   - Data flows from a source into the application (e.g., reading a file).
   - Example: `fs.createReadStream`
2. **Writable Streams**
   - Data flows from the application to a destination (e.g., file, network).
   - Example: `fs.createWriteStream`
3. **Duplex Streams**
   - Allow both reading and writing (e.g., a TCP socket).
   - Example: `net.Socket`
4. **Transform Streams**
   - A type of duplex stream that modifies data as it passes through (e.g., `zlib` for compression); a minimal custom example follows this list.
   - Example: `zlib.createGzip`
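To make the transform case concrete, here is a minimal sketch of a custom `Transform` stream that upper-cases whatever flows through it. The variable name `upperCase` and the use of stdin/stdout are illustrative choices, not anything from the original notes:

```javascript
const { Transform } = require('stream');

// A transform stream that upper-cases each chunk as it passes through.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    // Push the modified chunk downstream and signal completion.
    callback(null, chunk.toString().toUpperCase());
  }
});

// Usage: anything typed on stdin comes back out upper-cased.
process.stdin.pipe(upperCase).pipe(process.stdout);
```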
- **Events**:
  - `data`: Emitted by a readable stream when a chunk of data is available.
  - `end`: Emitted by a readable stream when no more data is available.
  - `error`: Emitted by any stream when an error occurs.
  - `finish`: Emitted by a writable stream after `end()` is called and all data has been flushed to the underlying destination.
For example, assuming an `input.txt` exists on disk:

```javascript
const fs = require('fs');

const readStream = fs.createReadStream('input.txt');
const writeStream = fs.createWriteStream('output.txt');

readStream.on('data', chunk => console.log(`Received ${chunk.length} bytes.`));
readStream.on('end', () => {
  console.log('No more data.');
});

writeStream.write('Hello, World!\n');
writeStream.end('Finished writing.');
writeStream.on('finish', () => console.log('Write complete.'));
```
Streams can be chained with `pipe`, which also handles backpressure automatically:

```javascript
// Pipe a readable stream directly into a writable stream.
readStream.pipe(writeStream);

// Or route it through a transform stream (gzip) first.
const gzipStream = require('zlib').createGzip();
readStream.pipe(gzipStream).pipe(writeStream);
```
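One caveat worth knowing: `pipe` does not propagate errors along the chain, so each stream needs its own `error` handler. The `pipeline` helper from the built-in `stream` module wires up error handling and cleanup for the whole chain. A minimal sketch, with placeholder file names:

```javascript
const { pipeline } = require('stream');
const fs = require('fs');
const zlib = require('zlib');

// pipeline() destroys all streams on failure and surfaces
// the first error through a single callback.
pipeline(
  fs.createReadStream('input.txt'),      // placeholder source file
  zlib.createGzip(),
  fs.createWriteStream('output.txt.gz'), // placeholder destination
  err => {
    if (err) console.error('Pipeline failed:', err);
    else console.log('Pipeline succeeded.');
  }
);
```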
---
### Cluster

The **Cluster** module allows Node.js to create child processes (workers) that share the same server port, enabling you to utilize multi-core CPUs for better scalability.
A typical setup forks one worker per CPU core (`cluster.isMaster` is the classic check; newer Node versions also expose it as `cluster.isPrimary`):

```javascript
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  console.log(`Master process is running (PID: ${process.pid})`);

  // Fork a worker for each CPU core.
  const numCPUs = os.cpus().length;
  for (let i = 0; i < numCPUs; i++) cluster.fork();
}
```
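To see the shared port in action, the worker side of that same `if`/`else` would typically start an HTTP server. Every worker calls `listen` on the same port, and the master distributes incoming connections among them. A minimal sketch of the worker branch (port 8000 is an arbitrary choice):

```javascript
const http = require('http');

// Worker branch: runs in each forked process.
// All workers listen on the same port; the cluster module
// distributes incoming connections among them.
http.createServer((req, res) => {
  res.end(`Handled by worker ${process.pid}\n`);
}).listen(8000, () => {
  console.log(`Worker ${process.pid} listening on port 8000`);
});
```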
The master and its workers can also exchange messages over the built-in IPC channel:

```javascript
const cluster = require('cluster');

if (cluster.isMaster) {
  // Master process: fork a worker and send it a message.
  const worker = cluster.fork();
  worker.send('Hello Worker!');
  worker.on('message', message => console.log('Message from worker:', message));
} else {
  // Worker process: reply to messages from the master.
  process.on('message', message => {
    console.log('Message from master:', message);
    process.send('Hello Master!');
  });
}
```
#### Limitations
- Workers do not share memory directly. Share state through a database, cache, or message broker, or keep it in the master process and update it via messages (see the sketch below).
- Clustering spreads concurrent connections across cores, but a single CPU-intensive task still blocks the worker that runs it.
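Because workers cannot share a variable directly, one common pattern within the cluster module itself is to keep the state in the master and have workers update it through messages. A minimal sketch of a shared request counter; the `cmd: 'incrementRequests'` message shape is made up for this example:

```javascript
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  let totalRequests = 0; // state lives only in the master

  for (let i = 0; i < os.cpus().length; i++) {
    const worker = cluster.fork();
    // Aggregate updates sent by the workers.
    worker.on('message', msg => {
      if (msg.cmd === 'incrementRequests') {
        totalRequests += 1;
        console.log(`Total requests so far: ${totalRequests}`);
      }
    });
  }
} else {
  // A worker reports each handled request to the master.
  process.send({ cmd: 'incrementRequests' });
}
```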
---
These concepts of **streaming** and **clustering** are crucial for building high-
performance, scalable Node.js applications.